

Actual AI Text-To-Video is Finally Here!

Text-To-Video Technology Advances, Open Source Model Now Available

  • Text to video technology has advanced significantly
  • Meta and Google have shown demos of upcoming text-to-video systems
  • Tools like Deforum and Plasma Punk approximate video by merging image-to-image outputs into animations
  • Until now there has been no true text-to-video equivalent of typing a prompt into Stable Diffusion or Midjourney and getting a matching result
  • ModelScope Text-to-Video Synthesis, a 1.7-billion-parameter diffusion model available in a Hugging Face Space, is the first open-source text-to-video model you can use today
  • It appears to have been trained largely on Shutterstock footage, which is why many of its outputs carry a Shutterstock watermark
  • Users can try the hosted demo for free or duplicate the Space for a small fee; a local-run sketch follows this list.
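Beyond the hosted demo, the released checkpoint can also be run locally. The sketch below follows the public diffusers text-to-video example rather than anything shown in the video itself; the checkpoint name, frame count, and the .frames indexing are assumptions that may need adjusting for your diffusers version and GPU.

```python
# Minimal sketch: run the open-source ModelScope text-to-video checkpoint locally
# with Hugging Face's diffusers library. Assumes a CUDA GPU with enough VRAM and
# recent `diffusers`/`torch`/`accelerate` installs.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b",  # 1.7B-parameter checkpoint (assumed name)
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.enable_model_cpu_offload()  # trades some speed for a lower VRAM footprint

prompt = "a clownfish swimming through a coral reef"
result = pipe(prompt, num_inference_steps=25, num_frames=16)

# Recent diffusers returns frames as [batch][frame]; older versions return a flat list.
video_path = export_to_video(result.frames[0], output_video_path="clownfish.mp4")
print("Saved:", video_path)
```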

Text to Video Tech: Making Strides in a Short Timeframe

  • Text to video technology is finally available and progressing quickly
  • Text-to-image went from DALL-E 1 to Midjourney Version 5 in under a year, suggesting text-to-video could improve just as quickly
  • Generating a usable video takes trial and error, but with patience you can get close to the result you want
  • Clips are currently only a couple of seconds long and often fall short of the cherry-picked demos shown online.

Uncovering the Latest in AI: Futuretools.io Curates the Coolest Tools and Keeps You Up-to-Date

  • Emerging technology in the AI space is moving quickly
  • Futuretools.io curates all of the coolest tools and has begun to remove any junk tools
  • Join the free newsletter for a TLDR of the week and to learn about how to make money with AI
  • Thumbs up and subscribe for more updates on emerging technology in the AI space
  • Appreciates viewers who nerd out on AI tech.

Original & Concise Bullet Point Briefs

With VidCatter’s AI technology, you can get original briefs in easy-to-read bullet points within seconds. Our platform is also highly customizable, making it perfect for students, executives, and anyone who needs to extract important information from video or audio content quickly.

  • Scroll through to check it out for yourself!
  • Original summaries that highlight the key points of your content
  • Customizable to fit your specific needs
  • AI-powered technology that ensures accuracy and comprehensiveness

Unlock the Power of Efficiency: Get Briefed, Don’t Skim or Watch!

Experience the power of instant video insights with VidCatter! Don’t waste valuable time watching lengthy videos. Our AI-powered platform generates concise summaries that let you read, not watch. Stay informed, save time, and extract key information effortlessly.

So up until now we haven't really seen real text-to-video. We've seen some demos from companies like Meta and Google showing off the text-to-video that's coming, and we've had some really cool tools like Deforum, this Plasma Punk tool, and this Decoherence tool, which sort of merge image to image to image and give a kind of cool animation effect. But we haven't really had true text-to-video in the way we would use Stable Diffusion or Midjourney, typing in what we want to see and actually getting a video of that thing, until now.

In fact, here's a cool demo of what some people have done that I came across on Reddit. You've got mountains and water in a Chinese painting, a beautiful painting of a Buddhist temple and a serene landscape, a traditional Chinese painting landscape with a bridge and waterfall, fireworks, a campfire at night in the snowy forest with a starry sky in the background, and a mountain river. I'm going to go ahead and play this so you can see what all these various demos look like, and then we're going to play around with it ourselves, because you can use it right now, today. You can see the mountain and water, all these waterfalls, the fireworks, the mountain river, the starry night with the fire going. We've got a clownfish swimming through a coral reef, ducks swimming in a pond, a litter of puppies running through the yard, a panda bear eating bamboo on a rock, a horse chewing, a knight riding on a horse animation. All of these were done with text-to-video: an orange cat wearing a leather jacket and sunglasses sings in a metal band on stage, a monkey learning to play the piano, two kangaroos busy cooking dinner in a kitchen.

This is from a Reddit post I found in the Stable Diffusion subreddit called "the first open source text-to-video 1.7 billion parameter diffusion model is out." You can play with it right now in a Hugging Face Space called ModelScope Text-to-Video Synthesis. Now, there is one little catch: it seems as though a lot of the videos this model was trained on were taken from Shutterstock, so a lot of the videos it produces have Shutterstock watermarks across them. For example, Victor M on Twitter, who is the head of product design at Hugging Face, had a post go somewhat viral where he generated his own little Star Wars clip using AI. If you watch the demo video they made, you'll notice a Shutterstock watermark across the bottom of the video that shows up throughout the whole thing, which I think proves that a lot of the training video came from Shutterstock.

If you want to play with this yourself, here's what you can do. Go to the Hugging Face ModelScope Text-to-Video Synthesis Space; I'll make sure it's linked below this video. There are really two ways you can do this. You can do it for free right now in the public Space, where we enter a prompt, and it will probably take a little bit because a lot of people are playing with this right now. So let's do something like "an alien eating a taco," and if I click run, it says "this application is too busy, keep trying." Let's go ahead and try it again: "application too busy, keep trying." So you might be able to run it after some time, but a lot of people are messing with this right now, because this is fresh, this is new, this is the hottest thing at the moment. However, you can duplicate the Space yourself, though you will need to have a credit card on file inside of Hugging Face, and it will probably cost you a few cents, probably less than two dollars, to run.
So for the sake of this example, I'm going to duplicate the Space so you can see what it does, but keep in mind that if you really want to do it for free, you can; you just have to keep trying while their servers are really bogged down. Let's go ahead and duplicate this Space; it's going to take a little bit of time to get it up and running.

Okay, I've duplicated the Space, but it's showing me a runtime error, and the reason is that it duplicated onto the free hardware tier, which isn't powerful enough to actually run this model. If I come over here to Settings, you can see it's on this "CPU basic" option, and we're going to want something a little bit stronger than that, so I'm going to upgrade it to this T4 Medium with 30 gigabytes of RAM, which should be enough to actually run it. Let's go ahead and swap to that. I'm going to need to add my payment method, and I'm going to set a sleep timer of just one hour of inactivity, so if I accidentally walk away it doesn't continue to charge me. We'll click "confirm new hardware," and now it's going to try to boot up the T4 Medium, which should get rid of our runtime error once it's all booted up. All right, now we are running on our own T4 system, so I should be able to generate whatever I want and not have to wait in any sort of queue.
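Before we get to prompting, a quick aside: if you would rather script that duplicate-and-upgrade step than click through the Hugging Face UI, something like the sketch below should work with a recent version of the huggingface_hub Python library. The Space ID shown is my assumption of what the public Space is called, not something confirmed on screen in the video, so check the actual Space page for the real ID, and note that the T4 hardware is billed to the payment method on your account.

```python
# Sketch: duplicate the text-to-video Space and upgrade its hardware from Python,
# mirroring the steps done in the Hugging Face web UI above. Assumes a recent
# huggingface_hub release and a token with write access (plus billing set up).
from huggingface_hub import HfApi, duplicate_space

SOURCE_SPACE = "damo-vilab/modelscope-text-to-video-synthesis"  # assumed Space ID

api = HfApi()  # picks up the token from `huggingface-cli login` or HF_TOKEN

# Copy the public Space into your own account (it starts on the free CPU tier).
my_space = duplicate_space(SOURCE_SPACE, private=True)
print("Duplicated to:", my_space.repo_id)

# Swap the free CPU for a T4 Medium GPU, which is enough to run the model.
api.request_space_hardware(repo_id=my_space.repo_id, hardware="t4-medium")

# Pause the Space after an hour of inactivity so it stops billing if you walk away.
api.set_space_sleep_time(repo_id=my_space.repo_id, sleep_time=3600)
```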
Let's go ahead and try "a green alien eating a taco" and click run. You can see it's actually processing this time; I'm not getting any errors, because I'm not having to deal with everybody else using the exact same server as me; I've got my own server now. And our video is ready; it took about 60 seconds to process. Now, it is only a two-second clip, and you can kind of see a little bit of a watermark running through it, but here's our green alien eating a taco. Maybe I can see a taco in there. Maybe if we give it some more detail, doing the standard thing you might do in Stable Diffusion: "a detailed green alien standing on a red Mars landscape," and let's add some of those other words like "Unreal Engine, trending on ArtStation, realistic" (as realistic as an alien can be, I guess), "HD, 4K." So: "a detailed green alien standing on a red Mars landscape eating a yellow crunchy taco." Now we've got a little more detail; let's see what happens when we run it this time. All right, it looks like we have a little more detail. I'm not seeing a taco yet, but let's see what we get. Well, I'm getting my alien on Mars, but I'm not really seeing the taco part, and we're still getting that Shutterstock watermark that seems to be on every single video, which I think just sort of proves that all the training material they used was probably Shutterstock videos.

Let's try a different subject, maybe something a little more realistic: "a penguin kicking a soccer ball." All right, I'm seeing a soccer field here... oh, you see a penguin flash on the screen real quick and then fly away. So far I'm not able to get anything nearly as detailed as what we're seeing in some of these demo videos. Let's try some prompts that were actually in the demo and see if I can get a similar result, like "clownfish swimming through a coral reef." All right, this is looking a little bit better; you can tell this is supposed to be a clownfish swimming through a coral reef. Let's try "a monkey on roller skates." All right, I see the monkey; I'm not seeing roller skates... oh, okay, yeah, that's a monkey on roller skates, sort of towards the end of the video. I really wish you could generate longer than two seconds, but it is what it is right now; this is obviously very, very early tech. Let's try "a cat learning to play the piano." All right, let's watch our cat playing the piano; it kind of looks more like a cat sniffing a piano.

Now, here's the ModelScope page for this text-to-video model, and you can see some of the examples they shared: a giraffe underneath a microwave (it does kind of look like a giraffe in a microwave), a goldendoodle playing in a park by a lake, a panda bear driving a car, a teddy bear running in New York City, a drone fly-through of a fast food restaurant on a dystopian alien planet, and a dog wearing a superhero outfit with a red cape flying through the sky. Now, I've got to be honest, I think these are some of the cherry-picked ones, because when I try stuff I'm not getting the greatest results. They probably did a thousand generations and are showing you the nine best ones they came up with, because so far it's not that close to what I'm trying to generate. Let's try "a dog wearing a superhero outfit with red cape flying through the sky" and see if I get something similar. See, when I use that exact same prompt, this is what I get: a dog with maybe a cape wrapped around it, and that's the exact same prompt they're showing here as a dog wearing a superhero outfit with a red cape flying through the sky. I'm sure I can use a different seed and get a completely different output, so let's go ahead and change the seed, but they definitely are cherry-picking; all the stuff you're seeing online is probably after hundreds and hundreds, if not thousands, of generations, and then them going, "all right, here are the best ones we've come up with." All right, let's try again... a little bit closer... no, it looks like a dog kind of running with a cape, and it maybe flies or hovers for a second there, but definitely nothing close to what we're seeing in this image, where it actually looks like a dog flying around with a cape. Let's try "a teddy bear running in New York City" and see if it looks anything like their generation. All right, so here's our version of a teddy bear running through New York City. Not too bad, actually; that's one of the more impressive ones I've seen.

Just keep in mind that a lot of the videos you're seeing from this text-to-video model, the ones I showed you on Reddit that look really cool and the ones they're showing off on their ModelScope page, are the cherry-picked ones. These are the ones where they probably tried hundreds of times, with a bunch of different seeds, until they finally got a video that looked exactly like what they envisioned in their heads. So while this is really cool, you're probably going to need to do tons and tons of prompts until you finally get one that looks like what you want, and every time you prompt one using your own T4 server like I am, it takes about one minute. So if you're trying to generate 20 different videos to finally get the one you're looking for, it might take you 20 minutes, but you likely can get to the level of quality we see in these demos; it's just going to be a lot of trial and error until you do.
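If you end up scripting that trial-and-error loop locally with diffusers (as in the sketch near the top of this page) rather than re-running a Space by hand, varying the seed is one line per attempt. Below is a minimal sketch, again assuming the damo-vilab/text-to-video-ms-1.7b checkpoint and a CUDA GPU; each clip takes roughly a minute on T4-class hardware, so four seeds is a few minutes of waiting.

```python
# Sketch: batch a handful of seeds for one prompt so you can cherry-pick the best
# two-second clip. Assumes a CUDA GPU and a recent diffusers install; the checkpoint
# name is an assumption, not something named in the video.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe = pipe.to("cuda")

prompt = "a dog wearing a superhero outfit with red cape flying through the sky"

for seed in (0, 1, 2, 3):
    generator = torch.Generator(device="cuda").manual_seed(seed)
    result = pipe(prompt, num_inference_steps=25, generator=generator)
    # Recent diffusers returns frames as [batch][frame]; older versions return a flat list.
    path = export_to_video(result.frames[0], output_video_path=f"dog_cape_seed{seed}.mp4")
    print(f"seed {seed} -> {path}")
```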
towardsthe end of 21 beginning of 2022 theseare the types of images Dolly one wasgenerating less than a year ago andhere's what we can create to date withthings like mid-journey version 5.here's some more early images from dollywhen text image was just breakingthrough of an armchair that looks likean avocado and here's the type of imageswe're generating today with somethinglike mid Journey version 5. so in lessthan a year we went from this to this soif we're able to generate videos likethis today just imagine where thistechnology is going to be a year fromnow text to video is finally herethere's a version that you can play withyes it may feel a bit underwhelmingright now and you might have to try ahundred times to get the exact videoyou're looking for but it's here it'savailable and we're basically at day oneof having access to this once again I'llmake sure the link to where this huggingface spaces is in the comments below soyou can go and use it you probably willneed to duplicate the space and upgradeyour server in order to use it right nowbut if you're one of the lucky ones thatmanages to get in and play around withit when the server isn't completelybogged down you might be able togenerate some images for free right nowhopefully enjoyed this quick video of anemerging Tech that is brand new as ofright now this is fresh this is thehottest breaking thing right now in theAI world if you're like most people andyou feel like this AI space is justmoving super super fast and you want tojust kind of stay in the loop a littlebit head on over to futuretools.io thisis where I curate all of the coolesttools that I come across I'm actuallystarting to remove some tools to make itless overwhelming there's some junktools on there if I'm honest and I'mstarting to kind of get rid of some ofthem so that only the cream of the cropis being able to stay on the site socheck it out at futuretools.io and ifwhat's on here is just still toooverwhelming still too many tools tokind of look through and you just wantthe tldr of the week click here to jointhe free newsletter and every singleFriday I'll just send you the fivecoolest tools that I came across andI'll give you the tldr of the news andthe coolest videos of the week as wellas one cool way to make money with AI Isend it every Friday all you got to dois go to feature tools dot I oh so thankyou so much for tuning in I'm gonna tryto keep you up to date with all thelatest and greatest in emergingtechnology in the AI space so if youlike this kind of video give it a thumbsup that'll make sure that you see morevideos like this in your news feed ifyou haven't already subscribe to thechannel that'll make sure you see morevideos from me I'm just so happy thatthere's so many other people that lovenerding out over this AI Tech like I doand I just really appreciate youwatching my videos to learn more aboutit so thanks again for tuning in reallyappreciate you see you in the next onebye[Music]thank you