
Original & Concise Bullet Point Briefs

AI News: What You Missed This Week

AI Revolution: Open Source Models, Music Generation, and Google Products Lead the Way

  • AI news has been active this week
  • Meta released an AI music generation model
  • Adobe updated Adobe Express with AI features
  • OpenAI, DeepMind, and Anthropic agreed to open their AI models to the UK government
  • Google products have AI features built in
  • Meta announced an AI model based on Yann LeCun’s vision for human-like AI.

AMD, OpenAI and Paul McCartney Collide in the Race for AI Innovation

  • AMD is partnering with Hugging Face to provide computing power
  • AMD is building hardware specifically tailored for AI to compete with Nvidia
  • OpenAI shipped updates, including a 16,000-token context version of GPT-3.5 that is cheaper to use via the API
  • There may be tensions between Microsoft and OpenAI, as both offer chat products built on GPT-4
  • Sir Paul McCartney says AI enabled a final Beatles song.

Exploring the Benefits of AI, Shopping Graphs, QR Codes and Google Lens

  • AI is being used to create a virtual try-on experience for apparel
  • Google unveiled its Shopping Graph, which works with Shopify, the sponsor helping build tomorrow’s economy
  • QR codes are trending on Twitter and are being used to make amazing images but they are still hard to get working properly
  • Google Lens can help you search for skin conditions.

Revolutionary Research to Make Video Animation Easier: Midjourney and ElevenLabs Introduce New Technologies

  • Newly announced research called “Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation” has the potential to make turning real videos into animations much easier, without the flicker effects of Stable Diffusion
  • Midjourney is expected to release version 5.2 any day now, with limited Discord-compatible outpainting features
  • ElevenLabs has introduced a speech classifier that can identify whether an audio sample was generated with its AI system, in order to prevent malicious use of AI.

Upcoming Tutorials on Leonardo and AI Music, and Adjusting to YouTube Popularity

  • The creator is working on tutorials covering Leonardo, AI music generation, and Warp Fusion
  • He has a YouTube channel that covers all things AI
  • He’s new to the popular YouTube life and is figuring out what his audience enjoys watching
  • There are lots of places to follow him, like futuretools.io, Twitter, and a newsletter sent every Friday
  • Thank you for tuning in.

Original & Concise Bullet Point Briefs

With VidCatter’s AI technology, you can get original briefs in easy-to-read bullet points within seconds. Our platform is also highly customizable, making it perfect for students, executives, and anyone who needs to extract important information from video or audio content quickly.

  • Scroll through to check it out for yourself!
  • Original summaries that highlight the key points of your content
  • Customizable to fit your specific needs
  • AI-powered technology that ensures accuracy and comprehensiveness

Unlock the Power of Efficiency: Get Briefed, Don’t Skim or Watch!

Experience the power of instant video insights with VidCatter! Don’t waste valuable time watching lengthy videos. Our AI-powered platform generates concise summaries that let you read, not watch. Stay informed, save time, and extract key information effortlessly.

Despite it feeling to the outside world like the AI news is kind of starting to slow down, there's been a ton of AI news this week. No, there haven't been any huge announcements like a GPT-5 or a Midjourney version 6 or anything absolutely insane, but there have been a lot of little advancements that, when you really start to compile all of this stuff, add up to a lot going on in the AI space. Now, before I get into it, I want to thank this week's sponsor: this news breakdown is brought to you by Shopify, and I'm going to talk about them a little bit more in a minute, but let's just get right into the news.

So this actually happened last week, but it didn't make last week's news video and I didn't want to miss talking about it: Meta released an AI music generation model that's open source and freely available for anybody to use on Hugging Face, and in my opinion it sounds a lot better than the other models we've seen, like Riffusion or MusicLM from Google, for example. One of the samples here is a 90s rock song with electric guitar and heavy drums, and we get something that sounds like this. [Music] Or we can do an early-2000s pop punk anthem and see what we get. Now, there are a lot of people playing with this, so as you can see it's going to take a few minutes to generate, but do keep in mind that this is open source. It is available on GitHub, and you can install it locally if you have a graphics card with at least 16 gigabytes of memory, which, you know, mostly means the higher-end graphics cards, but it is possible. There's also a Google Colab notebook that you can play around with as well. And here's our pop punk anthem. [Music] [Applause] Pretty spot on, if you ask me.
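If you do want to run it locally rather than waiting in line on the Hugging Face Space, here's a minimal sketch assuming Meta's audiocraft package is installed and you have a GPU with enough memory; exact model names and function signatures may differ slightly between releases:

```python
# Minimal MusicGen sketch using Meta's audiocraft library (assumed installed).
# Model names and argument names may vary between audiocraft releases.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("small")   # smaller checkpoint fits in less VRAM
model.set_generation_params(duration=10)   # seconds of audio to generate

prompts = [
    "90s rock song with electric guitar and heavy drums",
    "early 2000s pop punk anthem",
]
wavs = model.generate(prompts)             # returns a batch of waveforms

for i, wav in enumerate(wavs):
    # Writes e.g. sample_0.wav at the model's native sample rate
    audio_write(f"sample_{i}", wav.cpu(), model.sample_rate, strategy="loudness")
```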
Now, also in last week's news, but it didn't quite make my last week's video: Adobe made some updates to Adobe Express. If you're not familiar with Adobe Express, it's kind of like Adobe's version of Canva, I guess, is probably the best way to describe it. Well, among the announcements about Adobe Express, they announced that Adobe Firefly generative AI is coming to Adobe Express, so you can prompt images and add text effects using AI directly inside of Adobe Express now.

And this was announced this week, but since we're on the topic of Adobe, let's bring it up now: Adobe also added generative AI tools to Adobe Illustrator. The main thing they added is called Generative Recolor, which lets you do things like upload a black-and-white image and have color added to it, or upload a design like this box on the left and have it regenerated in these other color schemes, like the ones on the left and the right. It essentially uses AI to find color palettes that look good together and then recolors your original image in these various new color variations. So if you have Adobe Creative Cloud or just Adobe Illustrator, you can update to the latest version of Adobe Illustrator and Generative Recolor will be available now.

Now, up until recently, one of the biggest complaints about OpenAI and GPT-3.5 and GPT-4 has been that the models are closed; OpenAI hasn't opened up the source code for anybody to view. As various governments start to worry about the longer-term implications of AI, companies like OpenAI and DeepMind have decided that they will open up these models when governments start asking for access. For example, they agreed to open up the models to the UK government. According to this article in Politico, Google DeepMind, OpenAI, and Anthropic have all agreed to open up their AI models to the UK government for research and safety purposes.

Also, many people, including myself, have noticed that AI features have started to roll into some of the Google products. In fact, inside of my Gmail I now have this little button down here that says "Help me write." If I click on this button, I can click "Help me write" again and then give it a prompt for what I want the email to be about: write a promotional email telling someone to go watch my latest YouTube video, make it persuasive. "I'm writing to you today to tell you about my latest YouTube video, which I think you'll really enjoy. It's a video about the AI news." I've also noticed this feature is now available, at least for some people, inside of Google Docs as well. If I create a new blank Google Doc, a button immediately pops up that says "Help me write." Once again, I can add a prompt here: write an article about how AI is changing the world. And just like that, I've got an article written about how AI is changing the world. I can recreate it, or refine it by formalizing it, shortening it, elaborating on it, or rephrasing it. Let's go ahead and elaborate, and now we've got a slightly longer article about why AI is changing the world. Now, yes, we could already do this by using things like ChatGPT or Bard or one of the other existing large language models and just copying and pasting, but it's cool to see Google adding it into these tools and saving us a few extra steps.

Also this week, Meta announced that the first AI model based on Yann LeCun's vision for more human-like AI is being open sourced. Basically, what this model does is take a piece of an image for context and then figure out the rest of the image by applying what it already knows about the subject matter. Now, this is very oversimplified, but essentially you can give it a tiny piece of a dog's head, and based on its training data it knows what the rest of a dog looks like and can fill in the gaps. You can give it a tiny piece of a bird's leg, and based on what it knows from its training data it can fill in the rest of the bird. You can see here's an example with a wolf, and here's an example with a building. Where most AI generation models try to recreate images at a pixel-by-pixel level, this one actually thinks more like a human: it says, "That looks like part of a dog. I know what the rest of a dog looks like. I'll take what I know about the rest of a dog and draw the rest in." Again, this is an oversimplification of the model, but that's probably the best way I know how to explain it: it's trained like a human in that it understands context, so it can look at a small piece of something and fill in the rest based on its data set and what it knows about that smaller piece of information it's seeing. And once again, as is the trend with Meta lately, they are open sourcing this model, and it is available right now on GitHub.

Also, earlier this week AMD put on a live event up in the Bay Area. If you're not familiar with AMD, they create chips similar to Nvidia; they're really Nvidia's biggest, and kind of only, competitor right now. During this event, AMD had two big announcements relevant to the AI world. The first big announcement is that AMD is partnering with Hugging Face. Now, I've talked about Hugging Face a lot in past videos. It's a place where people can upload their machine learning models and their code, similar to a GitHub, but you can also test the models directly on the site, play around with them yourself, and essentially stress test them. When we were playing around with MusicGen just a few minutes ago in this video, that was a Hugging Face Space.
With so many people building these machine learning products on Hugging Face, it's actually a pretty big deal that AMD is going to be the company providing the compute power behind Hugging Face instead of Nvidia going forward. "Today we have 15,000 companies using our software, and they have shared over half a million open models and data sets, and the most liked, some you might have heard of, like Stable Diffusion, Falcon, BLOOM, StarCoder, and MusicGen, which was just released by Meta a few days ago. We will optimize all of that for AMD platforms. The goal is to really have the best combo between hardware and software. Hopefully this collaboration will be one step, a great step, to democratize AI even further and improve everyone's life."

Now, the other big announcement they made at this event was that they are starting to build hardware specifically tailored for AI, and they even called out Nvidia, saying they're trying to make chips that are more powerful than Nvidia's. "We call this MI300X." Now, they use a lot of acronyms and techie language that's totally over my head, but the point is that AMD is trying to go head to head with Nvidia and create chips that are great for training large language models. "To address the larger memory requirement of large language models, we actually add four gigabytes of HBM... I am super excited to show you, for the very first time, MI300X. We truly designed this product for generative AI. It combines CDNA 3 with an industry-leading 192 gigabytes of HBM3. I love this chip, by the way. When you compare MI300X to the competition, MI300X offers 2.4 times more memory and 1.6 times more memory bandwidth." You'll notice they say "the competition," but on the slide they're comparing to Nvidia's H100s: 2.4 times the HBM density at 1.6 times the HBM bandwidth. Basically, they're saying they make better chips. Now, these likely aren't chips you'd get in a consumer PC, at least not any time real soon, seeing as they're comparing them to the H100, which can cost thirty thousand dollars or more, and they're comparing this chip's 192 gigabytes of memory against what they say is the H100's 120 gigabytes. Again, a lot of technical stuff, and most of it doesn't have huge implications for us consumers yet, but it basically means this technology is getting better and better, and Nvidia and AMD are going head to head to create the GPUs that will train large language models faster, bigger, and better, and eventually build things like the next GPT-5 or the next version of Google's PaLM.

Now, speaking of GPT, OpenAI made an announcement this week about a handful of updates, including a new 16,000-token context version of GPT-3.5, the standard version of ChatGPT. Before, it had a 4,000-token context; the old version allowed you roughly 3,000 words combined between what you plugged into GPT and what you got out of ChatGPT. This new 16,000-token context model is four times larger, so you should expect about 12,000 words between your input and your output. They also announced that it's going to be cheaper to use GPT-3.5 with the API, so for developers building with the GPT-3.5 API on the back end, costs are actually going to go down. Whether those companies pass the savings along to the end consumers using these tools is yet to be seen; it didn't seem to happen last time the APIs got cheaper. Most companies kept their prices the same and just pocketed more profit. But the companies developing with the GPT-3.5 API are going to get even lower costs.
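For developers, the new model shows up as just another model name in the API. Here's a minimal sketch using the openai Python package as it existed at the time (the identifier was gpt-3.5-turbo-16k; names and pricing may have changed since):

```python
# Sketch of calling the 16k-context GPT-3.5 model via the OpenAI API
# (openai Python library, pre-1.0 style). Requires your own API key.
import openai

openai.api_key = "sk-..."  # placeholder

long_transcript = "..."  # up to roughly 12,000 words of input + output combined

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-16k",  # the new 16,000-token context variant
    messages=[
        {"role": "system", "content": "You summarize long transcripts."},
        {"role": "user", "content": f"Summarize this transcript:\n{long_transcript}"},
    ],
)
print(response.choices[0].message["content"])
```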
Now, also in OpenAI news this week, it came out that maybe Microsoft and OpenAI, although they seem to have an amazing partnership, don't always see eye to eye. The news coming out recently is that OpenAI reportedly warned Microsoft to move slowly on integrating GPT-4 into the Bing search engine, to avoid the inaccurate and unpredictable responses it launched with. Microsoft went ahead despite warnings that it might take time to minimize the inaccurate and strange responses. Now, Microsoft, to their credit, did fix this fairly quickly and ironed out a lot of the weirdness coming out of the chatbot early on, but the Wall Street Journal reported that there are tensions between the two companies as they simultaneously work together and compete on AI features. And, you know, in the past I've made videos about why somebody would pay for ChatGPT when they can just use Bing for free with GPT-4 built in, and it sounds like that's also a concern inside Microsoft and OpenAI right now. You can see in this article it says Microsoft and OpenAI have a rather unique partnership that has led to some conflict behind the scenes, as the two companies simultaneously support and compete with each other. That's because they both have chat models: you've got ChatGPT Plus and you've got Bing Chat, and both have GPT-4 built in.

Anyway, moving right along, it was announced this week that Sir Paul McCartney says artificial intelligence has enabled a final Beatles song. McCartney told BBC Radio that the technology had been used to extricate John Lennon's voice from old demos so he could complete the song. "We just finished it up, and it'll be released this year," he explained. We don't know the name of the song or necessarily when it will be released; we just know we might get a new Beatles song pretty soon, thanks to AI.

Also this week, Google announced a new generative AI model for virtual try-ons. Users can generate try-on images with AI, so, for instance, you could be selling a top online, upload a picture of yourself, and see what you would look like wearing that top. It uses diffusion models, similar to what we get out of Stable Diffusion, Leonardo, Midjourney, and tools like that, to add the top onto the person wearing it. This isn't some existing model they just applied to this use case; they actually created a new model using Google's Shopping Graph, which has all sorts of images of people wearing different types of clothes, and trained it on that imagery. They say that starting today you can use virtual try-on for apparel on women's tops from brands across Google's Shopping Graph, including Anthropologie, LOFT, H&M, and Everlane, and over time it will become more precise and expand to more brands.

Now, speaking of the aforementioned Google Shopping Graph, it actually works with today's sponsor, which of course is Shopify. I know I talk a lot about tech and AI and all of this stuff that may sound very complex, but when it comes to setting up an online store, you really don't need to be a tech genius. Literally anybody can set one up, and I know a lot of the people who watch these videos have their own online presence and are content creators yourselves. A great way to earn some extra income as a content creator, whether you're using AI or not, is to sell some of your own merch. Shopify, as many of you probably already know, is a commerce platform where anybody can set up their own store to sell physical products. They're making the complexities of running a business simpler so that anyone, anywhere, can now become an entrepreneur.
Next week I have a call with a team who's going to help me set up a line of merch for the Future Tools brand so we can start selling shirts, hats, and cool AI-themed goodies, and of course, when I set that up, I'm turning to Shopify to build the store, because while I love tech and I love learning about all the latest AI technology, I still want things to be easy to set up and easy to build, and Shopify just makes setting up these stores easy. That's the power Shopify provides: they're democratizing technology for entrepreneurs and helping build tomorrow's economy today. And the best part: if you go to shopify.com/mattwolf, you can get a free trial and start setting up your own online store. Now I'm going to get back to the AI news, but remember, you don't need to be some sort of tech engineer to create an online store; sometimes all you need is something really simple and easy to use, like Shopify. So thanks again to Shopify for sponsoring this video and allowing me to live my dream of keeping up to date with all the AI research and sharing it with you for a living.

Now, if you are going to set up an e-commerce store at some point, a really cool way to drive people to it could be this new trend that has been absolutely flooding Twitter lately: using ControlNet and Stable Diffusion with QR codes to make really, really amazing images. And when I say it's a trend, I mean literally everybody is talking about it. This thread from Rowan Cheung got 9.6 million views. My good buddy Linus Ekenstam has been talking about the trend. I came across this blog post over on stable-diffusion-art.com that breaks down how to set it up. Copy Sutra made a three-step tutorial on how to do it. Michael Gao created a tutorial here. AK on Twitter showed that there's actually a model you can download directly from Hugging Face and do it yourself inside of Stable Diffusion. And most recently, Rowan Cheung again made a tutorial with 1.3 million views, published on the day I'm recording this video. Now, this is a really hot trend, and the images coming out of it look really, really cool, but if I'm being totally honest with you, I've tried to generate these myself, and it seems like right now you can either get something that looks really cool or a QR code that works. So far I've had a really hard time getting something that looks this good where the QR code actually scans with my camera. That's a nut that still needs to be cracked, but I think somebody's going to figure it out, because some of them did work. When I was playing with it earlier, I did get a few to work, but the ones that worked were usually the ones that looked the least cool; the ones with the most detail, that really looked cool, didn't seem to scan. So while I love this trend, I love the art coming out of it, and I do think it's going to be a way to send people to places like your stores in the future, it's not quite there yet. As of right now, it's still something I love seeing all over Twitter, and it's also on my list of tutorials to make: once I crack this nut and figure out how to make images that both look good and scan most of the time, I'm going to turn around and make a YouTube tutorial about it for you. For now, I'm going to link to all of these Twitter threads showing the various methods below in the description, and you can try any one of them and see what you come up with.
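If you want to experiment with the technique yourself, the general recipe is Stable Diffusion plus a ControlNet that has been trained on QR codes, with the plain QR code image used as the control input. Here's a rough sketch using the diffusers library; the ControlNet checkpoint name is an assumption on my part (one of the community QR-code models on Hugging Face), so swap in whichever model you actually download:

```python
# Hedged sketch of the ControlNet + QR code trick with Hugging Face diffusers.
# The ControlNet checkpoint below is a community model and may differ from the
# one used in the tutorials linked in the description.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "DionTimmer/controlnet_qrcode-control_v1p_sd15",  # assumed QR-code ControlNet
    torch_dtype=torch.float16,
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

qr = load_image("store_qr.png").resize((768, 768))  # the plain, scannable QR code

image = pipe(
    prompt="cozy mountain village at golden hour, intricate, highly detailed",
    image=qr,                            # QR code as the ControlNet condition
    num_inference_steps=30,
    controlnet_conditioning_scale=1.3,   # higher = more scannable, less artistic
).images[0]
image.save("stylized_qr.png")
```

That conditioning scale is exactly the trade-off I ran into: push it up and the code scans but looks plain; pull it down and it looks great but your camera can't read it.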
Also this week, Google released a blog post about eight ways Google Lens can help make your life easier, and one of the things they mention is that it can actually help you search for skin conditions. You can take a picture or upload a photo through Lens and find visual matches to inform your search. It says the feature also works if you're not sure how to describe something else on your body, like a bump on your lip, a line on your nails, or hair loss on your head. So Google Lens's computer vision technology can essentially help you look into skin conditions. Now, I wouldn't use this instead of a medical doctor, but if you're curious or worried about something unusual, it might be a good way to double-check and ease your mind.

Now, this is some technology I'm really, really excited about. We've all seen videos like this one I made in Stable Diffusion, where there's a lot of flickering and a lot of little extra artifacts going on. Well, this new research, called "Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation," was just announced, and it looks to solve that. You can see that with this input video they were able to generate these videos, and they do not have that same flickering effect. In fact, they have a little demo here where you can see what a typical Stable Diffusion video looks like, with the flicker and the change as you skim across it, and then what their model does and how clean and clear it looks. They also have demos down here comparing it to other video models: here's an input of a statue and how the various other models generate the same video-to-video result. Here's another one where they take this girl as input and generate three different videos, and all three just look excellent, without the flicker. Another one here, and all of them look amazing; you don't have that flicker. This is going to make taking real videos and turning them into animation something anybody can do, without all the flickering, the weirdness, and the coherency issues we've seen out of other models so far, like Stable Diffusion and even models like Gen-1. We don't actually know when it will be available for the general public to use, but I'm really excited that this research even exists, because it means we'll probably see it in places like Hugging Face and added into other tools, if an API is ever made available or if it's open sourced for anybody to use.

Also, every single Wednesday, Midjourney does their office hours, where they essentially break down everything that's going on with Midjourney, all the news, and everything you can expect from it. I try to catch as many of those office hours as I can, but I can't catch them every week, and this week I didn't. But Ali Jules here on Twitter did, and she made a nice little TL;DR for us about what to expect from Midjourney after their June 14th office hours call. Here's her quick recap. She says version 5.2 of Midjourney is expected any day now; it literally could be out the day I'm dropping this video, because I record these on Thursdays and release them on Fridays, so by the time you're seeing this it might already be live. A few months ago they said they thought version 6 would be ready in six to eight weeks, but now it looks like they're estimating a July release, so it's a little bit delayed. And here's what they said we can expect from version 5.2.
They said it's coming in a matter of days and will be similar to version 5 and version 5.1, but it will have limited Discord-compatible outpainting, so you'll be able to zoom out, change the aspect ratio, or change the prompt between zooms. Now, I don't totally know what this means exactly, because I wasn't on the office hours call myself, but it sounds like we're going to get some new toys to play with inside of Midjourney. They also talked about their prompt analyzer, which right now you use by typing /describe and then uploading an image. They don't know if it's coming with 5.2 or not, but it is going to reduce the "word barf" prompts, where it shows you prompts that don't seem to have much to do with the image. They're working on a web and mobile standalone version to finally get it out of Discord, but it's going slower than they'd like, and they're trying to improve moderation so fewer of the words you enter get denied when you use certain prompts. Thank you so much, Ali Jules, for sharing that recap; it's super helpful for those of us who missed the Midjourney call on Wednesday.

And then, finally, wrapping up the news for the week: ElevenLabs, who is known for generating ultra-realistic text-to-speech voices (I've used them a lot in past videos), just introduced a speech classifier, which is essentially a tool where you can upload audio into ElevenLabs and it will tell you whether or not it was generated with ElevenLabs. ElevenLabs has gotten some heat in the past, because they're probably the easiest tool for taking anybody else's voice, plugging it into the system, and getting a deep-faked version of that voice out. But they also seem to be leading the charge in figuring out how to fight against that, by creating technology that can tell you whether an AI voice was generated with their system, to help cut down on the deep faking that is going to be inevitable as AI becomes more and more prolific. "Today we're thrilled to announce our authentication tool, the AI Speech Classifier. This first-of-its-kind verification mechanism lets you upload any audio sample to identify if it contains ElevenLabs AI-generated audio." They go on to talk about how they're taking a proactive stand against the malicious use of AI, and I, for one, am definitely down with that, because although I love this AI stuff, the implications for deep faking, fake information, and the spread of spam and scams are the part that scares me the most. So I love it when these AI companies make moves in the right direction to help prevent that kind of thing.

And there you have it: that's really the breakdown of all the news that happened in AI this week, up through Thursday. There's probably more news coming out on Friday that I missed in this video, but anything I missed, I'll make sure it makes next week's weekly news breakdown. I've got some really exciting videos in the works for you guys. There are so many videos I want to make, including tutorials on some really cool ChatGPT prompts that use the plugins, some really cool new Midjourney ideas, and some cool tech that's coming with Leonardo. I'm working on AI music generation tutorials and tutorials around Warp Fusion, so much cool stuff. If you're not subscribed to this channel, now's a good time to do it. If you're in the AI space and you want to keep up with the news, or if you want to learn how to do all of the things you're seeing spread across the internet, all of the viral AI videos going all over TikTok,
Instagram, and Twitter: if you want to know how to do that, I hope my channel can keep you in the know on how to do all of that stuff. I'm new to this whole having-a-popular-YouTube-channel thing, so I'm still trying to figure it all out. Sometimes my videos are hits and sometimes they're misses; I'm trying to figure out what you like to watch and make more of that, and I'm also trying to figure out what I like to make and make more of that too, and I hope you guys are here for it, because I'm having a blast doing this channel. I hope you're learning a lot, I hope I'm keeping you informed, and I hope I'm helping you understand how a lot of this stuff works, because that's really my goal. Once again, thank you to Shopify for sponsoring this video; I really appreciate it. And if you haven't already, check out futuretools.io. I update the news on that site every single day, so if waiting until Friday for a breakdown of the news isn't fast enough for you, I'm sharing the news in near real time over at futuretools.io and on my Twitter account, @mreflow. So check out those places and join the newsletter; I send one every Friday. And, yeah, lots of calls to action, probably too many, but there are plenty of places you can follow me; I'm all over the place online these days. I'm done word-vomiting at you. Once again, I really appreciate you; thanks so much for tuning in to these videos. I hope you keep coming back, checking out more of them, subscribing, liking them, pressing the thumbs up, pressing the bell, doing all that stuff. Love you guys. See you in the next one. Bye. [Music] Thank you.