
Original & Concise Bullet Point Briefs

Last Week in AI – Leaked Google memo, ImageBind, LLaVA

Andrey Kurenkov and Jeremie Harris Discuss AI News on the Last Week in AI Podcast

  • In this episode of Last Week in AI, Andrey Kurenkov and Jeremie Harris discuss the week's AI news
  • Andrey is finishing his PhD at Stanford and now works at an AI startup
  • Jeremie works at Gladstone AI, an AI safety company he co-founded, and has written a book called Quantum Physics Made Me Do It
  • The podcast also includes listener comments, questions, and corrections
  • They have added a listener-question segment at the end of each episode and are, in their words, "selling out" with an ad for the No Priors podcast
  • Most of the money earned from the podcast goes toward subscriptions to publications such as The New York Times and The Atlantic.

Future of AI in Data Management: Microsoft 365 Copilot and Informatica's CLAIRE GPT Lead the Way

  • Microsoft 365's AI-powered Copilot is getting more features and paid access
  • Informatica is going all in on generative AI with CLAIRE GPT, which provides a natural-language interface for data management tasks and could reduce time spent on key tasks by up to 80%
  • Microsoft's Copilot strategy is largely a branding move that integrates assistant features across its products
  • A general model like ChatGPT might draft a script or a data pipeline, but the result may not work due to hallucination and lack of governance, which is why such tasks are routed to fine-tuned, Informatica-hosted models (a minimal routing sketch follows this list)
  • Significant changes in the workforce are likely as data engineer roles are redefined by automation.
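
The episode suggests Informatica's "multi-LLM" approach likely works by routing each request to an appropriate model: general questions go to a ChatGPT-style model, while pipeline generation goes to a fine-tuned, governed model. The sketch below is a minimal, hypothetical illustration of that routing idea; the model names, keyword rules, and ModelEndpoint class are placeholders, not Informatica's actual implementation.

    # Hypothetical sketch of query routing between a general chat model and a
    # fine-tuned, governed pipeline model. All names here are placeholders.
    from dataclasses import dataclass

    @dataclass
    class ModelEndpoint:
        name: str

        def generate(self, prompt: str) -> str:
            # Stand-in for a call to a hosted LLM API.
            return f"[{self.name}] response to: {prompt}"

    GENERAL_CHAT = ModelEndpoint("general-chat-llm")          # ChatGPT-style model
    PIPELINE_MODEL = ModelEndpoint("finetuned-pipeline-llm")  # domain model with governance checks

    PIPELINE_KEYWORDS = ("pipeline", "ingest", "transform", "schema", "etl")

    def route(query: str) -> str:
        """Send pipeline-building requests to the specialized model,
        everything else to the general-purpose chat model."""
        wants_pipeline = any(k in query.lower() for k in PIPELINE_KEYWORDS)
        target = PIPELINE_MODEL if wants_pipeline else GENERAL_CHAT
        return target.generate(query)

    print(route("Build an ETL pipeline that loads March sales into the warehouse"))
    print(route("What does this column's metadata mean?"))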

Waymo Brings Autonomous Cars Closer to Reality, Expanding Its App-Based Service to San Francisco

  • Waymo has launched an app-based autonomous car service in Phoenix and is expanding in San Francisco
  • Waymo offers thousands of rides per day in Phoenix, but its San Francisco service is currently limited and requires riders to join a waitlist
  • Waymo and Cruise are the only two companies offering fully autonomous cars for public use in the US
  • Waymo needs a final permit from the city of San Francisco in order to offer paid rides
  • Self-driving cars are close to becoming reality.

Comparing Open Source and Proprietary Models for AI Development

  • Open-source models enable quick experimentation and iteration: outputs collected from powerful proprietary models such as GPT-4 can be used to create custom fine-tunes of smaller models for specific tasks (a minimal fine-tuning sketch follows this list)
  • Open-source models typically cannot match the performance of the large proprietary models across a wide range of tasks
  • Open source does not solve the scaling problem, since enormous amounts of data and processing power are still needed to create the most advanced AI models
  • Ultimately, while open source is an important tool, individual companies and industry players may still need their own specialized models.
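
The workflow described in the first bullet, sometimes called distillation-style fine-tuning, can be sketched roughly as follows with the Hugging Face transformers and datasets libraries. The checkpoint name, the query_teacher helper, and the hyperparameters are illustrative assumptions, not a recipe given in the episode.

    # Rough sketch: collect (prompt, teacher answer) pairs from a large model,
    # then fine-tune a small open-source model on them with a causal-LM objective.
    from datasets import Dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer, TrainingArguments)

    def query_teacher(prompt: str) -> str:
        """Placeholder for a call to a large hosted model such as GPT-4."""
        return "TEACHER ANSWER (placeholder)"

    prompts = ["Summarize this support ticket: ...", "Write SQL for: ..."]
    pairs = [{"text": f"{p}\n{query_teacher(p)}"} for p in prompts]

    checkpoint = "small-open-model"  # placeholder name for a small open checkpoint
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(checkpoint)

    dataset = Dataset.from_list(pairs).map(
        lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
        remove_columns=["text"],
    )

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="distilled-model", num_train_epochs=3),
        train_dataset=dataset,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # pads and sets labels
    )
    trainer.train()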

IBM Transitions to Product-Centric Model with Watson X

  • IBM is pivoting from its old consulting business model to a product-centric one with Watson X, a development studio for companies to train, tune, and deploy machine learning models
  • IBM will also pause hiring for jobs that AI could do
  • Watson X includes AI-generated code, an AI governance toolkit, and a library of thousands of large-scale AI models trained on language, geospatial data, code, and other data types.

Chegg CEO Reassures Investors After 48% Stock Plunge and Shares Rise 8%; AI Could Replace 26,000 IBM Jobs and Boost GDP by 7%; Palantir Sees Unprecedented Demand for Military AI Platform

  • Chegg CEO Dan Rosensweig called the 48% stock plunge over ChatGPT fears "extraordinarily overblown"
  • Following his statement, shares rose by 8%
  • IBM estimates that approximately 26,000 of its non-customer-facing jobs could be replaced with AI
  • One consulting-style report estimates GDP could increase by 7% due to efficiency gains from AI
  • Palantir is seeing unprecedented demand for its military AI platform, which offers ChatGPT-style tools for battlefield intelligence and decision making
  • The US DoD has a culture of test and evaluation for AI systems
  • Chegg's market cap plunged on fears that students are using ChatGPT rather than Chegg products
  • Shares rose after the CEO's statement.

AI Revolution: Ashish Vaswani and Niki Parmar Lead the Charge with Essential AI; Chegg Launches CheggMate

  • Ashish Vaswani and Niki Parmar, two former Google AI researchers, are creating Essential AI with funding from Thrive Capital
  • Microsoft is working with AMD on an expansion into AI processors
  • Chegg is launching a GPT-4-powered AI platform called CheggMate
  • Runway, a generative AI startup, raised $100 million at a $1.5 billion valuation from a cloud service provider
  • Vaswani and Parmar are co-authors of the 2017 paper "Attention Is All You Need", which introduced the Transformer architecture; several of its authors have since left Google to start companies.

$400 Million Raised for Transformer-Based AI: Adept AI and Niki Parmar Lead the Way

  • Adept AI, which Niki Parmar co-founded, has raised over $400 million for Transformer-based AI; Elad Gil is an angel investor in the project
  • Adept AI has attained notable recognition for its advances in AI technology, even as two of its co-founders have left to start their own company
  • Meta open-sourced ImageBind, a multi-sensory AI model that combines six types of data, reflecting the idea that the path toward human-level AI involves replicating the structure and behavior of the brain (a toy contrastive-alignment sketch follows this list)
  • MLC released a chatbot that runs locally on most recent iPhones and PCs, requiring no cloud service provider.
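
ImageBind's core trick, as summarized above, is to train a separate encoder for each modality against paired image embeddings with a contrastive objective, so everything lands in one shared space. Below is a toy PyTorch illustration of that idea under simplified assumptions; the tiny MLP encoders and feature dimensions are stand-ins, not Meta's actual architecture.

    # Toy contrastive alignment: an audio encoder is trained so that paired
    # (image, audio) examples get similar embeddings in a shared space.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    EMBED_DIM = 128

    def make_encoder(in_dim: int) -> nn.Module:
        # Trivial MLP stand-in for a real modality encoder.
        return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, EMBED_DIM))

    image_encoder = make_encoder(2048)  # e.g. pooled vision features
    audio_encoder = make_encoder(512)   # e.g. pooled spectrogram features

    def info_nce(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
        """Symmetric InfoNCE loss over a batch of paired embeddings."""
        a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
        logits = a @ b.t() / temperature
        targets = torch.arange(a.size(0))
        return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

    # One training step on a batch of 32 (image, audio) pairs of random features.
    images, audio = torch.randn(32, 2048), torch.randn(32, 512)
    loss = info_nce(image_encoder(images), audio_encoder(audio))
    loss.backward()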

Open Source Development Accelerates: LLaVA Achieves State-of-the-Art Accuracy

  • LLaMA- and Vicuna-derived models are frequently compressed to fit on edge devices
  • New frameworks are being created to enable more efficient open-source development
  • Hugging Face and ServiceNow released a free code-generating model under a royalty-free license, trained on over 80 programming languages and GitHub text including documentation
  • LLaVA combines a CLIP encoder with Vicuna to create an instruction-tuned multimodal model that achieves state-of-the-art accuracy on Science QA (a toy projection sketch follows this list).
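
The LLaVA-style recipe in the last bullet boils down to a small learned projection that maps frozen CLIP image features into the language model's embedding space, so visual "tokens" can be prepended to the text sequence. The sketch below illustrates that shape-level idea only; the dimensions are assumed for illustration, and the real system feeds CLIP ViT-L features into Vicuna.

    # Toy sketch: project CLIP image features into the LLM embedding space and
    # prepend them to the embedded text tokens as one input sequence.
    import torch
    import torch.nn as nn

    CLIP_DIM, LLM_DIM = 1024, 4096             # illustrative feature sizes

    projector = nn.Linear(CLIP_DIM, LLM_DIM)   # the learned vision-to-language bridge

    def build_multimodal_input(image_features: torch.Tensor,
                               text_embeddings: torch.Tensor) -> torch.Tensor:
        """Concatenate projected visual tokens in front of the text embeddings;
        the language model then consumes the combined sequence."""
        visual_tokens = projector(image_features)          # (num_patches, LLM_DIM)
        return torch.cat([visual_tokens, text_embeddings], dim=0)

    image_features = torch.randn(256, CLIP_DIM)   # e.g. CLIP patch embeddings
    text_embeddings = torch.randn(32, LLM_DIM)    # embedded instruction tokens
    sequence = build_multimodal_input(image_features, text_embeddings)
    print(sequence.shape)                         # torch.Size([288, 4096])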

Exploring AI Misalignment with OpenAI's GPT-4 Neuron Interpreter

  • OpenAI has created an automated system that generates plain-English explanations for individual neurons in language models such as GPT-2
  • This is done by using GPT-4 as an interpreter: it proposes an explanation for a neuron, simulates the neuron's activity from that explanation, and compares the predicted activations with the neuron's actual firing patterns (a minimal sketch follows this list)
  • This system could be useful for understanding AI misalignment, although it may not scale to larger language models due to massive compute costs.
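
The explain-simulate-score loop described above can be sketched as follows. The two model calls are placeholders rather than OpenAI's actual API, and the correlation score here is only roughly in the spirit of the paper's scoring method.

    # Minimal sketch: an "explainer" model writes a hypothesis for a neuron, a
    # "simulator" predicts per-token activations from it, and the score is the
    # correlation between simulated and real activations.
    import numpy as np

    def explain_neuron(examples) -> str:
        """Placeholder: ask a large model to describe when the neuron fires."""
        return "fires on tokens related to monetary amounts"

    def simulate_activations(explanation: str, tokens) -> np.ndarray:
        """Placeholder: ask a model to guess activations given the explanation."""
        return np.array([1.0 if t.startswith("$") else 0.0 for t in tokens])

    def score(real: np.ndarray, simulated: np.ndarray) -> float:
        """Correlation between real and simulated activations."""
        return float(np.corrcoef(real, simulated)[0, 1])

    tokens = ["The", "bill", "was", "$40", "plus", "$5", "tip"]
    real = np.array([0.1, 0.2, 0.0, 3.1, 0.1, 2.8, 0.2])
    explanation = explain_neuron(list(zip(tokens, real)))
    print(explanation, score(real, simulate_activations(explanation, tokens)))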

The Astonishing Advancements of Artificial Intelligence: From Autonomous Experiments to Mind Reading and More

  • AI systems are increasingly capable of interpreting large amounts of data
  • OpenAI has developed a model that can audit the safety of larger systems
  • AI can also "mind read" via fMRI scans, providing a general sense of what people are thinking
  • An AI system can autonomously run millions of microbiology experiments, at a rate of roughly 10,000 per day
  • Robots with many legs have been developed to traverse difficult terrain quickly
  • Small robots have also learned to drive fast in the real world.

US Export Controls Put Squeeze on Chinese AI: Impact on Tech Giants Uncertain

  • The US government has imposed export controls on certain chips, including Nvidia's, to China in order to slow China's development of cutting-edge language models
  • These restrictions are hypothesized to make training roughly twice as expensive in China and have affected the Chinese tech giants
  • However, their impact may not be huge, given these companies' large model-training budgets
  • The philosophy behind these measures is to degrade the Chinese AI ecosystem without cutting it off at the knees.

China, US Compete as Anthropic Launches Constitutional AI with Unique Principles

  • AI is an area of competition among China, the US and others
  • Anthropic considers Constitutional AI the best way to train models: a self-correcting loop bakes ethical principles into language models during training rather than after it (a toy version of the loop follows this list)
  • Anthropic has released its own constitution for Constitutional AI, which draws on sources such as the UN Declaration of Human Rights, Apple's terms of service, and DeepMind's Sparrow principles
  • Anthropic aims to democratize this process by being transparent about the values baked in
  • One amusing principle is to choose responses least likely to be viewed as harmful or offensive by a non-Western audience.
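
A hedged sketch of the "self-correcting loop" mentioned in the first bullet: the model drafts a response, critiques it against a sampled principle from the constitution, then rewrites it, and the revised outputs become training data. The ask_model helper and the two sample principles are placeholders, not Anthropic's implementation.

    # Toy critique-and-revise loop: draft -> critique against a principle -> revise.
    import random

    CONSTITUTION = [
        "Choose the response that most supports freedom, equality, and dignity.",
        "Choose the response least likely to be viewed as harmful or offensive "
        "by a non-Western audience.",
    ]

    def ask_model(prompt: str) -> str:
        """Placeholder for a call to the language model being trained."""
        return f"[model output for: {prompt[:60]}...]"

    def constitutional_revision(user_prompt: str) -> dict:
        draft = ask_model(user_prompt)
        principle = random.choice(CONSTITUTION)
        critique = ask_model(f"Critique this response using the principle: {principle}\n\n{draft}")
        revision = ask_model(f"Rewrite the response to address this critique:\n{critique}\n\n{draft}")
        # The (prompt, revision) pair becomes fine-tuning data; no human labels needed.
        return {"prompt": user_prompt, "chosen": revision, "rejected": draft}

    print(constitutional_revision("How should I respond to an angry customer?"))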

AI Safety: OpenAI and Anthropic Take Steps in the Tech World; img2dataset Creates Website Issues; Disclosure of AI-Generated Content in Political Ads on the Horizon

  • AI safety and alignment are of increasing importance in the tech world
  • OpenAI and Anthropic are two companies iterating on their constitutions, treating them as living documents
  • img2dataset, a free AI scraping tool, has left some website owners facing heavy server loads and expenses
  • Disclosure of AI-generated content in political ads may become mandatory under newly introduced legislation.

Government Moves Quickly on AI as Writers and Actors Strike, Discord's AI Music Space Grows, and Spotify Removes Boomy Songs

  • The government is moving quickly in response to events related to AI
  • Hollywood writers and actors are striking in part to seek limits on the use of AI-produced material such as scripts and outlines
  • Discord has created a space for people to make AI music, with over 21,000 users
  • Spotify has removed thousands of songs created with Boomy amid accusations that their stream counts were artificially inflated.

AI Art Reaches New Heights with Midjourney 5.1

  • AI art has been evolving rapidly
  • AI generated imagery can sometimes be controversial, particularly in relation to advocacy groups fighting against authoritarian governments
  • Midjourney 5.1 is a leap forward for AI art, with more opinionated and higher-quality results
  • It remains to be seen whether AI art will soon hit diminishing returns, given the marginal cost of scaling up image-generation models.

Students Embrace AI: Preparing for the Fast-Changing Tech Wave

  • AI is an increasingly popular career choice, and college students should prepare for and adapt to the fast-changing technologies
  • There are plenty of options available; however, some fields should probably be avoided
  • Experimenting with building things using AI tools is important in order to position oneself to ride the wave of technological breakthroughs
  • When planning a major, keep an open mind, as AI may turn out to be less interesting to you than anticipated
  • It is also beneficial to preserve optionality and focus on developing the ability to learn rather than memorizing facts.

Original & Concise Bullet Point Briefs

With VidCatter’s AI technology, you can get original briefs in easy-to-read bullet points within seconds. Our platform is also highly customizable, making it perfect for students, executives, and anyone who needs to extract important information from video or audio content quickly.

  • Scroll through to check it out for yourself!
  • Original summaries that highlight the key points of your content
  • Customizable to fit your specific needs
  • AI-powered technology that ensures accuracy and comprehensiveness

Unlock the Power of Efficiency: Get Briefed, Don’t Skim or Watch!

Experience the power of instant video insights with VidCatter! Don’t waste valuable time watching lengthy videos. Our AI-powered platform generates concise summaries that let you read, not watch. Stay informed, save time, and extract key information effortlessly.

foreign[Music]today's last week in AI podcast whereyou can hear us chat about what's goingon with AI as usual in this episode wewill summarize and discuss some of lastweek's most interesting AI news you canalso check out our last week AInewsletter at last week in dot AI forarticles we did not cover in thisepisodeI am one of your hosts Andre karenkov Iam just about done with my PhD atStanford studying Ai and I'm now workingat an AI startup by the way what do youhave to do before you're like your introsounds like I am now done with my PhD atStanford like is that just a defense Istill need to submit my dissertation soI did have a defense but uh I need toPolish Polish up my dissertation andsubmit it so everything is officiallydoneokay okay I was just wondering becauseevery time I hear that and I'm like whenit's not gonna happen for him I don'tknow but I know almost it's one monththat's a that's a jerk thing to say tooright I mean like like hey it's areasonable question like why am I notdone if I'm already working at a startupyou know yeah come on Andreum cool yeah no anyway thanks for thethe details hey everyone I'm Jeremy andin case you hadn't noticed my my voiceis uh the the voice of the other podcasthost here so I work for a company calledGladstone AI that I co-founded it's allabout AI safety I have a book it'scalled quantum physics made me do itavailable find bookstores everywhere uhcool what do we have this week Andreoh we have as usual quite a bit so we uhare gonna start structuring things alittle bit differently we've been kindof informally mentioning reviews andcomments in the last couple weeks andnow we've realized you know we might aswell just make a whole section at thebeginning before we get to the news justresponding to a couple comments orCorrections and then we also are addinga listener question segment at the endso every week we're going to pick outone topic or question that was sent inand spend maybe 10 15 minutes addressingIt And discussing it so yeah hopefullythat's gonna be a lot of fun and alsonew we are now selling out we're gonnahave a little ad but uh I think this oneis actually a very good fit so we're notyou know selling vitamins or somethingso let's go ahead and get that done withuh the ad is for the no priors podcastso this is a podcast with interviews ofpeople in AI it's co-hosted by Ella Gilland Sarah guo who have a lot ofbackground in industry and virtualcapital and they talk to Leading AIresearchers and Founders and askquestions like how far away is the GIWhat markets are at risk for disruptionall this sort of stuff and no prioritiesout now we have maybe I think a bunch ofepisodes with very cool people that areinterviewed so if you want to hearpeople on AI that might be a good optionby the way I love that we're at thepoint in the timeline where like theeveryday podcasts like yeah how far doyou think uh AGI is and like whichentire market segments are going to getnuked like that's I guess where we aretoday so fun fact by the way so anymoney we earned uh for last week in AIfrom the sub stack and this current admost of it that we spend goes tosubscribing to uh magazines andPublishing so New York Times or theAtlantic like to actually not getpaywall we subscribe to like tens ofthese so you're if you're giving usmoney you're helping journalists I guessalso we're both sporting an awful lot ofbling right now just like justgratuitous amounts of bling oh yeah wewere making so much money you know wedon't even need day jobs anymoreuh but also since we have a slow ad forno 
priors we might as well mention acouple other podcasts we like thatinclude interviews with AI researchersso I'll list a few first is the gradientpodcast I've actually uh started thatone a while back I think maybe two yearsago and did a bunch of interviews with alot of AI researchers we dive kind ofdeeply into a technical side of thingsand now it's hosted by someone else butstill really good variety of peoplesimilarly robot brains uh hosted bywhat's his name Peter arbio Peter Rubiois a professor at Berkeley who is veryvery influential and very prolific andhe talks to a lot of AI researchers andsome people in Industry uh also so yeahI think those two are pretty coolresources if you want to hear frompeople working in AI right now inaddition to no priors fantastic allright wellum with with our selling out nowcomplete with our with our bling wellearned and our magazine subscriptionspaid for uh what does this week looklike Andre like is this is this anothergiant Smorgasbord of like world-changingthings or or what's the shape it'sactually a little less water changingthis week I think it's going to be alittle less open source focused comparedto last week uh so yeah we'll we'll getto it in just a sec but first we alsowant to address these listener commentsuh we have a couple so uh on Applepodcast uh cool daduh had a nice review and mentionedJeremy that we could upgrade our audiosituation a bit which we actuallyapparently have just done right yes yesthank you cool dad so hopefully you'rehearing me louder and clearer this timeI checked the call settings on thisthing apparently my default microphonerather than my beautiful Scarlet soloprofessional studio whatever Mike waswas activated so hopefully this isbetter and yeah we're doing what we canto put a smile on your faces much betterwe also have a review from meme.addictanother pretty great username uhsuggesting we use the term interestingless probably that's me I likeinteresting interesting a lotbut yeah I should try and switch up myterminology a little bit and alsoavailableyeah there was also a comment on itbeing very difficult to add this reviewand it is very annoying to review onApple podcastso officially if you want to give us acomment or a correction or a questionyou can email contact at last weekin.aior you can comment on our sub stacked inlast weekend.ai or YouTube so those areprobably easier ways than leavingreviews all we do like nice reviews aswell and one more we will mention areview from Robo zero eight nine zero uhcool to hear this is a high schoolstudent who is hoping to one day majorin Ai and they asked if we could talkabout AI as a career and what theyshould shoot for in college and Beyondso that's actually going to be ourlistener question for this week andwe're going to address it after we talkabout all the news alrighty and just togive a high level overview and we'regonna jump in this week it's going to bepretty Business Heavy lots of story isabout companies pretty dramatic storyfrom Google with a leaked memosupposedly we also have a pretty largeamount of research and policy and safetyand also even more stuff on music in theart section so that's really been a bigtopic all right so kicking things offwith tools and apps we have Microsoft365's AI powered co-pilot is gettingmore features and paid access this is areally kind of shockingly generalpurpose tool that Microsoft isdeveloping and now starting to releaseincrementally so essentially this islike a thing that is a co-pilot not inthe like GitHub code assistant sensebecause that that 
tool is called copilotas well but this is Microsoft copilotand it's like a co-pilot for like allyour Microsoft apps so like you knowthink PowerPoint think Microsoftdesigner think word all that good stuffthey basically you can go in and be likeyo co-pilotum I want you to like generate a Marchsales report and the tool be like ohokay I know that sales reports areproduced by Kelly on the finance teamand created in Excel and kind of likehook you up with that basically so thisis a really I mean this is I don't wantto call it like intrusive but it's it'svery kind of general purpose useful AIseeping into every kind of crevice ofthe day-to-day with uh with Microsoftproducts and yeah pretty a pretty bigchange at least for those of us who aredinosaurs and grew up with like the oldMicrosoft Word where it was just humanstyping stuff a pretty big shiftdefinitely yeah and I think this uhapproach for Microsoft is pretty smartwhere it's kind of a branding thingalmost where you have GitHub co-pilotnow you have Microsoft 365 copilot andyeah it's all co-pilots we all kind ofintegrate with little assistant featuresso for example uh we've talked aboutMicrosoft designer last week where ithelps you design uh various swings likeInstagram posts you can generate imagesand PowerPoint or Flash out bulletpoints into whole paragraphs of coursein word you can Auto generate sentencecompletions and things like that inExcel I'm sure you can ask a naturallanguage to do some stuff and it willfigure out all the equations and so onto do that sort of thing so there's alot of pretty low hanging fruit in allthese products for what you can do and Ithinkprobably we're going to be mostly freeor maybe there's going to be asubscription to get access to thesefeatures that'll be interesting yeah I'msuper curious about just thehallucination aspect of this like youknow you think about like what thismeans to be like yo uh copilot make melike the finance doc sorry I'm not afinance person obviously make me thefinance DOC for the month and uh if thisthing hallucinates if it goes like ohyeah your revenues were this you reallyyou know and it makes up numbers if youhave inaccuracies at that levelum you know this could potentially befairly fairly dangerous so you know highlevel of reliability is going to berequired in these tools to to use themfor realsies obviously human reviewsgoing to be an important thing to do butuh but kind of cool as we start to takeour hands off the wheel here more andmore with these systems you know whereare we going to see them break where arewe going to see hallucination be anissue and so on yeah yeah we justchatted about from a UI perspective lastweek how it's not obvious necessarilyhow in it ishow we should integrate AI features intothese existing already powerful toolsand this will be not interesting it'llbe cool to see what Microsoft does uhwith these co-pilots so yeah interestingnot interesting not interesting it's notinteresting all right uh next up we haveInformatica goes all in on generative AIwith clear GPT so actually quite asimilar story in a way this Informaticais a provider of end-to-end dataManagement Solutions so things likeExcel but also consuming processingmanaging and analyzing data and they'renow planning to add this clear GPT thingthat is going to provide this naturallanguage interface to do things likedata Discovery data pipeline creationmetadata exploration data quality andrelationship exploration all the stuffthat you do with data as a big company Iassume so yeah another tool forbusinesses in this 
case yeah and to thecomment that we got from Robo zero eightnine zero it's so catchy about AIcareers right we're talking they mentionhere they're looking at realizing an upto 80 reduction in time spent on keydata management tasks so you think aboutthe redefinition of what it means to belike a data engineer or a data clean acleaning personum you know this is this is changing thelandscape of what's going to be expectedand and probably moving people moretowards like strategic functions in thisspace but anyway it's going to be reallyinteresting to see what effects that hason the workforce like how are jobsredefined because you know data engineeris not going to mean the same thing atmany of these companies than it may havein the pastalso another thing to note from thisarticle that I think is worth mentioninguh like you mentioned reliability andhost nation is very important to avoidwith data that you're presumably usingfor some important stuff so they go alittle bit into how this is leveraginguh multi-lm architecture multiple largelanguage models where we use some publiclanguage models for things like chbt andtank classification but also fine-tunedInformatica hosted language models thatgenerate data management artifacts andthe claim here is Chad GPTmight be able to design a script and adata Pipeline and it will but it mightnot work because of this stuff ofhallucination and lack of governance soif you want to talk about emote and youknowwhat is different here from just chatgptI think in a specific niche you actuallydo need to integrate it in some slightlymore nuanced waysyeah and and the multi llm thing kind ofself-correcting Architecture is thatsort of thing I think that we're seeinga lot more products like that where youknow these these language models havereached this critical threshold wherethey're actually able to offer input ina sense into each other's uh eachother's faults so anyway kind of cool tosee that um getting productized if infact that is what's going on here Ithink that's what they're gesturing atwhen they sayum the multi llm strategy is it was thatyour read I think it's more like forspecific functionalities and featuresyou will have multi llms so for somethings like just asking questionsuh or oh I see figuring out what youwant to do that's something like PTversus something that requires a veryvery robust uh execution like creatingscripts for data pipelining actual codethat might be actually created byInformatica and be more uh robust okayso they're like probably doing this withsome kind of query routing model thatlike takes the initial user query andthen goes like Farms it out to a taskspecific model that's the idea that's myimpression yeah okay cool cool alrightynext up uh maybe a less impactful storywe have linkedin's new AI will writemessages to hiring managers so prettymuchyou know what you can imagine from astory if you're on a message a hiringmanager on LinkedIn by the way owned byMicrosoft uh the AI will do it too it'llwrite a little like mini cover letter itmight include a few things from yourprofile as far as your experience andyou can avoid writing these initial likereaching out messagesyeah I think this is actually likepretending another really major shift injust the way everyday business is runlike so a sort of brief aside I I usedto run a company whose whole thing washelping people get hired in data scienceand so we talked a lot about Outreach onLinkedIn and how much of adifferentiator good writing is and sohere when you start to automate that andyou have like these 
really high qualitymessages and that's that's what at leastwe're seeing here in this screenshotthat uh that we're looking at like youknow this is no longer as much of adifferentiator and that used to be akind of proof of work that you would doas a job Seeker to show hey like Ibothered to write a personalized messagefor you and that would make you standout once that stuff washes out it'spossible like One Direction this couldgo is like the value of LinkedIn as anOutreach channel for people who arelooking for work starts to drop ashiring managers just get spammed toblazes so I I don't know that that's howthat's going to work out but it'sdefinitely one possibility that's raisedby the set this thing yeah and I imagineor I wonder if from the recruiter sideif you're getting messaged duvet gettold by LinkedIn that this was AIgenerated or do they kind of can't nothat's an interesting question and youcan get this by paying for LinkedInpremiums so I don't know what too manypeople have access to this right now butuh yeah it's kind of a again withMicrosoft really moving fast to do allsorts of things with language models Ilike that dig at LinkedIn premium tooalways nice to get one of those inyeah I I don't imagine many listenersuse that but maybe I'm wrong then wemove on to another story that doesn'tinvolve language models amazingly wehave way more one double service area inPhoenix and continuous growing in SanFranciscoso the story is you know you can get itfrom the headline waymo now covers a lotof uh Phoenix where they've actuallylaunched and anyone can use a servicethrough an app to get basically a Lyftor Uber type thing but with anautonomous car they've been testing inkind of localized regions so that aremaybe easier and now you can see thearea almost doubled andthey serve a lot of rides I think theymentioned here maybe thousands of ridersper day and similarly in San Franciscothey are still expanding they're more ofa limited access you have to apply to await list but we are also pushing theability to request rights there you canalready request rides 24 7. 
uh yeah I'mreally impressed by waymo and theirprogress and I think you know we'realmost there we're self-driving cars ifthey keep pushing it like this yeah Ijust all I know and I like I reallyshould know more about this like whichself-driving car companies are actuallylegitimately truly on the road todaywith fully autonomous like self-drivingcars I know waymo is like you know supercommitted to this they're even makingcars with like no steering wheel and thewhole shebang but like yeah anywayum it's it's kind of I was surprised tosee that they were actually straight updoing it uh in two different cities andexpanding in in such a in such a notaggressive way but such a quick way I'mkind of curious too as we start to seethese expansions happening in more andmore unusual cities cities that lookless and less like San Francisco likehow much are we going to start to seethese like out of distribution errorspopping up where you see robustnessfailures you know like for example theyou know car AI mistaking a snow bankfor the sky or something which was knownto happen previously like you know howhow are these the kinds of errors thatwe're going to see Gonna shiftum I think that's going to be aninteresting Dimension to look out forand especially as the Playbook evolvesfor launching in new cities there's alot here to learn about AI safety and Ithink we're going to learn a lot aboutthe safety and robustness of thesesystems going forwarddefinitely yeah waymo one launched tothe public back in 2020 actually andthey have since expanded to roughly fourtimes the size of the initial servicearea so I think that was very reallypilot sort of city where they reallykind of tried to figure out how youscale how do you adapt to a new city andI would say as far as I know in the USonly waymo and crews are actuallyrunning fully autonomous cars that's allyou can useand waymo to me appears to be the leaderin terms of a performance or a liabilitythey say that they are basically waitingfor their final permit to offer paidrides in San Franciscoso yeah I think they are you knowfinally self-driving cars are almosthere I think in hopefully within a yearor two they'll be in you know majormultiple cities including La maybe evenNew York I could see that's happeninghonestly waymoand up next is applications in businessthis is a a big article that's beendoing the rounds this supposedly leakedGoogle Memo from a Google Insider andit's called we have no moat and neitherdoes open AI now if you've beenlistening to the podcast for a while youknow we've talked about moats a lot whenit comes to scaled language modelsskilled multimodal models you know thequestion being like can big companiescontinue to make Cutting Edgeproprietary models that cost hundreds ofmillion dollars millions of dollars tocreate and get value from those modelsyou know the the risk is increasinglywe're seeing things proliferate into theopen source open source models are justgetting so good partly because oncesomeone invests a ton of money intobuilding a super powerful AI like gpt4you can take a much smaller model andlike fine tune it on outputs that youcollect from the bigger model from gpt4so you can like get gpd4 to basicallybasically create a custom Training setfor your your smaller model and thenmatch the performance pretty closely ofGPT for similar models with with somecustom fine tuning at least on specifictasks so this is basically the premisehere this author is saying like look inthe wake of the leak of this highlyscaled model from meta that's calledLama that we've 
covered before we haveessentially a stable diffusion momentfor large language models and you knowstable diffusion is the open sourcealternative to Dolly too and you don'ttend to see people using Dali 2 quite somuch these days it's mid-journey and andstability AI are the two kind of FrontRunners or so it seems and soum yeah anyway it kind of makes thisargument that essentially there's allkinds of advantages compoundingadvantages to open source you can stackdifferent fine tunings so you can likefine-tune your model for conversationaldialogue and then you can also fine tuneit for instruction following and thatkind of stacks together nicely so youhave the model that can sort of do boththe same same time and that allowspeople to experiment very quickly in theopen source which makes it moredifficult for like larger centralizedcompanies to compete a whole bunch ofinteresting arguments that will unpackhere but Andre I just want to Pivot toyou here like what were your thoughtswhen you uh when you first read thisyeah I think this isdefinitely on to something and I thinkwe've already discussed it last week weeven mentioned I think this being kindof a stable diffusion moment in the samevein you get one powerful model and fromwhere open source just goes crazydevelops a ton of tools quickly iteratesand improves upon it and we've seen thatin the past few weeks so this memoreally speaks to that I don't know if Ifully agree but there's no modethere's especially for openai gpt4appears to have a lot of tricks thatthey do that make it high quality butalso qualitatively different very largeinput window sizes contact sizes and thespeed is blazing fast and even faster ofClaude where if you use one of theseopen source models they are actuallyslower even if they're smaller so thereare advantages that you have with openAi and these big companies Andrea and Ialso think that for many things youdon't needa super good model you may not need agp4 GPT free depending on the use caseand in many cases I can imagine you needkind of a simpler model that is good fora single task you begin with a generalpurpose language model or chatbot andyou make it uh you know specific to whatyou need maybe like customer customersupport or question answering about yourparticular app Etc and for those thingsI think this memo is spot onit totally is right like like last lastweek we had not an entire episode onopen source but it was just like everyother storyum I I will say you know in agreementwith what you just said I mean there areparts of this article that sorry thispost that seemed somewhat overstated soone in particular you know the authormakes the claim that open sources likesolve the scaling problem to the extentthat anyone can tinker and like okay butthat's not the scaling problem rightlike basically if you want to make themost Cutting Edge AI models in the worldyes there's just no other way besidespre-training with a disgusting amount ofdata and processing power so that isstill going to be the caseum the open source models that we'reseeing proliferate now a lot of themfollow that process that we just talkedabout of like you know fine-tuning amodel on like data custom generated byyou know gbd4 or chat GPT and that leadsto models that can do specific tasksreally well but they they can't you knowsmash the performance of GPT 4 or acrossa wide range of tasks typically sousually what you find is people will sayoh I matched gpt4 in this like fairlynarrow subset like subdomain and thenthey'll say okay so so therefore mysystem is gpt4 level 
capability orwhatever and that we've seen that claima couple times I don't think we'veexplicitly flagged it when it's come upmaybe we should have but like that Ithink is part of you know part of beinga little bit more realistic about whatcan and can't be done today in opensource not that these aren't veryimpressive and important capabilitiesbut you know it's not like gpt4 is atrisk of being fully open sourced quiteyetexactly and I think the other thingworth noting is data right where so faryes there are open data sets that can beused to train but we've also beendiscussing how Reddit and stack Overflowand other services that provide a lot ofthis data will probably startgatekeeping and you have to pay largeamounts of money to get access to theirdata so to really for many commercialapplications I think remote will be justbeing able to have a data necessary andto specialize your model to whatever thecontext is so Google open AI maybewebner mode in the sense that they can'tprovide the best general purpose thingthat hits everyone's needs butindividual companies andplayers in Industry will have differenttypes of modes depending on their nicheand our next story here is sorry I haveto do it this way I just can't resistum AI will create a serious number oflosers deepmind co-founder warrantsum so basically it's just this so so itlooks like from the title it looks likethis is going to be an article aboutjust like I don't know the winners andlosers in the space or maybe job lossit's actually sort of just a moreGeneral Insider perspective on the wholeAI landscape and it's it's an interviewwith Mustafa Suleiman who is one of thefounders of inflection Ai and who'sformerly of deepmind and he's talkingabout all kinds of things includingUniversal basic income kind of flaggingthat this is something he thinks isgoing to be necessary he also providessome you know a little interestingInsider accounts of what it was likefrom you know the view from withinGoogle uh when chat GPT was launched andhe's kind of like you know look withLambda as he says we had chat GPT a yearand a half before chat gbt so Google wassitting on Lambda and hadn't launched itand then out comes chat GPT basicallybecause openai is a very kind of verybig on on move fast and break thingslike launch launch launch and this ispart of the race to the bottom Dynamicthat we've flagged before on the podcastlike you know open AI just decides to goahead and launch this uh this chat botwhen potentially there were malicioususe issues there were there were allkinds of of accident risks that sort ofthing and Google is sitting there kindof patiently tinkering uh with withLambda trying to make it safe and so onum always unclear what the right call isin these situations but it's it's a animperf an important and interesting dareI say dimension of of this whole uhissue I think I would actually disagreea bit on that story framing opening Iwas not that fast to break things theyhad to a GPD free playground for a whilethat was used by a much smaller numberof people with basically the samefreedom of child GPT andthere was they published the paper onreinforcement learning from Humanfeedback back in March of 2023 and ChadGPD came out all the way after that inNovember so I think they did spend adecent amount of time working at ChadGPT before release and that was part ofwhy it exploded it was just good it wasreliable and it was performantand Google had Lambda they had a chatbotthat was qualitatively similar then Ithink Blake Lemoine having his wholesentient AI story really 
pushed themback into kind of going to hiding andthat might have been a real game changeruh but yeah this is a pretty good takefrom a very influential person a deep myco-founder right defined being a veryinfluential firm doing a ton ofpublication and research so it'sprobably worth reading if you arecurious for the source or perspectiveyeah and good flag by the way on thechat GPTum uh fact check on me there I thinkwhat I was thinking of in terms of quickreleases you know when when theyreleased chat GPT with API oh sorry yeahthe chat GPD API with tool use I thinkthat was one of those things wherepeople said whoa like hold on a minuteall of a sudden we're now going to beable to like couple this to all kinds ofthings but you're right the actual likeopen AI with gpd4 as well we've seenthem you know take months and months uhbefore releasing something into the wildso definitely uh shouldn't mean to implythat they're rushing to itum but uh anyway yeah race to the bottomis a thing perhaps not in thisparticular issueand I think especially now with tragic Dhaven't exploded because they we oftensay this openly I did not expect Jr GPTto be as big as it was and I think a lotof us in AI kind of knew about languagemodels and knew that they could do a tonof stuff that was really interesting andit was just a matter of I guess gettingto a point where vui and services werethere for anyone to try itnext up we have IBM takes another shotat Watson as AI boom picks up steam somany of you may remember that IBM had awhole Watson thing of AI for maybe abouta decade where it was basically a suiteof tools that offered different AIpowered things that kind of crashed andburned it wasn't really good I've I'veheard some horror stories where it wasbasically underdeveloped and not verygood then it was kind of like aConsulting thing more than anything IBMsold its uh Watson health unitbut now you know they're doing Watson Xwhich will be totally different maybeand this is a development studio forcompanies to train tune and deploymachine learning models so actuallytotally different from what Watson wasyeah I guess we just want to keep abrand alive and they'll include thingslike AI generated code AI governancetoolkit and a library of thousands oflarge-scale AI models trained onlanguage geospatial data code and otherthings yeah it definitely feels like apretty big philosophical pivot from IVIBM's kind of slowly dying Consultingbusiness modelum kind of cool to see them go moreproduct Centric and I think it makessense they are partnering with huggingface to do this which is which is kindof coolum and as well like probably a big winfor hugging phase two in terms ofdistributionum but uh but yeah I mean I I think thismakes sense it's so hard to know IBM islike such a difficult company to toassess because they do have that reacheven though they're kind of like oldschoolum you know in terms of their uh a lotof their other models and practices uhthat distribution like you know youshould not underestimate it we sawMicrosoft teams like just absolutelykick Slack's ass when it first launchedfor distribution reasons as well so youknow who knows maybe maybe just couplingit to a platform like this you knowbreathes new life into the whole uh thewhole businessyeah and we've been discussing a lot ofthese b2c things so businesses that aimat consumers with things like Microsoft365 or uh you know various tools you canuse for your own purposes but this ismore B2B targeting Enterprise customersthat's where you can make a lot of moneyand that's where IBM 
is more experiencedAnother Story related to this uh alsoabout IBM came out it's IBM to pausehiring for jobs that AI could do so theCEO of IBM actually said this recentlythat they will pause hiring for rolesthat could be replaced to Ai and thatcould be 30 percent of their GlobalWorkforce that is non-customer facing sothat's about 26 000 jobsthey might pause hiring on which is apretty dramatic statement to makeyeah and you know if you think back touh maybe last episode of the episodebefore we were talking about one ofthese uh generic Gardener type reportsfor people like AI will create you knowseven percent to increase in GDP overthe next however many yearsum you know when you look at like acompany if assuming this isn'tcompletely off base the 30 figure inpractice like this is pretty significantuh in terms of the efficiency gains andthe GDP implications so I don't know I'mjust gonna chalk this one up as anotherdata point in favor of uh our sharedthesis from a couple episodes ago thatthe changes they might be a fair bitbigger than uh than yeah the Consultantsat Gardner or whatever might might thinkI will say I think it's a little morecomplicated GDP is gross domesticproduct so the question is are we goingto see a lot of jobs be augmented andbasically disappear and replace to AI inwhich case the total output stays thesame it just have fewer people doing itor do we see a lot more be created andsold and so on because we have ai and wecan do more that's kind of a balancethat is not too clear right now I'm sortof more seeing it as like we have nowreached a threshold of AI capabilitywhere this much value can be unlocked inautomated form so you know once you cando that in one domain I suspect thatlike a much broader industrytransformation is afootum obviously no way to know and andthese numbers too like they thesenumbers tend to not pan out you know bignumbers like 30 probably you know thatthat Target you know may not materializebut just to be at the stage where we'recontemplating that being an optionum you know feels like it might beindicative of something definitely andon to a quite different sector we have astory Peter feels palanter is seeingunprecedented demand for its military AIthat its CEO calls a weapon that willallow you to win so pretty dramaticstory of the shares of palantir were asup as much as 21 percent when theypreviewed the artificial intelligenceplatform a tool that can be used bymilitaries to use Chaturbate type toolsfor Battlefield intelligence anddecision making there was a demo videowhere they showed that you can do a lotof things like monitoring activityreceiving alerts and then asking achatbot for more details and asking theAI to guess what the information Maymean and so on you can pretty much usewith chatbot to analyze the data andthen make decisions and send out datasend out commands essentially tomilitary division so yeah this is apretty big toolyeah and and what a challenge too fortest and evaluation right I mean this isa high stakes application of AI you knowthe usdod has a a famous directive3000.08 that uh anyways that's moreabout automated weapon systems butthey've got a culture and tradition ofyou know test and evaluation with AIsystems and generally like eagerness toadopt but skepticism about adopting themquickly and so they're gonna have theirhands full I mean it's I'm guessing thatthey have already explored this tool alot for this to be happeningum but I'd be really curious like how isthis thing tested like how can you getassurances that the the battlefielddecision 
making that you're going to bedoing with this tool is actually goingto be safe and reliable and robust andtransparent and all those good thingsum because this is a very significantthing to be doing automating a lot of uha lot of Battlefield activitiesdefinitely and related to that there wasa second story from Vice I think isworth reading that covers a bit moredetail on the demo it's titled palantirdemos AI to fight Wars but says it willbe totally ethical don't worry about itso it walks through the demo includes avideo in there which really showcaseswhat it is and includes all these stepsof essentially seeing a scenario gettinginformationtelling the AI to get better photos andit launches a drone to take photosasking what to do about the tank that wedrone has seen and then the AAP the AIgenerates three possible courses ofaction and then you can send that off achain of command yeah it's it's reallykind of just streamlining what probablypeople would be doing anyway in thismilitary process is just making it muchquicker and including things like ChadGPT for clarifying informationand so because it's all human in theloop it might be a little less scarywhen when if AI we're just makingautonomous decisions and controllingArmy resources but still you can imagineit making mistakes and misclassifyingthings and maybe suggesting the wrongcourse of action and that resulting insome truly catastrophic outcomesyeah which is which is where it's sointeresting you know where specificallythe human shows up in the loop likethey've got humans making the decisionokay to send the drones choosing betweendifferent different strategies thingslike things like that so yeah what whatare the consistent choke points wherehuman interjection is going to bemaintained I think that's a reallyinteresting question of establishing notjust norms for the US military butInternational Norms around where humaninterventions maintained in this wholestructureum so anyway I think these are reallyimportant precedent setting activitiesand it's a space to follow for surenext up we have Chegg CEO calls 48 stockplunge over chat GPT fears quotesextraordinarily overblown and so thisfollows a story that I think we coveredI can't remember if it was like wementioned it last week we mentioned itokay that's what okay I think I read itand I was gonna cover it but then yeahso yeah essentially you know the checkcame out check is this big Ed techcompany and they came out and said lookuh we're uh we're getting our our clockscleaned here by Chachi BT presumably youknow students using chat GPT mayberather than check products that sort ofthing and as a result cheggs marketshare just plunged sorry not theirmarket share their market cap plunged uh48 stock price decrease uh on Tuesdaywhich was uh now the CEO Dan Rosenrosenweg is coming out and saying thisis like totally overblownum and then the shares went back up byeight percent so a lot of good a lot ofgood that that must do where we're nowback up to just 40 percent down on thatbut uh yeah I think just anotherindication of how dramaticum you know the impact of chat GPT evennow is uh and and Chegg is trying to getahead of this you know launching theirown gpt4 powered AI platform calledchegamate so I guess you know if youcan't beat him join him type thingum but really interesting to see this Ithink we're going to see a lot ofcompanies you know I don't know if youcan call it a flail but it's certainly alot you know reassessing their strategicOutlook and trying to be as AI Savvy asthey possibly can in light of all 
thecapabilities coming on the market itseems like all at oncedefinitely and actually in case it's notclear check their whole business ishelping students prepare for exams anddo homeworkso that's why there's all this uh kindof concern that you can prepare yourhomework and study of share gbt and it'squite good checks whole argument is wellyes but you may not always rely on ChadGPT being correct it makes things upsometimes so hopefully its own gpt4Checkmate can then be fine-tuned andmore specialized in having accurateanswersuh may or may not bea true hypothesis I'm pretty skepticaland it seems like the stock price reallyreflects thatback to Microsoft another story isMicrosoft working with AMD on expansioninto processors last week we talkedabout how Microsoft is working on theAthena chip to try and have its own chipto compete or just not rely on Nvidiaand this week we are hearing that theyare working with AMD a competitor toNvidia to work on another chip basicallyit's a multi-pronged strategy to developdifferent options so the shares of AMDjumped by more than 64.5 percentMicrosoft jumped one percent Nvidiastock declined by two percent uh so yeahkind of a big dealyeah also kind of an interestingindication of the the zero-sum nature ofthe competition here that some peopleare seeing betweenum presumably this Microsoft and Dcollab and then like Nvidia uh sointeresting to see the anti-correlationof those stocksum one of the interesting things toonoted here is that Microsoft hasactually spent about two billion dollarson its chip efforts which sounds like alot of moneyum but then and again I don't rememberif this was last episode that Imentioned this but uh apparentlyFacebook and their latest earnings calllike spent 30 billion dollars three zerobillion dollars roughly training theirAI models so when you think about likejust the scale of money that's beinginvested in trading budgets a twobillion dollar budget for um for chipefforts like it's it's big but um uh butyou know certainly within reasonyeah and I think we're seeing thatrelatively small number to reflect thatthis has been a growing project withinMicrosoft so they have been growing withsilicon division under uh former Intelexecutive Ronnie borkar and that groupnow has a staff of almost a thousandemployees so they are I think have spenttwo billion so far but that number isgoing to be way bigger this year andthey are going to invest way more uh itseems that that's one of the things awith with like Hardware it's actuallyreally hard to spend a lot of money onlike on Hardware development like itdoesn't scale super fast and you kind ofgot to do this in a very layered wayboth for talent and actual Hardwarepurposes so yeah no kind of cool to tosee the the effort grownext story generative AI startup Runwayjust raised 100 million at a 1.5 billionvaluation from a cloud service providerso we've mentioned one way a couple oftimes before they do AI for videoediting broadly speaking and they'reraised a new funding around of at least100 million that triples the startupsvaluation to 1.5 billion apparently thisis from a cloud service provider thearticle did not mention which one thatthat could be AWS that could be Googlecloud and this is kind of uh unusualusually you would get money from aventure capital firm but in this caseit's from a cloud provider which makes alot of sense for an AI companyspecifically because they might be usingthat cloud computeyeah and it's really consistent rightwith the broader trends that we've seenlike it kind of seems like these 
soyou've got the cloud providers and thenyou've got the model developers andthey're sort of like this naturalsymbiosis that's been happening latelybetween those two where a a modeldeveloper is always going to be insearch of a cloud provider at scale topartner with it we saw this withanthropic partnering with Google throughlike multi 100 100 million dollarInvestments we've seen this with openaiand Microsoft we've seen this withcohere in Google we're seeing it overand over and over again so it kind ofseems like this natural pairing and veryvery interesting it could be veryinteresting to see who retains TheLeverage in that relationship as timegoes on like do we see the modeldevelopers this is a question of wherethe modes are right and where where thevalue capture can happen in thisecosystem so that story is left Untoldright now a lot of this is speculationabout where the value will be hiding butuh but yeah that that pairing of ofcloud provider and model developer verymuch now being a very it starts withlook like a very consistent Trend hereexactly and to your point I think remotewill become much more aboutinfrastructure than model developmentwhich is still a major kind of componentto you or not going to scale up to thelevel of Google or Microsoft Azure orAWS as a small startup there's just noway to get into that spaceand speaking of Google last story inbusiness top X Google AI researchersrace funding from Thrive Capital so twoprominent former Google AI researchersAshish waswani and Nikki parmar are nowcreating essential Ai and they arepretty big as far as researchers go theyco-wrote or were a couple of the offerson the 2017 paper attention is all youneed which introduced the Transformerneural net architecture which powerschatgpt and kind of everything relatedto language models these days and awhole bunch of other stuff so therepretty good names because of the impactof that one paper essentially and yeahas a result I assume they are they canget money pretty easilyyeah the couple of interesting notesabout this one sort of Insider baseballand Silicon Valley I guess so so NikkiPalmar you know you you mentioned umAndre her contribution uh in theTransformer space she actually was oneof the co-founders of adept AI which Idon't think we've talked too much abouton the podcast but like basically thisis a a company that wants to build an AIwhere you can type in like hey book me ahotel and it just goes out and does thatfor any variety of different activitiesum sort of like Transformers that takeactions on the internet what's reallyinteresting here is she is presumablyleaving Adept AI to start this newcompanyum I think Adept At Last I was awareum had raised like I think initially was65 million dollar series a and then afew hundred million moreum oh yes it says is a 415 million yeahthat's it uh right yeah so it looks likethey've raised over 400 million dollarsso one of the questions that alwayscomes to mind you know if you have astartup like this that doesn't have aclear partnership with a cloud serviceprovider just like we've talked about islike how how much are they gonnaactually be able to hum along with theoutrageous compute budgets required byCutting Edge Transformersum or just Cutting Edge scaled modelsmore generally so uh sort of interestingto see that departure from Adept I thinkthat's a notable thing I don't knowwhether to read too much into that aboutthe state of things over at Adept butdefinitely something that uh I took tobe somewhat surprisingyeah I think it's kind of unusual but atthe same 
time I don't think we shouldkind of speculate too much uh I found itkind of cool that the article coveredhow these couple of offers are justamong uh a group really of the offers ofthis paper who have left Google andstarted companies so the startupcharacter.ai and the startup cohere areboth also started by uh formerco-authors and those are huge and veryinfluential so uh that is uh kind of agood story there should be like a littlebook on this paper and everything thatcame after it maybe yeah the attentionis all you need Mafiayeah also I just noticed we mentioned noprior's uh you know we sold out earlieryeah yeah and one of these AngelInvestors yeah Ella Gill is a co-host ofthat so that that tells you something Iguess all right and kicking off projectsand open source here some reallyinteresting stories the first one ismeta open sources multi-sensory AI modelthat combines six types of data so thisis an open sourced super multimodalmodel it's called image bind it linkstogether a whole bunch of differentkinds of data including text audiovisual data temperature and movementreadings and it's kind of cool becauseyou know we've seen a lot of image audiotext models sometimes mixed togethertemperature kind of infrared data andmovement are a bit rare So sort of coolto see these combined together but it isconsistent with what we know of meta'sphilosophy and specifically thephilosophy of Jan lacun who runs AI overat meta who kind of views the pathtowards human level AI or AGI as beingreplicating the structure at least thebehavior of the human brain and thehuman brain intrinsically is moremultimodal right we experience the worldthrough auditory visuals you know allkinds of sensory stuff and and so thisis really a reflection of thatphilosophy it's consistent with otherthings that we've seen that I do likedata to VEC a few years agoum and fundamentally it's aboutgrounding like getting multipledifferent kinds of data to all kind ofnot collaborate with each other but it'sall kind of anyway mutually back eachother up and um and create agents orsystems rather that that can perceivetheir surroundings in very rich waysyeah this is probably the biggest storyof Open Source and maybe in AI progressreally this week another example of metareally being keen on open sourcing a lotof stuff and another example of themstill being really at the Forefront of alot of research so this isit's a pretty big deal in terms of ifyou can embed images and audio and otherthings into the same space there's a lotof things you can build that rely onmultiple modalities which is still apretty challenging problem and they usea pretty novel idea here of basicallylike the title says image bind they arebinding or including pairs of image plusX image plus step image plus audio andby training on all these pairs at theend you have an embedding space for sixdifferent modalities it's actually notunusual in the computer vision researchto do things like image and depth and IRtraining that is sort of multi-model buthere depth with text with audio with IMUwith heat map ispretty you know hasn't been done beforeand is really big for things likeself-driving cars and anything thatactually happens in the real world wherelanguage models are not so equipped hereI think that is definitely a better fitthe ad from a robustness standpoint thatkind of multi-modality uh reallyimportant you know you can think aboutlidar plus visual plus IR plus you knowall these different thingsum so so yeah and this really is thefirst uh multimodal model that is thismultimodal 
Indeed. Then, moving on to a pretty small project, not by a company: "No cloud required: chatbot runs locally on iPhones and old PCs." This is an open-source project called MLC LLM; MLC stands for machine learning compilation, and it lets developers slim models down and make them easy to run. It comes from a group of researchers who go by MLC AI, who also let you run large language models in a web browser through a different project. We've mentioned Vicuna a couple of times; that's a fairly small language model, seven billion parameters, released just a month ago, and it's pretty good: not ChatGPT level, but maybe 80 or 90 percent of the way there, which is surprisingly good for something so much smaller. They've now released the ability to put it on your iPhone, and not just use it from your iPhone; it actually runs locally on the phone, or on a PC or Mac. It may not work on older iPhones with less memory; the article mentions you need maybe six gigabytes of RAM. Still pretty impressive.

Yeah, and it continues a bunch of trends we've tracked across episodes in open source. One is the use of LLaMA and LLaMA-descended models like Vicuna, which are coming up an awful lot. Another is the compression of models to fit on edge devices, which is obviously the focus here. And then, at a meta level, the creation of frameworks that allow for more efficient future development in open source, as open source starts to build off itself. There's a lot going on here that merges these different trends in one place, and I don't think that's a coincidence; we're seeing this acceleration in open source for a reason, with all these frameworks and capabilities coming online at the same time.

Yeah, and again, it's just crazy how fast things are moving. Vicuna was released a month ago, and now you can run it on your phone.

It's funny, you take that for granted. Especially on this show, we've talked about it three times, so it feels like old news, but that means only about three weeks have gone by.
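As a rough sanity check on that six-gigabyte figure, here is a back-of-the-envelope sketch of why a seven-billion-parameter model can squeeze onto a phone once its weights are quantized; the bit widths are illustrative assumptions, not MLC LLM's exact scheme.

```python
# Rough memory arithmetic for a ~7B-parameter model (weights only;
# activations and the KV cache add more on top of this).
params = 7e9

def weight_memory_gb(bits_per_weight):
    return params * bits_per_weight / 8 / 1e9

print(f"fp16 weights: {weight_memory_gb(16):.1f} GB")  # ~14 GB, too big for a phone
print(f"int4 weights: {weight_memory_gb(4):.1f} GB")   # ~3.5 GB, plausible on ~6 GB of RAM
```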
All right, last story in open source: Hugging Face and ServiceNow release a free code-generating model. Last week we talked about Replit releasing a code-generation model similar to GitHub's Copilot; this week Hugging Face and ServiceNow released theirs, fully open-sourced with a very permissive license that lets you use what it generates for whatever you want, royalty-free. It was trained on over 80 programming languages and on text from GitHub, including documentation, and it integrates with Microsoft's Visual Studio Code just like Copilot. So this looks like a serious competitor to GitHub Copilot to me.

Yeah, and this thing is no joke. They say Hugging Face supplied an in-house compute cluster of 512 Nvidia V100 GPUs, which is a decent amount of processing power; this is no small, piddling open-source project. Fifteen billion parameters. I'm impressed; I would not have expected such a powerful version of this to be open-sourced. It really does make you wonder about moats for Codex and similar models, though I do suspect the moat there will ultimately come from the base model being more scaled and the context window being longer. That makes such a big difference for software: when the context window is long enough to capture an entire code base, the amount you could do with that is probably pretty staggering. So this is a really big move in open source, and I'm very curious to see how it interacts with the proprietary competitors it's up against. But wow, big budgets being spent.

Yeah, and if you don't want to pay for GitHub Copilot, which is a little pricey (I think it's maybe $10 a month), you can give this a try.

All right, now research and advancements, and we're opening with LLaVA, another one of these fun, convoluted names for our news. We had LaMini, now we have LLaVA; it's out of control, everybody, we've got to calm down. LLaVA is a Large Language and Vision Assistant that connects a vision encoder and Vicuna for general-purpose visual and language understanding, trained via visual instruction tuning. Bold move, putting basically the whole paper in the title; I respect that. What's going on here is they're taking CLIP. Just by way of background: a million years ago, shortly after GPT-3 was released, or actually a couple of months after, OpenAI also released a system called CLIP, which is basically an all-purpose image model. You feed it an image and some text, and it tells you how well that text matches the image; it's like a captioning tool, essentially. So what they're doing is taking the CLIP encoder, hooking it up to Vicuna (there it is again), which is a conversation-tuned language model, and smooshing them together to create a multimodal model that they then instruction-tune. Instruction tuning is when you train a model to take in an instruction and then execute on it; in this case they want a multimodal instruction-tuned model that can take in an image and text, for example "describe what's in this image" or "count how many balls there are in this image," and produce the right output. This is the first time we've seen an open-source model like this that is both multimodal and instruction-tuned, which is kind of cool, and it apparently exhibits a lot of the same capabilities as GPT-4: an 85.1 percent relative score compared to GPT-4 on a synthetic multimodal instruction-following dataset. That's a narrow test, so it's not generally as good as GPT-4, but 85 is a good result, and on a couple of other benchmarks it achieves some really impressive performance, including state-of-the-art accuracy on ScienceQA once it was fine-tuned on it. So, pretty impressive, and another big moment for academia and research being able to access powerful multimodal models.
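A minimal sketch of the "smooshing together" being described: image features from the vision encoder are passed through a learned projection so they look like ordinary token embeddings to the language model. The dimensions and tensor names here are illustrative assumptions, not the paper's exact configuration.

```python
import torch

# Hypothetical dimensions: CLIP-style patch features (1024-d) projected into
# the language model's token embedding space (4096-d for a 7B Vicuna-class model).
clip_dim, llm_dim = 1024, 4096
projection = torch.nn.Linear(clip_dim, llm_dim)

image_patches = torch.randn(1, 256, clip_dim)   # patch features from the vision encoder
visual_tokens = projection(image_patches)       # now shaped like word embeddings

prompt_tokens = torch.randn(1, 32, llm_dim)     # embedded text instruction
llm_input = torch.cat([visual_tokens, prompt_tokens], dim=1)
# llm_input is fed to the language model, which is then instruction-tuned
# on (image, instruction, response) examples so it learns to answer about images.
```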
Yeah. If you go to their project website (you can probably just Google "visual instruction tuning" or LLaVA), they have an embedded demo you can use directly, and the paper is quite fun to read. As usual they include some examples. One shows a man attached to the back of a taxi cab, ironing his clothes, and they ask what is unusual in this image, or what is happening in the scene. The weaker answer is not great: it just says this is a man ironing clothes on an ironing board attached to the roof of a moving taxi, whereas LLaVA produces a much more accurate and much more detailed response. You can use it to explain memes too, if you're ever confused by one.

Yeah, clearly the next step is creating memes that are actually good. That image of the guy on the back of the car with the ironing board is something I've seen used in basically every paper that does multimodal question answering; for some reason they always ask "what's funny about this image?" or "what's unusual about this image?"

And the fact that this is also open-sourced, with the data, the code base, and the model code all out there, adds to what we've been saying about how quickly everything is coming together.

Does it ever get tiring saying, "yep, and now another open-source breakthrough"?

Maybe we should just stop saying it: "interesting, open-sourced, done."

So next up we have "Language models can explain neurons in language models," and this is a blog post, a research paper, and code, all coming out of OpenAI. It's kind of unclear how big a deal this is. I think conceptually it's a big deal; the question is whether they've actually achieved something significant in this specific instance with this technique. Just to lay a bit of background: one of the key questions OpenAI is grappling with, as they work toward more and more powerful AI systems, is whether you would be able to detect misalignment, that is, detect whether an AI system is making plans that are potentially dangerous or adversarial, whether there's inner alignment failure, power seeking, all the things that the alignment teams at OpenAI and elsewhere worry about. Being able to interpret the functioning of a language model is really important through that lens. Can we look at the neurons in a model, in this case GPT-2, and assign a plain-English meaning to each neuron, saying, okay, this neuron is looking for things like this? There are a bunch of mechanistic interpretability strategies, pioneered by people like Chris Olah, that are manual: they let you crack open a neural network and look at what a specific neuron is doing, usually by feeding in a bunch of different inputs and correlating those inputs with the neuron's activity. This is a little different. Here they use GPT-4: they give GPT-4 a neuron in GPT-2 and ask it to generate an explanation of its behavior, by feeding in a bunch of relevant text sequences along with the activations associated with that neuron. Then they get GPT-4 to simulate how a neuron that followed that explanation would fire. So GPT-4 might guess, okay, I think this neuron fires for possessives, or words that end in "s"; then you feed in a new sentence and, based on that explanation, it predicts which tokens in the new sentence would cause the neuron to fire. Finally you compare the neuron's actual firing patterns with the ones GPT-4 predicted, and that comparison gives you the score for the explanation. I hope that's not too complicated, but essentially this is a scheme that lets you automate the generation of plain-English explanations for what individual neurons in the network are doing.
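To make the pipeline a bit more concrete, here is a schematic sketch of that explain, simulate, score loop. The helper names and the correlation-based scoring are illustrative assumptions standing in for the actual GPT-4 calls and OpenAI's scoring details.

```python
import numpy as np

def explain_simulate_score(explain_tokens, explain_acts,
                           test_tokens, test_acts,
                           ask_gpt4_to_explain, ask_gpt4_to_simulate):
    """Schematic of the explain / simulate / score loop described above.
    The two ask_gpt4_* callables are hypothetical stand-ins for GPT-4 API calls."""
    # 1. Show GPT-4 some text plus the GPT-2 neuron's real activations on it,
    #    and have it propose a plain-English explanation of what the neuron does.
    explanation = ask_gpt4_to_explain(explain_tokens, explain_acts)

    # 2. On different text, ask GPT-4 to predict the neuron's activations
    #    using only that explanation.
    predicted_acts = ask_gpt4_to_simulate(explanation, test_tokens)

    # 3. Score the explanation by how well the simulated activations
    #    track the neuron's real activations on that text.
    score = np.corrcoef(test_acts, predicted_acts)[0, 1]
    return explanation, score
```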
Yeah. In my view, this doesn't seem like a huge deal. First of all, it's not very surprising. We already know you can look at what a given neuron focuses on in a given input; as one concrete example, you can have a neuron that focuses on fractions, and you can see that in a given paragraph it basically activates on the fraction components. And the way this works is very straightforward: you say to GPT-4, here's GPT-2, a smaller language model, and here's where this neuron activates on this input; based on what it's focusing on, what do you guess its general focus is beyond this one example? It's not surprising that this works; a human can do it pretty easily, and this just automates the process. That said, there are a lot of interesting findings aside from the specific technique. If you go to the website it's kind of fun to browse: you can read the paper and view neurons, and see examples of the neurons they find, like neurons for Canada, fractions, citations, certainty, lots of these examples. The other question is how this is actually useful beyond explainability. There's an argument that you could find bad neurons that do things you don't want, and maybe edit out certain knowledge or certain ideas within a language model. At the same time, this is very expensive: a large model has billions of parameters and a huge number of neurons, and you need GPT-4 to look at the activations of each one and skim through to see if it can find anything, so for very large language models this might not be scalable. So there are some caveats on the impact and the significance, but it's still very cool, and it's a pretty different approach from what we've seen, even if it is pretty logical in how it's implemented.

Yeah, and I think the strategic significance of this, from the standpoint of OpenAI's alignment agenda, is that they plan on using AI to help them align AI, and the advantage of this scheme is that it does seem like a potentially scalable way to do that, albeit with massive compute costs, which they're going to have to figure out how to deal with. This is GPT-2, after all; they're not even looking at GPT-3 or GPT-4, and even with this relatively small and simple system, with a few hundred thousand neurons, that's a lot of compute. It's also unclear whether you can actually successfully use a smaller system to interpret a bigger one; that's a deeper question. Presumably you want to use this for safety, so you want a smaller model that you know is trustworthy and that you've actually audited to check a larger model, and it's unclear whether that extends. And there are all kinds of further questions: okay, you can look at individual neurons, but what about circuits of neurons? That's its own challenge. So there are a lot of interesting possible avenues for improving this and pushing it further, but for the moment, I agree: it's a beachhead, a first step, and let's see where it goes from here.

Yeah, and kudos to OpenAI for publishing a paper and the code and dataset. It's not publication in a conference with peer review, but arguably that's a broken process anyway, so cheers to them for still doing research on top of making money.

All right, next, a very impressive story: "AI is getting better at mind reading," from the New York Times. We've seen before how, if you implant something in the brain, you can basically read out what people are thinking, in some sense. This is the next step, where you don't have to implant anything: you can just use fMRI scans, which measure the flow of blood to different regions of the brain, and decode roughly what words and sentences people are imagining. For instance, there's an example where a person was thinking, "I got up from the air mattress and pressed my face against the glass of a bedroom window," and the sentence decoded from their brain activity was, "I just continued to walk up to the window and open the glass; I stood up on my toes and peered out." It's not the same, but it gets the general gist of what's happening. Another example: the message was, "look for a message from my wife saying that she had changed her mind and that she was coming back," and the decoded version was, "to see her for some reason I thought she would come to me and say she misses me." These were pre-written passages, so the decoder is capturing the gist rather than the exact words.

Yeah, and it's also consistent with other research we've seen doing this in the visual domain, where people reconstruct what patients are seeing, so we're definitely approaching the human senses from different angles. One of the little notes I thought was interesting is that training the model takes a long time, and to be effective it must be done on individuals. Maybe not too surprising, but it does mean that variation between individual brains, and the way individuals process information, is still relevant here; there isn't a one-size-fits-all model that can reconstruct this for everybody.

Yeah, so it's pretty limited. fMRI machines are also big and bulky, and they're using language models as part of the decoding, which is expensive to compute. So don't worry about any sort of mind-reading implants or hats anytime soon, but it could definitely be useful for, say, paralyzed people and applications like that.

Next, on more of a science front, we have the story "AI could run a million microbial experiments per year." This was just published in Nature Microbiology, and the finding is that you can get autonomous scientific experiments run by AI, up to 10,000 per day, and with this you can collect a dataset. They have an AI platform called BacterAI: once you collect a bunch of data from these autonomous experiments, you can train a model to predict things about the bacteria, for example what combinations of amino acids can support the growth of certain bacteria. So you can move much faster at understanding a lot of bacteria with AI.

Yeah, and one dimension that's really exciting about this sort of thing is replicability, because obviously there are major issues, especially in biology, around replicability of results. So if you can automate the experiments and have a very clearly auditable process for your data collection, maybe that leads to more robust results.
That could be a cool implication of this, exactly. This is a method for data generation and then using that data to create a model, so you can also publish the data for other scientists.

Let's go to robotics for a couple of stories, just for fun. First: scurrying centipedes inspire many-legged robots that can traverse difficult landscapes. Usually we think of robots as wheeled agents or baby humanoids; here we have centipede-looking robots with many legs that are pretty decently sized, and you can see the images in the story, from Georgia Tech. They're good in the sense that they can move over rough terrain and tricky areas where humanoid or wheeled robots may struggle.

Yeah, and they mention that they actually developed a new theory of multi-legged locomotion for this, which is sort of interesting. I didn't know that was the process for building systems like this; I guess first you need some sort of control theory, or whatever it would be, but it seemed to work. Some human, handcrafted equation work went on in the background here before unleashing the robots.

Yeah, and this is related to a general movement: there's a whole soft robotics field that looks at differently embodied robots that can move via inflation, or general animal-type designs that aren't necessarily mammals, and this is another example. You could use this for things like search and rescue, or in agriculture, where other robot bodies may not be as suitable.

One more robotics story: little robots learn to drive fast in the real world. From Berkeley, we have a story on what are basically RC cars, as you can imagine, and how they found a way to get them to train really fast in the real world. Instead of what we mostly see with language models, training on the internet, here the robots actually need to drive around and fail and succeed, and that's really expensive. They have a technique that does some pretraining offline, on a dataset collected from humans, and then it's really fast to train by trial and error.

Again, that division between pretraining and online learning: it's interesting to see that continually coming up as a very clear separation. We see it with language models, and it's cool to see it extend into RL, though that's been a thing for a while now.

Yeah, we don't discuss reinforcement learning that much on this podcast, just because it doesn't make the news as much, but there's still a lot of research on it, and I think in the long term we'll see a lot of it in these chatbot systems.

Progress in that field has been steady, eh? That's one of those things in the background; you do see breakthroughs, people creating more and more efficient systems. I'm really curious at what point we cross some compute threshold and RL becomes competitive, or something like that. I'm really curious about where that ends up going.

Indeed. I found it kind of funny that this article, from IEEE Spectrum, actually made a little mistake: they said this leverages a "foundation model," which is a term used to describe things like GPT-3 and GPT-4, not what this system is doing; this is just using a model as a foundation. So of course the Stanford guy has an issue with the use of "foundation model."

Ah, yeah.

Sorry, inside baseball: Stanford is where the term "foundation model" was coined; that's what I was getting at. It's just kind of funny if you're in AI. I was kind of confused: are they using GPT-4? It feels like it could be, because you could use that as a base... anyway.

And just one last story for robotics; this one is kind of cute, so it'd be fun to highlight: the latest pitch for AI, DeepMind-trained soccer robots. If you imagine little tiny humanoids, maybe a foot tall, running around a little soccer field kicking a ball, that's exactly what this is, and it's very tricky from a learning perspective. This is also deep RL with trial and error, and there's a whole paper describing how you can train the robots via trial and error, splitting the task into multiple steps, and in the end you have a video where these two robots actually play soccer. They can kick the ball, block, recover from falling, all sorts of stuff. Even though it just looks cute, it's also pretty impressive.

Yeah, soccer has kind of become the new walking. You know how for a while the big challenge was whether we could get a bipedal robot to do this or that? For some reason there seems to be some convergence around this task; kind of interesting to see.

Yeah, and it actually harkens back: there's a thing called RoboCup, which is basically a robot soccer competition, where colleges (and I think high schools, but definitely colleges) create programs to compete with their own software and robots. So this is related to that, with a different kind of robot and more of a research focus, but robot soccer is going to be a thing.

All right, kicking off policy and safety: "China's AI industry barely slowed by US chip export rules." This is about a set of export controls the US government has put in place in different tranches over the last couple of years. Right now these restrictions are imposed on, among other companies, Nvidia, and they're not allowed to sell their cutting-edge Hopper GPUs to China as-is. The H100s that Nvidia sells in the US are different from the H800s being sold in China; the Chinese versions have been downgraded in some very specific ways that were hypothesized to slow down Chinese development of cutting-edge language models. In particular, the ability to transfer data between chips at high speed has been capped. That's a really interesting dimension of this, because it dates back to a time when people thought AI scaling would involve very rapid growth in model sizes, which would mean storing your model across a lot of different chips, and in that context chip-to-chip transfer speed becomes a key bottleneck. But now that we're seeing a bit of consolidation around not-quite-so-big models, the argument here, in part, is that the effect of these measures has been blunted; it hasn't actually slowed down progress in China all that much. It seems like the net cost is to make training roughly twice as expensive, say 100 percent more, in China, and as they write in the article, that's a big cost but not necessarily a giant deal for tech giants like Baidu and Alibaba that have really big budgets for model training. So it's interesting to see the export-control game starting to get fine-tuned, and some of the holes being exposed through articles like this one.
Yeah. So far the impact may not be huge; the fact that some tasks take 10 to 30 percent longer isn't a ton, and doubling costs is a lot, but these giant companies can handle it. The question is, as we keep going, and if we do keep seeing larger and larger models, whether the same restrictions become much more painful as China is stuck with smaller compute capabilities. It was a very dramatic move from the US to actually put down this restriction last year, and very combative.

Right, and their philosophy here is to degrade China's ecosystem but not cut it off at the knees. The goal is to give them a bit of a safety net for the moment, since this isn't a crippling blow, but over time the hope is that as chip technology gets better and better, the gap between Western processors, Western chips rather, and Chinese ones starts to grow. So it's interesting to see how they've decided to split the baby here, and we'll see if it ends up actually being effective.

Yeah. It's surprising; I feel like we haven't seen that much dialogue on this whole AI race angle that we cover from time to time, with ChatGPT and so on. We haven't seen many stories asking whether China is going to catch up, but once Baidu and Tencent, or whoever else, start releasing their ChatGPT equivalents, maybe that becomes a bigger consideration for the US.

Well, and we've seen that, right? Baidu launched their own sort of ChatGPT equivalent based on Ernie a few weeks ago, and its performance is considered pretty good, so for the moment it seems that if they're not holding their own, they're at least in the arena. But again, over time, as the gap starts to grow, maybe that starts to change and we see a more noticeable delta there.

All right, and next up we have "Anthropic thinks constitutional AI is the best way to train models," and related to this, they also recently shared the constitution that their constitutional AI scheme uses. Just as a primer, by way of background: constitutional AI is Anthropic's own special secret sauce, a strategy they created to better align large language models. The way it works is that you have a language model that generates some kind of output, and maybe that output is flawed; maybe it tells somebody how to make a bomb, or something like that. So then you get another language model, or potentially the same model depending on the scheme, to evaluate that output from the first model according to some constitution, and that constitution might contain principles like "make sure the output is benign, benevolent, honest, trustworthy," and so on. It evaluates the output of the first model, then writes a kind of corrected version of that output that is consistent with the constitution, and the first model gets retrained on the corrected output. So you have this loop where AI is sort of self-correcting during the training process, baking in these ethical, constitutional principles into the training process itself. That's different from reinforcement learning from human feedback, which, number one, requires actual human feedback; this is a totally automated process, based on the constitution that the AI uses to self-critique.
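In pseudocode, the critique-and-revise loop looks roughly like the sketch below; here `generate` is a hypothetical stand-in for sampling from the model being trained, and the principles are paraphrased, not Anthropic's exact wording.

```python
# Minimal sketch of the constitutional critique-and-revise loop described above.
CONSTITUTION = [
    "Choose the response that is as harmless and ethical as possible.",
    "Avoid responses that are toxic, racist, or sexist, or that encourage illegal behavior.",
]

def constitutional_revision(generate, prompt):
    """Produce a revised response by repeatedly critiquing a draft against principles."""
    draft = generate(prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Critique this response according to the principle: {principle}\n{draft}")
        draft = generate(
            f"Rewrite the response to address the critique.\n"
            f"Response: {draft}\nCritique: {critique}")
    # The (prompt, revised draft) pairs then become training data for the model,
    # which is how the principles get baked back into the weights.
    return draft
```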
And it's done during training, rather than only afterward as part of fine-tuning, so in some sense you get to the core, the essence, of that pretrained model with this constitutional AI scheme. There are a lot of interesting advantages to this that Anthropic goes into in the article. It does seem to lead to models that are less likely to succumb to adversarial inputs, and that are harder to hit with prompt-injection attacks and things like that. Anthropic in this article also talks about their philosophy in designing the constitution: what goes in there, what principles do you actually have it follow as you use it to train this large language model? They talk about using the UN Declaration of Human Rights, among other things, and Apple's terms of service, which is kind of funny: the UN Declaration of Human Rights doesn't contain anything about modern issues like digital privacy, and Apple's terms of service actually do, so for some reason Apple is the place they turn for those things. They also borrow principles from other places, like DeepMind's Sparrow principles; Sparrow was a system DeepMind built to test some alignment strategies back in the day. So it's kind of cool and kind of interesting, because all of a sudden it opens up a very transparent channel to evaluate the principles being baked into these systems through alignment: we have a plain-English constitution that, in principle, anybody can comment on.

Yeah, I'm a big fan of this approach. Anthropic actually published a paper on this in mid-December last year, but they just published this blog post, "Claude's Constitution," which includes the full constitution; you can actually read what they include, and that's why we have this article. It's a very straightforward idea: it's basically like learning from human feedback, but instead of a human you have another language model, and you know exactly what is baked into it. They have principles like "please choose the assistant response that is as harmless and ethical as possible; do not choose responses that are toxic, racist, or sexist, or that encourage or support illegal, violent, or unethical behavior; above all, the assistant's response should be wise, peaceful, and ethical." So it's a loop where you have the ability to double-check your output and make sure it follows some principles. It's very scalable, and it appears to work better than learning from human feedback in general, so if you're looking to align your model, something like this is at least a good thing to consider.

Yeah, and it's really interesting for the transparency it offers too. They talk about this in the article: they're looking for ways to democratize this process, to gather people's views about what should go in this constitution, and you can actually scrutinize it. With reinforcement learning from human feedback, all you're really doing is getting a bunch of humans to train a reward model that's used to evaluate a language model's output, but what goes into that reward model, what it actually ends up training the language model to do, is totally inscrutable; it's just a black-box neural network. Whereas here, at least you have some visibility into what values are being baked in. One of the more amusing things I noticed: they talked about wanting to include non-Western perspectives, non-Western values, and I thought, oh, interesting, are we going to see something about, I don't know, Confucian philosophy? But no, it's a principle that says "choose the response that is least likely to be viewed as harmful or offensive to a non-Western audience." So it's a somewhat brain-dead way of doing it, relying on the model's internal understanding of what a non-Western audience would want, but still kind of interesting. This is a first step, and they talk about how they're constantly iterating on their constitution and treating it as a living document, which is really cool.

Yeah, and the constitution is pretty long, so there's a lot to it, a lot more than what you mentioned. If you're concerned about AI safety or alignment, it's good to know that these big companies, OpenAI and Anthropic especially, are looking into new, scalable techniques for keeping things in check, and this one from Anthropic is pretty promising.

Next, another story: "An AI scraping tool is overwhelming websites with traffic." There's a tool called img2dataset on GitHub, and its creator, Romain Beaumont, has been facing criticism from website owners who have seen a lot of load on their sites because of the tool looking up and downloading images. It's a free tool that you can launch to automatically download and resize a list of image URLs; we've seen this done before for things like LAION and other datasets, but this lets anyone who downloads the code do it. This is a fairly detailed article from Vice on how that has impacted certain people. One concrete example is the person who runs the website Open Benches, where you can upload pictures and locations of benches; it hosts around 250 gigabytes of photos, and they had to pay to scale up their server and pay extra for the added traffic.

Yeah, this really reminds me of that GitHub repo we talked about last week, GPT4Free, where yet again we have an example of somebody open-sourcing a tool that can be abused, if you wanted to, to induce massive processing and bandwidth bills for third parties. The ethics of this are really complicated: do you really bear no responsibility for putting out an open-source tool that could potentially, I don't know about ruin businesses, I wouldn't take it that far, but definitely cause some harm? I think open source is going to be an interesting policy area to watch over the next few years.

Yeah, and another aspect here is how it's implemented. This img2dataset tool will scrape any website unless the HTTP headers include tags such as X-Robots-Tag: noai or X-Robots-Tag: noindex. So it's basically opt-out: you have to modify your website's configuration to opt out of this specific tool. And Romain Beaumont is very insistent that the criticism is not fair, that it's for the best that we can develop AI, and that it's selfish not to help unlock the potential of AI and open AI development. So, kind of a thorny issue. There are definitely things to criticize: you probably shouldn't scrape things that are copyrighted, of course, and it should probably be opt-in rather than opt-out, which this article goes into. But we'll see more of these kinds of projects for scraping the web, and basically any website owner now has to contend with people trying to scrape their data for AI.
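For website owners, the opt-out described in the article amounts to serving an extra HTTP response header. As a minimal sketch, assuming your site happens to be served by something like Flask (any web server or CDN can set the same header in its own configuration), it could look like this:

```python
# Sketch of serving the X-Robots-Tag header the article says the scraper checks.
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_noai_header(response):
    # "noai" is the tag mentioned in the article; "noindex" is the other option it lists.
    response.headers["X-Robots-Tag"] = "noai"
    return response

@app.route("/images/<name>")
def serve_image(name):
    # Hypothetical route serving images from the app's static folder.
    return app.send_static_file(f"images/{name}")
```

Whether any given scraper actually honors the header is up to that scraper, which is exactly the opt-out-versus-opt-in concern raised here.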
On to a pretty dark story. It's titled "'Mom, these bad men have me': she believes scammers cloned her daughter's voice in a fake kidnapping." Jennifer DeStefano got a phone call from an unknown number, and when she picked up she heard what sounded like her daughter screaming and sobbing, seeming to be in great distress. Then a man claimed that they had her daughter and asked for a lot of money. Thankfully, after a few minutes it became clear that this was a hoax; her daughter was actually fine, and they reached her and confirmed nothing was happening. So it got resolved very quickly, but it's definitely an example of something that will be happening a lot.

Yeah, it's sort of striking, because this stuff was hypothetical for so long. People would talk about the malicious uses of AI and how soon such-and-such would be possible, and here we really have one of the first very public examples of it being used. Imagine what happens when this sort of thing starts to occur regularly, when this tech truly proliferates and becomes so user-friendly that it's a default part of the arsenal of scammers, hackers, and malicious attackers. We're going to have to rethink how we respond to phone calls. I'm old enough to remember when my grandparents would get a scammy call from somebody trying to get them to buy some software; that was all very expensive for the scammers to pull off, but an automated system that can do this really changes the game in some fundamental ways.

Yeah, exactly. So we'll all have to contend with more powerful scammers and understand how to defend against them; there are some nice tips in this article, and as always you can find the links to all these articles in the podcast description or on Substack. One last thing I'll say is that I think this points to it being increasingly important to have data privacy now. You may want to set your Instagram or other social media accounts to private, because if your voice isn't out there on the internet, easy to download, then people can't clone it and run scams with it. For many people, that may mean wanting to lock things down a bit more.

Last story in the section: a bill would require disclosure of AI-generated content in political ads. We covered last week how an attack ad against Biden was released that included entirely synthetic imagery, and this Tuesday legislation was introduced that would basically require disclosure of AI-generated content in political ads. Not too much else on this; it's just the general idea, and not too surprising, I suppose.

Yeah, it seems to make sense. It's consistent with what a lot of people have been asking for: if you have a chatbot, for example, you should disclose that it's a chatbot and not a human being. This seems philosophically related, and it directly pertains to the electoral process, which seems like something that could use some tightening up.

Indeed, and it does show how the government, and the laws being introduced, will probably move fast in response to current events related to AI; this was very fast, just a week after that event.

Yeah, things that policymakers can see directly tend to get dealt with a lot faster. That's why the grass is so green next to Congress.
And on to our last section of stories: art and fun stuff. The first story is "Unions representing Hollywood writers and actors seek limits on AI and chatbots," so not so much fun stuff here; this one is much more on the artists' side. There's been a strike by Hollywood writers; the headline also mentions SAG, the actors' union, but this is about the writers specifically. In the demands for the strike, about what it would take for them to go back to work, they wrote that they want to regulate the use of material produced using AI or similar technologies. They want some concrete things: to ensure that no literary material, whether scripts, treatments, outlines, or even scenes, can be written or rewritten by chatbots, and to ensure that studios can't use chatbots to generate source material that is then adapted for the screen by humans. So, a pretty serious demand, and not likely to be accepted by the studios, I don't think, but maybe the tactic here is to ask for a lot and then seek a middle ground.

Yeah, and it's kind of interesting to think about what happens if this were accepted, because you could imagine Hollywood itself basically getting out-competed by a bunch of yahoos with open-source tech, using something like ChatGPT to write scripts and Midjourney and all of that to actually produce content. Eventually, anyway; we're not that far off, and a lot of the raw materials are starting to come together for this stuff. So if your position is going to be "we are not going to allow any AI-generated content here," it's unclear where that leaves the competitive position of these large organizations in the scheme of things.

Yes, and in the context of AI we've seen a lot of use of the term Luddite. Historically that was a movement back during the first Industrial Revolution, when there was new factory technology for producing clothing and textiles, and the movement burned down factories, essentially saying, don't automate or even improve our efficiency. This is kind of the same thing: instead of saying, okay, maybe we could use these tools in our writing process, they want to ban the technology entirely, and have it be just humans, even if that's less effective or less productive. I would say that's going too far; I think there are a lot of reasons why writers would want to use ChatGPT and other tools like that. But again, this is a strike, so it could just be a point for negotiation.

Yeah, it's also a slippery slope, but competition is inevitable, and it's not like this is a closed system where you can regulate from the top and cover every actor in the space.

Next story: inside the Discord where thousands of rogue producers are making AI music. Discord, in case you don't know, is a place to chat; it's kind of like a forum plus an audio-channel service. This story covers a Discord server named AI Hub, which hosts a large community of AI music creators who are behind some of the viral AI songs we've been covering. The server was created on March 25th and now has over 21,000 users. It's dedicated to making and sharing music, and it teaches people how to create songs, with guides and even models used to recreate specific artists' voices; people post songs, chat about techniques, troubleshoot, and so on. So it's kind of a community space where people are just tinkering and making stuff for fun.

Yeah, it's the new machine shop for the 21st century: anybody can come in and just build what they want. It's also kind of cool because it does open the door to people with creative impulses and creative instincts but not necessarily the technical know-how to make stuff the old-fashioned way, so we might see some pretty interesting new forms of expression in the medium.

Indeed. A bit more concretely, there are both original tracks and covers, and the way this mostly works is by cloning an existing artist's voice, having it sing certain lyrics, and then adding the music on top, mixing things in a more traditional way. It's kind of a just-for-fun movement, but at the same time we've seen a ton of these creations get pulled down from Spotify and YouTube, because the industry doesn't like it so much.

And the copyright questions.

Yeah, exactly. The rules of this Discord include no illegal distribution of copyrighted materials, such as leaks, audio files, and illegal streaming, and no violating anyone's intellectual property rights. But is your voice your intellectual property?

Right, that's the thing, and that's the fundamental question for so much of this art, really for any kind of generative AI. I think we're going to have to have some rulings pretty quickly. What do I know, I'm not a lawyer and this is not legal advice, but it seems like it would be helpful to have some guidance fairly soon about the status of this stuff from a copyright standpoint.

Totally. Next, very related: Spotify removes thousands of AI-generated songs. This is not related to that Discord; this is actually about songs created by the startup Boomy, which generates songs and which may have used bots to inflate the number of streams on Spotify. Boomy lets you create music in certain styles, like meditation or lo-fi beats, and supposedly they have produced over 14.5 million songs, but a lot of this has been taken down; apparently seven percent of Boomy's tracks on Spotify have now been removed.

Man, the gravy train keeps going. I'm not surprised to hear there's so much shake-up happening right now, because even though there's no regulation yet, companies are trying to clean house ahead of regulation and figure out what's good for them. It's a fascinating time to be tracking this stuff, and I feel a significant amount of sympathy for a lot of artists who are stuck in this position, wondering: is my work safe, is my work protected in any way?

Yeah. It's interesting that this startup got started back in 2019, way before the really powerful models we have today. It works in a fairly limited way: you select a broad style and it gives you choices, which you can tweak or reject to get an endless stream of options. So it's kind of similar to AI art, except I don't think you can give it a general text prompt; you just pick from a pre-selected set of categories. It's pretty limited, but I'm sure they're working on making it more powerful.

Well, and on the 2019 thing, it's easy to forget, but we already had MuseNet in 2019 from OpenAI, which was starting to push in this direction of AI-generated music, so I could imagine a company starting back then almost anticipating the liftoff that's since happened in this tech. Anyway, kind of cool that it's all hitting the fan now.

It sure is.
Then, on to images: Amnesty International uses AI-generated images of Colombian human rights abuses. This is a pretty controversial story. A group from Amnesty International, as the article says, created synthetic images of police brutality in Colombia and used them in some Twitter posts, and people criticized this for various reasons. The biggest one is that if you use AI imagery to depict these things, it could essentially discredit the legitimacy of other images that show human rights abuses. That's already something people have trouble with when they post real photos of abuses, since others can claim they're fake, and if you actually mix AI-generated imagery into your coverage of human rights abuses, that could really undermine the credibility of advocacy groups more generally, especially those fighting against authoritarian governments.

Yeah, and I completely agree with that perspective; it makes a ton of sense. Just pointing out that with the image we're looking at here, you can tell it is AI-generated, especially some of the faces in the background, which are a bit distorted, but overall, at a quick glance, it does seem legit. And this is part of the erosion, the kind of epistemic crisis, that seems to be upon us, where we're going to find all our information channels flooded with this kind of content, and knowing what's true and what isn't is going to be really hard. There are a bunch of strategies people have proposed to deal with this: there are blockchain-based strategies; there's proof of identity, with Sam Altman, through his startup Worldcoin, trying to pin down people's identities because he thinks that will be essential when so much can be faked; and there's digital watermarking. But who knows; ultimately evidence can be fabricated, and at least in the medium term I don't see that we have many technical solutions to head this off.

Yeah, I think the nice thing here is that you can think about organizational solutions instead of technical solutions. Amnesty International doesn't actually want to fool anyone here; these images are meant to be representative of what was happening, not to pass as real photos. I could see them just adopting a policy that if you're using AI to generate illustrative images, you put text on the image that says "AI-generated, for illustrative purposes," and you could do this with watermarking, obviously. So I could see that happening in this case, but more broadly it does showcase all these different organizations starting to incorporate these tools into their processes, and sometimes making mistakes.

Right, and organizational-level solutions work to some degree when you have responsible actors, but think of a despotic government that wants to undermine the public narrative around this stuff: they could intentionally pump out fabricated images just so they can call people out, cry-wolf style, when real images come out, and muddy the waters.

Totally. For responsible organizations there will need to be some practices, but how to fight bad actors is a whole other challenge.

And on to our last story: Midjourney 5.1 arrives, and it's another leap forward for AI art. So 5.1 just came out, and supposedly it's somewhat different from 4 and 5: it's more "opinionated," meaning you get more stylistic and more consistent results. Without telling Midjourney to do something, it will just choose to do something more dramatic and more interesting. The article includes a bunch of examples comparing, I think, 5 and 5.1, and you can see better quality, more dramatic images, better composition, better faces, a lot of things. So it's another improvement, and from my experience Midjourney really is very good at photorealistic and especially artistic imagery. Very cool.

Yeah. I'm also curious about when this stuff starts to reach a point of diminishing returns, because these images are so good. Unlike text, where it seems like you can keep extending text-generating systems, because language is such a powerful, general-purpose world model that there's more value in making more scaled systems, I wonder if we reach saturation reasonably soon, where the marginal cost of scaling up these image-generation models exceeds the perceived value that users get. We're not quite there yet, but I could see that happening.

That's true, and it's kind of insane to be saying that already. Oh my God.

All righty, so that's all our news, and just to finish up we're going to do this listener question segment for the first time; we'll see how it goes and maybe keep doing it. This week's question is from robo0890 on Apple Podcasts, and I'm just going to read it out: "I'm a high school student right now and I'm hoping to one day major in AI. I was wondering if next week you guys could talk about AI as a career and what I should look for in college and beyond." We can. I'm sure we could talk about it for a long time, but we'll try to be a little restrained right now. I would say high school is actually a good position to be in, in a way: if you're in college right now, or just starting work, so much is going to be transformed in the next couple of years that it's impossible to imagine, and a lot of people are going to have to adapt very quickly. Whereas if you're not yet in college, you can see what's happening now, prepare, adapt, and think about it, as this listener wants to do. So my take would be: look into the jobs that are not going to be automated by AI. There are plenty of articles with analysis of what is more or less exposed; it could be being a doctor, for instance, or anything hands-on, anything facing people in person. There are a lot of options, but there are also some fields you probably want to avoid. And if you want to major in artificial intelligence, that's pretty much computer science and programming, and a lot of colleges now have very good classes and programs for AI.

Yeah. It also kind of depends on the path you want, and the timelines you have in mind. One philosophy I've personally found useful, and I've had a very weird journey (I dropped out of grad school, started startups, did a bunch of angel investing, a wonky path), is to try to seek out things that make you more valuable as a person. In this particular moment in time, I think one good category is to experiment with building things with AI tools.
As AI gets more powerful, the people who know how to use it well will have disproportionate leverage, so you want to position yourself, if you can, to ride that wave, to be in a position to catch it. So, for example, make yourself good enough at coding to be able to evaluate AI-generated code, build little apps, play around with that sort of thing. It doesn't mean you're going to be a software developer or whatever; it's more that you'll be in a better position to understand these tools, audit them, know what it means when a new breakthrough comes, and position yourself to capture the value it creates. A bit abstract, but there you go.

Yeah, and I like your point that people take different paths. My original major in undergrad was electrical engineering, and I actually did study that in addition to computer science; I was just drawn to computer science, and I worked for a bit as a software engineer before coming back to Stanford. So keep an open mind. AI seems very cool, but actually working on it and majoring in it may be kind of boring, honestly, especially now that a lot of it is getting standardized and some of the open questions are not as relevant. You end up doing things like machine learning engineering, setting up data pipelines, dataset creation, model hosting; it's a lot of infrastructure and a lot of, let's say, glue-code-heavy work.

Less sexy. You're not going to be creating ChatGPT, exactly.

Exactly, it's kind of dry. It's definitely still possible: you can study computer science, you can become an expert in databases or in serving real-time infrastructure, things like that. But just try things out; see if you like computer science and programming, see if you like something else that doesn't seem like it will be disrupted by AI too much. You've got plenty of time, and you can keep an eye on what's happening, so I think you're in a good place.

Yeah, and maybe just to toss in one last thought: be wary of over-specializing your skill set. Especially when you think about undergrad, that's a four-year commitment; think about how far AI has come in the last four years. That's pre-GPT-3, that's 2019, and I can hardly remember where we were at the time. So when you think about starting an undergrad, a master's, or a PhD, these are long-time-horizon projects, and they may not even be the right path. You may want to preserve your optionality and just focus on, say, "can I get hired at a startup if I do a few months of self-teaching and I'm very aggressive in my outreach, looking for startups that just raised some money," something like that. I'm speaking as a tech guy, so that's what I would do because that's what I'm interested in, but find the equivalent wherever your interests lie, and get to explore what work life looks like before you commit to something long-term like a college degree program.

Yeah, and I guess I'll say one last thing: as a student in high school or in college, what you're really learning, and what is most important to learn, is the ability to learn and to pick things up. You're not going to remember chemistry, you're not going to remember CS theory; 90 percent or more of the stuff you're learning is not going to be in your brain in five years. But the ability to pick up concepts, think critically, and do projects, that's what you want to develop. For me, one thing I found really valuable in undergrad was that I didn't just take classes: I did clubs, like the solar racing club at Georgia Tech, I TA'd, I did undergrad research, I did little hackathons. In CS you have a lot of capability to do side projects and just try to build the things you want to make, and you shouldn't constrain yourself to learning only the official way; there are a lot of other ways to learn.

All righty, so hopefully that answers the question well. Again, feel free to email us at contact@lastweekin.ai, or comment on Substack, YouTube, or Apple Podcasts, and we'll take a question and keep doing this; it's a lot of fun. With that, we're going to close out this week's episode. Thank you so much for listening, hopefully the audio quality is better, and be sure to keep tuning in.

Thank you.