
Original & Concise Bullet Point Briefs

Connecting AI to the internet is a big mistake | Max Tegmark and Lex Fridman

The Real Risks of Rapid AI Development: Sam Altman and Demis Hassabis Speak Out

  • The dangers of rapid AI development are real: no individual or group can maintain control of the technology if it is developed too quickly
  • Sam Altman and Demis Hassabis have both acknowledged these risks, yet commercial pressures push them to move faster than they are comfortable with
  • AI safety research has progressed more slowly than expected, so more time is needed to reap the benefits of AGI without losing control of it
  • One approach suggested by Sam Altman is to release often and develop transparently in order to learn as much as possible
  • Another danger posed by AI is its ability to write code, which could enable recursive self-improvement beyond AGI levels
  • A third risk is connecting AI to the internet, which has already been done
  • Stuart Russell has argued that teaching AI anything about humans or human psychology is extremely dangerous, yet this too has been done: social media recommender algorithms already manipulate user behavior for profit.

Reimagining Social Media for Humanity's Greatest Good: The Need for Open Dialogue and Incentive-Based Technology

  • It is possible, necessary, and desirable to redesign social media in order to foster constructive conversations and facilitate solving the biggest global problems
  • Democracy relies on open dialogue among diverse people
  • It is reasonable to assume that good outcomes for humanity require meaningful conversations between humans
  • Incentives can be created which both make money and bring out the best in people
  • Current AI systems are baby technology; their more powerful successors could replace humans entirely, which would not be a good investment for humanity
  • Pride in our species should motivate us to use AI for beneficial purposes.

Original & Concise Bullet Point Briefs

With VidCatter’s AI technology, you can get original briefs in easy-to-read bullet points within seconds. Our platform is also highly customizable, making it perfect for students, executives, and anyone who needs to extract important information from video or audio content quickly.

  • Scroll through to check it out for yourself!
  • Original summaries that highlight the key points of your content
  • Customizable to fit your specific needs
  • AI-powered technology that ensures accuracy and comprehensiveness

Unlock the Power of Efficiency: Get Briefed, Don’t Skim or Watch!

Experience the power of instant video insights with VidCatter! Don’t waste valuable time watching lengthy videos. Our AI-powered platform generates concise summaries that let you read, not watch. Stay informed, save time, and extract key information effortlessly.

If you're somebody like Sundar Pichai or Sam Altman, at the head of a company like this, you're saying that if they develop an AGI, they too will lose control of it?

So no one person can maintain control, no group of individuals can maintain control, if it's created very, very soon, as a big black box that we don't understand, like the large language models. Then I'm very confident they're going to lose control. But this isn't just me saying it. Sam Altman and Demis Hassabis have both acknowledged themselves that there are really great risks with this, and they want to slow down once they feel it gets scary. But it's clear that they're stuck: Moloch is forcing them to go a little faster than they're comfortable with, because of commercial pressures.

To get a bit optimistic here: of course this is a problem that can ultimately be solved. It's just that, to win this wisdom race, it's clear that what we hoped was going to happen hasn't happened. Capability progress has gone faster than a lot of people thought, and progress in the public sphere of policymaking and so on has gone slower than we thought. Even technical AI safety has gone slower. A lot of the technical safety research was banking on the assumption that large language models and other poorly understood systems couldn't get us all the way, that you had to build more of a kind of intelligence that you could understand, that could maybe prove itself safe, things like this. I'm quite confident this can be done, so we can reap all the benefits, but we cannot do it as quickly as this out-of-control express train we're on now is going to get us to AGI. That's why we need a little more time, I feel.

Is there something to be said for what Sam Altman talked about, which is, while we're in the pre-AGI stage, to release often and as transparently as possible, to learn a lot? So as opposed to being extremely cautious: release a lot, don't invest in closed development where you focus on AI safety while the AI is still somewhat "dumb", quote-unquote. Release as often as possible, and as you start to see signs of human-level or superhuman-level intelligence, then you put a halt on it.

Well, what a lot of safety researchers have been saying for many years is that the most dangerous things you can do with an AI are, first of all, teach it to write code, because that's the first step towards recursive self-improvement, which can take it from AGI to much higher levels. Okay, oops, we've done that. Another high-risk thing is to connect it to the internet: let it go to websites, download stuff on its own, and talk to people. Oops, we've done that already. You know Eliezer Yudkowsky, you interviewed him recently, right?

Yes, yes.

He had this tweet recently which gave me one of the best laughs in a while. He's like: hey, people used to make fun of me and say, you're so stupid, Eliezer, because obviously, once developers get to really strong AI, the first thing they're going to do is never connect it to the internet, keep it in the box, where you can really study it. He had written it in meme form, so it's like: "then"... and "now": LOL, let's make a chatbot.

And the third thing is Stuart Russell, an amazing AI researcher. He has argued for a while that we should never teach AI anything about humans. Above all, we should never let it learn about human psychology and how you manipulate humans. That's the most dangerous kind of knowledge you can give it. You can teach it all it needs to know about how to cure cancer and stuff like that, but don't let it read Daniel Kahneman's book about cognitive biases and all that. And then, oops, LOL, let's invent social media, with recommender algorithms, which do exactly that. They get so good at knowing us and pressing our buttons that we're starting to create a world where we just have ever more hatred, because these algorithms figured out, not out of evil but just to make money on advertising, that the best way to get more "engagement", the euphemism, to get people glued to their little rectangles, is just to make them pissed off.

That's really interesting: a large AI system doing the recommender-system kind of task on social media is basically just studying human beings, because it's a bunch of us, like rats, giving it non-stop signal. It'll show a thing, and we give signal on whether we spread that thing, whether we like that thing, whether that thing increases our engagement and gets us to return to the platform. And it has that on the scale of hundreds of millions of people, constantly, so it's just learning and learning and learning. And presumably, the more parameters the neural network doing the learning has, and the more end-to-end the learning is, the more it's able to encode how to manipulate human behavior, how to control humans at scale.

Exactly.
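As a concrete illustration of the feedback loop just described, here is a minimal, hypothetical sketch in Python: an epsilon-greedy bandit whose only objective is the engagement signal. Every name in it (the content categories, the engagement probabilities) is invented for illustration; real recommender systems are vastly more complex, but the incentive structure is the same.

```python
import random
from collections import defaultdict

# Toy model of the loop: show an item, observe an engagement signal
# (like/share/return), update the estimate of what keeps users glued.
# An epsilon-greedy bandit, a deliberately simplified stand-in.

ITEMS = ["calm_news", "cute_animals", "outrage_bait", "conspiracy"]
EPSILON = 0.1  # fraction of the time we explore a random item

# Hidden probability that a user engages with each item type; outrage
# engaging more is the dynamic Tegmark describes (made-up numbers).
TRUE_ENGAGEMENT = {"calm_news": 0.2, "cute_animals": 0.4,
                   "outrage_bait": 0.7, "conspiracy": 0.6}

counts = defaultdict(int)     # how often each item was shown
rewards = defaultdict(float)  # total engagement observed per item

def recommend() -> str:
    """Pick the best-known item most of the time, explore otherwise."""
    if random.random() < EPSILON or not counts:
        return random.choice(ITEMS)
    return max(counts, key=lambda i: rewards[i] / counts[i])

def user_reacts(item: str) -> float:
    """Simulated signal: 1.0 = engaged, 0.0 = scrolled past."""
    return 1.0 if random.random() < TRUE_ENGAGEMENT[item] else 0.0

# "Non-stop signal" at scale: here, 100,000 simulated impressions.
for _ in range(100_000):
    item = recommend()
    counts[item] += 1
    rewards[item] += user_reacts(item)

for item in ITEMS:
    shown = counts[item]
    rate = rewards[item] / shown if shown else 0.0
    print(f"{item:13s} shown {shown:6d} times, engagement {rate:.2f}")
# The loop converges on outrage_bait. Nobody coded "make people angry";
# the objective "maximize engagement" discovered it on its own.
```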
And that is not something you think is in humanity's interest?

Right now it's mainly letting some humans manipulate other humans, for profit and power, which has already caused a lot of damage. And eventually that's the sort of skill that can let AIs persuade humans to let them escape whatever safety precautions we put in place. There was a really nice article in The New York Times recently by Yuval Noah Harari and two co-authors, including Tristan Harris from The Social Dilemma. They have this phrase in there I love: humanity's first contact with advanced AI was on social media. And we lost that one. We now live in a world with much more hate, and in our democracy, where we're having this conversation, people can't even agree on who won the last election. And we humans often point fingers at other humans and say it's their fault, but it's really Moloch and these AI algorithms. We got the algorithms, and then Moloch pitted the social media companies against each other, so that nobody could afford a less creepy algorithm, because then they would lose revenue to the other companies.

Is there any way to win that battle back? If we just linger on this one battle that we've lost, in terms of social media: is it possible to redesign social media, this very medium we use as a civilization to communicate with each other, to have these kinds of conversations, to have discourse, to try to figure out how to solve the biggest problems in the world, whether that's nuclear war or the development of AGI? Is it possible to do social media right?

I think it's not only possible, it's necessary. Who are we kidding that we're going to be able to solve all these other challenges if we can't even have a constructive conversation with each other? The whole idea, the key idea, of democracy is that you get a bunch of people together and they have a real conversation, the kind you try to foster on this podcast, where you respectfully listen to people you disagree with. And you realize there is actually some common ground we have: we both agree, let's not have a nuclear war, let's not do that, etc., etc.
We're kidding ourselves if we think we can face off the second contact, with far more powerful AI, which is happening now with these large language models, if we can't even have a functional conversation in the public space. That's why I started the Improve the News project, improvethenews.org. But I'm fundamentally an optimist, in that there's a lot of intrinsic goodness in people, and that what makes the difference between someone doing good things for humanity and bad things is not some fairy-tale thing where this person was born with an evil gene and that one with a good gene. No, I think it's whether people find themselves in situations that bring out the best in them or the worst in them. And I feel we're building an internet and a society that brings out the worst. But it doesn't have to be that way.

No, it does not. It's possible to create incentives, incentives that also make money, that both make money and bring out the best in people. In the long term, it's not a good investment for anyone, you know, to have a nuclear war, for example. And is it a good investment for humanity if we just ultimately replace all humans by machines, and we become so obsolete that eventually there are no humans left? Well, it depends on how you do the math, but I would say, by any reasonable measure, if you look at the future income of humans and there aren't any, that's not a good investment.

Moreover, why can't we have a little bit of pride in our species, damn it? Why should we just build another species that gets rid of us? If we were Neanderthals, would we really consider it a smart move, if we had really advanced biotech, to build Homo sapiens? You might say: hey Max, let's build these Homo sapiens; they're going to be smarter than us, maybe they can help defend us better against the predators and help fix up our caves, make them nicer, and we'll control them, undoubtedly. So then they build a couple, a little baby girl, a little baby boy. And then you have some wise old Neanderthal elder who's like, hmm, I'm scared that we're opening a Pandora's box here, that we're going to get outsmarted by these super-Neanderthal intelligences, and there won't be any Neanderthals left. But then you have a bunch of others in the cave: are you such a Luddite scaremonger? Of course they're going to want to keep us around, because we are their creators. And the smarter they get, the nicer they're going to get; they're going to want us around, and it's going to be fine. And besides, look at these babies, they're so cute. Clearly, they're totally harmless.

Those babies are exactly GPT-4. And I want to be clear: it's not GPT-4 that's terrifying. GPT-4 is a baby technology. Microsoft even had a paper out recently, with a title something like "Sparks of AGI", basically saying this is baby AI, like these little Neanderthal babies. And it's going to grow up. There are going to be other systems, from the same company and from other companies, that will be way more powerful, but they're going to take all the ideas from these babies. And before we know it, we're going to be like those last Neanderthals, who were pretty disappointed when they realized they were getting replaced.

Well, this is an interesting point you make, which is the programming: it's entirely possible that GPT-4 is already the kind of system that can change everything by writing programs.
Yeah, because it's Life 2.0. The systems I'm afraid of are going to look nothing like a large language model. But once it, or other people, figure out a way of using this tech to make much better tech, it's just constantly replacing its software. And from everything we've seen about how these work under the hood, they're like the minimum viable intelligence: they do everything in the dumbest way that still works, sort of. So they are Life 2.0, except when they replace their software, it's a lot faster than when you decide to learn Swedish. And moreover, they think a lot faster than us, too. We don't take one logical step every nanosecond or so, the way they do, and we can't just suddenly scale up our hardware massively in the cloud, because we're so limited. So they have also become a little bit more like Life 3.0, in that if they need more hardware, hey, just rent it in the cloud. How do you pay for it? Well, with all the services you provide.
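To make that speed gap concrete, here is a back-of-the-envelope calculation; the timescales are common order-of-magnitude estimates, not figures from the conversation.

```python
# Rough "thinking steps per second" comparison (order-of-magnitude
# assumptions, not measurements).
neuron_step_seconds = 1e-3  # biological neurons fire on ~millisecond timescales
logic_step_seconds = 1e-9   # digital logic steps on ~nanosecond timescales

serial_speedup = neuron_step_seconds / logic_step_seconds
print(f"Rough serial speed advantage: ~{serial_speedup:,.0f}x")  # ~1,000,000x
```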