
Original & Concise Bullet Point Briefs

Artificial Intelligence: Are machines poised to take control? | To the point

Exploring the Debate around Artificial Intelligence: A Look at the Pros and Cons

  • AI presents both risks and benefits
  • Raul Rojas believes the biggest risks come from job displacement, false information, social isolation, and invasion of privacy
  • Judith Simon sees the risks and benefits as roughly balanced and suggests focusing on concrete, immediate dangers instead of long-term detriments
  • Janosch Delcker attributes the recent surge in attention to ChatGPT's accessibility and its potential to replace white-collar workers
  • AI-generated images can be used for lies and manipulation
  • OpenAI's CEO is concerned about the models' ability to manipulate, persuade, and provide interactive disinformation
  • Professor Jürgen Schmidhuber is an optimist and does not believe AI threatens human extinction.

The Dangers and Benefits of Artificial Intelligence: A Closer Look

  • AI is increasingly making humans’ lives easier and healthier
  • AI dystopias receive more attention than documentaries about the benefits of AI in healthcare
  • AI supported weapon systems can pose danger due to lack of human oversight
  • Artificial intelligence could outstrip human intelligence, but it is difficult to give it an ethical compass when humans themselves disagree on ethics
  • Job losses from technology could lead to social destabilization; however, history shows that new jobs are created as technologies mature
  • In some cases it may not be possible to understand how algorithms arrive at their outputs, yet regulation should be put in place regardless.

Exploring the Growing Risks of AI: The EU's Proposed AI Act and China's Different Perspective

  • AI is being used by Google, Facebook and Microsoft
  • AI is capable of processing language, images, sound inputs and more
  • AI produces plausible content but not necessarily truth
  • This can lead to Deep Fakes and other sources of propaganda and disinformation
  • AI literacy is needed among users to understand how AI works
  • Regulation will be needed to foster innovation while mitigating risk
  • The EU is negotiating an AI act which uses a risk-based approach
  • China’s perspective on risk may differ from the EU’s.

Exploring the Impact of AI Across Complex Value Chains: A Chinese Ecosystem of Social Points, Regulation, and Responsibility

  • AI technology is being successfully developed and used in multiple applications across complex value chains
  • Chinese firms have developed an ecosystem of companies along with a "social points" system that monitors people's behavior
  • Regulation for AI needs to be sector specific
  • Responsibility for errors or malfunctions needs to be allocated between the original creators and those who deploy the applications
  • Debate and smart rules should be established to reap benefits, mitigate risks, and prevent AI from taking over.


With VidCatter’s AI technology, you can get original briefs in easy-to-read bullet points within seconds. Our platform is also highly customizable, making it perfect for students, executives, and anyone who needs to extract important information from video or audio content quickly.

  • Scroll through to check it out for yourself!
  • Original summaries that highlight the key points of your content
  • Customizable to fit your specific needs
  • AI-powered technology that ensures accuracy and comprehensiveness

Unlock the Power of Efficiency: Get Briefed, Don’t Skim or Watch!

Experience the power of instant video insights with VidCatter! Don’t waste valuable time watching lengthy videos. Our AI-powered platform generates concise summaries that let you read, not watch. Stay informed, save time, and extract key information effortlessly.

Moderator: Since the launch of ChatGPT, it's clear that what once seemed the stuff of science fiction is here now, with the potential to transform our lives. Some say artificial intelligence will enhance human capacity and well-being; others, including some of the very researchers developing the new technology, warn it could drive humanity to extinction. They're calling on policymakers to act quickly and decisively to regulate the risks. But is that even possible for machines capable of learning at a pace that far outstrips human intelligence? Do the benefits of AI in education, medicine and elsewhere outweigh the perils? We're asking: Artificial intelligence, are machines poised to take control?

[Music]

Moderator: Hello and welcome to To the Point. It is a great pleasure to introduce our guests. Raul Rojas is professor of artificial intelligence at the Free University of Berlin, working on technologies including autonomous driving, bionics and brain-computer interfaces. Janosch Delcker is Deutsche Welle's chief technology correspondent. Joining us virtually from Hamburg is Judith Simon; she is professor for ethics in IT at the University of Hamburg and also sits on the German Ethics Council. And it is a great honor to welcome a special guest, a pioneering researcher who has been referred to as the father of modern AI. He is scientific director at the renowned AI lab IDSIA: Jürgen Schmidhuber joins us virtually from Switzerland.

Let me ask all of you to give us a very quick take on how you see the balance between risks and benefits. Raul, the scientists warning of possible extinction compare AI to societal-scale risks such as pandemics and even nuclear war. Would you agree?

Raul Rojas: Well, I think that's a little exaggerated. The risks I see come more from everyday life. I see risks regarding employment: what are we going to do with people who are being displaced by new information technologies? I see risks regarding information: what is true and what is not when we read something on the internet? I see risks regarding the isolation of people, and privacy. Risks of this type. And I think that regulation is required. The extinction of humankind is further down the road, and possibly more related to climate change than to AI.

Moderator: Thank you, and we'll come back to regulation a little later. Let me go over to Judith, if I may. As an IT ethicist, what risks worry you the most, and do you see the balance between these and potential benefits as more negative or more positive?

Judith Simon: I see it as quite balanced, because if you think about it, AI is a basic technology that can be used for various purposes; we're talking mostly about pattern recognition. So it has lots of advantages and, of course, lots of disadvantages as well, depending on how it is used. I think much of the debate is, to a certain degree, shirking responsibility by portraying AI as a natural force and by focusing on very long-term detriments, when we should actually look at how AI is already being used and implemented and what the current threats are, both in terms of bias and discrimination and in terms of infringement of privacy. Without downplaying the positive side, we should focus more on these very concrete, immediate dangers instead of looking far into the future.

Moderator: Thank you very much. Over to Janosch: the warnings have proliferated since the launch of the chatbot, the dialogue technology ChatGPT. Why is that? This technology has, of course, been in development for a long time.

Janosch Delcker: That's very true. One particular reason, I think, is that ChatGPT and its web interface make it very approachable for everyone. Everybody can use it: you just enter a query and you get an answer, and that is very close to people's reality. That's one key aspect. The other is that we're now seeing AI technology also coming for the work of knowledge workers, of white-collar workers if you will: lawyers, journalists, diplomats, the people at this table.

Moderator: Very true. I think those are the two key reasons why there's so much attention on this issue now. Indeed, many of us have become amateur IT behavioral researchers, in the sense that we're going to ChatGPT, testing it and seeing if we can beat it; we'll come back to that a bit later. Generative AI like that driving ChatGPT is still in its early days, and its sometimes erroneous missteps have provoked as much snickering as worry. The products of its creativity may look harmless, but they are convincing, and therein lies danger.

[Report] A lot of people thought this photo of Pope Francis in a luxurious puffer jacket was real, but you can tell by the distortion of his fingers: it was AI. Whether it's Russian President Vladimir Putin kneeling in front of Chinese leader Xi Jinping, or ex-US President Donald Trump getting arrested by the police, lies and manipulation are only a few keyboard clicks away. AI tools like Midjourney and DALL·E create photorealistic images that can sometimes fool even professionals. This image won photographer Boris Eldagsen a Sony World Photography Award, but he turned down the prize because he created the image with AI, to spark a debate: what does it mean when you can no longer trust what you see in pictures?

Moderator: In fact, you can find more information on how to spot AI-generated images on our dw.com innovation site, with fact checking. One of those worried about precisely that risk is the CEO of the company at the center of the storm, the co-founder of OpenAI, whose technology powers ChatGPT.

OpenAI CEO (clip): If this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening. It's one of my areas of greatest concern: the more general ability of these models to manipulate, to persuade, to provide sort of one-on-one interactive disinformation.

Moderator: Let me go over now to Professor Schmidhuber and ask: how can I be certain that you yourself are real and not an AI-generated avatar?

Jürgen Schmidhuber: For now you will have to take my word for it. On the other hand, what is the value of the word of an avatar?

Moderator: You didn't sign the open letters that have been floating around the internet, and I gather from what you've said elsewhere that you see more promise than peril in artificial intelligence.

Jürgen Schmidhuber: Yes, absolutely. I'm an optimist, and I refer to all the cases where AI is already making human lives longer and healthier and easier. Many of those who are now warning are mostly seeking attention, because they know that AI dystopia grabs more attention than documentaries about the benefits of AI in healthcare.

Moderator: What about AI-supported weapons systems? Have you no concern that there's danger in an autonomous system that makes split-second decisions, where no human oversight can adequately control or second-guess what's going on in that black box?

Jürgen Schmidhuber: It is true that we have new types of weapons. You can buy yourself a drone for 300 euros, attach a little gripper to it, fly it over to your neighbor's ground and maybe put some poison into his coffee, or something like that. On the other hand, the police are using the same technology to track that, and we have an existing regulatory framework of laws that make you go to jail in case you get caught. I'm much more worried about 60-year-old, truly existential threats in the form of hydrogen bombs, which can wipe out 10 million people in one flash, without any AI.

Moderator: Let me ask you about another risk that concerns many people. In the classic science-fiction dystopia, robots essentially run amok. You yourself created a form of artificial curiosity that definitely could outstrip human intelligence; in fact, that's your aim. Are we really able to implant an ethical compass in such applications, to ensure that no matter how they develop, they will do no harm?

Jürgen Schmidhuber: Well, as long as you know, as a programmer, what that ethical compass is, you can program it. On the other hand, put 10 ethicists in one room and they will have 10 different opinions about what is ethical and what is not. So as long as humans don't get their act together and agree on what is ethically correct and what isn't, I see little hope that you can build an AI that implements something which is not well defined.

Moderator: We will see whether human beings can get together to do the necessary policymaking and regulation a little later in the discussion. But let me ask you one last question, and it relates to the world of work, which Raul mentioned as one of his key concerns. How do we make sure that job loss doesn't produce massive social destabilization?

Jürgen Schmidhuber: We have a long history that shows what happens when jobs are lost. Two hundred years ago, almost all jobs were in agriculture: 60 percent of all people worked there, and today it's maybe 1.5 percent. Nevertheless, unemployment rates are really low in the Western world, maybe five percent or so, because lots of new jobs were created. And this is going on as we speak. New jobs are being created all the time, because Homo ludens, the playing man, likes to invent new ways of interacting with other people and to make professional activities out of that. That's not going to stop, and as long as you keep learning and adapting to the new situation, you won't have to worry.

Moderator: Professor Schmidhuber, thank you very much for joining us on the program.

Jürgen Schmidhuber: It was my pleasure. Thank you.

Moderator: Let me now get a take from the three of you on what was clearly a very techno-optimistic perspective on AI. I'd like to begin, if I may, with Raul, and pick up on any point you wish to address, but I'd also like to hear your take on the weapons systems, because clearly AI could be used to augment and support larger as well as smaller weapons systems, and thereby shrink the window for de-escalation in a confrontational situation.

Raul Rojas: Yes, of course. Information technology and artificial intelligence are dual-use technologies: you can use them to improve people's quality of life, or you can use them for weapons systems. Many people have protested in the past about the use of weapons in general and, more to the point of this program, about the use of artificial intelligence for weapons. But as I said before, I see problems right now more in everyday life. I see the problem in jobs. The current technological revolution is going much faster than the First or Second Industrial Revolution. It took 100 years for telephones to take over in the US, so that everybody had one; it took 70 years for more than half of the population to have a car; it took 100 years for the electrical network to be distributed across the US. Now it takes 10 years for there to be more smartphones than people, and it took two weeks for ChatGPT to reach 10 million users. The acceleration is such that we have to be aware this is not like 200 years ago; it is not like the First or Second Industrial Revolution.

Moderator: Judith, I've just been in Brussels talking to some of the EU policymakers who will be negotiating the new EU legislation on AI, and the speed that Raul has just described is absolutely one of the major concerns. So let me ask you: with a technology moving this fast, how hard is it to get under the hood, as one might say, and understand exactly how the algorithms arrive at the outputs they produce? Because if we want to ensure accountability and ethical behavior on the part of AI, clearly we need to understand what's inside.

Judith Simon: The problem is really that many of these large systems are not understandable: you can't really understand how the systems reach their decisions or make their predictions. I don't think we need explainability for everything in order to ensure accountability or regulation; there must be forms of regulation and accountability even if we don't understand the systems. For certain systems we may require explainability, but this can only go so far. In the judicial system, for instance, but maybe also in science, we may want to understand how a system reached certain conclusions. But there is a price to pay, very often in terms of accuracy: explainability comes at a cost. So we will have to decide where we need it, where we don't, and for what reasons. Apart from that, I think we need to have regulation in place irrespective of whether you can always look under the hood.

Moderator: I want to come back to the regulatory issue, but let me ask you another question about the technology itself. I recently had the interesting task of a dialogue with the chief technology officer of OpenAI, and she said that even as generative AI moves toward multimodality, meaning processing not just language but images, sound inputs and much more, it will still remain prone to what she called delusions. What does that mean? You talked about explanatory capacity, but if AI is ready to lie, who can explain?

Judith Simon: I think what people must understand is the way these systems function. Basically, if you think of ChatGPT, the system is analyzing large amounts of data and trying to understand how text is structured: how certain types of texts, such as novels or crime stories, but also ordinary communication, are structured. It produces highly plausible content, but without any relation to truth. Everything being generated is just plausible patterns of content, be it speech patterns or pictures. That explains why we have these delusions: the underlying system is not representing something that exists but making something up out of patterns. If you understand this rationale, it becomes obvious that of course it will continue to do so. You may change certain things, for instance feeding in sources for the text, but underlying the whole system is just the generation of plausible content, not of truth.

Moderator: That takes us back to the short report we saw earlier on the deep fakes, those pictures and images that look very convincing but are in fact simply also lies. Many observers say there is a double threat to democracy here: both those kinds of images and the degree to which they can be used for propaganda, disinformation and manipulation, but also the whole other set of issues around the future of work and job loss. How do we get out in front of that? How should we respond proactively?

Janosch Delcker: That's true, because the genie is out of the bottle, and now it's about how to react. I think there are two things that are important. First, it's important to promote what you could call AI literacy among the general population. People need to understand how AI works, the basics of it, and how they can put it to good use in their own lives. The second important field is regulation. We need smart regulation that fosters innovation, yes, because there are vast benefits, as Jürgen Schmidhuber pointed out, but that also mitigates the risks for people on the ground; and we need regulation that makes sure fundamental rights are protected.

Moderator: What that regulation could look like, I want to discuss in just a moment with the three of you. But let me ask a little about where the industry stands today. Raul, as we heard, leading researchers have been issuing warnings, including spokespeople for the leading companies in this area, whether it's Microsoft, OpenAI or others. But are they walking the talk? Are they now slowing down and prioritizing safety over speed to market?

Raul Rojas: I don't think so. I was saying before that the big difference between AI in the nineties and AI today is that AI in the nineties was an academic project done at the universities. Then many people moved from the universities to these companies, and now companies like Google, Facebook and so on are the leaders in the field of AI, and there is no way for a university to compete against these big companies. And it is an existential problem for these companies. For example, ChatGPT is owned in part by Microsoft, and Google lost part of its value in the stock market just because they don't have an equal alternative. So for these companies it is an existential threat if they do not develop AI at the same speed as other companies, especially considering the Chinese companies, who are not going to abide by European or American laws.

Moderator: That's a point we'll come back to in just a moment. But if I can go to Judith and pick up on this: some critics say the biggest everyday risk, as Raul has called it earlier, is actually not the technology so much as the business model it will serve, whether it's turbo-capitalistic search engines that are expert at manipulating us into buying things, or surveillance capitalism that puts facial surveillance in workplaces to measure everything from workers' efficiency to their moods.

Judith Simon: I would totally agree. AI doesn't do anything on its own; it's always people deploying technologies for certain purposes and certain utilities, and that's the problem. We're not talking about AI manipulating us, but about people using AI to manipulate or control people, and that is indeed the real danger. So I think we must be very careful about all these narratives of AI as a kind of force of nature that we can do nothing about. A lot of that is shirking responsibility. It is important to recognize that it is people doing things, taking decisions and being responsible for what they are doing.

Moderator: Shirking responsibility is something many have accused politicians of for quite some time, but it now looks like they are awakening from their torpor. The European Union will soon begin negotiations on the AI Act, which it hopes will become a global gold standard for risk-based regulation. Interestingly, for the first time it is also working on standards and technical norms at the same time as it negotiates the legislation, which it has never done before. Janosch, do you think regulators can get out in front of the wave?

Janosch Delcker: To be completely honest, the regulation of technology is never really in front of the wave, because technology evolves fast and you can't fully predict what's going to happen. That being said, I think the EU is on a good path, because at the core of the legislative package now being negotiated is a risk-based approach: the idea of regulating artificial intelligence and its applications according to the risks they pose to the safety and fundamental rights of users. I think that is the right approach, and the one that will allow lawmakers to adapt the regulation over the next couple of years as the technology evolves.

Moderator: Of course, the EU's perspective on risk is not necessarily the perspective of, say, China, which Raul mentioned a moment ago. There is a lot of concern that we could see a race to the bottom, in which authoritarian states develop and use AI for purposes we would consider off-limits, massive social surveillance for example. Optimists hope that global standards and global technical norms might be able to prevent that. Are countries like China taking part in standards? Are they amenable to global governance when it comes to AI?

Raul Rojas: I don't think so. I think China is more interested in copying the technology coming from the US or from Europe, and in fact they have been very successful in developing an ecosystem of companies that mirrors the ecosystem in the US: instead of Google they have Baidu, and instead of Facebook they have other platforms. One worrying aspect of that development is that the Chinese have been using so-called social points, watching what people are doing so that you get good points or bad points according to your behavior. That is one example of the kind of misbehavior we can have in the future.

Moderator: And Judith, in terms of the success of regulation: if we look at something like general AI, that is, AI technology that finds its way into multiple applications across a very complex value chain, how can and should responsibility be allocated for errors and malfunctions? The original creators may bear part of the accountability, but shouldn't it also lie with the applications?

Judith Simon: That's a hard question, and a really unresolved one. To a certain degree, what is happening in the EU is that they are trying to regulate sector-specific AI applications. You can't regulate AI all at once, because it is a basic technology; you need to look into very specific applications. The second question you were addressing is who is in charge if systems learn and evolve through usage. To a certain degree this is not an entirely novel problem, because for other products, too, you have to disentangle what is a problem of usage and what was a problem already in the product before it was sent out. So there are precedents you can draw upon, but it will be an increasingly prevalent problem. The question is really what type of regulation we need for what type of AI application. Sometimes we may need ex-ante regulation, where you regulate in advance and only products that have survived certain scrutiny, that comply with certain quality standards, freedom from bias and so on, can go to market. For others it will be ex-post: inspection only when something goes wrong. And for some systems that continue to develop in real time, you may have to have real-time assessment, but that concerns a minority of systems.

Moderator: Thank you. And very briefly, if I may, Janosch, let me come back to our title: are machines poised to take control? What do you think? Can we get this under control?

Janosch Delcker: We can, but it's important to have this debate now, and important to come up with smart rules now, to make sure that we reap the benefits of artificial intelligence, that we mitigate the risks, and that we don't let it take over.

Moderator: Thank you very much. Thanks to all of you for being with us, and thanks to our viewers. See you soon.