
Original & Concise Bullet Point Briefs

GPT-5 Unveiled: Everything We Know So Far (Release Date, Parameter Size, Predicted Abilities)

OpenAI's GPT-5: Sam Altman's Statements Point to a Late 2025 Release

  • GPT-5 is the next generation of AI being developed by OpenAI
  • Sam Altman has made key statements that indicate when GPT-5 will be released; late 2025 is likely
  • GPT-4 was finished long before its release, and the same timeline is expected for GPT-5
  • Data collection for GPT-5 may have already started, according to OpenAI's help area
  • Training of GPT-5 is likely to start in December and take 8-9 months
  • "Token size window" refers to the number of tokens a model considers as context when generating or predicting the next token in a sequence.
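The token-window idea in the last bullet can be sketched in a few lines of Python. This is a deliberately simplified illustration: the whitespace `split()` is a stand-in for a real subword tokenizer, and `truncate_to_context` is a hypothetical helper showing what it means for a model to "see" only its most recent `window` tokens.

```python
def truncate_to_context(tokens, window):
    """Keep only the most recent `window` tokens -- everything
    older falls outside the model's context and is ignored."""
    return tokens[-window:]

# Naive whitespace "tokenizer" as a stand-in for real subword tokenization.
text = "the quick brown fox jumps over the lazy dog"
tokens = text.split()

# With a 4-token window, the model would only consider the last 4 tokens
# when predicting the next one.
context = truncate_to_context(tokens, 4)
print(context)  # ['over', 'the', 'lazy', 'dog']
```

A 32,000-token window works the same way, just with a far larger `window`, which is why longer windows let a model answer questions about entire books.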

Microsoft and OpenAI Break New Ground in AI Research: Results Show Quality Data Makes a Difference

  • GPT-5 is likely to have a larger context-size window
  • GPT-4 has two modalities, text and image recognition, but its ability to analyze images has so far only been tested with a small group
  • The parameter count of GPT-5 is unknown, and increasing size does not necessarily mean better performance if data quality is low
  • A Microsoft paper showed that a 1.3-billion-parameter model can achieve results comparable to 16-billion- or 175-billion-parameter models when trained on higher-quality data
  • OpenAI released a paper that trained two reward models, one providing positive feedback on the final answer and another rewarding intermediate reasoning steps, achieving a success rate of 78.2%.
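The two-reward-model setup in the last bullet can be illustrated with a toy sketch. This is not OpenAI's implementation: `score_step` is a hypothetical stand-in for a learned process reward model, and the scorer used below is a trivial placeholder.

```python
# Toy illustration of outcome-supervised vs. process-supervised rewards.

def outcome_reward(final_answer, correct_answer):
    """Outcome supervision: a single reward for the final answer only."""
    return 1.0 if final_answer == correct_answer else 0.0

def process_reward(steps, score_step):
    """Process supervision: reward each intermediate reasoning step,
    then aggregate (here, the mean step score)."""
    scores = [score_step(s) for s in steps]
    return sum(scores) / len(scores)

# Example: a short chain of thought for "12 + 7 * 2".
steps = ["7 * 2 = 14", "12 + 14 = 26"]
fake_scorer = lambda step: 1.0 if "=" in step else 0.0  # placeholder scorer

print(outcome_reward("26", "26"))          # 1.0
print(process_reward(steps, fake_scorer))  # 1.0
```

The key difference is the training signal: the process reward gives the model feedback on every step of its reasoning, not just the final answer, which is the property the paper credits for the improved math performance.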

GPT-5 Expected to Take Artificial Intelligence to the Next Level

  • GPT-5 is expected to incorporate chain-of-thought reasoning into its output mechanism
  • GPT-4's reasoning was improved by 900% using Tree of Thoughts prompting, and GPT-5 could improve on this further
  • GPT-4 can already pass the bar exam and score around the 90th percentile on various tests, so GPT-5 could potentially approach the 99th percentile
  • Despite its intelligence, GPT-4 struggles with basic concepts, such as measuring 6 liters using a 6-liter and a 12-liter jug
  • Emergent capabilities in GPT-5 are uncertain, though they may include an advanced theory of mind.
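Tree of Thoughts prompting, mentioned in the bullets above, is essentially a search: generate several candidate "thoughts" at each step, have the model rate them, keep the best ones, and expand again. The following is a minimal sketch of that search loop, not the paper's implementation; `propose` and `rate` are stubs standing in for the LLM calls that generate and score candidate thoughts.

```python
# Minimal tree-of-thoughts-style beam search. `propose` and `rate` are
# stand-ins for LLM calls that generate and score candidate thoughts.

def tree_of_thoughts(state, propose, rate, depth, beam=2, branching=5):
    """Expand chains of thoughts level by level, keeping only the
    `beam` highest-rated partial chains at every depth."""
    frontier = [(rate(state), [state])]
    for _ in range(depth):
        candidates = []
        for _, chain in frontier:
            for thought in propose(chain[-1], branching):
                candidates.append((rate(thought), chain + [thought]))
        # Keep only the best `beam` chains (highest rating first).
        frontier = sorted(candidates, key=lambda c: c[0], reverse=True)[:beam]
    return frontier[0][1]  # best chain found

# Toy problem: grow a number toward 100 by repeated small additions.
propose = lambda n, k: [n + i for i in range(1, k + 1)]  # stub generator
rate = lambda n: -abs(100 - n)                           # stub rater
best_chain = tree_of_thoughts(0, propose, rate, depth=20)
print(best_chain[-1])  # 100
```

Rating and pruning partial chains is what distinguishes this from plain chain-of-thought, where the model commits to a single line of reasoning.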

A Global Debate over Rights for AI: UK, EU and US at Odds over Regulations and Pauses

  • The UK and the EU are working on legislation to regulate AI
  • The US has proposed a Blueprint for an AI Bill of Rights
  • An open letter calls for a six-month pause on training AI systems more powerful than GPT-4; however, this is unlikely to happen due to economic incentives.

AI Race Ramps Up: Baidu Claims Ernie Bot Beats ChatGPT, GPT-5 Raises Concerns at Google DeepMind

  • The AI race is heating up
  • Baidu claims its Ernie Bot beats ChatGPT in a key AI test
  • Multiple countries are working on beating GPT-4; a global AI regulation body may be needed to slow down progress
  • GPT-5 is deemed risky by Google DeepMind due to its fast learning capabilities, potential job loss, and the possibility of bad actors using language models for harm.


With VidCatter’s AI technology, you can get original briefs in easy-to-read bullet points within seconds. Our platform is also highly customizable, making it perfect for students, executives, and anyone who needs to extract important information from video or audio content quickly.

  • Scroll through to check it out for yourself!
  • Original summaries that highlight the key points of your content
  • Customizable to fit your specific needs
  • AI-powered technology that ensures accuracy and comprehensiveness

Unlock the Power of Efficiency: Get Briefed, Don’t Skim or Watch!

Experience the power of instant video insights with VidCatter! Don’t waste valuable time watching lengthy videos. Our AI-powered platform generates concise summaries that let you read, not watch. Stay informed, save time, and extract key information effortlessly.

GPT-5 will be the next level in artificial intelligence, and it will be developed by OpenAI. This video will cover everything you need to know about GPT-5, including timelines, how smart it's going to be, its different modalities, and many things you may not have thought of before. Since the inception of GPT-4 there have been hundreds of different research papers that will fundamentally shape the way GPT-5 is created, and this is going to be largely different from how GPT-4 was trained and built. This includes, but is not limited to, the thought process, the risks, the limitations, and the regulations of this model. So let's get into exactly what GPT-5 is going to be, based on everything we know, including statements from Sam Altman himself.

If we're going to talk about GPT-5, we first need to talk about the release date; that is one of the things people most want to know. It isn't impossible to gauge when GPT-5 is going to be released, because Sam Altman has made some key statements that indicate when we could expect it. So let's look at the timelines of GPT-4 and what he has said about GPT-5, and use that to estimate when GPT-5 is likely to be released. First, you need to understand the three stages of building a large language model (or whatever AI system it's going to be). Stage one is data collection: collecting the relevant sources you're going to train the model on. Stage two is training the model on that data, and stage three is fine-tuning and aligning it. You first need to understand that GPT-4 was finished well before it was released: although GPT-4 came out in March of 2023, they actually finished it in August of 2022, but then spent seven months aligning the model and making it safe for public use. In addition, they started data collection in 2021. This means the inception of GPT-4 started around two years before its initial release, and we are likely to see the same kind of timeline for GPT-5. So the question
is: when are they going to start training GPT-5? And I think I have a bit of an answer. If we look at this clip from Sam Altman testifying at a Senate artificial intelligence hearing, he actually talks about GPT-5. In this talk he references an open letter calling for a delay to the progression of any artificial intelligence tool greater than GPT-4, because of concerns about how great that model's capabilities will be. But this is where Sam Altman gives us a slight glimpse at when he's going to start training GPT-5, or potentially collecting said data. Take a look at this clip: "What about you, Mr. Altman, do you agree with her? Would you pause any further development for six months or longer?" "So first of all, after we finished training GPT-4, we waited more than six months to deploy it. We are not currently training what will be GPT-5; we don't have plans to do it in the next six months. But I think the frame of the letter is wrong. What matters is audit."

So what Sam Altman just said there was that he's not going to train GPT-5 in the next six months. If we look at the dates, we can try to work out what he means. If it's May and they're not going to train GPT-5 for the next six months, training could start in December, the seventh month. This also means that data collection for GPT-5 has potentially already started, because he hasn't mentioned anything about data collection for GPT-5; the only thing he has mentioned is that they haven't started training the model on the data they've collected. So it is likely that data collection for GPT-5 has started. In addition, if you go to OpenAI's help area, you can see that OpenAI does use consumer data to train future models, and they state that you can opt out of this in a specific setting. Although many may not do so, it is indicative of the fact that they might already be collecting data
for GPT-5, and I do think this has already started. So, based on the fact that training is likely to start at the end of the year, around December, if it does take another eight to nine months to train the model, and then another eight to nine months to finish aligning it, we could expect GPT-5 to be released sometime in late 2025. Of course, this is just a rough estimate based on what they've told us, but it is a conservative one based on the training times, the dates Sam Altman himself has stated, and what we know. The only thing that might lead to GPT-5 being released earlier is increased competition: we know that Google is going to be producing something called Gemini, and we know it is likely to surpass GPT-4.

Then of course we have the token size window. In the context of a large language model, a token size window typically refers to the number of tokens that the model considers as context when generating or predicting the next token in a sequence. Tokens are units of text, such as words or sub-words, that the model processes. The token size window is a parameter that determines how much context the model can take into account: for example, if the context size window is set to 128, the model considers the previous 128 tokens when making predictions. Currently, we know that GPT-4 exists in two versions, with a 4,000-token context window and a 32,000-token context window. At the time this was revolutionary, but since the inception of GPT-4 there have been major advancements in the ability to process large volumes of text. For example, we've already seen Anthropic release a 100,000-token context window AI. If you don't know their AI, it's called Claude; it's quite similar to ChatGPT 3.5, but this 100,000-context-window version can process entire novels and entire books. This means that if you wanted to input a whole trilogy, an entire movie, or multiple books, and then ask it about a specific word, it simply could, which means
that the applications for this are going to be incredible. So it is likely that GPT-5, if it is released, will have a much larger context-size window. Additionally, since the inception of GPT-4 there was a research paper called "Scaling to 1 Million Tokens and Beyond with RMT". In this paper they demonstrated the ability to get to 1 million tokens and beyond with a recurrent memory transformer, and it will be interesting to see if this research is used in developing GPT-5, because we know that a larger context window enables a much wider range of tasks. There is only a small limitation: the ability to memorize, detect, and reason drops substantially after around 500,000 tokens. So I do think there will likely be separate versions from what we currently have, perhaps at around a hundred thousand tokens.

Then of course we need to talk about different modalities. Currently, as you know, GPT-4 was released with two modalities, text and image recognition; however, we know that this functionality hasn't actually been rolled out just yet. As of recording this video, the only data we've seen of GPT-4 being able to analyze images was from a very small test group on Microsoft Bing. This means that they are still trying to roll out GPT-4 with images as we speak.

So now that we've discussed the different modalities GPT-5 is likely to have, we also need to discuss the parameter size and how it's going to be trained. One graphic we did see scattered around the internet is the famous image comparing GPT-3's parameter count to GPT-4's. However, this simple image, showing 175 billion versus 100 trillion, isn't true. GPT-4's parameter count isn't actually publicly available, although many estimate it to be around 1 trillion parameters. This is because when you try to use GPT-4 at its
current state, it is much slower to respond, much slower than GPT-3.5, so that is not an overestimate. However, recent developments in the artificial intelligence landscape have shown that a larger parameter count doesn't mean the language model is likely to get better. The problem is that upping the parameter count doesn't mean anything if your data is low quality. So the parameter count of GPT-5 is likely to remain unknown: GPT-4 was only trained on text, and if GPT-5 is trained on images as well, that is going to be a huge number of parameters we can't account for. Here the video cuts to a clip: "We can predictably say: this much compute, this big of a neural network, this training data, this will be the capabilities of the model. Now we can predict how it will score on some tests. What we're really interested in, which gets to a lot of your question, is: can we predict the qualitatively new things, the new capabilities that didn't exist at all in GPT-4 but that do exist in future versions like GPT-5? That seems important to figure out, but right now we can say, you know, here's how we predict it'll do. There are a lot of things about coding that I think are a particularly great modality to train these models on, but that won't of course be the last thing we train on. I'm very excited to see what happens when we can really do video. There's a lot of video content in the world, and there are a lot of things that I think are much easier to learn with video than text. There's a huge debate in the field about whether a language model can get all the way to AGI: can you represent everything you need to know in language, is language sufficient, or do you have to have video? I personally think it's a dumb question, because it probably is possible, but the fastest way to get there, the easiest way to get there, will be to have these other representations like video in these models as well. Again, text is not the best for everything, even if it's capable of representing it." So, potentially, depending on the number of modalities that it is
trained on, it could be larger or it could be smaller. But we do know that if GPT-5 were just text-based, the parameter count would be significantly smaller. That is because recent papers have shown that the quality of your data matters much more when training your large language model than upping the parameter count. Let me show you some examples demonstrating that a smaller parameter count is more effective than a larger one when you use high-quality data to train your large language model. If they're going to train GPT-5, we can refer to a paper recently released by Microsoft called "Textbooks Are All You Need". Basically, what they state is: with high-quality data instead of low-quality data, they increased the effectiveness of their large language model three times, and the only thing they did was switch the training data. This means that if OpenAI switches the training data to the different methods we're about to talk about, we could see a 3x jump in the quality and responsiveness of GPT-5, even if the parameter count does not increase. I can summarize the phi-1 paper by saying that they had a large language model with 1.3 billion parameters that achieved results on par with other large language models with 16 billion and 175 billion parameters, including GPT-3.5. It did as well or better with significantly fewer parameters, which means that GPT-5 doesn't need a large number of parameters to be effective; all it needs is high-quality data.

Additionally, something else that is likely to be done with GPT-5 results from another paper: show your working. OpenAI released a paper which talked about how they increased the ability of the raw version of GPT-4 just by using a different method of training. Essentially, they trained two reward models: one for providing positive feedback on the final answer to a math problem, and another for rewarding
intermediate reasoning steps. By rewarding good reasoning, the model achieved a surprising success rate of 78.2% on a math test, almost doubling the performance of GPT-4 and outperforming models that only rewarded correct answers. The approach of rewarding good reasoning steps extends beyond mathematics and shows promise in various domains like calculus, chemistry, and physics. The paper highlights the importance of alignment and process supervision: training models to produce a chain of thought endorsed by humans, which is considered safer than focusing solely on correct outcomes. Essentially, this means that when you get these large language models to think step by step, they double their effectiveness simply through this chain-of-thought reasoning, so the output you get is going to be twice as good. GPT-5 is likely to incorporate this into its output mechanism, which means that even if you put in a simple prompt, you won't need to say "let's think step by step"; it will have that thought process natively, and the output is going to be much better.

Then of course we have another research paper which blows everything out of the water. As we talked about before, the way in which you prompt GPT-4, or any large language model, can make it improve by 2x or 3x, and with GPT-5 I do know they are trying to increase that capability. This paper, called "Tree of Thoughts", increased GPT-4's reasoning ability by 900%: the base model improved by 900% just by changing the words that you input to it. Essentially, in this paper they used tree-of-thoughts prompting. What it means is that every time you can make a decision, there are about five different outcomes; the large language model was asked to rate every single decision, from five being the best to one being the worst. Then, every time they went through that
decision, they continued that same process to the end, ranking all the different outputs and finding the best output you could get by going through every possible path. This increased the reasoning by 900%. So if tree of thoughts is implemented in GPT-5, which it most likely will be, it could increase GPT-5's reasoning by a huge amount. Along with the data and training it very differently, the parameter size is of course hard to come by, but I do think that the quality of GPT-5 will be absolutely incredible.

If we look at how smart GPT-5 is going to be: as we discussed earlier in the video, there are countless examples of research papers, with new ones coming out every single week, that showcase the ability to increase the effectiveness of large language models without changing anything. We know that better data will increase the capabilities, but one thing we haven't talked about is how GPT-5 is going to perform. Currently, we know that GPT-4 was a huge leap from GPT-3.5, and GPT-4 is absolutely outstanding: it was able to pass the bar exam and score around the 90th percentile on various tests that are benchmarks for artificial intelligence. Knowing this, it is currently estimated that if GPT-5 succeeds in its ability to reason, think critically, and include this tree-of-thoughts way of thinking, it could theoretically achieve around the 99th percentile on pretty much every test there is. We know that it's already great at math and already knows every subject there is; the only thing left is pretty much to fine-tune everything, which is why many people think that GPT-5 will truly be very close to AGI. In addition, remember that GPT-5 will have images embedded in it, and we know that the performance of GPT-4 greatly increased when vision was added. Many of the exam questions that
GPT-4 took, it took with vision and without vision: some of them had diagrams it could see, and some didn't, and when it was able to see these diagrams, GPT-4 with vision improved significantly.

Then of course we need to talk about the various limitations that GPT-5 will inherit. Although GPT-5 is going to be absolutely insane when you think about everything we discussed before, from larger context windows to image and audio to new ways of thinking and prompting, GPT-4 and GPT-3.5 still struggle with the most basic concepts. You might think that is a ridiculous statement, but please look at this TED talk, where they document how AI is incredibly smart and also shockingly foolish because it cannot understand basic concepts. Let the video explain it, because it's going to do a better job. All the video shows is a simple common-sense question, one that doesn't take a genius to answer, which GPT-4 continually gets wrong: "AI is passing the bar exam. Does that mean that AI is robust at common sense? You might assume so, but you never know. Suppose I left five clothes to dry out in the sun, and it took them five hours to dry completely. How long would it take to dry 30 clothes? GPT-4, the newest, greatest AI system, says 30 hours. Not good. A different one: I have a 12-liter jug and a 6-liter jug, and I want to measure six liters. How do I do it? Just use the six-liter jug, right? GPT-4 spits out some very elaborate nonsense: step one, fill the six-liter jug; step two, pour the water from the 6-liter to the 12-liter jug; step three, fill the six-liter jug again; step four, very carefully pour the water from the 6-liter to the 12-liter jug; and finally, you have six liters of water in the six-liter jug, which should be empty by now." So with that, it's going to be interesting to see how they solve this issue. I haven't been aware of any solutions just yet, but it will be interesting to see if they even focus on this, because largely we do gloss over these problems and just focus on the interesting stuff.

Next, of course, now
that you know that we don't really understand exactly what AI is doing, we also need to talk about emergent capabilities. This is something we've spoken about previously, but you have to understand that GPT-5 is likely to be a few echelons better than GPT-4. This means that even if the parameter count is the same, a few emergent capabilities are going to be seen in GPT-5 that we simply cannot predict and that haven't been in GPT-4. One of GPT-4's most notable emergent capabilities was theory of mind, which is essentially where an AI is able to think about how other people are thinking in certain situations. This is particularly worrying if you consider how an AI could potentially manipulate humans into doing things for it, because these large language models have access to almost every piece of text on Earth, and that includes books about persuasion, manipulation, and persuasion tactics. Now take a look at this clip, which perfectly explains emergent capabilities. You've likely seen it before, but for those of you who haven't, you really need to understand it, because this emergent-capabilities phenomenon is likely to be one of the reasons they don't release GPT-5 on time: if there is an emergent capability, the researchers at OpenAI will need to learn how to effectively contain it or potentially remove it. Some people use the metaphor that AI is like electricity, but if I pump even more electricity through the system, it doesn't pop out some other emergent intelligence, some capacity that wasn't even there before, right? And so with a lot of the metaphors that we're using, paradigmatically, you have to understand what's different about this new class of golem-class generative large language model AIs. This is one of the really surprising things talking to the experts, because they will say: these models have capabilities, and we do not understand how they show up, when they show up, or why they show up. Again, not something that you
would say of the old class of AI. So here's an example. These are two different models, GPT and a different model by Google, and there's no difference in the models themselves; they just increase in parameter size; they just get bigger. What are parameters? Essentially the number of weights in a matrix, so it's just the size; you're just increasing the scale of the thing. And what you see here, and I'll move on to some other examples that might be a little easier to understand, is that you ask these AIs to do arithmetic and they can't do it, they can't do it, they can't do it, and at some point, boom, they just gain the ability to do arithmetic. No one can actually predict when that will happen. Here's another example: you train these models on all of the internet, so they've seen many different languages, but then you only train them to answer questions in English. So the model has learned how to answer questions in English, but you increase the model size, and you increase it again, and at some point, boom, it starts being able to do question-and-answer in Persian. No one knows why.

We've already seen that clip on emergent capabilities, but I do think this next part will show you exactly why AI has these emergent capabilities. You have to understand that although we can see the output of artificial intelligence models, models like GPT-5 and GPT-4, we still don't know what they're actually doing. Take a look at this tweet right here: "These engineers never speak a word or document anything; their results are bizarre and inhuman. This guy trained a tiny transformer to do addition, then spent weeks figuring out what it was doing, one of the only times in history someone has understood how a transformer works." And transformers are essentially the building blocks of these large language models. Then you can see here the algorithm it created to add two numbers, a large, seemingly complex calculation that it's doing
to add two simple numbers, which is pretty crazy if you ask me. It means that these AIs think completely differently from us. This example shows that this artificial intelligence thought about basic math as rotation around a circle, which goes to show that although it might tell us an answer, it doesn't tell us how it got there, and this is what's so scary about AI. We would never know that it's thinking about rotations around a circle when performing simple addition, but it is, which means we need to ensure these artificial intelligences are completely aligned, because if you release something like that to the public, the risks could be existential.

That brings us to one of the last points we need to talk about: regulation. Currently there are many challenges in regulating AI while dealing with the speed of AI development; however, there has recently been an announcement which shows a little bit of hope. The UK is set to get early or priority access to AI models from Google and OpenAI. The UK prime minister stated: "We're working with the frontier labs, Google DeepMind, OpenAI, and Anthropic, and I'm pleased to announce that they've committed to give early or priority access to models for research and safety purposes, to help build better evaluations and help us better understand the opportunities and risks of these systems." Additionally, the European Union is working on the AI Act, a global first that could set the benchmark for other countries. The legislation aims to regulate all automated technology, including algorithms, machine learning tools, and logic tools. The AI Act has been criticized by some European companies, such as Renault, Heineken, Airbus, and Siemens, for its potential to jeopardize Europe's competitiveness and its technological advantages. The US has also proposed a Blueprint for an AI Bill of Rights, which covers aspects such as safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives.
The US is making progress in developing domestic AI regulation, including the National Institute of Standards and Technology's AI Risk Management Framework and existing laws and regulations that apply to AI systems. And of course, many people are currently trying to restrict GPT-5. If you haven't heard already, there is an open letter titled "Pause Giant AI Experiments: An Open Letter", which calls on all labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. Essentially, they state that recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control; therefore, they call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. But as we know, this is very unlikely to happen, because we live in a capitalistic world, which means there is a lot of incentive to provide the best products. There are even reports that China's Baidu has claimed its Ernie Bot beats ChatGPT on a key test in artificial intelligence, as the AI race continues to heat up. This is something we will cover in another video, because if other countries are going to be working on trying to beat GPT-4, there isn't really an incentive to slow down, unless there is some sort of global AI regulation body that can ensure they all slow down. And even if you do get the large companies to slow down, there is no guarantee you don't have solo coders in their rooms working on large language models that eventually surpass the larger ones. In addition, we did make another video where we talked about how GPT-5 is extremely risky. From Google DeepMind's perspective, with emergent capabilities and AI being able to learn rapidly, we have literally no idea what these models are going to be capable of, and Google has deemed any model greater than GPT-4,
namely GPT-5, to be extremely risky. So that leads us to the question: are you excited for GPT-5, or are you more afraid? Although GPT-5 is likely to be a huge advancement, there are a number of unfortunate circumstances that could arise from it, such as job loss and the possibility of bad actors using these large language models, with jailbreaks, to harm society.
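As a closing aside, the "rotation around a circle" trick mentioned earlier, which the tiny transformer reportedly discovered for addition, can be reproduced directly. The sketch below is an illustration of the underlying math (modular addition via composing rotations on the unit circle), not a claim about the transformer's exact learned algorithm.

```python
import cmath
import math

def add_mod_via_rotation(a, b, p):
    """Compute (a + b) mod p by mapping each number to a point on the
    unit circle (angle 2*pi*x/p) and multiplying the complex numbers,
    which adds their angles -- i.e., composing two rotations."""
    z = cmath.exp(2j * math.pi * a / p) * cmath.exp(2j * math.pi * b / p)
    angle = cmath.phase(z) % (2 * math.pi)
    return round(angle * p / (2 * math.pi)) % p

print(add_mod_via_rotation(57, 81, 113))  # same as (57 + 81) % 113, i.e. 25
```

Because angles wrap around at 2π, the wrap-around of modular arithmetic comes for free, which is why this representation is a natural one for a network to stumble into.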