
Original & Concise Bullet Point Briefs

Exploring the Power of the GPT4All Model in LangChain: Testing with LangChain Tools, Agents and Chains

Exploring GPT4All in a Local Environment: A Thank You to the Nomic Team and Thomas Anthony

  • This video explores how to use GPT4All in a local environment. It includes an overview of the model and a thank-you to the Nomic team and to Thomas Anthony, who created the llama-cpp-python library, as well as to the uploader of the quantized model on Hugging Face
  • It presents tests of the large language model, using simple LLM chains and tools such as the Google Serper API
  • The main challenge is understanding how prompts and data are tokenized
  • The video suggests learning more about LangChain and large language models by watching its associated playlist, and provides a link to download the code from its GitHub repo.

LangChain's LlamaCpp Integration Offers Efficiency and Google Searchability

  • The quantized GPT4All model is a 4.2 GB download
  • The LangChain developers have abstracted away the loading code, so the LlamaCpp class is efficient to use
  • With a single line of code the model can be initialized and used to answer questions
  • If more data is sent, it may error out due to token size or context limits
  • A request chain can also be used for Google searches
  • To use search tools, an agent must be initialized and the Serper API key loaded.
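The initialization the bullets describe is a single constructor call. A minimal sketch follows; the `LlamaCpp` class name comes from LangChain's `langchain.llms` module, the model path is a placeholder, and since the real call needs the 4.2 GB model file, a stand-in with the same calling shape is used so the pattern itself is runnable:

```python
# Real usage (assuming langchain and llama-cpp-python are installed,
# and the quantized model file has been downloaded to this path):
#
#   from langchain.llms import LlamaCpp
#   llm = LlamaCpp(model_path="./gpt4all-lora-quantized.bin")
#   print(llm("What are the three biggest states in India?"))
#
# Stand-in with the same calling shape, runnable without the download:
class StubLlamaCpp:
    def __init__(self, model_path: str):
        self.model_path = model_path  # where the .bin file would live

    def __call__(self, prompt: str) -> str:
        # A real LlamaCpp object runs local inference here.
        return f"[answer from model at {self.model_path}]"

llm = StubLlamaCpp(model_path="./gpt4all-lora-quantized.bin")
print(llm("What are the three biggest states in India?"))
```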

Requirements for Optimal Performance of GPT4All with Python Scripts

  • A Serper API key must be provided to execute the Python script
  • The script is taken directly from the Colab notebook and includes additional print statements
  • A try/except block should be used when running the complete script, as it prevents the script from stopping in case of an error
  • Around 4 GB of RAM is required by GPT4All to run
  • When executing the Python script, the Serper API key is passed on the command line.
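The script pattern the bullets describe can be sketched as below. The `SERPER_API_KEY` environment variable is what LangChain's Serper wrapper reads; `run_one_step` is a hypothetical stand-in for any of the chain/agent steps:

```python
import os

def run_one_step():
    # Stand-in for one chain/agent step; the failing steps in the
    # video raised "failed to tokenize" style errors like this one.
    raise RuntimeError("failed to tokenize")

def main(argv):
    if len(argv) < 2:
        print("usage: python exploring_gpt4all.py <SERPER_API_KEY>")
        return 1
    # Take the key from the command line so the script can be shared
    # without embedding credentials in the source.
    os.environ["SERPER_API_KEY"] = argv[1]
    # try/except keeps the full script running even when a step fails.
    try:
        run_one_step()
    except Exception as exc:
        print(f"step failed, moving on: {exc}")
    return 0

print(main(["exploring_gpt4all.py", "demo-key"]))
```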

Understanding How to Set Up a Large Language Model Agent with Neural Networks and Math

  • Large language models use neural networks together with mathematics to create coherent sentences
  • A local algorithm checks whether the generated words match a regular coherent sentence
  • The video demonstrates setting up an agent that interacts with the large language model
  • To run the script, one should remove the pass statements and run each step individually, rather than running all steps at once.

Original & Concise Bullet Point Briefs

With VidCatter’s AI technology, you can get original briefs in easy-to-read bullet points within seconds. Our platform is also highly customizable, making it perfect for students, executives, and anyone who needs to extract important information from video or audio content quickly.

  • Scroll through to check it out for yourself!
  • Original summaries that highlight the key points of your content
  • Customizable to fit your specific needs
  • AI-powered technology that ensures accuracy and comprehensiveness

Unlock the Power of Efficiency: Get Briefed, Don’t Skim or Watch!

Experience the power of instant video insights with VidCatter! Don’t waste valuable time watching lengthy videos. Our AI-powered platform generates concise summaries that let you read, not watch. Stay informed, save time, and extract key information effortlessly.

Welcome to the Inside Builder channel, large language model automators, Python experts, and my dear friends. Today we are exploring the power of GPT4All inside LangChain, testing with tools, agents, and chains.

The intention of this presentation is to work with GPT4All in your local environment. You may have already seen many videos showing how to run the model locally; this one goes a step further and works through the various ways of interfacing the GPT4All model with the class constructs in LangChain.

Before the presentation proper, a quick overview. GPT4All was released by Nomic AI, and you have probably heard a lot about it already, so I will not go into detail; the focus here is on getting it working in our environment. Before we start, I really want to thank the Nomic team, Lucas Reussell, who uploaded the quantized model to Hugging Face (we will be using his upload), and Thomas Anthony, who helped create the llama-cpp-python library that LangChain uses to interface with the GPT4All model. All these people have done a great job so that today we can actually start working with Python and LangChain; a lot of work has gone into this, and it lets us use the model far more easily in our own environments.

If you are wondering why test this at all: as I have explained before, a large language model is a kind of controller, the backbone for working with various interfaces such as your data stores, your files, and your services. In the near future, having a large language model that lives locally and that you can use at will is going to change how we interact with the world, so I hope you understand what is going on here and implement it in your own environment.

The way this video is structured is that I will introduce each test and then, as usual, share the notebook, and this time a script as well. In all the previous videos I only shared the Colab notebooks; this time I have also moved the commands into a Python script that you can run and test in your local environment. Only by running it locally can you understand what is actually going on; without that, it is extremely tough to judge how to use this technology. I hope you like these videos; do leave a like, share this with others, and subscribe to the channel for further updates on LangChain and Big Data related concepts.

With that said, let us go to the tests. Initially I work with the simple LLM concept: a plain LangChain-based LLM with no chain or tools. I did not get any output when testing that in the Colab notebook. When I created my LLMChain class object, I did get an output. The pandas DataFrame agent errored out, and the request chain errored out as well; this is all in the Colab notebook. In the script I have kept these steps but commented them out (I will show you), so that when things change in the future we can update them. Finally, I tried the LLM together with other tools and an initialized agent: when I wired up tools like the Google Serper API and initialized an agent, it was actually able to get the output, which honestly surprised me. You will see this in action in a moment.

The major challenge I take away from these tests is how we tokenize the prompts and the data given to the model; that is where the two failing tests broke. I have deliberately included the failed tests, because if I only shared the successes you might think this is easy. Believe me, a lot of research goes into this, and I want you to explore more as well; once we start exploring together, we can learn a lot. That is the real intention of sharing exactly what is going on: it is more useful for you and for the whole community. As I said, these people have moved the community a step forward, I would say a hundred steps forward.

So let us go to the Colab notebook. If you have come to this video directly and are not aware of LangChain or large language models, take a look at the playlist where I explain everything from the beginning, covering not only the LangChain library but large language models in general. The repo where the Jupyter notebook and the Python code are stored is linked in the YouTube video description; you can get the code from there. To open the Colab notebook, just use File, Upload notebook, and give the GitHub repo link; it will open this particular notebook.

Note this point: when you move to your regular environment, you need to install all these important libraries there, so make sure you install them before you work outside Colab. The first and foremost step is getting the quantized GPT4All LoRA model from Lucas's Hugging Face repo, where the model has been uploaded. We pull it, and inside the Google Colab notebook it gets populated in the /content directory, which is the root directory; once you have done this, you do not need to do it again. This is a 4.2 GB model, so make sure you have sufficient bandwidth, or you will end up in trouble without the model.

We are going to work with the LlamaCpp class inside langchain.llms. If you have already installed LangChain, do a pip install with the --upgrade option: the upgrade pulls the latest classes written by the LangChain developers into your environment; otherwise this particular class will error out, with LangChain claiming it does not have the class, so make sure you install it properly. I have already run it so as not to waste your time. I am using a very simple prompt template, and here is the important point: to initialize the model, all you need to do is pass the model path into LlamaCpp (with a capital C), and the model gets initialized. Behind this simple class initialization, a lot of code runs that is not shown to us; it has been completely abstracted.

The LangChain folks have done a great job here, and I want to leave a shout-out for them. It looks extremely simple, just creating a class, but it is not that simple if you consider the various libraries and code involved in loading this model. I was originally planning a much longer notebook, because it used to take multiple steps to get this model loaded into a LangChain object, that is, to create a large language model that can be operated by the LangChain environment; thanks to llama-cpp-python we no longer need that, and a single line is enough. After that, when I asked the question "Which NFL team won the Super Bowl in the year Justin Bieber was born?" directly against the LLM, it did not give any output, and it took a lot of time. Also be aware that it will take around four to five GB of RAM, so make sure your machine has enough, because you will be running all of this as a script too; I will show you the script in the command prompt in a moment.

Next I create an LLMChain, again a LangChain class, and send the same question through it, and this time we get the answer. You can see it here; I checked, and I think the answer is actually right, but whether it is right or wrong is secondary. The point is that the model properly works through the question, rephrases it, and responds back through the LLMChain; that is one of the things I really liked and wanted to share with you. So I got this chain working, and then the challenge starts.
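The chain step just described can be sketched as follows. `PromptTemplate` and `LLMChain` are the LangChain names used in the video; here, the template fill-and-call flow they perform is shown with the LLM stubbed so it runs without the model:

```python
# Real usage (assuming langchain is installed and `llm` is the
# LlamaCpp object created earlier):
#
#   from langchain import PromptTemplate, LLMChain
#   prompt = PromptTemplate(template=TEMPLATE, input_variables=["question"])
#   chain = LLMChain(prompt=prompt, llm=llm)
#   print(chain.run("Which NFL team won the Super Bowl the year "
#                   "Justin Bieber was born?"))

TEMPLATE = """Question: {question}

Answer: Let's think step by step."""

def run_chain(question: str, llm) -> str:
    # An LLMChain fills the prompt template with the question and
    # passes the completed prompt to the LLM.
    return llm(TEMPLATE.format(question=question))

echo_llm = lambda prompt: f"[model sees] {prompt}"
print(run_chain("Which NFL team won the Super Bowl the year "
                "Justin Bieber was born?", echo_llm))
```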
I got greedy: I wanted the pandas DataFrame agent working immediately so I could do lots of analysis, and I wanted to work with the request chain too, but both of these errored out. I had not executed this earlier, so let me run it now. What happens is that it enters the chain and then reports "failed to tokenize". What I understand is that the data I am pushing into the environment is huge, and when initiating this large language model I probably also need to understand what goes on internally, how many tokens are involved, and modify that. I have not done that particular study yet, so this could be one of my mistakes, but this is the result we see. Then I wanted to create a request chain, and there I started facing the same problem. The request itself was going out: I asked "What are the three biggest states in India and their population?", that is, a Google search via a request chain, not using any of the APIs. When I execute it, I get a similar kind of error, which I think is because of the token sizes or the context we are providing. It is still getting the data; you can see the data coming back. I probably should have zoomed in, sorry, I completely forgot, but I hope it is still visible. The point is that the data is returned, but because the LLM is unable to process it, it errors out; we still have to work on this. Apart from all these quirks, the final thing I did was use load_tools and initialize_agent. Just a minute while I update the Serper API key.
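The "failed to tokenize" errors above are consistent with overflowing the model's small default context window. As an assumption on my part (the video does not verify this fix), the LlamaCpp wrapper exposes a context-size parameter that could be raised, and chain inputs can be capped before they reach the model; `truncate_for_context` below is a hypothetical helper:

```python
# Possible mitigation (untested assumption): enlarge the context
# window when constructing the wrapper:
#
#   llm = LlamaCpp(model_path="./gpt4all-lora-quantized.bin", n_ctx=2048)
#
# And/or crudely cap what the chains feed the model, e.g. a fetched
# web page or a DataFrame dump:
def truncate_for_context(text: str, max_chars: int = 1500) -> str:
    """Rough character budget; not a real token count."""
    return text if len(text) <= max_chars else text[:max_chars]

page = "x" * 10_000  # stand-in for a large fetched search-result page
print(len(truncate_for_context(page)))  # 1500
```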
I have already loaded the model, and now I am loading the Google Serper tool; this is actually the Serper API key, and you just put your own key here. I initialize the agent, the regular process we follow, and then I ask "What is the weather in Delhi?", and you can see it actually gives the reply back.

Two points I want to make. First and foremost, give it sufficient time; do not be in a hurry, and test step by step. Second, make sure your system is up to date, not so much the software overall, but the Python packages, because a Colab notebook comes with a lot of packages pre-installed, and your local environment will not have them. Let me show you the script, which has already been shared with you; before that, note that the agent executor is still running, almost two minutes so far. Anyway, the script is nothing different: it is taken directly from the Colab notebook, and I have not changed anything except that in my script I have commented out the download, because I have already run it once and do not need to fetch the model again; when you start, uncomment it first. I go through the same process shown in the Colab notebook, with additional print statements added so you can follow what is going on. This is the template that gets created, and this is the first reply, which I have commented out.
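The load_tools/initialize_agent step can be sketched as below. The tool name "google-serper" and the agent string are LangChain's; the toy loop underneath only illustrates the observe-then-answer shape of an agent step, with both the search tool and the LLM stubbed:

```python
# Real usage (assuming langchain is installed and SERPER_API_KEY is set):
#
#   from langchain.agents import load_tools, initialize_agent
#   tools = load_tools(["google-serper"], llm=llm)
#   agent = initialize_agent(tools, llm,
#                            agent="zero-shot-react-description",
#                            verbose=True)
#   agent.run("What is the weather in Delhi?")
#
# Toy illustration of one step of the loop an agent performs:
def toy_agent(question: str, search, llm) -> str:
    observation = search(question)  # tool call, e.g. a Serper search
    return llm(f"Question: {question}\n"
               f"Observation: {observation}\n"
               f"Answer:")

reply = toy_agent(
    "What is the weather in Delhi?",
    search=lambda q: "overcast, 41 degrees Celsius",  # stubbed search
    llm=lambda p: p,                                  # echo LLM stub
)
print(reply)
```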
That first reply was not giving me any value, so when you run your script you can uncomment it and see what happens. Reply 2 uses the LLMChain, and it does give a reply; you can see it, and I will show it in a couple of minutes. Further on, I have wrapped the next part in a try/except block, because the agent errors when run with the provided data. The space-shortened CSV is also available inside the repo, I think in one of the project folders; I will share the link so you can get it, or you can put in your own CSVs, it does not matter, just edit and use a CSV file you have. But the challenge, as you understood, is that it will error because it is unable to tokenize. Then I use the request chain, and that fails again too, so I initialize that chain and wrap its reply in try/except as well. Why try/except? Because I am running a complete script: if any of the replies or prints fail, I just catch the exception and go to the next step; otherwise the script would stop. Also, this script requires your Serper API key to be provided along with the command when executing it, python exploring_GPT… followed by the key; that way I can share the code with you without thinking twice. Then I load the tools, initialize the agent, and this reply I will also share; I think it is done here, and we see the result: it says it is overcast at 41 degrees Celsius in Delhi right now. The point I want to make is that GPT4All works with about 4 GB of RAM and is easily loadable.

Having seen the script, let me connect to my machine and show you how to execute it; just give me a moment. I hope you can see the prompt: I have already git-cloned the entire repo and am entering the folder where the code lives, inside the projects directory, in the LangChain area, I think it is the exploring-GPT4All folder. If you do an ls you will see I already have the model here, the ggml model .bin file; keep the model in this location, because once you execute the script, the model gets downloaded to this location as well. You also see the space_shorten CSV I mentioned a moment ago; once you pull the repo you will have it. The exploring GPT4All .py document is the Python script we will execute, so I will just cat it to show you that it is the same thing you saw in the GitHub repo, nothing different. I will not be able to show you my Serper API key, so I will execute this now and get back to you.

The model has started loading. All of this happened inside the Colab notebook too, but it was not visible to us because it was hidden inside the cell; now you are seeing it as a step-by-step process. Give it time, because the system I am connecting to is a cloud instance with, I suppose, between four and eight GB of RAM, I do not exactly remember, and it takes a considerable amount of time, because the entire process goes through various kinds of loops. When it comes to large language models producing coherent sentences, it is not just neural networks; there is mathematics involved as well. When the data is produced by the large language model, or rather the neural network, the words are rechecked by a local algorithm for whether the particular word frequencies sufficiently match a regular coherent sentence; that kind of process goes on in the back end.

You saw that we got the first reply, the reply from the LLMChain, and the rest of the steps I have skipped: the agent setup is done, but the failing steps are not run here, and it jumps directly to the last step. Why did I skip them? I already showed you in the Colab notebook that they error out, and I did not want to spend time again, because each of those steps tries to fetch data and then errors. Instead of watching the errors again in the script, you can refer to the Colab notebook. You can uncomment them and run them yourself; understand that you have to uncomment the code and remove the pass, and then run it one step at a time. You do not need to run everything in a single go, you will just get frustrated by how long it takes; go step by step and you will understand very clearly how this works. Let us go back to the terminal once; the executor is still not finished, it takes almost two to two and a half minutes, which is a regular thing. Back to the PDF now.

I also want you to think about this, since you now have the GPT4All model right at your command prompt: how are you going to use it, and what purposes can you think of? The first thing I thought of is a personal assistant built from just a Python script; the script is already available, and all you need to do is write a while loop so that it keeps running and you can keep asking it things. You will have to wait a bit; it is probably not as fast as whatever you are seeing from GPT-4 or the models that come next, but all those models have one big disadvantage: they live outside your system. This particular model is inside your system, under your control, connected with LangChain, and you can do a lot with it using Python: log whatever you ask for, work with various kinds of interfaces and files, play with the token handling and understand how the tokenization works. As I always say at the end of my videos: practice. And this practice is going to be genuinely interesting; if you have done all the earlier practices, it becomes interesting now, and the steps from my earlier presentations will be very useful at this point. Let us see whether it is done; yes, in my case it has finished, so let me show you. The output has been populated here, and you can see the actual number of tokens and the number of runs it has gone through; you can get a lot of information. It says Delhi is at 20 degrees Celsius, so I think it is getting older data. That is fine; it is giving wrong information, and you can run it again. The point I am trying to make is that it is actually pulling the data; I suspect it is drawing on a different search result, and that is why the issue appears.
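The while-loop "personal assistant" idea mentioned above fits in a few lines. `assistant_loop` is a hypothetical wrapper, and any callable LLM (the loaded LlamaCpp object, for instance) can be plugged in; the demo run below feeds canned input instead of the real terminal:

```python
def assistant_loop(llm, read=input, write=print):
    # Keep asking the local model questions until the user quits.
    while True:
        question = read("you> ")
        if question.strip().lower() in {"quit", "exit"}:
            break
        write(llm(question))  # local inference is slow: give it time

# Demo run with canned input/output instead of the real terminal:
answers = []
questions = iter(["hello model", "quit"])
assistant_loop(
    llm=lambda q: f"[reply to: {q}]",
    read=lambda prompt: next(questions),
    write=answers.append,
)
print(answers)  # ['[reply to: hello model]']
```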
You can try experimenting with that, and with the script as well. Going back to the presentation: that is the point I wanted to share with you, and I hope you will find both the Jupyter notebook and the script helpful and useful. The link shown here is wrong, but I will share the correct link in the description. Also remember that the code will download the 4.2 GB model to your local environment; keep that in mind when executing the code, because if you run it blindly and suddenly find your disk space missing or your bandwidth gone, it is because the model has been downloaded. Thanks for watching; do practice, that is very important, share this information with others, and subscribe to my channel so that further updates regarding LangChain and the various other libraries related to large language models, Python, and Big Data will be shared with you. With that, until the next video, I take my leave. Practice, practice, practice.