AI offers boundless potential, from enhancing health care diagnosis to making our cities smarter to empowering us with education. The potential is immense. But while AI can revolutionize so much, realizing this potential also rests on our ability to solve some really important technical and societal challenges.

So I would like to talk with you about a new idea for machine learning that we termed liquid networks. We began to develop this work as a way of addressing some of the challenges that we have with today's AI solutions. Because despite the great opportunities, we also have plenty of technical challenges that remain to be solved.

First among the AI challenges is the data itself. We require huge amounts of data that get fed into immense models, and these models have huge computational and environmental costs. We also have an issue with data quality, because if the data quality is not high, the performance of the model will not be good. Bad data means bad performance. Furthermore, we have these black-box systems where it's really impossible to find out how the system makes decisions. And this is really problematic, especially for safety-critical applications.

So let me show you the essential idea behind liquid AI. I will show this to you in the context of an autonomous driving application, and then we can generalize to others.

Here is a self-driving car, which was built by our students at MIT using traditional deep neural networks. And it does pretty well. It was trained in the city, and it drives really well in a completely different environment. It can make decisions at intersections. It can recognize the goal. So it's pretty good, right?

But let me open the hood to show you how this vehicle makes decisions. You will see in the right-hand corner the map, and in the upper-left corner the camera input stream. The decision-making engine is the big rectangular box in the middle with blue and yellow blinking lights. There are about 100,000 artificial neurons working together to tell this car what to do. And it is
absolutely impossible to correlate how the neurons activate with what the vehicle does, because there are too many of them. There are also half a million parameters.

Take a look at the lower left-hand corner, where we see the attention map. This is where, in the image, the vehicle looks in order to make decisions. You see how noisy it is? You see how this vehicle is looking at the bushes and at the trees on the side of the road? So this is a bit of a problem.

Well, I would like to do better. I would like a vehicle whose decisions I can understand. And in fact, with liquid networks, we have a new class of models. Here you can see the liquid network solution for the same problem. Now you will see the entire model, consisting of 19 artificial neurons, liquid neurons. And look at the attention map. Look how clean it is and how focused it is on the road horizon and on the sides of the road, which is how I drive. So liquid networks seem to understand their task better than deep networks.

And because they are so compact, they have many other properties. In particular, we can take the output of the 19 neurons and turn it into a decision tree. That can show humans how these networks decide. So they are much closer to a world where we can have machine learning that is understandable.

We can apply liquid networks to many other applications. Here is a solution consisting of 11 neurons, and it is driving a plane in a canyon of unknown geometry. The plane has to hit these points at unknown locations. It's really extraordinary that all you need is 11 artificial neurons, liquid network neurons, in order to solve this problem.

So how did we accomplish this? Well, we started with the continuous-time neural network framework. In continuous-time networks, the solution, or the neuron, is defined by a series of differential equations. These models form a kind of temporal neural network family, which includes standard recurrent neural networks, neural ODEs, continuous-time RNNs (CT-RNNs), and now liquid networks. And it's really extraordinary
because, by using differential equations and continuous-time networks, we can very elegantly model complex problems, like problems that involve physical dynamics. For instance, in this case, we have the standard half-cheetah benchmark, and it can be modeled elegantly with these continuous-time networks. However, when you take an existing continuous-time solution and you model even a simple problem, like getting this half-cheetah to walk, you actually get performance that is not that much better than a standard LSTM. With liquid networks, however, you can do better.

OK, so how do we achieve this better performance? Well, we achieve it with two mathematical innovations. First of all, we change the equation that defines the activity of the neuron. We start with a linear state-space model, and then we introduce nonlinearities over the synaptic connections. When we plug these two equations into each other, we end up with a new equation. What's interesting about this equation is that the time constant that should go in front of x(t) is actually dependent on x(t). And this allows us to have neural network solutions that are able to change their underlying equations based on the input that they see after training. We also make some other changes, like changing the wiring architecture of the network. You can read about this in our papers.

So now, let's go back to the attention maps of a whole suite of networks: CNNs, CT-RNNs, LSTMs, and other solutions. Back to the driving-in-lane problem, you'll see that all previous solutions are really looking at the context, not at the actual task. And in fact, we have a mathematical basis for this result.
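The input-dependent time constant just described can be sketched concretely. Below is a toy Python illustration of a single liquid-time-constant-style neuron, simplified so that the nonlinearity f depends only on the input rather than on both the state and the input, and with made-up parameter values (w, b, tau, A); it is a minimal sketch of the idea, not the published implementation. The state follows dx/dt = -(1/tau + f(I)) x + f(I) A, so the effective time constant 1/(1/tau + f(I)) shifts as the input changes:

```python
import math


def ltc_neuron_step(x, I, dt, tau=1.0, A=1.0, w=2.0, b=0.0):
    """One Euler step of a toy liquid-time-constant neuron.

    Integrates dx/dt = -(1/tau + f(I)) * x + f(I) * A, where f is a
    sigmoid nonlinearity over the synaptic input. All parameter values
    here are illustrative, not taken from the talk or the papers.
    """
    f = 1.0 / (1.0 + math.exp(-(w * I + b)))  # sigmoid synapse
    dx = -(1.0 / tau + f) * x + f * A
    return x + dt * dx


def effective_time_constant(I, tau=1.0, w=2.0, b=0.0):
    """Effective time constant 1 / (1/tau + f(I)) for a given input I."""
    f = 1.0 / (1.0 + math.exp(-(w * I + b)))
    return 1.0 / (1.0 / tau + f)


# The neuron's decay rate changes with the input it sees:
print(effective_time_constant(-3.0))  # weak input, slower neuron
print(effective_time_constant(+3.0))  # strong input, faster neuron
```

This captures, in miniature, the claim in the talk: the coefficient in front of x(t) is not fixed, so the network can change its underlying dynamics depending on the input it receives after training.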
We can actually prove that our liquid network solutions are causal. In other words, they connect cause and effect in ways that are consistent with the mathematical definitions of causality.

Now, I promised you a fast solution. But these networks are defined by differential equations, so you might ask, do they really need numerical differential-equation solvers? Because that would actually be a huge computational hit. Well, it turns out we have a closed-form solution for the hairy equation that goes inside the neuron. And the solution has a good bound. It's good enough. You can see in this chart, in red, the ODE solution, and in blue, the solution with our approximation. And you see that they are really quite close to each other.

So these liquid networks can learn causal relationships because they form causal models. Unlike other models defined by differential equations, like neural ODEs and CT-RNNs, these networks, in essence, recognize when their outputs are being changed by certain interventions, and then they learn how to correlate cause and effect.

All right, so let me give you a final example to convince you that these networks are really valuable. Here, we have a different problem: we are training a drone how to fly in the woods. Notice that it's summertime. We give our drones examples of videos like the one you'll see here, and these are not annotated in any way. And we train a variety of models, for instance a standard deep neural network. Now, when we get the standard network trained in that environment to find the object and go to it, you see that the model has a lot of trouble. The attention is very noisy. And notice that the background is different, because now it's fall. The context of the task has changed. Because deep networks are so dependent on context, they don't do so well.

But look at our liquid network solution. It is so focused on the task, and the drone has no problem finding the object. We can go further, all the way to the winter, with the same model trained in the summer. And we get a good
solution.

And finally, we can even change the context of the task entirely. We can put it in an urban environment, and we can go from a static object to a dynamic object. The same model trained in the summer in the woods does well in this example. This is, again, because we have a provably causal solution.

So liquid networks are a new model for machine learning. They are compact, interpretable, and causal. And they have shown great promise in generalization under heavy distribution shifts.

Thank you.

[APPLAUSE]