Thank you so much, Curie, for inviting me, and thank you, Seth. I'm really honored to be here. This is definitely the biggest talk I'm giving since COVID, and I hear it's also one of the first in-person talks, so I'm feeling a little bit of heat, a little bit of pressure, but that's okay. Just bear with me. What I want to talk about today is resilience of autonomous systems: why I think it's important, why I think it's something that we need to work on, and why I think it contributes to the current state of the art. To set the scene, let me play a video. It hopefully will have sound, it will be around two minutes in length, and it's going to give a motivation for what I'll be speaking about.

[Video plays: air-traffic control audio of an in-flight emergency near Santa Fe. The pilot reports engine trouble and vibration, is cleared to land on any runway at Santa Fe, then announces he doesn't think he will make the airport, and ultimately lands on the frontage road off I-25. The audio is largely unintelligible in this transcript.]

These last few seconds were just to reassure you that no one died in the making of this video.
This is an actual video of air traffic near Santa Fe. And just to preemptively reassure you, I'm going to have a few more videos of bad-looking things, and no one died in any of those either. Okay, so let's try to unpack what happened in this video. Something went wrong with the aircraft. The pilot is not entirely sure what went wrong, and even the faults seem to be progressing: at first he says he's kind of losing his engine, then at some point he has a big vibration, then he has no engine anymore. It's not really clear what's going on. He is unsure whether he can land at Santa Fe. At first the air traffic controllers are trying to guide him towards Santa Fe, and at some point he says, I don't think I'm going to make it. He chooses an alternative site to land, which ended up being the frontage road off the highway, and he manages to land safely. Now, if we wanted to automate what happened here, how is this whole setup different from the classical methods of robust and adaptive control? I don't want it to seem like I'm not paying my due respect to those fields; they are certainly valuable, and I'll use some of their results to compare against what we have. But the big story in adaptive and robust control is: say something went wrong, we lost some of the actuators, there's some physical damage; we will still try to reach the original objective, under some lack of knowledge of the dynamics, or a change in the dynamics, or whatever it is. That's quite possibly impossible. No amount of adaptive or robust control is going to let you land in Los Angeles if you lose both of your engines over Nashville. It's not going to happen.
So what really happened here is that when something goes wrong, your original objective might not be reachable anymore. You simply might not have the capability to reach that objective. That's what happened here. The pilot was unsure; maybe he could have reached Santa Fe, but he certainly wasn't guaranteed to reach it. What happened is that the pilot looked around and said, hey, I believe these are the spots that I am guaranteed to be able to reach. He chose the best one of them, which was this frontage road, figured out how to get there, and eventually landed safely. And this story of resilience to bad things happening, of trying to figure out the new objective, or the closest objective, that you can still reach, goes across domains. For the non-aerospace folks out there, and I certainly don't think of myself as an orthodox aerospace person, there are a lot of stories to be told here: multi-agent network control, systems of UAVs, straight-up robotics where some of your actuators fail. This is a picture related to a recent grant that we started with NASA. The point in all of these pictures, in all of these stories, is that disasters, malfunctions, physical damage, hostile action will unavoidably, eventually happen; things will go wrong. The first question you might then ask is: when disaster strikes, can the system be driven to certifiably complete its mission? That question is at least partially answered by adaptive or robust control; the certifiable part is kind of iffy, but it is at least partially answered. The second question is: can we design the system to be able to certifiably complete any mission after any disaster?
We would love to be able to say, hey, before a disaster even happens, I can guarantee that no matter what happens, I will be able to complete my original mission. That's great, but probably unlikely. The third question, maybe a more realistic one, is: when a disaster strikes, can I immediately, online, at that point, figure out what missions the system is provably able to complete? And I want to emphasize provably: given my current imperfect, incomplete knowledge of the system dynamics, can I guarantee that I can complete a particular task? To introduce a mathematical framework for what I'll be talking about: generally I will focus on standard continuous-time, continuous-space control systems, x' = f(x, u), where x is the system state, u is the system input, and u lies in some set of permitted controls U. Now, many bad things could happen to this system, and combinations of them, but I'll be talking about three as part of this talk. The first is physical damage, which will possibly change the system dynamics. The second is partial loss of control; that's something we saw in this video. There could be an adversarial takeover, some sort of actuation failure, something like that. The third is actuator degradation: I used to be able to apply a certain amount of thrust, and I no longer have that capability, so the set of values that I can plug in has changed. With those three things, I get a new system, x' = f_hat(x, u, v), where u is the part of the controls that I still have authority over, v is the part of the original controls that I no longer have authority over, and (u, v) now needs to live in some set U_hat of smaller, reduced actuator capabilities. I'll be talking about these three, going from right to left.
In my mind, at least intuitively, these should go from easier to harder problems. Let me start with the first one, actuator degradation. This is a real problem: I found a picture on a boating forum where the person said, hey, the rudder on my sailboat sustained some physical damage, it was bent backwards, and so I can still move it to the left as much as I want, but I can't really do much to the right. Everything that I was capable of, I'm still capable of; it's just a smaller set of inputs that I can apply. So the question now is: given this actuator degradation, so I still have the original dynamics, it's just that U is now U_hat, what can I say about my certifiable capabilities? What can I say about what the system can reach? I'm just talking about reachability here; of course, one could talk about reach-avoid, about more complicated tasks, all of that. In theory, this question is not too hard. The dynamics are fully known, and even if the degraded actuator limits are not fully known, the new reachable set can be under-approximated by just computing the reachable set with under-approximated actuator limits. In practice, the reachable set, even of a fully known nonlinear system, is not really computable, definitely not computable in real time. Even meaningful under-approximations, and we want under-approximations because we want certifiability, are difficult. What happens in practice is the computation of an operational envelope, something done during the design phase of a system. It can take weeks or months: through extensive testing and computation, we figure out some subset of what the vehicle is capable of doing, and we say, okay, this is the operational envelope. So we know the reachable set for the nominal system; we spent months figuring it out, figuring out this operational envelope.
We also know a bound on the degradation. So our computation of the degraded reachable set need not start from scratch. I have no hope that after something goes bad in the middle of my mission, I will be able to recompute this new reachable set from scratch; that's not going to happen. So we stopped here and said, hey, our degraded reachable set, we shouldn't compute it from scratch, and we have an idea, at least nominally, of how to compute this new reachable set. The idea is: if I am able to reach a particular state using the old nominal control, and I can find a control that's fairly similar to the nominal one, within epsilon, whatever epsilon means, of the nominal control, then the newly reached state should not be further than some ball of size h(epsilon) away from the state I originally reached. Okay, but how do I determine this ball? How do I quantify this? This looks like some version of Grönwall's lemma. Grönwall's lemma is a standard tool that says: I have the original dynamics, and now I have slightly changed dynamics; what can I say about the divergence of the trajectories? We can do better than that, and a way to do better is by imposing some other assumptions on the growth of these dynamics. If the right-hand side is bounded by a particular function, then we can get a bound on this divergence that's better than, or generalizes, Grönwall's lemma; it's called the Bihari inequality. I won't spend too much time on it. The point is, the result depends on the wildness of the system dynamics: if I have a control that's really similar to the nominal one, and the dynamics are not too wild, then the new reachable state will not be too far away from the original reachable state.
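This kind of Grönwall-type divergence bound can be checked numerically. Here is a minimal sketch with a toy scalar system and constants of my own choosing (none of these numbers are from the talk): the nominal control and an epsilon-close control produce trajectories whose divergence stays under the classical bound eps * (e^(L*t) - 1) / L.

```python
import math

# Toy system: x' = -x + u, globally Lipschitz in x with constant L = 1.
# Nominal control u(t) = 0; degraded control u_hat(t) = eps (off by eps).
# Gronwall-type bound on divergence: |x(t) - x_hat(t)| <= eps*(e^{L t} - 1)/L.

L, eps, dt, T = 1.0, 0.1, 1e-4, 2.0

x, x_hat, t = 1.0, 1.0, 0.0
max_violation = -float("inf")
while t < T:
    x += dt * (-x + 0.0)          # nominal trajectory (forward Euler)
    x_hat += dt * (-x_hat + eps)  # trajectory under the epsilon-close control
    t += dt
    bound = eps * (math.exp(L * t) - 1.0) / L
    max_violation = max(max_violation, abs(x - x_hat) - bound)

print(abs(x - x_hat))  # actual divergence at time T, roughly 0.086
print(max_violation)   # <= 0: the bound held along the whole trajectory
```

The bound is loose here, which is exactly the speaker's point: with extra growth assumptions (Bihari-style), one can do better than plain Grönwall.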
As in similar proofs derived from Grönwall's lemma, we find the difference between the two dynamics and then bound it by itself: on the left I have this difference, and on the right I bound it by, again, a function of that difference. We can use Lipschitz continuity if needed, and then it reduces to Grönwall's lemma, but we can do better if more information is available. Okay, so I have this idea: if the Hausdorff distance between the nominal control set and the new, degraded control set is bounded by M, then for each nominal control signal I can find a new control signal within a ball around the original control signal, and then each state in the nominal reachable set maps to a state in the degraded reachable set within some h(M) ball. What that means pictorially, and in this case my ball is a rectangle because I can do different things in different coordinates, is: if my original reachable set is gray, then I know that in the rectangle around every gray point there is going to be a red point, where the red points are the off-nominal reachable set. I still have a bit of an issue: I know there is a bunch of these red points, but how do I know that everything in between these red points is actually reachable? How do I know that this reachable set is not disconnected? To handle that, there's another geometric trick, which is to go backwards: any state on the boundary of the new reachable set is also close to a state on the boundary of the old reachable set. So it's not just that I have this bound in one direction; I have it in the other direction too. And indeed, I can show that the entire red region is a conservative approximation of the reachable set. This turns out to be pretty useful. So, here is an example with Norrbin ship dynamics.
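Before walking through that example, the erosion step just described can be sketched minimally, under my own simplifying assumption that the nominal reachable set is an axis-aligned box and the divergence bound is a per-coordinate rectangle (all numbers here are hypothetical, not from the talk): shrinking the nominal set inward by the bound gives a certified under-approximation of the degraded reachable set.

```python
# If every nominal reachable point has a degraded-reachable point within a
# per-coordinate bound h = (h1, h2), and the same holds backwards on the
# boundary, then eroding the nominal reachable set by h yields a certified
# under-approximation of the degraded reachable set.

def erode_box(box, h):
    """Shrink an axis-aligned box [(lo, hi), ...] inward by h per coordinate.

    Returns None when the bound is so large that nothing is certified.
    """
    eroded = []
    for (lo, hi), hk in zip(box, h):
        if hi - hk < lo + hk:
            return None
        eroded.append((lo + hk, hi - hk))
    return eroded

nominal = [(0.0, 10.0), (-2.0, 2.0)]  # nominal reachable set (the gray set)
h = (1.0, 0.5)                        # divergence bound per coordinate
print(erode_box(nominal, h))          # certified degraded-reachable box (red)
```

For the real ship model the sets are not boxes, so the actual computation erodes a general set; the box case just makes the certification logic visible.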
It's a simple ship dynamics model, and we can see that the nominal reachable set is in blue, the real off-nominal, degraded reachable set is in red, and our inner approximation of it is of course conservative, but not by much; it's pretty decent. In particular, we have an example here where a boat is trying to pass close to a danger zone, shown in blue, without entering it. At some point, the boat suffers damage, here. At that point we want to figure out whether there is something in this new reachable set that allows us to pass next to the bad set, or whether I can't guarantee that I will avoid the bad set and need to jump off the boat, or stop, or whatever. It turns out that there is something in the new reachable set that avoids the bad set, and we can proceed to find the appropriate control. Okay, so this was the first story, actuator degradation. Let me move to the second story, partial loss of control, so, adversarial takeover. I have a video of that as well. Again, I guarantee no one died; you might find that impossible, but still. The pilot jumped out eventually; of course, if they had stayed in, it wouldn't have been pleasant. If you look at the last few seconds of this, let me show it on the board: you see that it's really difficult to control the right part of the plane, a part called an aileron. That's a legitimate control surface; it's just that, for whatever reason, it started applying its own inputs that weren't commanded by the pilot, and the pilot couldn't counteract them, or didn't know how to, and it ended up in a crash. So that's exactly the story we have here: there is a system input that is uncontrolled, and it's not a disturbance, not a little wind or something like that; it is possibly an adversarial, bad input. And there's a two-player game here.
Player 1, the controller, or what remains of it, wants to reach a particular state. Player 2, the adversary, the environment, whatever you want to call it, wishes to obstruct Player 1. The players play simultaneously, possibly with some coupling between them. The question is: can Player 1 win? Sub-questions are: can Player 1 win for any goal state and any starting state, and can Player 1 win under a time limit, if it needs to reach the state within ten minutes, things like that. It's not too hard to develop some intuition here. Player 1, intuitively, can win for reaching every state if it can cancel out the adversarial inputs and still has a little bit of control left over that allows it to steer, and that potentially fights the drift, if there is any drift. Now, in a linear-affine system the answer is kind of clear, because if you have bounded inputs and there is a drift, eventually the drift becomes too big and things don't work out; but we can still ask the question for bounded sets of target states in an unbounded state space, where fighting the drift might be possible. In all cases, we have this intuition that the available input needs to be stronger than the adversary. So if the original system dynamics are x' = Ax + B_bar u_bar, loss of control over several actuators means that the system shifts to x' = Ax + Bu + Cv, where v is the part I can no longer control and u is the part that I still can control. The notion of the available input being stronger than the adversary is that for whatever v is put in, there exists a u that cancels it out, such that 0 = Bu + Cv. And I said I need a bit more than that: I need 0 to be in the interior of this set.
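Here is a minimal sketch of this cancellation test, under my own simplifying assumptions, not the talk's general setting: B is a 2x2 invertible matrix and both u and v are box-bounded, so the cancelling control is u = -B^{-1} C v and, since that map is linear, it suffices to check the vertices of the adversary's box. The general case needs a set-containment check instead.

```python
import itertools

# Resilience check sketch for x' = Ax + Bu + Cv with |u_i| <= u_max (kept)
# and |v_j| <= v_max (lost). Resilient in the cancellation sense: for every
# v there is an admissible u with Bu + Cv = 0, with strictly positive margin.

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def resilient(B, C, u_max, v_max, margin=1e-9):
    B_inv = inv2(B)
    # u_req is linear in v, so its max-norm over the box is hit at a vertex.
    for signs in itertools.product((-v_max, v_max), repeat=2):
        u_req = [-x for x in matvec(B_inv, matvec(C, list(signs)))]
        if max(abs(x) for x in u_req) >= u_max - margin:
            return False  # 0 = Bu + Cv has no interior solution for this v
    return True

I2 = [[1.0, 0.0], [0.0, 1.0]]
print(resilient(I2, I2, u_max=2.0, v_max=1.0))  # True: u can out-muscle v
print(resilient(I2, I2, u_max=1.0, v_max=1.0))  # False: no margin left
```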
Now, of course, this is under certain assumptions, and you can ask whether this condition is necessary or sufficient; that depends on the framework. But let's say we have figured out that the system is resilient to the loss of authority. The second question we can ask is: how resilient is the system? I'm asking in the following sense: say I was able to reach a particular state within two minutes. Now something bad happens, and I'm still able to reach that same state, but it takes me ten hours. Am I still resilient? In theory, yes, I can still reach it; in practice, it's probably not going to work out for me. So what I want is to somehow compare the nominal optimal reach time, the one I could have gotten before the damage, to the worst-case optimal reach time after the damage, worst-case meaning whatever the adversary throws at me. And so we define this quotient. In this case we have a sup-inf: the interpretation is that the adversary chooses an input such that, whatever the controller chooses, the reach time will be large. You could ask similar questions: should we have an inf-sup here, should there be some sort of causality? This is mathematically the easiest version, and intuitively the easiest; I'm not saying the other questions are not equally important. This resilience quotient is a number between 0 and 1. If it's 0, there is no resilience: I essentially cannot reach the state anymore. If it's 1, I didn't lose anything by losing the actuators. So this is kind of cool: I have a number between 0 and 1 that says how resilient you are. Now, it turns out that for driftless systems, computing this quotient is not too hard; in general, it's three partially nested optimal control problems.
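For the simplest driftless scalar case, the quotient comes out in closed form. This is a toy sketch with illustrative constants of my own (not a model from the talk): the system x' = u + v reaches a target at distance d, with |u| <= a kept and |v| <= b lost to the adversary; the distance d cancels out of the quotient.

```python
# Quantitative resilience sketch for the simplest driftless system
# x' = u + v with |u| <= a (kept) and |v| <= b (lost to the adversary).
# Nominal optimal reach time per unit distance: 1/(a+b) (both inputs help).
# Worst-case optimal reach time: 1/(a-b) (adversary pushes straight back).

def resilience_quotient(a, b):
    if a <= b:
        return 0.0               # adversary can stall the system forever
    t_nominal = 1.0 / (a + b)
    t_worst = 1.0 / (a - b)
    return t_nominal / t_worst   # = (a - b)/(a + b), a number in [0, 1]

print(resilience_quotient(2.0, 1.0))  # 1/3: reaching now takes 3x longer
print(resilience_quotient(1.0, 1.0))  # 0.0: not resilient at all
print(resilience_quotient(2.0, 0.0))  # 1.0: nothing was lost
```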
For driftless systems, a lot of things simplify, and for integrators things become geometric: optimal control turns into optimization, and we get these weird geometric pictures that allow us to figure out the quantitative resilience. This is something we call the Maximax Minimax Quotient Theorem; it's like a 20-page proof, but it works, you can do it. Now, for general linear systems, even just for linear systems, we know the optimal control signals are bang-bang, and they're impossible to determine analytically; the set of velocities I can move in at any point depends on where I am. So it's not very likely that I will be able to figure out the exact quantitative resilience. Our idea instead is to get bounds: get a lower bound on the nominal optimal reach time, purely via Lyapunov-type arguments, so a bound on V'(x(t)); get an upper bound on the nominal optimal reach time; get a lower bound on the worst-case optimal reach time; get an upper bound on the worst-case optimal reach time; combine those four, and get lower and upper bounds on the quantitative resilience. It's not going to give us an exact answer, but it will tell us something like: hey, I know that if damage happens, you won't waste more than twice the time it would have taken you nominally. You can be satisfied with that or not, but it's something. The first thing we decided to apply this to is, of course, fighter jets. We found a model of a fighter jet developed by Swedish researchers, and we tried to see whether the system is resilient to the loss of its control surfaces. It turns out that it is resilient to the loss of some of the control surfaces, but not to the loss of others; in other words, in our idealized model, the pilot could not have done anything, the system was simply not resilient to that loss.
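The "combine those four" step can be sketched directly: lower and upper bounds on the nominal and worst-case optimal reach times sandwich the resilience quotient. The numbers below are placeholders of my own, purely to show the arithmetic.

```python
# Bounds tN_lo <= tau_N <= tN_hi on the nominal optimal reach time and
# tM_lo <= tau_M <= tM_hi on the worst-case optimal reach time sandwich
# the resilience quotient r = tau_N / tau_M.

def quotient_bounds(tN_lo, tN_hi, tM_lo, tM_hi):
    assert 0 < tN_lo <= tN_hi and 0 < tM_lo <= tM_hi
    return tN_lo / tM_hi, tN_hi / tM_lo

lo, hi = quotient_bounds(tN_lo=1.0, tN_hi=1.5, tM_lo=2.0, tM_hi=3.0)
print(lo, hi)  # r is in [1/3, 3/4]: damage costs at most 3x the nominal time
```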
There are other applications: we were doing some work with spacecraft attitude control, and some with opinion dynamics, how can you still control people's opinions if you lose power over some of the media; of course, this is something that dictators would be very interested in. Altogether, we're very interested in this kind of work. So let me now go to the third part of this talk, which is, at least nominally, the hardest: changing dynamics. A bad thing happens and my dynamics change. In such a case I have x' = f(x, u), where f is now new; it's not the old f, and it's unknown. And again, it's not a disturbance: if I physically lose a part of my airplane, a part of the vehicle, the dynamics will structurally change. It is not a disturbance, not one parameter or a finite number of parameters that change; it's new dynamics. Robust and adaptive control deal with disturbances or with unknown, finitely many parameters; importantly, they generally assume that the original target remains reachable. As we showed, that's just not the case sometimes. So let me give, on the board, an example of this happening. Again, no one died, and if you're wondering how, those were RC airplanes. So the airplane that was hit: suddenly its dynamics were way off. At that point, it couldn't just try to continue to its original happy location; at that point it wants to figure out where it can land safely, what it can do. So what we are interested in is this: after a change in dynamics, the system can almost certainly no longer use the same control law to reach the target. In fact, the target might not be reachable using any control. We're interested in what the system is certifiably capable of reaching. Of course, if you know nothing, you can assure nothing: if the dynamics can be anything, then anything can happen.
However, if you do have some set of possible dynamics, however large or small, finite or infinite, then you can try to compute the reachable set for each of those dynamics and intersect all of those sets. What you get is certainly reachable under all of these dynamics; it doesn't say how it's reachable, but it certainly is reachable. Now, easier said than done: I'm trying to intersect infinitely many reachable sets of nonlinear systems. I'm surprised PowerPoint didn't complain when I wrote this. In theory, sure, this is well defined; in practice, how do I compute it? Our idea is to figure out which velocities the system can proceed in at a particular time. Let's say that after a short amount of learning, I can figure out: hey, my local dynamics right now are this. If I also have some sort of Lipschitz bound, or some other bound, on how quickly these dynamics change, then at least for a short time I will be able to say something about what is guaranteed to be reachable. So now I try again to do something pretty hard, which is to intersect infinitely many of these guaranteed velocity sets. The velocities that are guaranteed at time t, and of course this is not the only way to approach it, are the velocities available at time 0, modulo, in some way, the maximum wildness of the system. By maximum wildness I mean some sort of Lipschitz bound on the change in dynamics, which I can often get from physical knowledge, from design parameters; I have a shot at getting that. I can also figure out the local dynamics by learning in an arbitrarily short time, which ideally I can do; the learning part is essentially what's called myopic control, which we worked on previously.
For control-affine systems, it turns out that if I know the dynamics at time 0 and I know a Lipschitz bound, the guaranteed velocity set is an intersection of a bunch of sets which are all translated and rotated copies of more or less the same object; there are infinitely many of them, and in our case that object is an ellipse. So how do I compute that? Well, an intersection of infinitely many ellipses is sadly not an ellipse, and it's hard to determine geometrically, but you can try fitting a maximal ball in there, and at least you get an under-approximation of some sort. Hopefully it will be good; maybe not, depending on how elongated this object is. We can do better than that: we can try to fit a more complicated object. This is something my students submitted recently. We can try to fit an ellipse in there, or a polygon of some sort, still some geometrically simple object, and we get a reachable set under-approximation. In this case, the system's true reachable set is blue; of course, we cannot compute this true reachable set, because we don't know the true dynamics of the system, we only know them locally at time 0, plus the Lipschitz bound. Our red and green approximations are under-approximations of this reachable set obtained from these guaranteed velocity sets: the red one is obtained from the naive maximal ball, the green one from the fitted polygon of velocities, and you can see that it kind of looks like an oval polygon. We did that on a bunch of examples, including a quadcopter. Of course, as time progresses, these approximations become worse, but at least over a short time we can say something.
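One way to turn these guaranteed velocity sets into a certified reachable distance, under my own simplifying assumptions (the fitted object is a ball of radius r0 centered at the origin, i.e., full actuation, and it shrinks linearly with a wildness bound L; r0 and L are illustrative numbers, not from the talk):

```python
# At time 0, a ball of radius r0 of velocities is guaranteed; the wildness
# bound L caps how fast the unknown dynamics can drift, so at time t only a
# ball of radius max(0, r0 - L*t) is still guaranteed. Integrating that
# radius gives a distance within which every state is certifiably reachable:
#   R(T) = r0*t* - L*t*^2/2,  t* = min(T, r0/L).

def guaranteed_radius(r0, L, T):
    t_star = min(T, r0 / L)  # beyond t_star nothing is guaranteed anymore
    return r0 * t_star - 0.5 * L * t_star ** 2

print(guaranteed_radius(r0=1.0, L=0.5, T=1.0))  # 0.75
print(guaranteed_radius(r0=1.0, L=0.5, T=5.0))  # 1.0 = r0^2/(2L), the cap
```

The cap r0^2/(2L) mirrors the speaker's remark that the approximations degrade with time: past a certain horizon, this scheme certifies nothing new.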
Okay, so now hopefully I sold you, or tried to sell you, on these three stories. I claim that I'm not about to retire: there is still a lot of work to be done. First of all, there are a lot of technical assumptions that I didn't mention here. For instance, for loss of control authority, we only do things with linear systems; for unknown dynamics, we kind of need full actuation, things like that. Importantly, we can be smarter. For instance, for unknown systems, I might not just know the Lipschitz bound on the changing dynamics; I might know that some actuators simply do not affect some of the states. Say it's a linear system and I know that some entries are zero: that should be useful, that should give me a better bound on the reachable set. I'm also interested not only in what can be reached, but in how to reach it, and this is something we're trying to do by merging this guaranteed reachability with robust or myopic control. Finally, we're interested not just in reachability; we're interested in, hey, can I reach something without dying first? These are reach-avoid tasks. There are many other things as you move towards implementation: computational limitations, time delays in receiving information from sensors or actuators, partial observability. Our medium-term goals are to combine a lot of these scenarios. Partially we did that: we have something about actuator degradation with disturbances. We want to, for instance, talk about partially unknown dynamics with partial loss of control, things like that. We want to talk about end-to-end planning, learning, and control. First, I want to figure out what tasks can be provably completed. Second, I need to figure out what I need to learn about the dynamics in order to complete a task that is provably completable.
And third, once I want to complete the task, I want to have an assured control law. I want more physics-based and design-based results, so, exploitation of significant, unchanged prior knowledge for better estimation. Long-term, we want to validate this on real systems. We want to use sensors and perception to recognize the fault type: so far we just said, oh yeah, I know that I don't have control over this actuator anymore; how do we know that? We want to use fault detection, sensor fusion, and state estimation. We want to deal with complex missions in high-fidelity simulations: not just linear reachability missions, but missions in hybrid systems, missions with multiple objectives or a sequence of objectives, which goes through automata and hybrid systems. That's something of a lot of interest in urban air mobility and overall unmanned air mobility, particularly for the Department of Defense. Finally, we want to put this on actual hardware, implement it on board, and in order to do that we need to talk about real-time computation and time-delayed control. There are a lot of applications. We've done some work with a small quadcopter that we have in our lab. We're currently engaged with NASA on their high-fidelity simulation of a lander on a moon of Jupiter: a big robotic arm that moves and needs to stay resilient, or provide some sort of useful information and useful sampling, for as long as possible. We're also doing some work with the RoboSimian with Kris Hauser from UIUC. And finally, on a topic that's far from robotics or aerospace, we're also interested in the resilience of power networks when an adversarial cyber actor or some sort of disaster puts some of your power sources, some parts of your power network, out of commission.
What can we do about that? Can we ensure that the system still retains its capability to complete the objective and provide service? The point is, I claim that dealing with abrupt mid-mission events is crucial for autonomy in challenging environments. I also claim that while classic robustness and adaptation are amazing, they are not enough: we need task assignment and we need assured resilience. Our current results take a step in that direction; we're able to compute reachable sets with unknown dynamics, with actuator degradation, with partial loss of control. There is still a lot more to be done. I'm interested in autonomous task assignment, so actually figuring out, the way the pilot figured out that the frontage road was the best place to land; in real-time learning, so updating this knowledge of the dynamics; and in control for complex missions in challenging environments. I should acknowledge NASA, whom I forgot to put on the slide a third time, and also our work with AFWERX, so the Department of the Air Force, and the Discovery Partners Institute. And of course my students, who, among others, are taking the lead on this: on actuator degradation, Hamza El-Kebir; on partial loss of control, Jean-Baptiste Bouvier, together with our amazing undergraduate students; and Taha Shafa, who is doing the work on unknown dynamics. Thank you so much. And I'm sorry for the mess-up with the PowerPoints; who knew there are multiple versions of PowerPoint?

[Audience question, largely inaudible in the transcript: the degradation analysis compares reach times at each moment in a game-theoretic, adversarial setup, but it assumes the degradation is a one-time event; how would one handle degradation or dynamics that keep changing over time?]
So I don't think that betrays a failure of what I was talking about; I think that's actually a very clear challenge to what we're doing. The part on actuator loss, on degradation of your capabilities, relies heavily on the fact that I can still control all remaining parts equally at every time, that the machine doesn't change: this is a one-time event that happened at the beginning, and now I just operate with what's left. It's not a continuous degradation, a continuing loss of actuation capability. That is a much harder thing, because I am losing these capabilities over time, or the adversary can do different things over time, and I think it's a big challenge to try to combine the two. My guesstimate on how one could try combining them, and for all the hundreds of members of the media in this room, please don't quote me on this: I would try to put this notion of changing dynamics, of loss of actuator capabilities over time, into the Bihari inequality, getting an essentially worse inequality, one that accounts for the fact that things might change over time. That gives me a worse bound, and then I'd try to combine them that way. I'm not promising anything. Does that answer your question? Thank you. Yes, please.

[Audience question, largely inaudible, about long-term guarantees.]

Yeah, absolutely. All long-term bets are off, because there is no long-term knowledge: the only knowledge I use right now is that I know something about the dynamics right now, plus a big bound on what can happen over time, and of course, as time progresses, that bound gets worse and worse. The idea is that I will use this work to provide myself short-term guarantees, to say, hey, there is a way for you to not die in the next two seconds and perform a particular maneuver.
In those two seconds, I will be able to learn something, whether actively or not; I'll be able to collect some information. Then I will have a better idea of what I'm working with, and maybe next time my bound won't be just two seconds; maybe I'll be able to say, hey, for the next five seconds you are capable of doing something. So this is something that we're really interested in: this framework needs to be combined with online learning. It shouldn't be a one-time thing, because a one-time thing will only give you short-term guarantees. Great question. Thank you.

[Audience question, largely inaudible, about what sensing and perception capabilities the framework assumes.]

In some sense, none, and so the bar is very low right now. We are playing with some little toy examples where there's more to it, but right now we assume that I know the system state entirely, perfectly, immediately; that I am able to plug in any control entirely, perfectly, immediately; and that I am able to see any adversary's input entirely, perfectly, immediately. That's obviously not going to be true. So the first thing you might want to consider, before talking about the technical aspects of which sensors I'm using and things like that, is just to develop the theoretical framework for, hey, I'm not able to see x(t); perhaps I'm only able to see the standard story of y(t), which is some function of x(t). Or, and this is something my student is working on right now, instead of being able to see the adversary's input v(t) at every time t, I'm only able to see v(t − τ) for some delay τ. What can I say about my guaranteed resilience then? Once we have a grip on that, we can start talking about specific perception capabilities and all of that, which I agree is a really interesting problem; I just don't think we're there yet.

[Audience question, largely inaudible, about the method.]

Yeah, it's my method; I love it. The method that I'm trying to pitch here is really easy.
Of course, it only works in an idealized system; so far we've developed it in an idealized system that forgets about all of the other issues I just raised in answering the previous question. The idea is this: you can figure out your local dynamics in an arbitrarily short amount of time by plugging in different test controls and seeing how the system responds. If you can do that in an arbitrarily short amount of time, and you have perfect resolution on how quickly you can change these test controls and so on, you can get a model of your local dynamics, of just what's happening right now, with an arbitrarily small error. Then, of course, you would have to try to keep doing that forever, or start using more and more standard system identification methods and so on. But that's the story; we call it myopic control. It's great, you should give it a shot. Please, yeah.

[Audience question, largely inaudible, about how to balance learning against pursuing the objective.]

I think that's a very good question, and it's certainly something of interest to the learning community altogether. It comes down to what the community calls exploration versus exploitation. The exploitation story is: hey, I know very little, but I know something, so let me just stick with what I know. And exploration is: hey, let me just try learning for the sake of learning and hope that it gets me somewhere better. Ultimately, there are two extremes of what you can do. One extreme is to say, hey, I'm not going to actively learn at all; I'm going to stay within what is guaranteed. That will already enable me to collect some information. I'm not really choosing what information to collect, I'm not trying to figure things out, but I will get more data, and hopefully this data will be helpful. The other option, and this is again very high-level and very difficult technically, is the following.
Let's try to develop some sort of probabilistic interpretation of this. Say, hey, this set of velocities is guaranteed; I can get to them with 100 percent certainty. If I'm not happy with any of those velocities, maybe there is a somewhat bigger set that I can still reach with 99 percent certainty. Am I willing to pay the price of that 1 percent in order to get a lot more data? And that comes down to uncertainty quantification: how do I know which action will provide me with what data, and with data of what quality? That's something I haven't mentioned at all that we're very interested in: trying to figure out which actions are most useful for figuring out the system dynamics while, ideally, staying safe. That's the big story. It's a massive question, it's an amazing question, and it's also worth about 200 PhD theses. Thank you all.
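[Editorial note: as a reference for the Bihari inequality floated in one of the answers above, the standard Bihari–LaSalle comparison result reads as follows. The idea of folding time-varying degradation into a growing gain f, which worsens but does not invalidate the bound, was the speaker's speculation, not an established result.]

```latex
% Bihari–LaSalle inequality (standard form): for nonnegative u, f and a
% continuous, positive, nondecreasing w,
\[
u(t) \;\le\; \alpha + \int_0^t f(s)\, w\bigl(u(s)\bigr)\, \mathrm{d}s
\quad \Longrightarrow \quad
u(t) \;\le\; W^{-1}\!\left( W(\alpha) + \int_0^t f(s)\, \mathrm{d}s \right),
\]
% where $W(x) = \int_{x_0}^{x} \mathrm{d}y / w(y)$ for some fixed $x_0 > 0$,
% valid for all $t$ such that the argument stays in the domain of $W^{-1}$.
% With $w(y) = y$ this reduces to the Grönwall bound
% $u(t) \le \alpha \exp\!\bigl(\int_0^t f(s)\,\mathrm{d}s\bigr)$.
```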
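[Editorial note: the myopic-control idea described in one of the answers above, identifying the local dynamics in a very short time by injecting test controls and watching the response, can be sketched roughly as below. This is an illustrative sketch under the idealized assumptions the speaker states (perfect, noise-free, instantaneous state measurement and dynamics locally linear in the control); all names are hypothetical and this is not the speaker's actual implementation.]

```python
import numpy as np

def estimate_local_dynamics(plant_step, x0, m, eps=1e-3, dt=1e-4):
    """Fit the local model x_dot ≈ f + B @ u around state x0 by
    injecting one short test control per input channel.

    plant_step(x, u, dt) -> next state (treated as a black box).
    m: number of control inputs. Returns (f_hat, B_hat).
    """
    # Drift: finite-difference response to zero control.
    f_hat = (plant_step(x0, np.zeros(m), dt) - x0) / dt
    # One small test control along each input direction e_i.
    B_hat = np.column_stack([
        ((plant_step(x0, eps * e_i, dt) - x0) / dt - f_hat) / eps
        for e_i in np.eye(m)
    ])
    return f_hat, B_hat

# Toy plant whose dynamics the estimator never sees: x_dot = A x + B u.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])

def plant_step(x, u, dt):
    return x + dt * (A @ x + B @ u)  # Euler step of the true plant

x0 = np.array([1.0, 0.0])
f_hat, B_hat = estimate_local_dynamics(plant_step, x0, m=1)
# f_hat recovers the local drift A @ x0 and B_hat recovers B,
# up to the O(dt), O(eps) error of the finite differences.
```

Shrinking `dt` and `eps` tightens the estimate, which is exactly the "arbitrarily short time, arbitrarily small error" claim in the idealized setting; with measurement noise or delays, that limit no longer holds.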