Our speaker is Dr. Babak Mahmoudi. Dr. Mahmoudi completed his undergraduate training in electrical engineering and artificial intelligence. He transitioned to biomedical engineering for his PhD at the University of Florida, where he developed a framework for designing intelligent neural interface control systems using reinforcement learning, enabling automatic decoding of brain signals via closed-loop interaction with the nervous system. Currently, he is an assistant professor in the biomedical informatics and biomedical engineering programs at Emory and Georgia Tech, and he is also the director of the Neuroinformatics and Intelligent Systems Lab at Emory. His research is centered at the interface between artificial intelligence and neuroscience, and he is leading multiple collaborative projects in this domain on the design, development, and implementation of various AI platforms for neural decoding and closed-loop modulation. Applications include memory, movement disorders, epilepsy, and psychiatric disorders. I'm personally very excited for his talk today, and I'm sure many of you are, so I will hand Dr. Mahmoudi the floor. Thank you for the introduction, and thank you for the opportunity to talk about some of the work that we're doing in the area of AI and closed-loop neuromodulation. Good morning everyone. So today I am going to talk about closed-loop neuromodulation from an AI standpoint, and I will cover several areas in this domain. I will talk about AI for closed-loop neuromodulation as a new paradigm. I will also talk about some system design elements for biomarker discovery, automated closed-loop control, and closed-loop optimization. And I will also talk about platforms for the design and implementation of such systems in clinical and experimental settings. So neuromodulation is gaining widespread application as a therapy and also as a research tool. Many of you are familiar with deep brain stimulation.
This is a therapy that has had FDA approval for movement disorders since 1997, and new generations of deep brain stimulation devices are being developed and used in clinical settings for various conditions in addition to movement disorders, such as epilepsy, psychiatric disorders, and depression. Neuromodulation has also been used as a tool for modulating the peripheral nervous system, and there is a very active area, bioelectronic medicine, whose goal is to control the function of organ systems by modulating the information pathway from the brain to the organ systems, for example via vagus nerve stimulation. So with these systems, there is a need to design control systems. Neuromodulation typically acts on the function of the nervous system, and like any engineering system, we need to think about the control strategy. The first generation of these devices was open loop, meaning the devices did not have any sensing capabilities. This is one paradigm and one approach to neurostimulation: you simply deliver certain stimulation parameters for a prespecified amount of time. The next step was adding sensing capability and building responsive closed-loop neurostimulation. There is a device of this kind that got FDA approval in 2017 for epilepsy, where there is an input, the input passes through detectors, and if certain predefined events are detected, a predefined amount of stimulation is delivered. The third and more advanced approach for designing closed-loop stimulation is adaptive stimulation, where you have a feedback loop and you want to control a certain property at the output of the system by modulating the input and changing the stimulation parameters. So there is an ever-increasing complexity of neural interface systems.
The parameter space is increasing, the number of contacts from which data are recorded is increasing, and so is the amount of information collected using neural interface technologies, and it becomes much more challenging to identify optimal stimulation strategies. In addition, in controlling the nervous system as a complex system, we are dealing with a dynamical system that changes over time, and that requires more advanced strategies. So I will talk about a paradigm in which we can build a little bit more intelligence into these devices. The idea here is to allow the device itself to learn the optimal strategy based on interaction with the brain. We formulate this as reinforcement learning in the context of artificial intelligence and machine learning: there is an agent, the agent represents the control strategy, and by taking actions it changes the state of the brain. Depending on the desired output or the desired property that we need from the recordings from the brain, we define a reward for this agent, and through this interaction the agent learns to maximize its measure of reward. That will basically lead to identifying the optimal control strategy in order to optimally modulate the brain. This is the general concept of what we mean by designing intelligence into these closed-loop control systems. With regard to implementing this paradigm, there are certain elements that are needed, and this can be used in a wide range of applications and neural sensing and neuromodulation modalities. In fact, in addition to being a therapeutic approach, this is a powerful tool to understand causal mechanisms: controlling certain aspects of neural activity and studying the subsequent behavior. So in this approach, what we need first is to identify a set of biomarkers.
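As a toy illustration of this agent-environment loop, here is a minimal tabular Q-learning sketch in which a hypothetical one-dimensional "brain state" is nudged toward a target level. The states, actions, reward, and all hyperparameters below are invented for illustration; they are not the speaker's actual system.

```python
import random

# Toy sketch: "brain state" is an integer level 0..10, the desired state is 5,
# actions nudge the state down / hold / up, and reward is highest at the target.
ACTIONS = (-1, 0, 1)
TARGET = 5

def step(state, action):
    """Environment: apply an action, return the next state and the reward."""
    nxt = min(10, max(0, state + action))
    return nxt, -abs(nxt - TARGET)

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning: learn a control policy purely from interaction."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(11) for a in ACTIONS}
    for _ in range(episodes):
        state = rng.randrange(11)
        for _ in range(20):
            if rng.random() < eps:                      # explore
                action = rng.choice(ACTIONS)
            else:                                       # exploit current values
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward = step(state, action)
            target = reward + gamma * max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = nxt
    return q

def policy(q, state):
    """Greedy action under the learned values."""
    return max(ACTIONS, key=lambda a: q[(state, a)])

q = train()
```

After training, the learned policy pushes the state toward the target from either side, which is the essential behavior the speaker describes: the controller is discovered from interaction, not programmed in.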
Either we have these biomarkers a priori based on some hypothesis, or we use a data-driven approach to identify those biomarkers from multi-modal recording systems. Then we map this onto neural stimulation and neuromodulation systems, which could range from electrical stimulation in humans to optogenetics in rodents and electrical stimulation in non-human primates, just as examples. We then learn from these approaches, develop algorithms, and use this knowledge to build platforms that can facilitate the design of these systems. So in summary, the design elements of AI-based closed-loop neuromodulation boil down to three main areas: identifying target biomarkers and understanding how stimulation affects those biomarkers; designing control systems and interactive learning strategies to modulate those biomarkers; and building platforms for the design and implementation of such closed-loop systems, because one important aspect of this paradigm is the complexity of designing such systems, both in prototyping and in implementation in experimental and clinical settings. I will start with one use case on designing automated DBS programming for tremor control, a collaboration with Svjetlana Miocinovic here at Emory. The goal is basically to build an automated programming system. When patients are implanted with DBS leads for Parkinson's disease, after the implantation surgery they go to the clinic, and a clinician evaluates the patient's response to stimulation and adjusts the stimulation in order to achieve optimal therapeutic efficacy and minimize side effects. There was a study back in 2018 that used wearables.
More specifically, a watch with an accelerometer, to record the tremor in an objective fashion from these patients, and they used those kinematics as an endpoint and target biomarker, if you will, to identify the optimal stimulation parameters. The strategy that they used was a grid search: they sampled stimulation parameters over a grid and then tried to identify which stimulation parameters had the best effect, where the stimulation parameters were the electrode contacts and the amplitude of stimulation on each contact. So we started with offline data that had been collected, to study how different patients responded to different stimulation. For this, we used a function approximation technique, a Gaussian process. This is similar to other techniques you might be familiar with, for example linear regression or a neural net, and we used it to model how stimulation affected tremor in different patients. As you can see here, for one particular patient at different states, the effect of stimulation was different; that implies some state dependency in the response to stimulation. And across different patients, on the right, there are differences in how the therapy is affected by different stimulation strategies. One advantage of a Gaussian process is that it gives us some level of uncertainty about the regression: here, the surfaces above and below the middle surface show the upper and lower bounds, and the middle surface represents the mean response of the stimulation outcome. We used this approach to test a few stimulation control strategies, and we identified that Bayesian optimization was an effective technique to identify the optimal stimulation parameters.
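A minimal sketch of the Gaussian-process idea described here, showing how the posterior gives both a mean response and the upper/lower uncertainty bounds. The RBF kernel, its hyperparameters, and the one-dimensional "stimulation amplitude → response" data below are illustrative assumptions, not the study's actual model.

```python
import numpy as np

def rbf(a, b, length=0.5, var=1.0):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-4):
    """GP posterior mean and standard deviation at the test points."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    Kss = rbf(x_test, x_test)
    Kinv = np.linalg.inv(K)
    mean = Ks @ Kinv @ y_train
    cov = Kss - Ks @ Kinv @ Ks.T
    std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
    return mean, std

# Hypothetical observed responses at a few sampled stimulation amplitudes.
x_train = np.array([0.0, 1.0, 2.0, 3.0])
y_train = np.sin(x_train)                 # stand-in tremor response
x_test = np.linspace(0.0, 3.0, 31)
mean, std = gp_predict(x_train, y_train, x_test)
upper, lower = mean + 2 * std, mean - 2 * std   # the bounding surfaces
```

Uncertainty collapses near sampled settings and grows between them, which is exactly what makes the GP useful for deciding where to probe next.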
The goal here is to find the bottom of the surface. This is like minimizing a function, a function of the stimulation parameters that expresses what the response to those stimulation parameters would be, and the idea is to find the optimal stimulation parameters corresponding to the minimum of the mean surface. So we implemented this from scratch, in a real-time fashion. We built a system so that when a patient comes to the clinic, the system is able to learn just from interacting with the patient: it calculates the target objective function, finds the optimal stimulation parameters, and tries to achieve the best setting. Of course, we want to minimize the time that it takes for the optimizer to find the stimulation parameters, and we used that as one of the objectives for designing and testing the system. We were working with an older generation of DBS systems that were not designed for such experiments, so we collaborated with the Medtronic group at Utah to build an interface for these devices through which we could communicate and implement this in a distributed fashion, because the devices themselves did not have any computing capabilities. We implemented a workflow for the design of this optimization approach, and with this workflow we basically distill the knowledge of the clinician into an algorithmic workflow. We tried two strategies: one was using monopolar stimulation and trying to optimize with monopolar stimulation, and the other was providing a decision support tool so the clinician could decide whether to implement more advanced stimulation strategies such as bipolar stimulation.
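The Bayesian-optimization search described at the start of this section can be sketched as a loop that fits a Gaussian-process surrogate to the settings tried so far and picks the next setting by a lower-confidence-bound acquisition. The toy quadratic objective (a made-up "tremor score"), kernel, and all hyperparameters below are assumptions for illustration, not the clinical system.

```python
import numpy as np

def objective(x):
    """Hypothetical tremor score to minimize; true optimum at x = 2.2."""
    return (x - 2.2) ** 2

def rbf(a, b, length=0.7):
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(xs, ys, grid, noise=1e-4):
    """GP surrogate: mean and std over the candidate grid."""
    K = rbf(xs, xs) + noise * np.eye(len(xs))
    Ks = rbf(grid, xs)
    Kinv = np.linalg.inv(K)
    mean = Ks @ Kinv @ ys
    var = 1.0 - np.sum((Ks @ Kinv) * Ks, axis=1)
    return mean, np.sqrt(np.clip(var, 0.0, None))

grid = np.linspace(0.0, 4.0, 81)           # candidate stimulation settings
xs = np.array([0.5, 3.5])                  # a couple of initial random probes
ys = objective(xs)

for _ in range(8):                          # few iterations = few patient tests
    mean, std = gp_posterior(xs, ys, grid)
    acq = mean - 2.0 * std                  # lower confidence bound (minimize)
    x_next = grid[np.argmin(acq)]
    xs = np.append(xs, x_next)
    ys = np.append(ys, objective(x_next))

best_x = xs[np.argmin(ys)]
```

The acquisition trades off exploiting low predicted response against exploring uncertain regions, which is what lets the optimizer home in on a good setting in few evaluations.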
As the first step of implementing an AI system, we need to make sure that the objective function we are using to optimize the stimulation parameters represents the desired outcome; otherwise, whatever results we get will not be meaningful. So we did a regression and correlation analysis, and we found that the output of the watch, which uses a classifier to map the accelerometer signal onto clinical ratings, represented the outcome well: there was very good agreement between the clinicians' ratings and the output of the classifier. Then we used these Gaussian processes to build models from the interaction with the patient. The model was built in an iterative fashion, starting by randomly selecting some stimulation parameters to get an initial picture of the stimulation response, and from there we let the optimizer find the optimal stimulation parameters. We also built safety constraints into the implementation of this system, because one of the issues with AI systems interacting with the brain is safety. The side effects were incorporated into the design of the objective function, and using a safe Bayesian optimization approach, we let the optimizer gradually expand the boundaries of the optimization in order to identify the optimal stimulation setting. Of course, the compromise is that it takes longer for the optimizer to identify the stimulation parameters. When we looked at the results across our patient population, we found that in general there was very good agreement between the performance of the automated approach and what clinicians conventionally achieve when programming the device.
In both cases there was significant improvement compared to baseline, and in some cases the results we got were better than the manual stimulation programming that the clinician would do in the clinic. So this is one example of implementing an optimizer, a closed-loop control approach, to design intelligent closed-loop neuromodulation. The other element that I mentioned is identifying biomarkers. For the biomarker, we need to find what is representative of the desired outcome that we want from closed-loop optimization or closed-loop control. The other advantage of this pipeline is that it tells us about the underlying mechanisms, and by underlying mechanisms I mean the neurophysiological effects of stimulation while the closed-loop optimizer is interacting with the nervous system. This is another study, based on using an interpretable machine learning approach to classify between different neural states in depressed patients. The data were recorded intraoperatively in patients with depression. In this approach, there was a baseline recording, which is called "pre"; then a set of stimulation settings were tested, some of which were optimized. The goal was to see whether the effect of stimulation on optimal contacts would have different neurophysiological signatures than the non-optimized ones, where by non-optimized I mean a number of parameters and contacts that were tried as a control. For this, we extracted a set of features, the canonical frequency bands from the LFPs that were recorded.
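Extracting canonical band features from an LFP-like trace might look like the following sketch. The band edges, sampling rate, and synthetic signal are assumptions, since the talk does not specify the exact preprocessing.

```python
import numpy as np

# Canonical bands (Hz); exact edges vary across studies and are assumed here.
BANDS = {"theta": (4, 8), "alpha": (8, 12), "beta": (12, 30),
         "slow_gamma": (30, 60), "fast_gamma": (60, 120)}

def band_powers(signal, fs):
    """Mean spectral power per canonical band from an FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    out = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        out[name] = psd[mask].mean()
    return out

# Synthetic 1-second "LFP": strong 20 Hz (beta) component plus noise.
fs = 1000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
lfp = np.sin(2 * np.pi * 20 * t) + 0.1 * rng.standard_normal(fs)
features = band_powers(lfp, fs)
```

For the synthetic trace, beta dominates the feature vector, as expected for a 20 Hz oscillation; a real pipeline would typically use windowed spectral estimates per trial rather than a single periodogram.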
We then looked at the difference between the signals before and after each of those stimulation approaches; these basically represented two different brain states that we tried to classify. After the classification, we used an interpretable feature-learning approach to identify which of those biomarkers had the most impact on the separability of these two states; in other words, we interrogated the classifier to identify the most informative features that contributed to the classification. The approach we used was logistic regression with elastic net regularization. The idea of the regularization is to sparsify the model and identify the model parameters that are essential for the classification. On top, you can see the cross-validation error as a function of the number of parameters present in the model. As we increased the regularization term, it shrank the model parameters, and as you can see, shrinking the model parameters up to some point improved the performance of the classifier; after that, the error increased, showing that further shrinking the model coefficients corresponding to the input biomarkers hurt performance. So we used the regularization coefficient that corresponded to the minimum cross-validation error. We also looked at the performance of the classifier, because you want to be able to separate those brain states in the first place before interpreting the effects of the different inputs to the classifier. And we identified that in this case beta, for example, had the most impact on the separation of the states before and after.
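A minimal, self-contained sketch of elastic-net-regularized logistic regression, fit here by proximal gradient descent rather than whatever solver the study used, showing how the L1 term sparsifies the model so only the informative feature keeps a sizeable coefficient. The synthetic data, penalty weights, and learning rate are invented for illustration.

```python
import numpy as np

def fit_elastic_net_logistic(X, y, l1=0.05, l2=0.05, lr=0.1, iters=2000):
    """Logistic regression with elastic-net penalty via proximal gradient."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))            # sigmoid predictions
        grad = X.T @ (p - y) / n + l2 * w           # smooth part: loss + L2
        w = w - lr * grad
        # Soft-thresholding step handles the non-smooth L1 term (sparsifies w).
        w = np.sign(w) * np.maximum(np.abs(w) - lr * l1, 0.0)
    return w

# Synthetic two-state data: feature 0 plays the role of the discriminative
# band, features 1-4 are pure noise.
rng = np.random.default_rng(1)
n = 400
y = rng.integers(0, 2, n)
X = rng.standard_normal((n, 5))
X[:, 0] += 1.5 * (2 * y - 1)        # only feature 0 separates the two states

w = fit_elastic_net_logistic(X, y)
```

Inspecting `w` mirrors the "interrogate the classifier" step: the informative feature keeps a clearly nonzero weight while the noise features are shrunk toward zero.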
We then did this for the other stimulation strategies. On the left, you can see the mid-versus-pre comparison, which involved a non-optimal stimulation parameter; the separation probability was lower, indicating that the effect of stimulation on the biomarkers was not significant. We then used this approach in another setting, with data from a previous study by Cory Inman that was published in PNAS. Their study focused on the effect of amygdala stimulation on memory performance. The underlying hypothesis was that, since the amygdala is involved in emotional processing, stimuli with some emotional influence would be more likely to be remembered. They did this study by delivering theta-modulated gamma stimulation, bursts of stimulation, to the amygdala and then recording from different parts of the brain. This was done in patients with epilepsy who were in the epilepsy monitoring unit, so different brain regions were implanted with electrodes as part of their clinical intervention. After the stimulation, they did memory testing immediately and then one day after the experiment, to study the effect on short-term and long-term memory. We used the same paradigm, but our goal was two-fold. One was to identify the state signatures of those trials that were remembered correctly versus those that were not; that was the first question. The second question was: what was the effect of stimulation? If we look at the stimulated and non-stimulated trials, what biomarkers, what neural features, were modulated by the amygdala stimulation? More specifically, we looked at the downstream effect of amygdala stimulation on the hippocampus.
We looked into the biomarkers that predicted correct memories in CA1 under amygdala stimulation using these approaches, and then we looked at the effect of the stimulation on those biomarkers, to see what the effect of stimulation was on the neurophysiological recordings in CA1. What we found was that slow gamma was the band most significantly modulated by amygdala stimulation in the hippocampus. So this represents another approach for identifying biomarkers that can be used for designing closed-loop neuromodulation in this paradigm. But one thing that is important, especially from a clinical standpoint and also from an experimental standpoint, is how to design systems that can learn from interaction. Earlier work on reinforcement learning was notoriously slow, because this is learning based on interaction, and interaction can be considered a trial-and-error approach. So we started building a platform that could facilitate the design of neural stimulation systems, with the people who are the end users of this platform in mind. The goal is to build a number of tools, services, and repositories that can be used for building closed-loop control systems. As I mentioned earlier, we are using an RL-based approach and also a model predictive control approach for building such systems. The downside of model predictive control, which is an established method in classical control, is that you need a detailed model of the environment or the system under control. So we are testing different control design strategies, and as you can imagine, implementing and testing all of these in experimental settings would be very challenging.
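The model predictive control idea mentioned above — roll candidate inputs forward through an explicit model of the system and apply the best first action — can be sketched on a toy scalar plant. The dynamics, target, and constant-input simplification below are all illustrative assumptions, not any of the models from the talk.

```python
import numpy as np

A, B = 0.9, 0.5        # assumed known scalar dynamics: x' = A*x + B*u
TARGET = 1.0           # desired output level (e.g., a physiological setpoint)

def mpc_action(x, horizon=5):
    """Pick the constant input whose modeled rollout best tracks the target."""
    best_u, best_cost = 0.0, float("inf")
    for u in np.linspace(-1.0, 1.0, 201):       # candidate input levels
        xi, cost = x, 0.0
        for _ in range(horizon):                # roll the model forward
            xi = A * xi + B * u
            cost += (xi - TARGET) ** 2
        if cost < best_cost:
            best_cost, best_u = cost, u
    return best_u

# Closed loop: replan at every step from the currently measured state.
x = 0.0
for _ in range(30):
    x = A * x + B * mpc_action(x)               # "plant" matches the model here
```

This makes the stated downside concrete: the controller's quality hinges entirely on `A` and `B` being an accurate model of the real system, which is exactly why detailed physiological models (or learned surrogates) are needed.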
So we are building simulation environments in which we leverage biophysical and computational models of the brain for these learning systems to interact with, to build and prototype them. This provides a very efficient way of designing closed-loop neuromodulation systems. But one of the challenges here is translating these from the simulation environment into the experimental and clinical setting, and the challenge has two main aspects: can the system converge, or behave the same way that it behaves in the simulation environment, and what hardware systems are needed in order to implement these RL-based systems in experimental settings? This led to a project in which we are building a platform called NeuroWeaver. The idea of the NeuroWeaver platform is to build cross-domain closed-loop pipelines for hardware implementation, implementing these pipelines in hardware, and also to provide a set of models, as a library, that can be readily used in designing these systems. Here is a simple example of building a closed-loop pipeline. As I mentioned, building a closed-loop pipeline involves multiple steps. One is feature extraction: we need to extract a set of features, doing some digital signal processing on the signal. Then there is a step of applying analytics and machine learning to extract biomarkers; the features of the signal can directly be used as biomarkers, or we may want to link them to the desired outcome using these analytic approaches. Then we implement the control strategies that I mentioned. Here is a case of using optical stimulation for modulating the brain and electrically recording the brain signals; this is the in-vivo setup that we use for prototyping these systems.
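The three pipeline stages just listed (feature extraction, analytics/biomarker, control) can be sketched as chained components. The stage behavior, names, and thresholds below are illustrative only and are not the actual platform's API.

```python
# Hypothetical three-stage closed-loop pipeline, composed end to end.

def feature_extraction(raw_window):
    """DSP stage: reduce a raw signal window to simple features."""
    n = len(raw_window)
    return {"mean": sum(raw_window) / n,
            "power": sum(v * v for v in raw_window) / n}

def biomarker(features, threshold=0.5):
    """Analytics stage: map features to a scalar biomarker state."""
    return 1 if features["power"] > threshold else 0

def controller(state):
    """Control stage: choose a stimulation command from the biomarker."""
    return {"stimulate": bool(state), "amplitude_ma": 2.0 if state else 0.0}

def closed_loop_step(raw_window):
    """One pass through the pipeline: signal in, stimulation command out."""
    return controller(biomarker(feature_extraction(raw_window)))

# A high-power window triggers stimulation; a quiet window does not.
cmd_on = closed_loop_step([1.0, -1.2, 0.9, -1.1])
cmd_off = closed_loop_step([0.05, -0.04, 0.03, -0.02])
```

Keeping the stages as separate components is what makes the hardware mapping tractable: each stage can be compiled or accelerated independently and then wired into the loop.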
But from a design standpoint, and from a clinical standpoint, why do we need this? Because, as I mentioned, there is no platform that can support building such systems, and the end goal is that we want to build devices and embed these algorithms into chips and devices. This platform will allow us to create end-to-end pipelines and workflows that can directly be implemented in hardware, for seamless translation into implantable devices or into experimental settings. For this, we are using FPGAs as the target. For deep learning, as you know, GPUs were essential; it's hard to imagine training a deep learning model without GPUs these days. We are building this system on an FPGA because it is power efficient and also translatable into ASIC chips. There are several challenges in building such end-to-end pipelines. What kind of abstraction should we provide for domain-specific accelerators? This gets into a little bit of the computer science detail of building such a platform. From a user's standpoint: what kind of abstraction should we provide for application programmers, the people who want to use this platform to build their own pipelines? And how can we incorporate this into an end-to-end platform? Each of the areas that I mentioned in the previous slide represents a specific domain, and designing hardware for them requires domain-specific implementations in hardware. So we are using a full-stack approach for building this end-to-end system, which starts with a component-and-flow programming model, the development environment, and also the compiler and hardware implementation, in an end-to-end fashion.
What this means from a user standpoint is that we are offering a programming language that allows expressing pipelines and algorithms in a form that is directly implementable on hardware, for building chips for implantable devices. We are also offering the models for control, stimulation, biomarker identification, and feature extraction as libraries that can be used in this environment. This is an open-source platform which we will provide to the community; it is an ongoing effort. Another aspect of building such a system is that the system will be on a local machine, but in order to expand the capabilities of these systems, we are developing hybrid cloud infrastructures that allow distributed implementation of these closed-loop systems. I will talk later about some use cases. For example, with COVID constraining all of the interactions that we used to have, with this approach you might be able to run closed-loop experiments without being in a specific physical lab; we can use the cloud to run closed-loop experiments. In addition to the distributed nature of these pipelines, we can also make them scalable: there are virtually unlimited resources if you want to scale up the computing, and that can be done in the cloud. We published work on software-defined workflows that basically creates a blueprint for building such hybrid cloud-edge infrastructure. Another platform that we are developing here is for the SPARC program at NIH. The goal of the SPARC program is to use vagus nerve stimulation to develop therapies for various conditions, and the goal here is to regulate the physiology of the organ systems using vagus nerve stimulation.
There are multiple groups involved, and multiple cores are in place to facilitate this; it is a large consortium. From a platform design perspective, we are building software infrastructures; that is one of the areas we are developing. We are also building physiological models for closed-loop control, because, as I mentioned, we need a prototyping and simulation environment for these devices. These fall into models that are built from experimental data in a data-driven fashion, and mechanistic models that are based on biophysical equations predicting the organ's physiological response to stimulation. We are implementing this in the context of closed-loop control for the gastrointestinal and cardiac systems. In the interest of time, I'll just briefly go over these elements, which represent the different models and modules built into the platform: it has modeling elements, data analytic elements, simulation engines, and also control modules for designing closed-loop VNS control. The design criterion here is that our goal is to build a modular environment where models are containerized, and when the user interacts with the platform, there are pre-built models that can be pipelined together to create a closed-loop simulation. This allows combining models from different programming languages: for example, if your physiological model is in Python and your controller is in MATLAB, or vice versa, you can still build this closed-loop simulation. It will be flexible and extensible, it will be compatible with existing platforms, and we will target a heterogeneous user base, from algorithm and model developers to experimentalists and clinicians. This is the front end of the platform, which is basically drag and drop.
Each of these nodes represents a physiological model or a controller, and on the backend this can be implemented both locally and in a distributed fashion. In fact, we have our servers in the Amazon Web Services cloud, and we were able to run closed-loop simulations across the Atlantic, where the models were running in Switzerland and we ran the controllers here in Atlanta. As far as physiological modeling is concerned, we are building both mechanistic models and in-silico, data-driven models from the experimental setups. Here on the left, you can see a mechanistic model of the cardiac system, which we used for building a model predictive controller to regulate the mean arterial pressure and heart rate using vagus nerve stimulation. We also used a data-driven approach as a reduced-order model, because running the detailed physiological model is very computationally expensive: simulating just one cycle of this model may take up to an hour. So we used a data-driven approach to yield a reduced-order model that approximates the detailed physiological model. Here on the right, we used an LSTM, a deep learning model, to predict the output of the model given the input state. We also used this for building the architecture of a control system that can learn directly from interaction with the environment, and we used this LSTM for our simulations. Here are some results showing the performance of the LSTM-based controller in driving blood pressure and heart rate to desired levels, compared against the detailed biophysical model. We also extended this to an RL-based approach.
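The talk uses an LSTM as the reduced-order surrogate; as a lighter-weight stand-in for the same idea, this sketch fits a linear autoregressive surrogate by ridge regression to an "expensive" toy simulator and then rolls it out cheaply. The toy first-order dynamics and all parameters are invented for illustration.

```python
import numpy as np

def expensive_model(u):
    """Stand-in for the detailed physiological model: slow filtered response."""
    y, ys = 0.0, []
    for ui in u:
        y = 0.8 * y + 0.2 * ui          # e.g., heart-rate response to an input
        ys.append(y)
    return np.array(ys)

rng = np.random.default_rng(0)
u_train = rng.uniform(0, 1, 500)
y_train = expensive_model(u_train)

# Lagged design matrix: predict y[t] from (y[t-1], u[t]) — the reduced order.
X = np.column_stack([y_train[:-1], u_train[1:]])
targets = y_train[1:]
lam = 1e-6                                               # small ridge penalty
w = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ targets)

def surrogate(u, y0=0.0):
    """Cheap rollout of the learned reduced-order model."""
    y, ys = y0, []
    for ui in u:
        y = w[0] * y + w[1] * ui
        ys.append(y)
    return np.array(ys)

u_test = rng.uniform(0, 1, 100)
err = np.max(np.abs(surrogate(u_test) - expensive_model(u_test)))
```

Because the toy plant is itself linear, the surrogate recovers it almost exactly; for a real nonlinear physiological model, a sequence model such as the LSTM in the talk fills the same role.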
The advantage of the RL-based approach was that the RL agent was able to learn not only the optimal stimulation strategy from interacting with this detailed model, but also to build an underlying model of the system as well. For these experiments, we also used a reduced-order model, in this case a temporal convolutional network, because it was more computationally efficient than the LSTM. Here are some of the results from the RL-based control approach: it started from random states, and eventually it learned the optimal control strategies to achieve the desired levels of arterial pressure and heart rate. Within the cardiac system, we had access to the detailed biophysical model, but for the GI system we did not; we only had access to experimental data. So we have developed a pipeline, as part of this platform, that uses different modalities of data to create models from the experimental data. In this case, we used MRI data, recorded from rats, showing the movement of the stomach over time. As you can see, this gives us a very clear view of the stomach from the spatial standpoint, but with poor temporal resolution. So we are also building models based on the ENG recordings from electrodes placed on the surface of the stomach, and this data was collected in response to vagus nerve stimulation. We are recording data from the surface of the stomach in response to vagus nerve stimulation in order to build models from these time-series data and identify control strategies that will induce the desired physiological responses in the stomach, in this case the coordinated movement of the smooth muscles of the stomach. We are in the process of building real-time closed-loop control systems for this.
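The point made at the start of this section, that the RL agent learns both a control policy and an underlying model of the system, can be illustrated with a Dyna-Q-style sketch, where the agent stores observed transitions as a model and replays them for extra planning updates. The discrete "pressure" dynamics, actions, and reward below are a toy illustration, not the cardiac model.

```python
import random

ACTIONS = (-1, 0, 1)            # e.g., decrease / hold / increase VNS intensity
TARGET = 4

def plant(state, action):
    """Toy deterministic system: discrete 'pressure' level 0..8."""
    nxt = min(8, max(0, state + action))
    return nxt, -abs(nxt - TARGET)

def dyna_q(episodes=300, planning_steps=10, alpha=0.5, gamma=0.9, seed=0):
    """Dyna-Q: learn Q-values AND a transition model from the same interactions."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(9) for a in ACTIONS}
    model = {}                                  # learned (s, a) -> (s', r)
    for _ in range(episodes):
        s = rng.randrange(9)
        for _ in range(10):
            a = rng.choice(ACTIONS) if rng.random() < 0.2 else \
                max(ACTIONS, key=lambda x: q[(s, x)])
            s2, r = plant(s, a)
            model[(s, a)] = (s2, r)             # remember the real transition
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS)
                                  - q[(s, a)])
            for _ in range(planning_steps):     # extra updates from the model
                ps, pa = rng.choice(list(model))
                ps2, pr = model[(ps, pa)]
                q[(ps, pa)] += alpha * (pr + gamma * max(q[(ps2, b)]
                                        for b in ACTIONS) - q[(ps, pa)])
            s = s2
    return q, model

q, model = dyna_q()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(9)}
```

The replayed model updates are what make this style of agent far more sample-efficient than pure trial and error, which matters when every real interaction is an experiment on a patient or animal.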
So to summarize, precision neuromodulation using AI-based approaches is promising and is considered the next generation of neuromodulation therapies for neurological and psychiatric disorders. I presented a framework to leverage AI for building such closed-loop neuromodulation control systems. This involves leveraging interpretable AI for biomarker discovery and for interpreting the behavior of these systems. You can think of them as an autonomous driving system that eventually learns the optimal stimulation approach; but the question is, what does that mean? How can we make these black-box approaches more transparent and use them as scientific tools to identify the effects of stimulation on the neurophysiological response? In-silico modeling is an essential component of this platform: both biophysical and data-driven modeling approaches are essential for building more accurate neuromodulation control strategies. And this is going to be an iterative process. When you collect data and build a model, you may not have collected the optimal or most relevant data for the task; through this iterative process we can improve the data collection and optimize the design of the experiments as well. Last but not least, open-source platforms are essential to create infrastructure for community engagement and for implementing this platform. If we build a control system or a pipeline that requires a cluster of GPUs to run, it is not going to be clinically useful and will be of limited usability in experimental settings. To this end, we are leveraging edge computing as well as cloud computing approaches to address those issues.
Now of course, this was done through the hard work of the people in the lab, and with funding from NIH and other funding sources here at Emory. This was by no means entirely my own work; it was the result of multi-center collaboration with many people whose names I didn't have space to include here. With that, thank you for your attention, and I'll take any questions.

Applause from the virtual audience.

Thank you for that very comprehensive talk. There were so many wonderful examples of incorporating emerging technology, AI, and modern computing approaches into clinical platforms and into tools that can advance basic science research. We have a few minutes for questions, so the floor is open. It looks like one question has appeared in the chat. You can go ahead and unmute yourself and pose the question if you'd like.

Sure, thank you very much. Great talk. My question is about the variability of the effects of brain stimulation. As far as I know, the effect is highly variable between patients. To what extent can your models be generalized to a new population? Basically, I want to ask: is there a way, or have you thought of a way, to decrease the amount of data needed to train a new model for each patient?

Yes, that is an excellent question. When we talk about models, we are talking about the control model, the various biomarker models, and also the underlying model that predicts the effect of the stimulation. This variability will also be present within the same patient: over time the model may change due to the non-stationarity of the system. The results that I showed were across multiple patients.
For the case of DBS for Parkinson's disease, we started with random weights, the model parameters, for each patient, and the speed of convergence was one of the criteria to optimize in order to reach a patient-specific response to the therapy. With regard to improving that speed of convergence: we always start from a naive state, and that is probably not the best strategy. The question is how we can leverage prior knowledge from other patients. You could, for example, use patient subtyping in order to start with a blueprint for that patient population, and that could improve performance. This is something we are looking into to address the patient variability that you mentioned. But this is one example, and it's one of the active areas we are exploring.

Thank you very much.

I had one question that I wanted to ask. I know that you discussed some of the biomarkers for memory performance and for depression outcomes. My understanding is that the biomarkers you're looking at are specific oscillatory frequency bands in some of the data. I was curious what your vision is: to try to trigger some kind of closed-loop stimulation off of a pathological oscillatory band, or to design a system to reinforce specific therapeutic oscillatory regimes?

That's exactly it. This is a continuation of this work; it's a collaboration with Washington University and the University of Utah that I mentioned. There we have a project whose goal is first to tie the presentation of the stimuli to certain oscillatory states. Even without any intervention, we want to see whether, if we present stimuli in those states, there is an improvement in memory performance.
The next step will be to try to induce those states via stimulation, to look into the causal effect of regulating those biomarkers on subsequent memory performance. This is an ongoing study that we are at the beginning of.

Wonderful. I know that we're over time right now. If anyone has any other questions, feel free to put them in the chat or unmute yourself. But I did want to thank everyone for joining us for the seminar. Another round of applause for our wonderful speaker today. Thank you very much.
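As a footnote to the Q&A discussion of patient variability: the idea of warm-starting each new patient's model from a population "blueprint" instead of a naive state can be illustrated with a deliberately simple sketch. The numbers and the one-parameter "model" below are hypothetical; the point is only that initializing near a population prior reduces the number of adaptation steps compared with starting from scratch.

```python
import numpy as np

def fit(w0, w_true, lr=0.1, tol=1e-3, max_steps=1000):
    """Gradient descent on 0.5*(w - w_true)^2; returns steps until |error| < tol."""
    w = w0
    for step in range(max_steps):
        if abs(w - w_true) < tol:
            return step
        w -= lr * (w - w_true)            # gradient step toward the patient's optimum
    return max_steps

# Hypothetical per-patient optimal stimulation gains, clustered around 2.0.
previous_patients = [2.1, 1.9, 2.2, 2.0, 1.8]
prior = float(np.mean(previous_patients))     # population "blueprint"

new_patient_optimum = 2.05
steps_naive = fit(0.0, new_patient_optimum)   # naive (from-scratch) initialization
steps_warm = fit(prior, new_patient_optimum)  # warm start from the population prior
```

The warm start converges in far fewer steps because the population mean already sits close to the new patient's optimum, which is the intuition behind using patient subtypes to seed patient-specific controllers.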