[Host introduction, largely inaudible.]

Thanks very much for the invitation, and to all of you for coming to the talk. So, to take a step back and talk about robotics and rehabilitation for a moment — I don't know how much of these two fields each of you knows already. OK.

If we look at the field of robotics over maybe the last fifty years and where it started, that's where we saw robots traditionally, and where we still see the majority of robots today: in manufacturing domains, robots that can't really be around humans. We've now started to see robots in general society — the robotic vacuum cleaner is probably the most pervasive within the home — and we have robots for medical applications and also for toy applications. A bit further down are what are really wish domains, where we would like to see robots interacting with humans: we'd like them to be physical assistants to us, we'd like them to be task partners with us. I don't know if everyone wants a robot companion that's effectively a close friend, but that's an option too.

Now let's pause for a moment and take a look at the domain of rehabilitation. These are all images that I pulled from RIC's website — I should mention that my lab is actually at the Rehabilitation Institute of Chicago, which is a very large rehab hospital. What we treat in the domain of rehab in general are sensory impairments, cognitive impairments, and also motor impairments. The motor impairments are probably the largest group, and they can have a cognitive cause, like a stroke, where a lesion in part of your brain is why you don't have proper motor control, or the cause can be more physical.

Now, if we go back to the robotics side: as we've made this transition from the factory to where we'd like robots to be — around us, out in the general world — we've had this increasing autonomy, where robots now need to be perceiving their external world and making decisions about it. And this is really great news for people with motor impairments, because in the field of robotics we are already synthetically sensing the world, reasoning about it, and taking action within it, and these are all tools that can be leveraged within rehabilitation to help people with any of these motor impairments on the right-hand side of the slide — and in fact not just motor impairments, but impairments in general.
OK, so the field of rehabilitation robotics primarily consists of robots like these. They tend to be physical therapy robots: they're taking the place of a physical therapist, or they're helping one. Physical therapy is actually very taxing — therapists' joints wear out; it's very tough work — so robots can provide repetition that's very consistent, and they can help the therapist. On the top we see the Lokomat, which is a commercial product; there are a lot of planar robots that have been used for arm rehabilitation; and towards the bottom we see more of these high-dimensional exoskeletons that guide your arm through certain motions. If you talk to a clinician, this is what rehabilitation robotics means. What you will notice is that none of these machines are doing any of the high-level autonomy I talked about on the previous slide: these robots are not sensing the external world, and they are not making decisions about what they should be doing based on it. They do sense interactions with themselves — there are force sensors, things like that — but there's no high-level autonomy deciding which control behavior to execute.

Now, there is a smaller field of assistive robotics. These might be robots that take the place of pet therapy, for example, or robots meant to be a coach within a therapy environment. There's a great example from here at Georgia Tech — out of Charlie Kemp's group, on the lower right-hand side — and that robot is meant to be an aid. These are very often mobile robots, and they are performing an autonomous role.

The work of my lab fits somewhere in between these two worlds. Within rehabilitation we have a lot of assistive machines. The most pervasive is definitely the wheelchair; there are also prosthetics, of course — upper- and lower-limb prostheses — and there are orthotics and exoskeletons. The exoskeletons today are mainly being used in a therapy context: it's very good for the body to be upright, so if you are, for example, a paraplegic, it's good for your body not to always be seated, and so they can be used in therapy domains. But we would eventually like these to be mobility aids, and there are some companies working on that. In the lower left-hand side we see the Jaco robotic arm, which is meant to be mounted on a powered wheelchair.

Now, these are all assistive machines that are one hundred percent under the control of the human — these are teleoperated machines — and it's not always that simple to control them. For example, with the Jaco — you can see in the inset that he's controlling it with his foot — it's a six-dimensional control problem. It comes with a three-axis joystick (it also has a twist), and so what you end up doing is operating in modes.
You control just position, then you push a button and switch to controlling just orientation, push a button again, and switch back and forth. This also assumes that you can actually use a joystick, which a lot of the people who would benefit from a robotic arm like this cannot — they don't have that kind of hand functionality. So there is this issue of control, and of control interfaces, for people with motor impairments.

Another good example is prostheses. There's this confound that the higher the level of your amputation, the fewer residual muscles are left in your arm from which to gather EMG signals to control the device. Currently the decoding of those signals maxes out at around ten or twelve classes — and that's controlling down here, not up here — and each of those classes essentially controls only one half of a degree of freedom: one class is elbow extension, another class is elbow flexion. So there's this huge problem of, what happens if I want to control my elbow and my wrist at the same time? That's a whole new set of classes. It's a very difficult problem, this issue of control and control interfaces, and it's where we would really like to have robotics help. In particular, if we turn these machines into robots and give them some sort of autonomy, or partial autonomy, they can supplement part of this control problem. It doesn't need to be one hundred percent under the control of the human — the human is still always going to be there, but it doesn't need to be one hundred percent.

So how do you turn an assistive machine into a robot? This will be no surprise for anyone in this class: you start with the machine, you add sensing capabilities so that it's able to sense the outside world, you add computing and artificial intelligence or autonomy, and your assistive machine has now turned into an assistive robot.

Before getting into specifics, I'll first talk about challenges and special considerations. This domain is a little different from traditional human-robot teams. We do see examples of human-robot teams in society — that robot example is one of them — but that tends to be someone who wants to be interacting with the robot and isn't necessarily dependent on it; the robot is not necessarily seen as a physical extension of their own body. So this issue of actually getting the control correct is a big one, and we can learn some good lessons from the field of prosthetics.

What we have in the image on the top is an example of what's called a body-powered prosthesis. It normally has some sort of hook at the end, and the way you actually actuate it is by throwing your other shoulder — you can see he's wearing a harness. You throw the shoulder to manipulate the arm into place and then lock it, then you switch control to the wrist, and so on. It's a very tedious process. And so it seems like having a myoelectric arm should be a no-brainer, right? That you would be more functional with some sort of arm than without, and that something electric would be more functional than the body-powered arm. As it turns out, that isn't what the numbers say.
Both body-powered and myoelectric prostheses have roughly a one-in-five overall rejection rate — about twenty percent — and another twenty-seven percent of users use the device only passively, which means they wear it but it's only cosmetic; they don't actually use its functionality. There are a lot of different reasons why this happens. One is a lack of functional need: some ninety percent of tasks you can actually perform anyway, because if you still have one of your arms you can just use your other hand to get everything done. A huge one is comfort — the socket gets really hot, or it's an uncomfortable fit. Others are that the device isn't as functional, or that it isn't as durable: there is mechanical failure with the electric arms that there just isn't with the body-powered arms. The take-home message is that just giving someone a machine that has some function isn't good enough; it needs to actually meet their expectations and needs, and chances are they will not want to use your machine if it doesn't.

To summarize: we need to pay a lot of attention to user acceptance if we are going to add some sort of autonomy. We need to pay very close attention to the flow of control authority — you never want to take control away from someone when they don't want you to; if you do, they probably won't want to use your machine at all — and they need to be able to anticipate when these transfers of control are going to happen, when control is going to be taken from them and when it's going to be given back. And, as I mentioned, the interfaces for feedback and control are really key.

In particular, if you are someone with, say, a high spinal cord injury, or a severe motor impairment from a degenerative disease like ALS, the types of interfaces available to you are limited — you're not going to use a regular joystick. People who can use a regular joystick to operate a powered wheelchair find that the wheelchair becomes like an extension of themselves, and it's no issue. It becomes more of a challenge when you have to use these other sorts of control interfaces. On the right-hand side is the sip-and-puff, which you have probably seen — the straw-based interface where you sip and puff — and on the left-hand side is a head-based array. These are basically switch sensors — either proximity switches or ones you actually touch — and you operate them with your head.

What's difficult about these interfaces is that you can only control one dimension at a time. For a wheelchair this means you can control speed or orientation, but not both at once. You can kind of cheat, because you can lock one of them and then issue the second one while the first is still moving, so you can sort of get around it, but nominally you control only one at a time. They're also what are called non-proportional control interfaces, which means that when you blow harder you don't go faster — it's just a switch that you turn on and off. You can't actually control the speed directly; you have to go back to a menu interface and choose your power level.
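To make concrete how constrained that is, here is a rough illustrative sketch (not the actual wheelchair firmware or any real product's interface) of how a non-proportional, one-dimension-at-a-time switch interface like a head array might map onto wheelchair commands. The switch names, fixed power levels, and update logic are all assumptions for illustration.

```python
# Illustrative only: a non-proportional, switch-based wheelchair interface
# (e.g., a head array). One dimension is controlled at a time, and speed comes
# from a preset power level rather than from how hard the switch is pressed.

POWER_LEVELS = {1: 0.4, 2: 0.9}  # m/s; assumed values, changed via a separate menu

class SwitchInterface:
    def __init__(self):
        self.power_level = 1      # set through the menu, not through the switches
        self.mode = "speed"       # "speed" or "turn": only one dimension at a time

    def update(self, fwd_switch, rev_switch, mode_switch):
        """Map binary switch states to a (linear, angular) velocity command."""
        if mode_switch:           # toggle which dimension the switches drive
            self.mode = "turn" if self.mode == "speed" else "speed"
        v, w = 0.0, 0.0
        if self.mode == "speed":
            if fwd_switch:
                v = POWER_LEVELS[self.power_level]          # fixed forward speed
            elif rev_switch:
                v = -0.5 * POWER_LEVELS[self.power_level]   # fixed reverse speed
        else:
            if fwd_switch:
                w = 0.6           # rad/s, fixed turn rate (non-proportional)
            elif rev_switch:
                w = -0.6
        return v, w
```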
So if you want, for example, to take a ramp up into a van, you first have to navigate through this little menu using the straw and select power level two, then you go up the ramp into the van, then you need to stop, go back to your navigation menu, and go back to power level one — because if you try to orient yourself inside the van while you're on power level two, you'll hit all the walls. It can be a very cumbersome process. One of the aims of my lab is to be able to use these kinds of interfaces to control different sorts of machines, so that we are accessible to this population.

In my lab we're developing a smart wheelchair — this is a picture of it — and two things that are real priorities are customization to the user and low cost. The reason customization is important is that not only does every user have different preferences and different impairments, but those are probably going to change over time: they may be in therapy and their motor impairments are getting better, or they might have a degenerative disease and their motor impairments are getting worse. Also, the amount of assistance they want might simply change week to week — based, for example, on their level of pain; they want more help when they have more pain, things like that. So we want to be able to customize to the user, and of course we don't want them to have to come back to the lab for that.

The reason we're interested in low cost is that it's going to be a long time before any of this is covered by insurance. You have to go through huge clinical trials. When a therapist prescribes a powered wheelchair, they fill out a stack of forms this high, and they have to justify every single seat cushion, every single type of interface, everything. You would basically need to provide a justification that the autonomy is necessary for some reason, that it provides some functionality, and the only way you would have that justification is by doing significant clinical trials. So this is going to take time — not to mention things like FDA approval. If we want it to actually be reachable by the general public faster than that timeline, it needs to be financeable out of pocket, at least for a portion of the population.

OK, so one way to do that is to interface with the commercial hardware. There is a type of control input for powered wheelchairs called an expandable input; it's meant to accept multiple different kinds of control interfaces. The good news for us is that we can just intercept that control signal, read and interpret what it would have done, reason about it in our system, and then pass something else on to the control module. This also means that the proprietary wheelchair control module is still what executes the control, not our controllers — which probably voids the warranty, but we have a small hope that maybe it doesn't. And what we're also aiming for is good performance with these lower-dimensional, lower-bandwidth control interfaces.
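As a rough sketch of that interception idea — illustrative only; the bus object, message format, and helper names below are assumptions, not the actual expandable-input protocol:

```python
# Illustrative intercept-and-forward loop: read the user's interface signal,
# let the autonomy layer reason about it, and re-emit a (possibly modified)
# command to the proprietary wheelchair control module, which still executes
# the actual motor control.
import time

def read_user_input(bus):
    """Assumed helper: raw command from the sip-and-puff, head array, or joystick."""
    return bus.read()

def forward_to_wheelchair(bus, cmd):
    """Assumed helper: write the command onto the expandable input."""
    bus.write(cmd)

def run(bus, autonomy, rate_hz=50.0):
    period = 1.0 / rate_hz
    while True:
        user_cmd = read_user_input(bus)
        safe_cmd = autonomy.arbitrate(user_cmd)   # our layer: blend user input and autonomy
        forward_to_wheelchair(bus, safe_cmd)
        time.sleep(period)
```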
All right, so customization. One way we accomplish customization is with modular software: we have these different assistance modules, and a user can opt into or out of any number of them. We also customize the control sharing. Each of these modules has some way of reasoning about the input from the user — this example is a really simple linear control-sharing scheme — and lots of times the way we do this control sharing is parameterized, so you can tune the parameters for each user. We haven't actually gotten to automatically adapting them, but that's on our horizon, and we'll want to adapt them based on feedback we get from the user and also on how we autonomously assess the system is doing.

Beyond the modular software we also have modular hardware, which is also how you can help keep costs down. The idea with our chair is that you would start out with a powered wheelchair and a control interface — these would have been covered by insurance — and then we add the computing and power electronics, and also these sensor modules, where adding additional sensor modules unlocks additional software modules. So if you only have an RGB-D sensor, you probably can't do all of the navigation things you would want to do — but maybe that's not a priority for someone, and so it's not worth it for them to add the other sensors. These are just examples of the sensors that we have.

At a really high level, this is our control architecture. It basically consists of high-level goals and low-level control, and we reason about user input at both of these levels — at the high level and at the low level — and then we do arbitration between the automation and the user's input at both levels. Adaptation is future work, but we're going to be customizing that as well.

If we look first at the high level, one thing we want to do is autonomously perceive goals in the environment. The reason is that when someone is using these very limited control interfaces, having them tell us everything they want to do through that interface can be burdensome. The idea is that we can be more seamless if we're able to automatically detect some goals, because then you're reasoning about a small set of things in the environment instead of every possible thing.

So, one example — OK, it just shows up weird on my screen — this is an algorithm we developed for autonomously detecting doorways using just the depth information from an RGB-D sensor. It's basically extracting the wall plane, which is what you see in magenta, and then at about wheelchair height it searches a very small strip — just to speed things up; that's the green line. It looks for gaps of a minimum width that's just a little bit narrower than what's specified in the Americans with Disabilities Act, and out to about twice that; anything wider than that we just treat as open space. If it finds a gap in that range, it pads it with a bit of extra width — like a box, basically — just for clearance. What's nice is that because we've extracted the plane (the magenta), we get the position and also the orientation of the doorway, so we can use that to set navigation goals to send to an autonomous planner: we achieve the first navigation goal, by which point we're pretty much lined up to go through the door, and then the second goal is on the other side.
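As a rough sketch of that doorway search — illustrative only, under assumed inputs (a wall plane already extracted from the depth image, occupancy sampled along a one-dimensional strip at wheelchair height, and approximate ADA-flavored width bounds in meters):

```python
import numpy as np

MIN_WIDTH = 0.78           # m; a bit narrower than the ADA clear-width specification
MAX_WIDTH = 2 * MIN_WIDTH  # wider gaps are treated as open space, not doorways

def find_door_gaps(strip_occupied, resolution=0.02):
    """Scan a 1-D strip of the wall plane (at wheelchair height) for door-sized gaps.

    strip_occupied: boolean array, True where depth points lie on the wall plane.
    Returns a list of (start_in_meters, width_in_meters) candidate doorways.
    """
    gaps, start = [], None
    for i, occupied in enumerate(strip_occupied):
        if not occupied and start is None:
            start = i                                  # a gap begins
        elif occupied and start is not None:
            width = (i - start) * resolution
            if MIN_WIDTH <= width <= MAX_WIDTH:        # door-sized, not open space
                gaps.append((start * resolution, width))
            start = None
    return gaps

def navigation_goals(door_center, door_normal, standoff=1.0):
    """Two goals from the plane fit: line up in front of the door, then just past it."""
    n = np.asarray(door_normal) / np.linalg.norm(door_normal)
    c = np.asarray(door_center)
    return c - standoff * n, c + standoff * n
```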
The video on the left was just showing identified doors — that's the green dot you see periodically — and on the right is the chair actually going through. I'm sorry it's so choppy; the video on the right only plays choppy.

Some other goals we've looked at are docking locations. In a crowded environment like a restaurant, for example, trying to find an appropriate place to dock at a table can be a challenge; desks can be a challenge because they have a narrow opening that you're supposed to go into. This is work that my student Siddharth did, and it's similar in concept: we're looking for planes, but now they're horizontal planes within the range of what you could actually interact with while seated in a wheelchair. Then we look for the shape to be circular or rectangular (including squares), and we look for docking locations around it. We have two methods. One is that we take the width of the wheelchair and slide it around the perimeter, and we can get any docking location that way. The other — you see there on the left we've got a bowl — is that we also search for circular objects on top of the table, with the idea that these are often place settings, and we anchor docking locations to them.

All right. Once we have all these potential goals in the environment, and also the user's input, we have to reason about which one we think is the most likely goal in the world. This is just one equation, as an example. In this video I'm teleoperating the robot. On the left you see I stop, and the robot doesn't do anything, because it's not yet confident enough about which door I want; but once I give a signal more towards the left, it knows. In this particular control sharing, the robot takes over one hundred percent of the control when the user gives no signal — you could imagine another scenario where you want the chair to stop when it gets no signal from the user; these are all things that can be customized. The confidence is based partly on our perception confidence — seeing the door repeatedly over multiple frames — partly on the agreement between the signals we're getting from the user and where the door is located, and also on distance, so we won't take over control for a doorway that we're really far away from.
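The actual equation isn't shown in the transcript, but as an illustrative sketch, a goal confidence of that flavor — perception persistence, agreement with the user's commands, and distance — could be combined multiplicatively. The weights, argument names, and threshold below are assumptions, not the lab's actual formulation.

```python
import numpy as np

def goal_confidence(goal_position, user_cmd_history, frames_seen,
                    max_frames=30, max_dist=5.0):
    """Illustrative confidence for one candidate goal (e.g., a detected doorway).

    goal_position: goal location in the robot frame.
    user_cmd_history: recent user direction commands (rows of 2-D vectors).
    frames_seen: how many frames the goal has been re-detected in.
    """
    # 1) perception confidence: how consistently the goal has been re-detected
    perception = min(frames_seen, max_frames) / max_frames
    # 2) agreement: alignment between recent user commands and the goal direction
    goal_dir = goal_position / (np.linalg.norm(goal_position) + 1e-6)
    cmd_dir = np.mean(np.asarray(user_cmd_history), axis=0)
    cmd_dir = cmd_dir / (np.linalg.norm(cmd_dir) + 1e-6)
    agreement = max(0.0, float(np.dot(goal_dir, cmd_dir)))
    # 3) distance: far-away goals are discounted so we don't take over too early
    proximity = max(0.0, 1.0 - np.linalg.norm(goal_position) / max_dist)
    return perception * agreement * proximity

def select_goal(goal_positions, user_cmd_history, frames_seen, threshold=0.5):
    """Pick the most likely goal, but only act on it above a confidence threshold."""
    scores = [goal_confidence(g, user_cmd_history, f)
              for g, f in zip(goal_positions, frames_seen)]
    best = int(np.argmax(scores))
    return (best, scores[best]) if scores[best] > threshold else (None, 0.0)
```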
Once a goal is selected, it gets sent down to the lower-level command arbitration, where again we reason about what we're seeing from the user. Here's an example: if you watch my hand, I take my hand off, so the chair is going to go through the doorway, but you're going to see that I take back control — there, it just took over control, I take back control, and it seamlessly gives it to me right away. Again, this is one particular control-sharing mechanism; we have multiple different control-sharing mechanisms on the robot, and a user could select which type they want. This one is: if the user is giving a signal and there's no imminent collision, the user gets one hundred percent of the control.

This is some work that a master's student of mine did a couple of years ago, and it's an example of how we've brought some machine learning into this. Here, instead of reasoning explicitly about spatial constraints in the environment, we took a demonstration-learning approach. It was done in simulation, with a simulated version of the wheelchair. We had the wheelchair go through a doorway under all four configurations of constraint: totally open on the other side, a hallway on the other side, just a wall on the right, and just a wall on the left. For each of these we provided six demonstrations — approaching from the right, from the center, and from the left, once at the lowest speed we would ever want to go and once at the highest speed we would ever want to go. We encoded this within a Gaussian mixture model, and we used the variance of that model — that's what the circles are here — to set how much control we allow the user to have. Basically, if you were in a very spatially constrained space and you were deviating a lot from what the planner wanted to do, the system wouldn't let you deviate that much: you were allowed to deviate within a certain number of standard deviations of what had been demonstrated. So instead of reasoning explicitly about what the obstacles in the world were, we encoded it all together — this idea of spatial constraints and also speed — and we did the control sharing that way.
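An illustrative sketch of that idea: fit a Gaussian mixture model to demonstrated states and allow the user to deviate from the planner only within some number of standard deviations of the demonstrations. The feature choice, the use of scikit-learn, the data file, and the two-standard-deviation bound are assumptions for illustration, not the student's actual implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Assumed data: rows of [x, y, heading, speed] recorded while driving the
# simulated wheelchair through doorways under the four constraint configurations.
demos = np.load("doorway_demos.npy")
gmm = GaussianMixture(n_components=5, random_state=0).fit(demos)

def allowed_deviation(state, k=2.0):
    """Per-dimension envelope (k standard deviations) around the demonstrated behavior.

    The mixture component most responsible for the state supplies the covariance.
    """
    comp = int(np.argmax(gmm.predict_proba(state.reshape(1, -1))))
    sigma = np.sqrt(np.diag(gmm.covariances_[comp]))
    return k * sigma

def arbitrate(user_state, planner_state):
    """Let the user deviate from the planner only within the demonstrated envelope."""
    env = allowed_deviation(planner_state)
    return np.clip(user_state, planner_state - env, planner_state + env)
```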
What we do when we're about to have a collision is forward-project the user's commands — just from the teleoperation commands we're getting from them — and see whether that path collides with an obstacle. Then we iteratively take away control and move it over to the planned path: we give that as the goal to a planner, and we shift control over to the planner iteratively, by just the minimum amount we need in order to get around the obstacle. This ends up being a trade-off: we don't want the iteration steps to be too small, because that becomes too computationally costly, and we don't want it to be one big step where we take all of the control. Sorry, I should be talking during this — what you're going to see here is, if you watch my left hand, I'm teleoperating the robot, and you'll see I'm trying to drive it straight into the chair and it won't let me: it forward-projects my command and goes around. Here I'm doing it to the right — I'm still trying to drive into the chair, and into the mat — and it basically finds paths around them. This is all just forward projection, and actually we're starting to see some evidence that makes us wonder how much we need the full goal perception in lieu of this. I think once we start working with actual patients that will get refined, but as it turns out, just forward-projecting their commands gets you a lot of functionality — you're very directly using their intent — though it does require the user to be giving a command all the time.

Yeah — we're sampling the user input at something like fifty hertz, so we're just constantly forward-projecting. And when we forward-project, yes, it's just the current command.
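An illustrative sketch of that forward-projection-and-blend loop; the constant-command rollout, the obstacle check passed in as a function, and the particular hand-over schedule are all illustrative assumptions.

```python
import numpy as np

def rollout(state, cmd, dt=0.1, horizon=2.0):
    """Forward-project the current user command (held constant) from the current state."""
    states = [np.asarray(state, dtype=float)]
    v, w = cmd
    for _ in range(int(horizon / dt)):
        x, y, th = states[-1]
        states.append(np.array([x + v * np.cos(th) * dt,
                                y + v * np.sin(th) * dt,
                                th + w * dt]))
    return states

def arbitrated_command(state, user_cmd, planner_cmd, collides, step=0.1):
    """Hand control to the planner only as much as needed to avoid the collision.

    collides(states) is an assumed obstacle check over a projected path.
    Returns the blended command and the fraction of authority given to the planner.
    """
    alpha = 0.0
    cmd = np.asarray(user_cmd, dtype=float)
    while collides(rollout(state, cmd)) and alpha < 1.0:
        alpha = min(1.0, alpha + step)   # shift a bit more authority to the planner
        cmd = (1 - alpha) * np.asarray(user_cmd) + alpha * np.asarray(planner_cmd)
    return cmd, alpha
```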
OK, so the next steps with this work: as I mentioned, we haven't done the adaptation yet — we can adapt the behaviors themselves and also how the control sharing happens — and then there's the user validation. We haven't run any subject studies yet; you might notice that our wheelchair is a development platform and doesn't have a seat on it. For user studies it's important that things be super robust, because if your planner, for example, is a little bit flaky, all you're going to be evaluating is the planner — that's what the users will react to, and that's all that will matter — when what you really want to evaluate is your control sharing. So we're not quite there yet.

What's really great, though, is that, as I mentioned, my lab is in the Rehabilitation Institute of Chicago, and we have a very large patient population there. It's been ranked the number one rehab hospital in the country by U.S. News & World Report for twenty-four years, and we also have the largest physical rehabilitation center in the world, I believe — certainly in the country. What this means is that we get a lot of patients coming through. For something like high spinal cord injury there aren't that many patients overall, so you might not expect us to have access to a big patient base all the time — but we actually do, and we've got all these clinicians and therapists who are just a couple of floors below us in the hospital. It helps keep things real: we ride the elevators with patients coming into the hospital, and we can just go down and talk to therapists and ask, does this even make sense? Lots of times what we as roboticists or engineers think would be good work doesn't match what would actually be most useful for a patient — often it's not the really interesting robotic solution that would be the most helpful — and being there keeps us grounded in that way.

OK, so just to take a moment before talking about the adaptation, let me mention some of the tools we might be using to perform it. My prior work, before I started at RIC and Northwestern, was in robot learning from demonstration — it wasn't assistive or rehabilitation robotics at all. A lot of what I did was work on how to provide corrections after a demonstration. This is an example — this is the robot I was using during my postdoc at EPFL — where we had touchpads around the wrist and hand of the robot, and we used them to provide pose corrections. In this example the correction is just at the end, but we could do it throughout the trajectory, and then that new, corrected trajectory becomes a new example that you can feed back into your demonstration-learning policy.

During my PhD I was working with a Segway robot. That's a very different kind of robot — a mobile robot, and dynamically balancing — so a touch-based interface for corrections wouldn't be a good fit: it would just push the robot back, and you'd have to run next to it. So there we had what we called advice operators. Basically, it was a graphical interface — a plot — where you could select a chunk of points. Here I've highlighted from a certain point to the end; I take all those points and apply an advice operator, which might be something like "turn tighter" or "turn more loosely," and that affects both the translational and the rotational speed. You apply these mathematical functions to the recorded data points, and you get, say, fifty new data points just by selecting that chunk and applying one correction — versus going through and correcting all of those points by hand. This is really feasible: any motion you would want to correct is probably on the order of hundreds of data points if you're sampling at thirty hertz.

So that was the idea with advice operators. We've also looked at different ways to incorporate these sorts of corrections. The first was simply to refine the demonstration that was given: in the upper left-hand side is a sort of toy problem where we had defined a sinusoid piecewise, and then we provided trajectory corrections to smooth out the curves, to not pause in the middle, things like that — and we could do all of that just by providing corrections, so you get the emergence of novel, undemonstrated motions.
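To make the advice-operator idea concrete, here is an illustrative sketch under assumed details — the operator name and the simple scaling used for "turn tighter" are placeholders, not the actual operators from that work. The point is that one piece of advice, applied to a selected chunk of recorded (translational, rotational) speed points, yields many new training examples at once.

```python
import numpy as np

def turn_tighter(chunk, factor=1.3):
    """Illustrative advice operator: slow down and rotate faster over the chunk."""
    out = chunk.copy()
    out[:, 0] /= factor   # reduce translational speed
    out[:, 1] *= factor   # increase rotational speed
    return out

def apply_advice(trajectory, start, end, operator):
    """Apply one advice operator to the selected chunk of recorded data points.

    Every point in [start, end) becomes a new, corrected training example, so a
    single piece of advice replaces correcting hundreds of points by hand.
    """
    corrected = trajectory.copy()
    corrected[start:end] = operator(trajectory[start:end])
    return corrected

# Usage: advise "turn tighter" over the last 50 points of a recorded demonstration
# (stand-in data; a real demonstration would be a few hundred points at 30 Hz).
demo = np.random.rand(200, 2)
new_examples = apply_advice(demo, 150, 200, turn_tighter)
```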
On the right here we were looking at policy reuse. This is where you have a demonstrated policy and you now want to provide corrections so you can use it to perform a different task — here, picking up a can. If you just try to use the ball-grasping policy, the hand doesn't have the right orientation and it doesn't work; but you can provide corrections, and you can provide corrections that show — you can see right here — the constraints of the task. Because we were using a GMM formulation, you could show, within the limits of the covariance of what had been demonstrated, where there is some variability: along the can you can grasp at the top or grasp at the bottom, and you can show this with just one or two corrections. In this case we applied the correction to the dataset we had already recorded, essentially rotating the entire dataset, and if you're clever about the sequence in which you order these tasks, then the fact that I showed the variance for the can means all I need to do is provide one correction to have that variance carry over to the next task.

We also did this for policy scaffolding. This is from my thesis — a simulated domain with a point robot trying to drive on a race track. What was demonstrated was just how to turn left, how to turn right, and how to go straight, so the transitions — like how to straighten out after turning — were never shown, and we could get the robot to drive the whole track by providing corrections. It's also important to note that the human operator couldn't really provide a high-quality demonstration of the whole track: to drive the track without going off it, they had to drive at a slower rate than we wanted the robot to go. That was the advantage of providing corrections. We're going to be using these kinds of tools in my lab — it's coming up on our list.

OK, so, other projects. One project we're working on in my lab — a collaboration with Magnus here and with Todd Murphey — is looking at computable trust in human instruction. This is not trust in the robot; it's trust in the human. Where this came about: when I joined Northwestern, Todd and I started talking. I was coming from this world of demonstration learning for robots, and Todd comes from the world of optimal control. What's really great about demonstration learning is that you have this very intuitive way to show policies: you don't have to be a robotics expert or a controls expert; all you need to do is be able to show the behavior. But it's a data-driven technique — you don't have formal control-theoretic guarantees. From the standpoint of optimal control, you're able to verify controllers for feasibility and for stability, but you need to be an expert to define them. So Todd and I were trying to find the middle ground between these two worlds, and what we started asking ourselves was: how much should a human be allowed to interact with a controlled machine? If a human is providing demonstrations, or providing corrections, and it's a dynamic system that can be destabilized, how much control should they be allowed to have?
The idea we've taken is to have a computable measure of trust in the human, based on the human's past interactions with the system. You can do things like analyze: this human tried to demonstrate a trajectory for me — maybe we did it in simulation, to be safe — and if I try to stabilize that trajectory, does it blow up? Do I get outside of the basin of attraction? How much that happens, and the characteristics of the instruction the human has given over time, is what we use to compute a measure of trust, and then we temper how much of the human's instruction we incorporate. On the left here — these are obviously just illustrative plots — we see that the robot tried to track the human's trajectory and it goes unstable. In the middle, instead, we've tempered it: we took only a step in that direction, we scaled it down. Then, once you have a new controller that is able to track that trajectory, you can step it further toward where the human wanted it to be. Basically, as we trust the human more, we're willing to get closer and closer to the edge of that basin of attraction — it could be that the human is just taking us somewhere that is in fact stable, but we wouldn't want to take that step with them until we know we can trust them.

Another piece of work, presented last year — this one was done only in simulation — looked at how to introduce some partial automation into prostheses. As I mentioned before, electric — myoelectric — prostheses are normally one hundred percent under the control of the human. There aren't even sensors in the joints or in the motors; the human is constantly watching what's happening, and the human stops it. What prosthesis control normally looks like is: you take the EMG data, you do some signal processing, and then you have some sort of open-loop controller, and you're controlling one joint at a time. What we were proposing was to have a very small — and by very small I mean three or five — number of automated controllers running on the arm. The arm on the left is the one being developed by the Center for Bionic Medicine at RIC, and it will actually have sensors in the joints; we've put this project on the back burner, but had it gone forward, they would have provided us with sensors at all the joints, which enables closed-loop control. The idea was that these few automated controllers would represent coordinated, multi-degree-of-freedom motions of the kind you see in everyday life. Where we got the inspiration was from talking to a man who had one amputation just above the wrist and one just above the elbow, and what he really missed was no longer being able to fish, because he could not do a casting motion — a two-degree-of-freedom motion — and generating any sort of oscillating movement with an EMG signal means constantly fluctuating the signal you give, which is very tiring. So the idea was: if we could put a couple of simple automated controllers like that on the arm, then after you've selected one of them, the EMG input could just be modulating the parameterization of that primitive.
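An illustrative sketch of that "EMG modulates the parameterization of a selected primitive" idea — the sinusoidal crank primitive, the mapping of two processed EMG channels to amplitude and speed, and all of the gains are assumptions for illustration, not the actual controller from that work.

```python
import numpy as np

class CyclicPrimitive:
    """A simple automated controller for a repetitive motion (e.g., crank turning)."""

    def __init__(self, amplitude=0.3, speed=1.0):
        self.amplitude = amplitude   # rad: joint oscillation amplitude
        self.speed = speed           # cycles/s: can pass through zero and reverse direction
        self.phase = 0.0

    def modulate(self, emg_a, emg_b, dt):
        """Two processed EMG channels nudge the primitive's parameters, not the joints."""
        self.amplitude = float(np.clip(self.amplitude + 0.05 * emg_a * dt, 0.0, 0.6))
        self.speed += 0.5 * emg_b * dt

    def step(self, dt):
        """Advance the cycle and return the commanded joint angle."""
        self.phase += 2.0 * np.pi * self.speed * dt
        return self.amplitude * np.sin(self.phase)
```

The point of the design is that the user's signal selects and shapes a coordinated motion, rather than driving each joint directly.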
Here's an example. As I mentioned, we weren't using the actual sensors — we were using a simulated version of this prosthetic arm — and it's exactly that sort of cyclical, repetitive motion: a crank-turning task. What Matt is doing with the EMG signals is modulating the amplitude and the speed of the motion. This was very, very simple processing, actually tied to his biceps and triceps — that isn't how you would really do this; you would have an electrode array and you would use machine learning on it — but for this pilot experiment that's what we did. So now he's got the amplitude about right; now he's increasing the speed; and now he's decreasing the speed — he increased it and then decreased it to the point where it actually reversed direction.

OK, another question we're looking at in my lab is how you actually control these higher-dimensional assistive robots. If we want to be using these limited control interfaces, how do we accomplish something like six-dimensional control with them? Because, as I mentioned, even with this Jaco arm, the control interface that comes with it isn't accessible to all of the users who could benefit from the machine.

One project we're working on — a collaboration with Sandro Mussa-Ivaldi, who is at Northwestern and RIC — is to use what his lab has developed, what they call a body-machine interface, to control the arm. The body-machine interface is basically a vest of IMU sensors, and what they have a person do is what they call a calibration dance: they just move in all the degrees of freedom that they have. Lots of times when you have a high spinal cord injury you still have a little bit of residual movement in your shoulders, so they have people move however they can, and then they apply dimensionality reduction to that.
For now the reduction is PCA, and you get the two principal components that match that user's movement ability — so the interface is customized to the user, instead of the user needing to fit the form factor of the interface. So far they've used this to control things in two dimensions: a cursor on a screen, a powered wheelchair. What we want to do is use it to control this higher-dimensional system. The first question is whether you can extract higher-dimensional signals from it — the IMUs give you more than a one-dimensional signal, and there is redundancy — so can you extract reliable signals that the user can actually reproduce and use in those higher dimensions? And can we use automation to help train this? The idea is that if we have a robot that's able to control itself in all six dimensions, we can start out with low-dimensional control — maybe it's only one-dimensional, and the human is just modulating the speed of a preprogrammed trajectory — and then go up to, say, three-dimensional control, where the human controls orientation and the robot controls position, or something like that. The question is: if we iteratively take control away from the robot and transfer it to the human, can we train the human to fully control the arm in the end, using a six-dimensional control signal? What's neat about this project is that we would be using robot machine learning to elicit human motor learning. That's something we're just starting to move forward with. Another nice thing about this interface is that, in contrast to a brain-machine interface, for example, it actually encourages movement from the user, so it has a rehabilitation purpose in itself: you can regain, or at least maintain, some of your motor ability by using it.

Another thing we're looking at is assistive teleoperation. This is a collaboration with CMU, building on some work they started doing with their robot HERB. The idea is that you can take the signals you're getting from the user — here, you'll notice in the upper left-hand side that the user is pushing buttons on the top of the joystick; that's the mode switching between orientation control and position control — and even if the user is only controlling in one of those modes, say position, if you have a sensor observing the scene you can infer which of the objects in the scene they're going towards. The more confident you get in that inference, the more control the machine can take over, so that by the time you're actually doing the dexterous manipulation of picking up the object, you have one hundred percent machine control — or maybe not; you might want your control sharing not to go that far. As you could see, in the lower right we have assistance — that video also started a little bit earlier, so it's not quite a fair comparison — but you'll notice that with the assistance he goes for the box, and once the system becomes confident he's going for the box it takes over, whereas it's much slower to do this without assistance, with all the mode switching — and this is for someone who has full use of their hands.
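An illustrative sketch of that style of assistance — blending the user's command with autonomous motion toward the inferred goal, and giving the autonomy more authority as the inference becomes more confident. The simple Bayesian-style inference over candidate objects and the linear blending rule are illustrative choices, not the actual formulation from that collaboration.

```python
import numpy as np

def goal_posterior(ee_pos, user_motion, objects, prior=None, sharpness=4.0):
    """Infer which object the user is heading toward from their commanded motion."""
    objects = [np.asarray(o, dtype=float) for o in objects]
    prior = np.ones(len(objects)) / len(objects) if prior is None else np.asarray(prior)
    u = np.asarray(user_motion, dtype=float)
    u = u / (np.linalg.norm(u) + 1e-6)
    likelihoods = []
    for obj in objects:
        d = obj - np.asarray(ee_pos, dtype=float)
        d = d / (np.linalg.norm(d) + 1e-6)
        likelihoods.append(np.exp(sharpness * float(np.dot(u, d))))   # reward agreement
    post = prior * np.array(likelihoods)
    return post / post.sum()

def assist(user_cmd, autonomous_cmd, confidence):
    """More confidence -> more autonomy; at full confidence the robot finishes the motion."""
    alpha = float(np.clip(confidence, 0.0, 1.0))
    return (1.0 - alpha) * np.asarray(user_cmd) + alpha * np.asarray(autonomous_cmd)
```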
OK, and that's pretty much it. Mine is still a young lab, so there's a little bit of a lab pitch at the end. First off, we're really grateful to RIC for giving us a fantastic lab space. We're right downtown on the shores of Lake Michigan, we've got a nice big space — we kind of have the penthouse lab even though we're the youngest lab, which is lucky for us; it was a clinical space before we got there, and we were just there at the right time, so we're really glad to have it. The goal of my lab is to have assistive robots that make humans more able, with autonomy that is customizable, teachable by the human — not by us roboticists — and also adaptable. With that, I'd like to thank my graduate students, Matt Derry, Alex Broad, and Siddharth Jain, the past members of the lab, our undergraduate students, and the funding agencies — and I'm happy to take any questions.

[Audience question, partly inaudible, about user evaluation.]

The actual user evaluation — yes, so my lab hasn't gotten there yet, and I think the hardest part about getting to that point is having the system reliable enough that you are not just testing the system components that ought to simply work — right, exactly — and that you're not still changing it. So, for example, with that doorway-traversal work, we're having some trouble with our path planner, and what we actually want to evaluate is the control sharing — but if the planner is sometimes crapping out, that's what you're actually going to be evaluating.

[Audience question, partly inaudible, about the gap between research systems and what users actually get.]

Yes, it is very hard, and that's something that people in prosthetics, for example, have been struggling with for a long time — there's this big divide between what you see happen in academic prosthetics research and what actually gets deployed.

[Audience question, partly inaudible, about goal inference versus forward projection.]

Right — not projecting what the human is doing, but trying to infer all of these goals. At least from us using the system — and this is not deployed in the real world, it's not being used by actual users, just the limited environment of my lab, where we always have someone present providing a control signal — just doing the forward projection with obstacle avoidance and safety is definitely the smoothest system. So you kind of wonder: maybe requiring the person to provide a control input the whole way through the door is OK. But this doesn't handle the group of people who maybe can't provide a control signal the whole time. For example, I've spoken with an occupational therapist at RIC who has a very unique perspective.
She was an OT for fifteen years and then developed a degenerative condition herself. At the time I spoke with her she was on a vent, which means she no longer had good control of her lungs, and she had switched from using a joystick to one of these head-based arrays about two or three months before. She said that control with the head-based array was enormously more difficult. For one thing, there was a little bit of delay in the control, so she would often almost roll out into the street because she wouldn't stop in time, and trying to do things like go through doorways — constrained maneuvers, as she put it — was really difficult. She said two things that were interesting. One was: "Last week I had a lot of pain, so I would have loved to just be able to say 'take me to the bathroom' and not have to touch any sort of control — but when I don't have a lot of pain, I like to be able to provide as much control as I want." So the approach of simply forward-projecting the user's commands depends on the user always providing a command. The other thing she said, when I told her what we might eventually be able to provide, was: "That sounds great, but if it costs four thousand dollars I won't take it" — and this is someone who is very limited in her day-to-day life.

[Audience question, partly inaudible, about developing our own control interfaces.]

That's right — that's something we've gone back and forth on in my lab. With the exception of our collaboration with Sandro on the body-machine interface, we've chosen explicitly not to develop any interfaces ourselves. Partly that's because they wouldn't be covered by insurance — although a tablet isn't necessarily that expensive — and partly it's because we are not experts in interface design, and the existing interfaces have been validated by hundreds of thousands of people. That said, a lot of the existing interfaces are still pretty rotten.

[Audience exchange, largely inaudible, about providing feedback to the user.]

Yeah, so I think auditory feedback is one big one. The reason I was talking about the control interfaces is that we're limited there too — the question is how much feedback you can actually provide through a joystick; you can have a force-feedback joystick and provide some feedback that way, but I think auditory feedback makes sense as well. The issue is that as you add more feedback channels, you don't want to do it at the expense of sensory channels that are already being used. This is why, for example, systems for the blind that do distance sensing through headphones would prefer to provide that audio without taking the person out of conversation with whoever is around them. So there's that question as well. I do think that we will want to provide feedback eventually, and also to receive feedback from the user — we want to do that sort of adaptation and things like that — but it's not something we've looked into yet.

[Audience question, inaudible, about the target population.]
So — it was actually only that one simulated prosthesis project where we were targeting people with amputations. What we're really looking at is anyone who would struggle to use traditional control interfaces, or struggle to operate these assistive robots. It could be a degenerative disease, it could be limb loss, it could be cerebral palsy, it could be muscular dystrophy — any of those. Ours is really based on the function that you have as a result of those conditions, rather than on the diseases themselves.