Thank you so much. Is the mic working? Yes, I can hear. OK, so today I'm going to talk a little bit about some of the work we're doing in my lab on coordination. Before I jump into what my work is, I want to tell you a little bit about why I do this work. My research vision is to come up with a theoretical basis, a coherent unifying framework, for multi-robot systems. Right now the way we do most of our research is very platform-specific, and it's really hard to take solutions from one area and transpose and apply them to different areas.

Take, for example, this environment we have here. Say we want to deploy a group of robots to do some environmental monitoring: aerial robots to take measurements in the air, some ground robots, and some underwater robots. The way we would code that right now is to come up with a set of controllers for the aerial robots, then another set of controllers for the ground robots, and another set for the underwater robots, and we end up with something really messy. All these controllers take a really long time to develop, and they're very specific to the platform we use them on. We can't translate them from one thing to another, from the aerial robots to the ground robots, or from a ground robot to the underwater robot, and then they end up not playing well together, because we've had to do these kinds of hacks to get them to do the things we want them to do, and they're all different tasks.

So what I'd like to do in my research is come up with an end-to-end process that starts with some high-level specifications, for example simple multi-touch inputs on an iPad for a group task, and delivers code for the individual robots in the team. We want a provably correct algorithm, one that works all the time, that helps establish the science of distributed multi-robot coordination.

Multi-robot systems are everywhere: they exist in transportation, in energy, in manufacturing, even in health care. There are lots of robots everywhere, but the one thing I'm not seeing is UAVs. I have not seen UAVs going out in the world doing multi-robot tasks, and this is mostly not because UAVs are not very powerful. There's plenty of hardware out there at every level you could imagine, from twenty dollars to twenty thousand dollars. And it's not that they're not being used at all outside the lab: they're being used for sports photography, for cinema; there are entire film festivals based on drone-filmed movies. There's even been talk of having them do retail delivery. These UAVs are equipped with all kinds of sensors: they have cameras, they have IMUs, they have GPS, sonar, magnetometers, you name it, it's on these guys, and you can have cameras at every possible level, from a really cheap camera to a really nice camera. But while the hardware is there, the software just has not caught up. There is hardware to do all kinds of fun interaction stuff with UAVs, but the software is lacking.
So my goal is to come up with software that helps us integrate these systems and get them to work with all the different hardware they have. Here are some of the necessary challenges.

One is automating interaction. If you want to control a team of robots, whether they're UAVs or ground robots or a combination, you need to be able to tell them what to do, and you need to be able to do that easily. If I were going to use a team of UAVs to film a movie, I don't want to have to write controllers for each robot to tell each of them what to do; I want to be able to easily tell them, "OK, you need to do this, over there," in something very simple. So we need to figure out how to use just high-level specifications to control the team of robots, and make sure we do everything efficiently.

We also need to be able to manage resources. We have a ton of robots we want to use, but there's only limited bandwidth; we can't send a million messages back and forth. We also have limited power; we can't run the robots for very long without thinking about how we're going to refuel them. So we need to think about how we're actually going to manage resources. And then on top of that, how do we actually organize the system? It's great, you have a bunch of robots, you have controllers, but how do you decide who's going to do which part of a task? These are the three areas I'm going to talk about today, which I think are the necessary challenges we must overcome to get multi-robot systems out of the lab and into the real world.

First, some work I've done on automating interaction. Let's take this very simple example: I have a group of robots in the corridor at the bottom, and I want them to get to that orange blob at the very top. These are our friendly agents, and then we have some unfriendly agents somewhere in the middle; we roughly know where they are. You might say, OK, if I want to navigate these robots there, which is the shortest way? The one I would usually pick, if I didn't have these unfriendly agents, is through the middle: shortest, and probably fastest. But we already know there's an unfriendly agent there, so that's definitely not an option. That leaves two other options: going around the left or going around the right. Going around the left is over land, so maybe that's safer; or maybe we have a heuristic that tells us that going over water is dangerous, because water and robots don't mix, at least not aerial robots.

But how do we actually decide which way to go? As a human, I can come up with some kind of heuristic, or just assess the risk: going over water is risky, so maybe I should go the other way. But for the robots to make that decision, I need to come up with a heuristic for them. And the fact is, heuristics are really hard to develop: you can't possibly anticipate every single scenario, and these unpredictable scenarios actually require cognition; they require humans in the loop. The nice thing about automating interaction is that, because it's so simple, we can make changes really quickly. So let's say we decided the robots should go around the left, because land is less risky than water.
The robots go around, and then they discover: wait, there are actually some unfriendly agents over there too, so this is not a good option anymore, and now they have to recompute. But now they have a heuristic telling them the water is not safe at all, so how are they going to decide to go over the water? The nice thing about this automated interaction is that we can make the high-level specifications very simple, so we can bring the human into the loop to say immediately: OK, this is not a good option, we need to go the other way. This allows the robots to reroute without computational overhead, while taking advantage of the fact that humans have the cognition required for these kinds of decisions.

To do that, we need to think about how we actually automate interaction: how will we tell the system what to do? Part one is to develop an interface. There are lots of interfaces out there; this is one from a robot called the Ghost Drone, and in this app you basically just point to where you want the robot to go, and the robot goes there. That's great, really nice for navigating one robot, but what if I have three UAVs? How am I going to control them? From three different phones? With three different fingers on the screen, so I need three hands? None of that makes sense. I should just be able to tell the robots, with one simple command, where to go, and they should deal with the rest, with the interactions between themselves.

Normally, when we control teams of robots, we think about it as a bottom-up approach: we start with the robots we want to control, then we think about all the governing equations of how we're actually going to tell them what to do, then we have a student who writes code for days and days and days, or even days and days and days more, and then we need three students to run a multi-robot experiment. This is really not the best paradigm. Bottom-up control means usability ends up being an afterthought, because you think about the robots, the equations, the code, and only then about the people who use it, and by that time it's too late.

Usability is really important in order to reduce manning: right now the paradigm for robot control in the military is five people to one robot, and we want to switch that to one person to five robots. Usability also enables quick reactivity: if the system is usable, you can make really quick changes and say, "I want the robots to do this and that," whereas otherwise you have to go through the code and change everything, which is really difficult.
It also frees up the operator to have more situational awareness. If I'm controlling a team of robots and I don't have to look at three different screens, or coordinate three different people, then I have better situational awareness, because I don't have to ask everyone else what they're doing, and some of my cognition is freed up to actually make decisions.

So instead, we decided that in some cases it's better to take a top-down approach. A top-down approach enables human input for cognitive tasks, and what we've done is develop a novel control algorithm that lends itself well to input from a smartphone or a tablet. In the top-down approach we think about the user first: how is the user going to interact with the system? Say we want the user to interact through an iPad; then we want to develop an iPad app, but we need a controller that can be specified using just the multi-touch inputs on the iPad. If we code that up, we can have one person controlling a team of robots from something as simple as an iPad app, and I'll show you a video of what that looks like.

There's been a lot of related work here. Work in HRI has mainly focused on the interfaces for these kinds of systems and ignored the controller, which is challenging, because we need to build the controller so that it can be specified from the interface. There's some work on controlling robots through high-level specifications, but for that you need to know a special form of English so you can specify the task, and it has to be specified in a very detailed way. There has been other work on other interfaces for control, but, for example, some of them can get stuck in local minima, and there are no guarantees that we'll actually converge to where we want to go.

One thing about the iPad interface: we have a bunch of robots, and we don't want to limit how many robots the system can use, so we need an abstraction that lowers the complexity. The first thing I thought about was: why don't we enclose all the robots in some box, and then we can move the box on the screen using just the multi-touch inputs; all the robots stay in the box, and as the box moves, they move with it. This is great because it does simplify the problem, but since all the robots have to be in that one box, the abstraction requires a lot of coordination, so it's not a very good approach. Another option is to draw multiple boxes in the environment and say that the union of those boxes is where the robots are allowed to be, and use some controllers to have them flow through those boxes in some sequence. We draw boxes, we let them overlap, and the union of those boxes is the free space for the robots; that's our abstraction of the environment. The nice thing about this is that it allows the robots to spread out, so there's not that much coordination; they're not stuck in the same box all the time.
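To make the union-of-boxes abstraction concrete, here's a minimal sketch (my own illustration, not the actual implementation from this work; the `Box` type and names are assumptions): the free space is just the union of overlapping axis-aligned cells, and a position is allowed if and only if it falls inside at least one of them.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned 3D cell given by its min/max corners."""
    lo: tuple  # (x_min, y_min, z_min)
    hi: tuple  # (x_max, y_max, z_max)

    def contains(self, p):
        return all(l <= c <= h for l, c, h in zip(self.lo, p, self.hi))

def in_free_space(p, cells):
    """The abstraction: a position is valid iff it lies in the union
    of the (possibly overlapping) cells drawn by the operator."""
    return any(cell.contains(p) for cell in cells)

# Two overlapping cells; robots may spread across either one.
cells = [Box((0, 0, 0), (2, 2, 2)), Box((1, 1, 0), (3, 3, 2))]
print(in_free_space((1.5, 1.5, 1.0), cells))  # True: in the overlap
print(in_free_space((2.9, 0.5, 1.0), cells))  # False: outside the union
```

Because membership is a simple disjunction over cells, adding more cells never constrains the robots further; it only grows the free space.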
But to be honest, it's too difficult to draw cells in 3D. Drawing cells on the iPad is a really nice interface for a 2D system, but if we're talking about aerial robots that need to change their height, it's not a good way of doing it, because it's too hard to draw boxes in 3D. So what we settled on was an initial cell in the environment that I can drag and manipulate. That cell is in 3D: we have a 3D box, and as we drag the box on the screen, or pinch it, we can change its shape and scale. We get the same spread we'd get from drawing multiple overlapping cells, because we end up generating multiple overlapping cells: as I move the box around the environment using the iPad, I can save multiple snapshots, and this becomes my abstraction, my virtual boundary. This is much easier than drawing prisms in 3D.

So this is the interface we settled on. We can translate the 3D box by dragging. In the initial state of the experiment I'm going to show, we have three robots surrounded by a box; you lift up the box for the robots to take off, and then you can manipulate the box however you like: translate it by dragging, scale it by pinching. The app translates these multi-touch gestures into low-level controllers for the robots.

But what kind of controller can we use to do this? One thing we know is that we want the robots to stay away from the boundary; they actually don't know what's beyond it, because we're not sending them information about the environment; everything is controlled through the iPad app. So what we do is push the robots in from the boundary using basically an inverse exponential, and then we make sure that the next cell in the sequence overlaps the center of the previous cell, so the robots always get pushed toward the center, can transition into the next cell, get pushed toward the center of that cell, and keep transitioning through the sequence until they reach the last one.

That's how we push the robots through this abstraction, but we also need the robots to stay away from each other; we don't want them to collide. So when two robots are beyond a specific distance from each other, beyond the communication limit, they have no information about each other and don't have to worry about it. When they get inside the interaction limit, they do have information, and that's when the other robots start influencing them: they start pushing each other away. At very small distances we have a really high, essentially infinite, force that pushes them apart and makes sure they never actually get close to each other. Then we blend these two controllers together, and we have a way of getting rid of local minima, so we don't actually get stuck anywhere. Both controllers basically use the same function, the same inverse exponential.
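Here's a minimal sketch of what such a blended controller might look like (my own reconstruction under stated assumptions: the exact force shapes, gains, and local-minimum resolution in this work may differ; it reuses the `Box` type from the earlier sketch):

```python
import numpy as np

def boundary_push(p, cell, k=1.0, scale=0.5):
    """Push a robot inward from each face of its current cell. The force
    decays roughly inverse-exponentially with distance to the face:
    strongest at the wall, negligible near the center."""
    f = np.zeros(3)
    for axis in range(3):
        d_lo = p[axis] - cell.lo[axis]          # distance to lower face
        d_hi = cell.hi[axis] - p[axis]          # distance to upper face
        f[axis] += k * np.exp(-d_lo / scale)    # push away from lower face
        f[axis] -= k * np.exp(-d_hi / scale)    # push away from upper face
    return f

def neighbor_push(p, neighbors, r_interact=1.5, k=1.0, scale=0.2):
    """Inter-robot repulsion: zero beyond the interaction limit, and
    effectively unbounded as the separation goes to zero."""
    f = np.zeros(3)
    for q in neighbors:
        diff = p - q
        d = np.linalg.norm(diff)
        if 0 < d < r_interact:
            f += (diff / d) * k * np.exp(-d / scale) / d
    return f

def velocity_command(p, cell, next_center, neighbors, k_goal=0.5):
    """Blend the two repulsions with a gentle attraction toward the center
    of the next overlapping cell, so robots flow through the sequence."""
    return (boundary_push(p, cell)
            + neighbor_push(p, neighbors)
            + k_goal * (next_center - p))

# Example: a robot near a wall of its cell, with one close neighbor.
cell = Box((0, 0, 0), (4, 4, 3))
p = np.array([0.3, 2.0, 1.5])
v = velocity_command(p, cell, np.array([3.0, 2.0, 1.5]),
                     [np.array([0.8, 2.2, 1.5])])
```

The design point the talk makes is that both repulsion terms use the same inverse-exponential shape, which keeps robots inside the virtual boundary and apart from each other with one family of functions.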
And we actually get a controller that works. This is what it looks like: that's me controlling the iPad app, and the robots are getting all the information in real time. There's somebody in the back holding a remote control just for emergency-stop purposes; he's not actually controlling the robots, I'm doing all of it. You can see the robots are moving in a jumpy way; that's because I didn't spend a lot of time tuning the controller. It's not a nice motion, and if you wanted a nice motion you could spend some time tuning it, but everything is very real time: the actual real-time positions of the robots, and the robots get information about the prisms, about the abstraction, in real time. They fly through the abstraction, through the virtual space, without having to know anything about the environment, and we can avoid collisions with obstacles by having the iPad interface help us avoid drawing cells that intersect with them.

So while the robot motion is jumpy because it's not tuned, the nice thing is that you can use this approach with any low-level controller. We didn't really bother tuning the low-level controller, because that wasn't the point of this work. With this, we showed that we can have a novel control algorithm that uses very high-level specifications, just multi-touch inputs on the iPad. You need to think about the interface while you're developing the algorithm that tells the robots what to do. This reduces operator load, and it also helps bring the human into the loop, which I think is really important for multi-robot systems.

So now we've talked about one way to automate interaction. But of course we have a number of robots, and we don't want them all communicating with each other all the time, and we don't want all of the robots running out of battery. Persistence is a problem people have been talking about recently: how can we send robots out on very long-duration missions? I'm going to talk a little bit about that now.

Here's a very similar environment: we've been deploying robots to take some kind of measurements. In this environment we have task agents, the robots that are actually doing the measurements, doing the work. These robots have limited battery supplies, and maybe they have cameras; the cameras get biofouled, so they need to be replaced or cleaned.
We also have delivery robots, which take interchangeable batteries or cameras or other sensors to the robots out in the field, so that we can minimize downtime. A lot of the persistence solutions right now have the robot driving back to a station to charge, spending a lot of time charging, and coming back. But there has been some recent work on exchanging batteries using just robots, with no human intervention necessary, so we thought we would leverage that to build a persistence solution that reduces robot downtime and doesn't waste energy driving back and forth to a base station, or the time it takes to actually charge. Some of these quadrotors can fly for maybe forty minutes and need an hour to charge, so it's not realistic to have a robot fly for a few minutes and then have to drive back and charge, and you also have to take into account the time it takes to drive back and forth.

So: we have limited onboard power and limited carrying capacity. The robots that deliver power can only carry a certain amount as well; they can't carry an unlimited number of batteries. We also assign priorities to the robots doing the work: say we don't have enough batteries for everybody, so we need priorities to make sure the robots that really need them actually get them. Our goal is to optimally solve resource delivery with timed requests. Say I'm a robot out in the field: I need a charge in the next ten minutes, and there's a deadline; if I don't get that charge in ten minutes, I'm going to go down. We want to minimize the total distance traveled by the delivery robots and their deviation from the delivery time, so there's a penalty for delivering power early and a penalty for delivering power late. If I deliver early, I'm taking out a battery that has not fully discharged, and maybe that's wasting resources, because now I have to come back sooner next time.

There are many existing approaches to persistent autonomy. Some are software-based, but some of those deal only with the problem of controlling the robots and ignore the energy problem, assuming that part will fix itself. There are some works on energy where the robots have to drive back and charge, or go somewhere to charge, or that assume charging is instantaneous via some tanker. And then there are some hardware-based solutions for battery interchange, exchanging batteries autonomously without any human intervention. We're going to leverage battery interchange to find a solution to this resource delivery problem.

This starts to sound a lot like an operations research problem, and there's been a lot of work in operations research on these types of models. One approach is stochastic modeling with queueing theory, where we impose probability distributions on the arrival rates and locations of delivery requests, and do a stochastic analysis of policies to serve those stochastic requests.
If we use that approach, though, it's hard to actually guarantee that we'll make the deliveries on time. So we chose a mixed-integer formulation instead, where we can express the objective algebraically, it's easy to impose any constraints we want, and we can have guarantees of optimality: we can guarantee we've delivered the batteries to the best of our ability. This is very close to the vehicle routing problem with time windows, which is nice, because now we have a formulation we can use and modify for our problem.

We started with something like the traveling salesman problem. We have a cost we want to minimize; the cost could be traveling distance, it could be energy use. The rules of the traveling salesman problem are that we enter each node once, we exit each node once, and of course we have to eliminate subtours; you can't have subtours going around the states. Say this is our delivery robot, and the colored dots are the tasks, the deliveries it has to make; that's basically a pictorial description of the traveling salesman problem. Now we modify this to turn it into a vehicle routing problem: multiple vehicles make the deliveries, so our objective function changes a little bit, and to accommodate this we add virtual start and end nodes that each robot has to visit, to make sure we get our optimal solution.

On top of that, we have capacity constraints. Our robots can't carry an unlimited number of batteries, only some fixed amount, so some tasks might not be completed, because we might not have enough capacity to serve them. We have delivery capacity constraints, but also total battery power constraints: the robots in the field can't run out of power, and neither can the robots doing the deliveries. We also have restrictions on the arrival and departure times at each node, which prevents indefinite stagnation and implicitly eliminates the subtours.

Finally, we want to be able to serve all of the tasks. A lot of vehicle routing formulations require reaching all the tasks, and there's no feasible solution if you can't. We wanted to soften this: no hard delivery timings; we permit making deliveries late. So we add a couple of extra terms, a penalty for serving a delivery late or early, and we turn an equality into a less-than-or-equal: instead of having to visit every state exactly once, we can visit it less than once if we need to.

The resulting mixed-integer linear program has our objective, path continuity constraints, capacity constraints, and time-flow constraints: you can only exit a state after you've visited it, and you can only reach another state after a certain amount of time.
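As a rough sketch of the kind of formulation being described (my own generic reconstruction of a vehicle routing problem with soft time windows, not the exact model from the talk), let $x_{ijk}\in\{0,1\}$ indicate that delivery robot $k$ travels from node $i$ to node $j$, let $t_i$ be the arrival time at task $i$ and $r_i$ its requested time, and let $e_i,\ell_i$ be early/late slack variables:

```latex
\min_{x,\,t,\,e,\,\ell}\;
  \sum_{k}\sum_{i,j} c_{ij}\,x_{ijk}
  \;+\; \sum_{i}\bigl(\alpha_i e_i + \beta_i \ell_i\bigr)
\qquad\text{subject to}
\begin{aligned}
 &e_i \ge r_i - t_i,\quad \ell_i \ge t_i - r_i,\quad e_i,\ell_i \ge 0
   &&\text{(penalized deviation from the requested time)}\\
 &\textstyle\sum_{k}\sum_{j} x_{ijk} \le 1 \quad \forall i
   &&\text{(the softened constraint: visit each task at most once)}\\
 &\textstyle\sum_{j} x_{ijk} = \sum_{j} x_{jik} \quad \forall i,k
   &&\text{(path continuity: a robot that enters a node also leaves it)}\\
 &\textstyle\sum_{i} d_i \sum_{j} x_{ijk} \le Q_k \quad \forall k
   &&\text{(capacity: robot $k$ carries at most $Q_k$ batteries)}\\
 &t_j \ge t_i + \tau_{ij} - M\,(1 - x_{ijk}) \quad \forall i,j,k
   &&\text{(time flow; also rules out subtours)}
\end{aligned}
```

Here $c_{ij}$ is travel cost, $\tau_{ij}$ travel time, $d_i$ the demand of task $i$, and $M$ a large constant; the early and late penalties $\alpha_i,\beta_i$ are exactly the softening the talk describes.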
The way this actually works in operation is with time windows. We don't have a complete list of requests in advance; requests come in as the robots need them. We don't know what's going to happen: the robots could change where they're going to be, or change their battery state, so we plan over a finite horizon. Say we have our first time window: we take some time, do our computations, and then follow that plan for a certain amount of time. Then we open a new time window, build a new plan based on the information we have then, execute that plan for a while, and repeat until we're done, or forever. This lets us dynamically reschedule: if something really important pops up toward the end of window one that we can't handle in time, we handle it in window two. This way we don't get a globally optimal solution, but a globally optimal solution is impossible anyway, because we don't know all of the deliveries in advance; if we did, we might be able to do that. We do get the optimum with respect to the constraints we set on the problem within each window.

The problem is that this is exponential: the more robots we have, the harder it gets. With seven task robots and four delivery robots, you're already taking an entire minute of computation for each time window, and you're not doing that just once; you're doing it every however many minutes, so it's a bit too long. It's not ideal for large systems, but it works for small systems.

Here's an example. The diamond in the middle is the control center, and the four stars around it are the task robots. In the first time window, both delivery robots make one delivery each and head back to the control center. In the second time window, both robots have two deliveries planned: the green one goes down, then up, then back around to the control center; the second one goes down and up, but doesn't make it back to the control center within the window, so the plan is to do that later. It turns out that at the beginning of the next time window there's another delivery that robot can make, and since it has a capacity of three, it can change its plan, make that delivery, and then plan to go back to the control center, because now it doesn't have any more batteries left.

So in this problem we've solved resource delivery with timed requests. We formulated the problem so that it always has a feasible solution, whereas some of the previous approaches don't. It allows dynamic rerouting and relative priorities on the task robots. Our ongoing work here is to distribute this approach: right now it's very centralized, and we're trying to figure out how to break up the problem so that each robot can plan its own route and negotiate with the others to make sure all of the tasks are taken care of.
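Here is a minimal sketch of the receding-horizon loop described above (the function names and solver interface are hypothetical placeholders, not the actual implementation):

```python
def receding_horizon_scheduler(get_new_requests, solve_milp, execute,
                               horizon=600.0):
    """Plan over one finite time window, execute, then replan with
    whatever requests arrived in the meantime. `solve_milp` stands in
    for the mixed-integer solver and `execute` for the delivery robots'
    route execution layer; horizon is in seconds (assumed value)."""
    pending = []                          # requests not yet served
    while True:
        pending += get_new_requests()     # newly arrived timed requests
        plan, unserved = solve_milp(pending, horizon)
        pending = unserved                # soft constraints: roll unserved
                                          # requests into the next window
        execute(plan, duration=horizon)   # follow the plan for one window
```

This loop is what enables the dynamic rescheduling: a request that pops up too late to be handled in window one simply becomes part of the input to window two.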
We've also done some work on power modeling: how do robots actually use batteries based on the task they're doing? With that we can better predict when requests will be made. And of course, handling requests asynchronously: since this is centralized, everything is synchronous right now, but we don't actually need everything to be synchronized.

We've also done some work on managing network connectivity and bandwidth usage, but I don't have time to talk about that today; I want to talk about something more exciting and not quite complete yet.

One of the things we think a lot about in the lab is self-organization. We've figured out how to automate interaction, OK, and manage resources, but how do we figure out which robots should do which part of a task? Where do we get inspiration? There's been a lot of work on many different animal groups, for example ants and fish; these pictures are from the references, but those are by no means the only groups working in these areas; researchers study many different kinds of animals. But very few researchers study humans. Humans are really good at organization, really good at localization, really good at a lot of things we want robots to do. The problem is that humans are very complex: we have cognitive abilities, we can do lots of tasks, we use context, and so it's really hard to study the way humans interact just by watching them and asking them questions, and figure out what we can take away to apply to robots. But humans are so good at this that we decided it's a road we want to go down.

We took inspiration from a collaboration between MIT, when I was there, and a dance company. They gave about two hundred people umbrellas with color-changing LEDs, so you could change the color however you wanted, and there was an overhead camera. They put them on a football field, and the overhead camera image was shown on a screen at the back, as a kind of global feedback for the system, and they asked people to make shapes and spell words. People were actually very good at it, and there was no way that someone in one corner of the football field was talking with a person in the other corner; everything was done very locally, but with this global feedback. This is what it ended up looking like: they made shapes and spelled words, and there were definitely some people who took leadership roles; you'd see one guy pointing and telling people around him what to do. But how do you determine who does these things? Is it inherent in the person's behavior? Is it because that person just happens to be in the right place at the right time? Is it a combination of both? It could be any number of things. We were really fascinated by this video and thought maybe we should analyze it, figure out what everybody is doing, and track them.
But that's a total mess, because people can change their umbrella colors instantly, or turn their umbrella off, so you totally lose track of them. What better way to do this than a computer game? So we came up with an interactive, multiplayer game where we can crowdsource this kind of data, and this gives us the additional benefit that we can give humans robot-like capabilities: we control how they sense each other and how they move around the world.

This is what our game interface looks like; this is actually version two. I've run this game twice so far in my classes, so I don't have a very big sample set. Say you're playing the game; this is what you would see on your screen. We gave all the students earplugs, so they couldn't hear each other making comments, and we made sure they couldn't see each other's screens. I'm this blue dot in the middle here, and what I can see is basically what I would see with a laser scanner: the colors of the other people around me. This is my local view, my neighborhood view; I can see the colored people around me, but only with the information a laser scanner would give. The reason we did this is that image processing can be very expensive; it can be really hard to use cameras to see the colors of people around you, but laser scanners come really cheap these days. So if we decide to go that route, we've actually built a little platform, not quite done yet, to implement some of the results from this.

So that's me playing the game. There are tiles on the floor just so I can keep track of whether I'm moving, because without them it's hard, when you're pressing up, to know that your agent is actually moving up. We asked the players to form all kinds of shapes. Here's their global view: they have a local view and a global view. The global view is delayed, and not every position they can reach shows up on the global view; there's a boundary around it where you wouldn't show up on the global view but you're still playing the game.

We also wanted to know how little information we can give people and still have them complete the tasks. We had already played this once before, so we knew they could complete a task with this level of information. So we asked: what happens if we take away the global view and then ask people to form shapes? How will they do it? We weren't actually sure it would be possible. We also decided to test how they would do with only a global view: if you don't have a local view, how could you get these tasks completed?

OK, so let's first take a look at something done with both the global and the local view. Here there were about twenty-six people playing, and we asked them to form an empty circle with a unified color. They had access to both views, and of course they could change their color. This is somewhere in the middle of the run.
This is what the experiment looked like. The people move around, and you can see some of them switch their colors; they're doing this in order to localize. Some people drive in a very particular direction, or move in a very specific way, in order to localize; they come up with all kinds of strategies. It moves fast here, but that's because we dropped some packets.

People are very good at localization. You have people coming into the circle; some people waited, maybe because they weren't confident in their abilities, or they just waited for somebody to form something and then joined the party when everything was almost done. Everybody has a different strategy for this.

This video actually has a little bit of a mistake in it: there was a person right up in that corner, not the one that's moving; somebody had opened an extra screen, and I think my student took it out of the log, and that removed a person from the video. There was a person up there, and player number two was actually signaling to them: hey, what are you doing? Get back here, the circle is over here. They had no other way of communicating, so they created language where we had taken it away: that guy is signaling to the other person, come on, you're in the wrong place, you're holding us back. We had incentivized the students by telling them we'd throw them a pizza party based on how much money they earned from the experiment: the faster they worked, the more money they had for their pizza party, and we're actually having the pizza party tomorrow. They started with five hundred dollars, worked their way down based on how long the experiments took, and ended up with about three hundred ninety dollars, which is a really nice pizza party for thirty people.

So they ended up completing the task, but they also ended up creating languages where we took language away, which was really fascinating and something we'd like to transition to robots.

This is the one I want to show. This one is super fast: they had already done five prior experiments, and this one took a total of fifty-five seconds. It also ran a little faster because we had compensated for the dropped packets. You can see they do a really good job, and some people stay in a color that is not the unified color because they want to make sure they're in the right place: that way you can see yourself in the global view and know which one is you. People do all kinds of interesting things.

In this next one we gave them only a global view, which is a little bit delayed, but they were able to do it anyway; we asked them to form an empty circle with a unified color. With a unified color it's hard to localize, so you see people moving around a lot more to try to figure out where they are, because moving gives them more information; it's like active perception. But the truth is you don't actually need to know which part of the circle you're in, as long as you're part of the circle, so people were able to do this pretty well too.
You can see the circle starting to form. This guy kind of waited until the last minute, like, OK, I'll just go find a spot for myself, and the guy in the middle probably thinks he's some kind of leader: I'm just going to hang out here. This task turned out to be surprisingly simple, because we found that most people use the global view more often than the local view. The global view gives you more information, especially since here the global view has color and the local view doesn't, so even though it's delayed, people use it more often. The white dots mean people signaled that they were done; that's only visible from the control panel, so the other players can't see it.

And finally, we asked them to build a rectangle using only the local view. One thing to watch out for in this video: they knew how many people were playing, so they knew, when their rectangle seemed a little too small, that it was too good to be true. Multiple rectangles actually started forming, and because there's no global view, the people over here don't know there's an almost-formed rectangle over there, and vice versa.

They all start out in the same color, and they only have the local view. Sorry about the dropped packets, but you can see many different structures start to form in different places, and then people realize: there are too few of us, there's no way this is the complete thing. So they sort of walk around and try to figure out what's going on. A partial rectangle forms here, but these guys know it's really small, so they start moving around. The hard thing is that you don't know the extent of the rectangle: in the local view you see maybe two people deep in each direction, so it's really hard to know whether the rest of the shape is right.

Also, these people probably think they're done. That guy, number eleven, up in the middle, probably thinks he's done: I'm forming the side of the rectangle, I'm all good. But it's simply not true. Meanwhile, players number twenty and number four start moving around and saying, hey, there are definitely not enough people here, and they realize it's time to go figure out where the rest of the people are. You can see some of them signaling that they're done, but they have no idea that there just aren't enough people there.

Eventually one of these players goes down and sees that there's almost a full rectangle over here. These guys are doing pretty well: they see there's basically a full rectangle, and they maybe have an idea that this isn't as many people as formed the rectangle previously, so they keep moving around. And now, here we go: here we have a guy coming down and saying, wait a minute, there's an almost-full rectangle here, so he maneuvers around, and then this guy goes back to recruit people, just like an ant.
Just like ants, they go back to recruit people, but without a pheromone trail. He's like, hey, come on down, the rectangle is over there, let's go, signaling by kind of tapping into other people, recruiting them. But then he also kind of gets lost: wait a minute, where am I? Eventually they all end up coming down here and forming a rectangle, but what happens at the end is that everybody kind of thinks it's a rectangle, but they're not sure, because they can only see a certain distance. So one by one they start coming out of the rectangle and driving around to make sure it's fully formed, each hoping someone else will do it. This was definitely the hardest one, and I'm actually really amazed they were able to do it. But eventually they do form a rectangle; let me see if I can find my mouse somewhere. It's a really long video. You can see the rectangle is almost formed now, and then these people come out and say, I'm not sure it's complete, so I'm going to look around and see what's going on. They're almost there, but it takes a really long time.

With this work, the kinds of questions I want to answer are: how do people actually localize? What kinds of localization strategies can we observe? These guys trying to signal that we need to move closer, that someone needs to fill in the gap: what kinds of localization strategies can we actually transfer to other applications? Great, this is a computer game, but how can we actually put these strategies on robots, and in what environments can we use them? How do people reach a consensus? In the local-view version it's really, really hard to reach a consensus, because they don't know what else is going on, only what's happening in their neighborhood. It might be easier to reach a consensus on color, but reaching a consensus that our shape is actually built is a totally different story.

On top of that, I think some of the success here comes from the fact that people are very diverse: some people end up being leaders all the time, some people will be followers all the time, and some go back and forth. I think some of that diversity is actually necessary, but how much diversity is needed? How many different controllers can we get out of this that will actually be useful for building a multi-robot system for these kinds of tasks? And finally, how can we learn directly from the data? The idea is to build this game to run on something like Mechanical Turk, collect a lot of data, and come up with very different strategies for coordination. How can we use that data and do learning on it, so we can figure out controllers for these kinds of systems?
So we've talked about the questions we want to answer. To wrap up: my vision for multi-robot systems is to really develop a unifying science of multi-robot systems, where we can take solutions from one area and transfer them easily to another. Multi-robot research is very scattered: it comes from many different fields, from manufacturing, from warehousing, from task assignment, from search and rescue, from many different areas, and there's really not a unifying method. I believe the three important pieces are automating interaction, managing resources, and of course organization: how do we figure out who does what, how do we structure the group, how do we get the group to structure itself? I think these three things are necessary to come up with a science of multi-robot systems. A lot of this work was done by my students: the student responsible for the vehicle routing work and the student responsible for the game; the iPad app work was done back at MIT. Thank you guys so much, thank you.

[Q&A]

So having a small box is not that big of a problem, because you take the union anyway. If you were doing the first version, where everyone is in the same box, that would be a problem, but it hasn't been a problem, because you're taking the union. The one thing you could maybe do is try to make the boxes like a sliver; yeah, that's one thing we didn't think about, but you could do that.

They do, because they ran the experiments the same way; the same people were doing these three experiments, and we had about seven experiments, so after a while they kind of figured out how many people there were. We haven't thought about that, no. I think when people are playing and they can pick out someone that's not paying attention, they try to steer that person into coming and joining the task, but no, that's not something we thought about. We have thought about varying the number of people that play each task: you could have the same group of people playing seven or eight different tasks, but split up in different ways, so that at one time one group is playing with eight people while another group is playing with twenty, and vary it so that people don't get used to how many players there are. But that's not something we thought about until after running this, which was just at the end of October; it's a very recent experiment.

Only a few robots have the capabilities you have in the game; we wanted something that would be easy to port from the game to the robots, so we gave the players very limited capabilities. But if there's a game out there with these kinds of capabilities, I'd be interested to know whether we could collect its data; that would be really cool, it's a great idea.

We have them answer a questionnaire after every instance of the game they play; some questionnaires are longer than others, because we don't want them to get too tired answering twenty questions after every game. So we can guess from most of their answers, and we can track which player is which in every game.
We can guess from some of their answers; some of them have said they're leaders, but they don't really act like leaders. So it's definitely something that's going to take a lot of time to figure out. As for who does what, I ask them, and most of them are pretty good about it; some answered very fully, with their strategies and what they used to localize. But I think some of them do things they don't even realize they're doing, which is actually even more interesting: you think you're doing one thing, but you end up doing something that influences others to do something totally different.

Yeah, not that I know of; that's a really neat idea.

Yeah, so my thought is, if it works out for these kinds of tasks, this might be a really cool way to do multi-robot research in general, for all kinds of things like target tracking or surveillance or whatever, just to be able to learn, because humans are so good at making these kinds of decisions. So that actually is a great idea, and I did not think about that at all. Thank you.