Human-guided task transfer for interactive robots
Fitzgerald, Tesca Kate
Adaptability is an essential skill in human cognition, enabling us to draw on extensive, life-long experience with various objects and tasks in order to address novel problems. To date, robots lack this kind of adaptability; yet as our expectations of robots' interactive and assistive capabilities grow, it will be increasingly important for them to adapt to unpredictable environments much as humans do. While a robot can be pre-programmed for many tasks and their variations, specifying these behaviors would require tedious effort and still would not adequately prepare a robot for every scenario it may encounter.

Rather than requiring more demonstration data in order to generalize across these variations, we leverage continued interaction with the teacher within the context of the new target task. This approach first requires an understanding of how task differences, interaction, and transfer are related. We define a taxonomy of transfer problems that models the relationship between task differences and the information requirements for transfer.

Based on this taxonomy, we analyze a first category of transfer problems in which the target environment contains new, unfamiliar objects. We present an interactive approach that enables the robot to learn the mapping between familiar source objects and new target objects using assistance from a human teacher, provided by indicating the next object to be used at each step of the task. After a limited number of such assists, our approach enables the robot to autonomously infer the objects needed to complete the remainder of the task. Furthermore, we identify the effect of noisy feedback during interaction and present a confidence-guided approach to moderating the robot's requests for assistance.

We then address a second category of transfer problems in which the tool that the robot uses to manipulate other objects in the environment is replaced.
For example, the robot may learn a scooping task using a spoon and, at a later time, must transfer its task model to use a mug instead. We use interactive corrections to record the motion constraints imposed by the new tool, and then model the underlying relationship between the robot's gripper and the new tooltip. Not only are these corrections sufficient for the robot to model the constraints afforded by the tool within the context of the corrected task, but the learned model can also be reused on other tasks that provide a similar context for that tool (e.g., in the tool surfaces used to execute the task).

Overall, this work enables a robot to address a wide variety of transfer problems without extensive demonstrations or domain-specific knowledge, and thus contributes toward a future of adaptive, collaborative robots.
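The confidence-guided moderation of assistance requests described above can be illustrated with a minimal sketch: the robot proposes the next object from its learned source-to-target mapping, but defers to the human teacher whenever its confidence falls below a threshold. All names here (the score dictionary, the threshold value, the ask_teacher callback) are illustrative assumptions, not the thesis's actual implementation.

```python
# Minimal sketch (not the thesis implementation): choose the next target
# object from mapping-confidence scores, requesting a teacher assist when
# confidence is below a threshold.

def next_object(candidate_scores, threshold=0.8, ask_teacher=None):
    """Return (object, assisted) for the next step of the task.

    candidate_scores: dict mapping candidate object name -> mapping confidence.
    ask_teacher: callback returning the teacher-indicated object, or None.
    """
    best = max(candidate_scores, key=candidate_scores.get)
    if candidate_scores[best] < threshold and ask_teacher is not None:
        # Low confidence: request an assist and trust the teacher's answer.
        return ask_teacher(), True
    # High confidence (or no teacher available): proceed autonomously.
    return best, False

# Low-confidence step defers to the teacher; high-confidence step does not.
choice, assisted = next_object({"mug": 0.55, "bowl": 0.35},
                               ask_teacher=lambda: "bowl")
```

Raising the threshold trades autonomy for robustness to noisy feedback: the robot asks more often but propagates fewer mapping errors through the rest of the task.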
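The tool-transfer step described above can likewise be sketched in simplified form. Assuming (for illustration only) that the gripper-to-tooltip relationship reduces to a fixed translational offset, the robot could estimate that offset by averaging the displacements recorded during interactive corrections, then reuse it to predict the tooltip position on later tasks; the real problem also involves orientation and tool surface context, and all function names here are hypothetical.

```python
# Simplified sketch (illustrative only): estimate a fixed translational
# offset between the robot's gripper and a new tool's tooltip from
# teacher-corrected samples, then reuse it on new gripper poses.

def estimate_tooltip_offset(gripper_positions, corrected_tooltip_positions):
    """Average the per-sample displacement from gripper to tooltip."""
    offsets = [
        tuple(t - g for g, t in zip(gp, tp))
        for gp, tp in zip(gripper_positions, corrected_tooltip_positions)
    ]
    n = len(offsets)
    # Mean offset over all corrected samples, per axis (x, y, z).
    return tuple(sum(o[i] for o in offsets) / n for i in range(3))

def apply_offset(gripper_position, offset):
    """Predict the tooltip position for a new gripper pose."""
    return tuple(g + o for g, o in zip(gripper_position, offset))

# Two corrected samples in which the tooltip sits below the gripper.
offset = estimate_tooltip_offset(
    [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0)],
    [(0.0, 0.0, -0.12), (0.1, 0.0, -0.12)],
)
```

Because the estimated offset is independent of any one trajectory, it captures the intuition from the abstract that a model learned from corrections on one task can transfer to other tasks that use the tool in a similar way.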