Facilitating Reliable Autonomy with Human-Robot Interaction
Autonomous robots are increasingly deployed to complex environments in which we cannot predict all possible failure cases a priori. Robustness to failures can be provided by humans acting in three roles: (1) developers, who can iteratively incorporate robustness into the robot system; (2) collocated bystanders, who can be approached for aid; and (3) remote teleoperators, who can be contacted for guidance. However, assisting the robot in any of these roles places demands on the human's time and effort. This dissertation develops modules that reduce the frequency and duration of failure interventions, increasing the reliability of autonomous robots while also reducing the demand on humans.

In pursuit of that goal, the dissertation makes the following contributions:

(1) A development paradigm for autonomous robots that separates task specification from error recovery. The paradigm reduces the burden on developers while making the robot robust to failures.

(2) A model for gauging the interruptibility of collocated humans. A human-subjects study shows that using the model can reduce the time expended by the robot during failure recovery.

(3) A human-subjects experiment on the effects of decision support provided to remote operators during failures. The results show that humans need both diagnoses and action recommendations as decision support during an intervention.

(4) An evaluation of model features and unstructured Machine Learning (ML) techniques for learning robust suggestion models from intervention data, in order to reduce developer effort. The results indicate that careful crafting of features can improve performance, but that without such feature selection, current ML algorithms lack robustness in a domain where the robot's observations are heavily influenced by the user's actions.