Human-Robot Trust in Time-Sensitive Scenarios
Trust is a key element of successful human-robot interaction. When robots are placed in human environments, it is essential that a trusting relationship is established between the robots and the individuals they interact with. This dissertation therefore focuses on understanding human-robot trust in time-sensitive interaction scenarios in which a robot acts as an assistant to the human by providing advice. The objective of this work is to develop algorithms that can not only infer human trust from prior human-robot interactions but also mitigate negative outcomes when trust violations occur. To achieve this, we begin by examining aspects of trust between humans and interactive robots in a cognitive problem-solving scenario and a simulated driving scenario. We then identify the key factors that contribute to humans’ decisions to trust robots in these time-sensitive scenarios. Next, we develop a computational framework for modeling human-robot trust in these scenarios. The experimental results show that this approach can be used to model behavior-based trust and to predict humans’ decisions to take advice from the robot in time-sensitive scenarios. However, the model’s performance degrades when the robot makes mistakes. To address this, we develop a set of trust repair strategies that restore broken trust and mitigate potential negative outcomes. We validate the effectiveness of these trust repair strategies using the simulated driving scenario as the testbed and further evaluate the impact of trust repair on the trust model.