Trust Modeling in Multi-Robot Patrolling
Christensen, Henrik I.
In typical multi-robot teams, there is an implicit assumption that robots can be trusted to perform assigned tasks effectively. The multi-robot patrolling task is an example of a domain that is particularly sensitive to the reliability and performance of individual robots. Yet reliable performance of team members may not always be a valid assumption, even within homogeneous teams. For instance, a robot's performance may deteriorate over time, or a robot may fail to estimate tasks correctly. Robots that can identify poorly performing team members as performance deteriorates can dynamically adjust the task assignment strategy. This paper investigates the use of an observation-based trust model for detecting unreliable robot team members. Robots can reason over this model to dynamically reassign tasks to trusted team members. Experiments were performed both in simulation and with a team of indoor robots in a patrolling task, demonstrating both centralized and decentralized approaches to task reassignment. The results demonstrate that the use of a trust model can improve performance in the multi-robot patrolling task.
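To make the idea concrete, the following is a minimal sketch of how an observation-based trust model and trust-driven reassignment could be implemented. This is an illustrative assumption, not the paper's actual model: it uses a Beta-distribution reputation estimate per robot (a common choice for observation-based trust), and the `TrustModel`, `reassign`, threshold value, and robot/waypoint names are all hypothetical.

```python
from dataclasses import dataclass


@dataclass
class TrustModel:
    """Hypothetical observation-based trust estimate for one robot,
    modeled as a Beta distribution over observed task outcomes."""
    # Beta counts with a uniform prior: alpha = successes + 1, beta = failures + 1
    alpha: float = 1.0
    beta: float = 1.0

    def observe(self, success: bool) -> None:
        # Update the model from one observed patrol outcome
        if success:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def trust(self) -> float:
        # Expected probability that the robot completes its next task
        return self.alpha / (self.alpha + self.beta)


def reassign(waypoints, trust_by_robot, threshold=0.5):
    """Assign patrol waypoints round-robin among robots whose trust
    exceeds the threshold; fall back to all robots if none qualify."""
    trusted = [r for r, m in trust_by_robot.items() if m.trust >= threshold]
    pool = trusted or list(trust_by_robot)
    return {wp: pool[i % len(pool)] for i, wp in enumerate(waypoints)}


# Example: robot "r2" repeatedly fails its patrol visits, so its trust
# drops below the threshold and its waypoints migrate to "r1".
models = {"r1": TrustModel(), "r2": TrustModel()}
for _ in range(5):
    models["r1"].observe(True)
    models["r2"].observe(False)
assignment = reassign(["wpA", "wpB", "wpC"], models)
```

In a decentralized variant of this sketch, each robot would maintain its own `trust_by_robot` table from its local observations and run `reassign` over the subset of waypoints it can cover, rather than relying on a central assigner.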