Show simple item record

dc.contributor.advisor  Christensen, Henrik I.
dc.contributor.author  Pippin, Charles Everett
dc.date.accessioned  2014-01-13T16:51:40Z
dc.date.available  2014-01-13T16:51:40Z
dc.date.created  2013-12
dc.date.issued  2013-11-15
dc.date.submitted  December 2013
dc.identifier.uri  http://hdl.handle.net/1853/50376
dc.description.abstract  Agents in most types of societies use information about potential partners to determine whether to form mutually beneficial partnerships. When this information is used to decide to form a partnership, we say that one agent trusts another; when agents work together for mutual benefit in a partnership, we refer to this as a form of cooperation. Current multi-robot teams typically have the team's goals either explicitly or implicitly encoded into each robot's utility function, and the robots are expected to cooperate and perform as designed. However, there are many situations in which robots may not be interested in full cooperation, or may not be capable of performing as expected. In addition, the control strategy for the robots may be fixed, with no mechanism for modifying the team structure if teammate performance deteriorates. This dissertation investigates the application of trust to multi-robot teams. It also addresses the problem of how cooperation can be enabled through the use of incentive mechanisms. We posit a framework in which robot teams may be formed dynamically using models of trust. These models are used to improve team performance through evolution of the team dynamics: robots learn online which of their peers are capable and trustworthy, and dynamically adjust their teaming strategies accordingly. We apply this framework to multi-robot task allocation and patrolling domains and show that performance improves when this approach is used on teams that may have poorly performing or untrustworthy members. The contributions of this dissertation include algorithms for applying performance characteristics of individual robots to task allocation, methods for monitoring the performance of robot team members, and a framework for modeling the trust of robot team members. This work also includes experimental results, gathered in simulation and on a team of indoor mobile robots, showing that the use of a trust model can improve the performance of multi-robot teams in the patrolling task.
dc.format.mimetype  application/pdf
dc.language.iso  en_US
dc.publisher  Georgia Institute of Technology
dc.subject  Trust
dc.subject  Reputation
dc.subject  Multi-robot cooperation
dc.subject  Task assignment
dc.subject.lcsh  Robots
dc.subject.lcsh  Evolutionary robotics
dc.subject.lcsh  Multiagent systems
dc.title  Trust and reputation for formation and evolution of multi-robot teams
dc.type  Dissertation
dc.description.degree  Ph.D.
dc.contributor.department  Computer Science
thesis.degree.level  Doctoral
dc.contributor.committeeMember  Balch, Tucker
dc.contributor.committeeMember  Dellaert, Frank
dc.contributor.committeeMember  Egerstedt, Magnus
dc.contributor.committeeMember  Parker, Lynne
dc.date.updated  2014-01-13T16:51:40Z
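The abstract describes robots learning online which peers are capable and trustworthy, and filtering their teaming decisions accordingly. As a purely illustrative sketch (not the dissertation's actual method), one common way to realize such a trust model is a beta-reputation update over observed task outcomes; all class, method, and parameter names below are hypothetical:

```python
# Hypothetical sketch of an online trust model of the general kind the
# abstract describes: trust is the expected success probability of a
# teammate under a Beta(successes, failures) model. Not the author's
# algorithm; names and the threshold value are illustrative assumptions.

class TrustModel:
    """Tracks per-teammate trust learned from observed task outcomes."""

    def __init__(self):
        # Beta-distribution counts, starting from an uninformative
        # prior of (1, 1) for each teammate.
        self.successes = {}
        self.failures = {}

    def observe(self, robot_id, succeeded):
        """Record one observed task outcome for a teammate."""
        self.successes.setdefault(robot_id, 1)
        self.failures.setdefault(robot_id, 1)
        if succeeded:
            self.successes[robot_id] += 1
        else:
            self.failures[robot_id] += 1

    def trust(self, robot_id):
        """Expected probability that the teammate performs as tasked."""
        s = self.successes.get(robot_id, 1)
        f = self.failures.get(robot_id, 1)
        return s / (s + f)

    def trusted_team(self, robot_ids, threshold=0.6):
        """Keep only teammates whose learned trust exceeds the threshold."""
        return [r for r in robot_ids if self.trust(r) >= threshold]
```

Under this kind of model, a robot that repeatedly fails assigned tasks sees its trust estimate fall below the teaming threshold and is excluded from future partnerships, which is one simple way team structure can evolve as teammate performance deteriorates.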

