
dc.contributor.author: Hermans, Tucker
dc.contributor.author: Rehg, James M.
dc.contributor.author: Bobick, Aaron F.
dc.date.accessioned: 2014-05-01T19:27:29Z
dc.date.available: 2014-05-01T19:27:29Z
dc.date.issued: 2013-05
dc.identifier.citation: Hermans, T.; Rehg, J. M.; & Bobick, A. F. (2013). "Decoupling Behavior, Perception, and Control for Autonomous Learning of Affordances". Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2013), 6-10 May 2013, pp. 4989-4996.
dc.identifier.uri: http://hdl.handle.net/1853/51694
dc.description: ©2013 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.description: Presented at the 2013 IEEE International Conference on Robotics and Automation (ICRA), 6-10 May 2013, Karlsruhe, Germany.
dc.description: DOI: 10.1109/ICRA.2013.6631290
dc.description.abstract: A novel behavior representation is introduced that permits a robot to systematically explore the best methods by which to successfully execute an affordance-based behavior for a particular object. The approach decomposes affordance-based behaviors into three components. We first define controllers that specify how to achieve a desired change in object state through changes in the agent's state. For each controller we develop at least one behavior primitive that determines how the controller outputs translate to specific movements of the agent. Additionally, we provide multiple perceptual proxies that define the representation of the object that is to be computed as input to the controller during execution. A variety of proxies may be selected for a given controller, and a given proxy may provide input for more than one controller. When developing an appropriate affordance-based behavior strategy for a given object, the robot can systematically vary these elements as well as note the impact of additional task variables such as location in the workspace. We demonstrate the approach using a PR2 robot that explores different combinations of controller, behavior primitive, and proxy to perform a push or pull positioning behavior on a selection of household objects, learning which methods work best for each object.
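The three-way decomposition the abstract describes (perceptual proxy → controller → behavior primitive) can be sketched as below. This is an illustrative assumption of how the interfaces might compose, not the authors' implementation; all class and function names (`centroid_proxy`, `proportional_push_controller`, `gripper_push_primitive`) are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ObjectState:
    x: float
    y: float

# Perceptual proxy: computes the object representation consumed by a
# controller (here, a 2D centroid of observed points).
def centroid_proxy(points: List[Tuple[float, float]]) -> ObjectState:
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return ObjectState(sum(xs) / len(xs), sum(ys) / len(ys))

# Controller: maps a desired change in object state to a desired change
# in the agent's state (a proportional step toward the goal).
def proportional_push_controller(obj: ObjectState, goal: ObjectState,
                                 gain: float = 0.5) -> Tuple[float, float]:
    return (gain * (goal.x - obj.x), gain * (goal.y - obj.y))

# Behavior primitive: translates the controller output into a concrete
# motion command for the agent.
def gripper_push_primitive(delta: Tuple[float, float]) -> dict:
    return {"end_effector_velocity": delta, "primitive": "gripper_push"}

def execute_step(points, goal, proxy, controller, primitive):
    """One control step: proxy -> controller -> primitive.

    Any proxy/controller/primitive with compatible interfaces can be
    swapped in independently, which is the point of the decoupling.
    """
    obj = proxy(points)
    delta = controller(obj, goal)
    return primitive(delta)

cmd = execute_step([(0.0, 0.0), (2.0, 2.0)], ObjectState(3.0, 1.0),
                   centroid_proxy, proportional_push_controller,
                   gripper_push_primitive)
print(cmd["end_effector_velocity"])  # prints (1.0, 0.0)
```

Because each stage only depends on the interface of its neighbors, a robot can enumerate combinations of proxy, controller, and primitive for each object, as the abstract describes.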
dc.language.iso: en_US
dc.publisher: Georgia Institute of Technology
dc.subject: Affordance-based behavior
dc.subject: Behavior primitive
dc.subject: Controllers
dc.subject: Object state
dc.subject: Perceptual proxies
dc.subject: PR2 robot
dc.subject: Pull positioning
dc.subject: Push positioning
dc.title: Decoupling Behavior, Perception, and Control for Autonomous Learning of Affordances
dc.type: Post-print
dc.type: Proceedings
dc.contributor.corporatename: Georgia Institute of Technology. Center for Robotics and Intelligent Machines
dc.contributor.corporatename: Georgia Institute of Technology. School of Interactive Computing
dc.publisher.original: Institute of Electrical and Electronics Engineers
dc.identifier.doi: 10.1109/ICRA.2013.6631290
dc.embargo.terms: null


