
    Decoupling Behavior, Perception, and Control for Autonomous Learning of Affordances

    File
    hermans-icra2013.pdf (1.933Mb)
    Date
    2013-05
    Author
    Hermans, Tucker
    Rehg, James M.
    Bobick, Aaron F.
    Abstract
    A novel behavior representation is introduced that permits a robot to systematically explore the best methods by which to successfully execute an affordance-based behavior for a particular object. The approach decomposes affordance-based behaviors into three components. We first define controllers that specify how to achieve a desired change in object state through changes in the agent’s state. For each controller, we develop at least one behavior primitive that determines how the controller outputs translate to specific movements of the agent. Additionally, we provide multiple perceptual proxies that define the representation of the object that is to be computed as input to the controller during execution. A variety of proxies may be selected for a given controller, and a given proxy may provide input for more than one controller. When developing an appropriate affordance-based behavior strategy for a given object, the robot can systematically vary these elements as well as note the impact of additional task variables such as location in the workspace. We demonstrate the approach using a PR2 robot that explores different combinations of controller, behavior primitive, and proxy to perform a push or pull positioning behavior on a selection of household objects, learning which methods best work for each object.
    URI
    http://hdl.handle.net/1853/51694
    Collections
    • Computational Perception & Robotics [213]
    • Computational Perception & Robotics Publications [213]
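    The decomposition described in the abstract (perceptual proxies feeding controllers, whose outputs are realized by behavior primitives) lends itself to a simple compositional structure. The following Python sketch is an illustrative reconstruction under that reading, not the authors' implementation; all names, types, and the trial-scoring interface are assumptions.

    import itertools
    from dataclasses import dataclass
    from typing import Callable, Dict, List, Tuple

    ObjState = Tuple[float, float]    # hypothetical 2D object state (e.g. x, y)
    AgentDelta = Tuple[float, float]  # desired change in the agent's state

    # A perceptual proxy computes the object representation a controller consumes
    # (e.g. a centroid vs. a bounding shape) from raw sensor data.
    Proxy = Callable[[object], ObjState]

    # A controller maps (current object state, goal state) to a desired
    # change in the agent's state.
    Controller = Callable[[ObjState, ObjState], AgentDelta]

    # A behavior primitive turns controller output into concrete robot motion
    # (e.g. an overhead push vs. a fingertip push on the PR2).
    Primitive = Callable[[AgentDelta], None]

    @dataclass
    class AffordanceBehavior:
        """One concrete way to execute a behavior: a proxy, a controller,
        and a behavior primitive."""
        proxy: Proxy
        controller: Controller
        primitive: Primitive

    def explore(proxies: Dict[str, Proxy],
                controllers: Dict[str, Controller],
                primitives: Dict[str, Primitive],
                run_trial: Callable[[AffordanceBehavior], float]
                ) -> List[Tuple[str, float]]:
        """Systematically try every proxy/controller/primitive combination on
        an object and rank them by a task score (e.g. positioning error)."""
        results = []
        for (pn, p), (cn, c), (bn, b) in itertools.product(
                proxies.items(), controllers.items(), primitives.items()):
            score = run_trial(AffordanceBehavior(p, c, b))
            results.append((f"{pn} + {cn} + {bn}", score))
        return sorted(results, key=lambda r: r[1])

    Because the three components are interchangeable, the robot can enumerate the full product of combinations for each object and rank them by outcome, which is the exploration strategy the abstract describes; noting additional task variables such as workspace location would amount to extra keys in the result table.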
