
    Autonomously learning to visually detect where manipulation will succeed

    File: HCR_AR_001.pdf (1.011 MB)
    Date: 2013-09
    Authors: Nguyen, Hai; Kemp, Charles C.
    Abstract
    Visual features can help predict if a manipulation behavior will succeed at a given location. For example, the success of a behavior that flips light switches depends on the location of the switch. We present methods that enable a mobile manipulator to autonomously learn a function that takes an RGB image and a registered 3D point cloud as input and returns a 3D location at which a manipulation behavior is likely to succeed. With our methods, robots autonomously train a pair of support vector machine (SVM) classifiers by trying behaviors at locations in the world and observing the results. Our methods require a pair of manipulation behaviors that can change the state of the world between two sets (e.g., light switch up and light switch down), classifiers that detect when each behavior has been successful, and an initial hint as to where one of the behaviors will be successful. When given an image feature vector associated with a 3D location, a trained SVM predicts if the associated manipulation behavior will be successful at the 3D location. To evaluate our approach, we performed experiments with a PR2 robot from Willow Garage in a simulated home using behaviors that flip a light switch, push a rocker-type light switch, and operate a drawer. By using active learning, the robot efficiently learned SVMs that enabled it to consistently succeed at these tasks. After training, the robot also continued to learn in order to adapt in the event of failure.
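    The training loop described above — try a behavior at a location, observe success or failure, retrain an SVM, and use active learning to pick the next location near the decision boundary — can be sketched as follows. This is a hypothetical simplification, not the authors' code: `behavior_succeeds` stands in for the robot executing a behavior and its success classifier, and synthetic 2D points stand in for the image/point-cloud feature vectors used in the paper.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    def behavior_succeeds(loc):
        # Stand-in for the real world: the behavior succeeds only near
        # the "switch", here placed at (1.0, 1.0). In the paper, success
        # is detected by a separate classifier after executing a behavior.
        return np.linalg.norm(loc - np.array([1.0, 1.0])) < 0.5

    # Candidate locations the robot could try next.
    candidates = rng.uniform(0.0, 2.0, size=(200, 2))

    # Initial hint: one location where the behavior is known to work,
    # plus one known failure so the SVM starts with two classes.
    X = [np.array([1.0, 1.0]), np.array([0.0, 0.0])]
    y = [1, 0]

    svm = SVC(kernel="rbf")
    for _ in range(30):
        svm.fit(np.array(X), np.array(y))
        # Active learning: try the candidate closest to the current
        # decision boundary (smallest absolute margin).
        margins = np.abs(svm.decision_function(candidates))
        idx = int(np.argmin(margins))
        loc = candidates[idx]
        # "Execute" the behavior there and record the observed outcome.
        X.append(loc)
        y.append(1 if behavior_succeeds(loc) else 0)
        candidates = np.delete(candidates, idx, axis=0)

    # After training, the SVM predicts where the behavior will succeed.
    print(svm.predict([[1.0, 1.0], [0.1, 0.1]]))
    ```

    Querying near the margin is what makes the loop sample-efficient: each trial is spent refining the boundary between success and failure rather than re-confirming locations the classifier is already sure about. Continuing this loop after deployment corresponds to the paper's adaptation in the event of failure.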
    URI
    http://hdl.handle.net/1853/49876
    Collections
    • Healthcare Robotics Lab [49]
    • Healthcare Robotics Lab Publications [55]
