Simple item record

dc.contributor.author: Nguyen, Hai [en_US]
dc.contributor.author: Kemp, Charles C. [en_US]
dc.date.accessioned: 2013-12-19T17:49:07Z
dc.date.available: 2013-12-19T17:49:07Z
dc.date.issued: 2013-09
dc.identifier.citation: Autonomously Learning to Visually Detect Where Manipulation Will Succeed, Hai Nguyen and Charles C. Kemp, Autonomous Robots, September 2013. [en_US]
dc.identifier.issn: 0929-5593
dc.identifier.issn: 1573-7527
dc.identifier.uri: http://hdl.handle.net/1853/49876
dc.description: © The Author(s) 2013. This article is published with open access at Springerlink.com [en_US]
dc.description: DOI: 10.1007/s10514-013-9363-y [en_US]
dc.description.abstract: Visual features can help predict if a manipulation behavior will succeed at a given location. For example, the success of a behavior that flips light switches depends on the location of the switch. We present methods that enable a mobile manipulator to autonomously learn a function that takes an RGB image and a registered 3D point cloud as input and returns a 3D location at which a manipulation behavior is likely to succeed. With our methods, robots autonomously train a pair of support vector machine (SVM) classifiers by trying behaviors at locations in the world and observing the results. Our methods require a pair of manipulation behaviors that can change the state of the world between two sets (e.g., light switch up and light switch down), classifiers that detect when each behavior has been successful, and an initial hint as to where one of the behaviors will be successful. When given an image feature vector associated with a 3D location, a trained SVM predicts if the associated manipulation behavior will be successful at the 3D location. To evaluate our approach, we performed experiments with a PR2 robot from Willow Garage in a simulated home using behaviors that flip a light switch, push a rocker-type light switch, and operate a drawer. By using active learning, the robot efficiently learned SVMs that enabled it to consistently succeed at these tasks. After training, the robot also continued to learn in order to adapt in the event of failure. [en_US]
dc.language.iso: en_US [en_US]
dc.publisher: Georgia Institute of Technology [en_US]
dc.subject: Robot learning [en_US]
dc.subject: Mobile manipulation [en_US]
dc.subject: Home robots [en_US]
dc.subject: Behavior-based systems [en_US]
dc.subject: Active learning [en_US]
dc.title: Autonomously learning to visually detect where manipulation will succeed [en_US]
dc.type: Article [en_US]
dc.contributor.corporatename: Georgia Institute of Technology. Healthcare Robotics Lab [en_US]
dc.contributor.corporatename: Georgia Institute of Technology. Institute for Robotics and Intelligent Machines [en_US]
dc.publisher.original: Springer Verlag [en_US]
dc.identifier.doi: 10.1007/s10514-013-9363-y
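
As an illustration of the learning loop described in the abstract, the following is a minimal Python sketch of behavior-driven SVM training with active learning. It assumes scikit-learn; the names train_success_classifier, try_behavior_at, and candidate_features are hypothetical, and the query rule is generic uncertainty sampling rather than necessarily the paper's exact criterion.

import numpy as np
from sklearn.svm import SVC

def train_success_classifier(candidate_features, try_behavior_at, n_rounds=20):
    """Learn to predict where a manipulation behavior succeeds (sketch).

    candidate_features: (N, D) array, one image feature vector per 3D location.
    try_behavior_at: callable(index) -> bool; stands in for executing the
        behavior on the robot and running a success-detection classifier.
    """
    labels = {0: try_behavior_at(0)}        # index 0 plays the "initial hint"
    svm = SVC(kernel="rbf")

    for _ in range(n_rounds):
        untried = [i for i in range(len(candidate_features)) if i not in labels]
        if not untried:
            break
        tried = sorted(labels)
        y = np.array([labels[i] for i in tried], dtype=int)
        if len(set(y.tolist())) < 2:
            query = untried[0]              # explore until both outcomes seen
        else:
            svm.fit(candidate_features[tried], y)
            # Uncertainty sampling: try the location nearest the boundary.
            margins = np.abs(svm.decision_function(candidate_features[untried]))
            query = untried[int(np.argmin(margins))]
        labels[query] = try_behavior_at(query)
    return svm

After training, calling svm.decision_function on the feature vectors of new locations ranks them by predicted success, which corresponds to the prediction step the abstract describes; keeping the loop running after deployment matches the abstract's note that the robot continues to learn in the event of failure.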

