Show simple item record

dc.contributor.author: Nguyen, Hai [en_US]
dc.contributor.author: Jain, Advait [en_US]
dc.contributor.author: Anderson, Cressel D. [en_US]
dc.contributor.author: Kemp, Charles C. [en_US]
dc.date.accessioned: 2011-03-11T18:46:47Z
dc.date.available: 2011-03-11T18:46:47Z
dc.date.issued: 2008-09
dc.identifier.citation: Nguyen, H.; Jain, A.; Anderson, C.; Kemp, C.C., "A clickable world: Behavior selection through pointing and context for mobile manipulation," IROS 2008. IEEE/RSJ International Conference on Intelligent Robots and Systems, 22-26 Sept. 2008, pp. 787-793. [en_US]
dc.identifier.isbn: 978-1-4244-2057-5
dc.identifier.uri: http://hdl.handle.net/1853/37365
dc.description: ©2008 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. [en_US]
dc.description: Presented at IROS 2008, the IEEE/RSJ International Conference on Intelligent Robots and Systems, 22-26 Sept. 2008, Nice, France. [en_US]
dc.description: DOI: 10.1109/IROS.2008.4651216 [en_US]
dc.description.abstract: We present a new behavior selection system for human-robot interaction that maps virtual buttons overlaid on the physical environment to the robot's behaviors, thereby creating a clickable world. The user clicks on a virtual button and activates the associated behavior by briefly illuminating a corresponding 3D location with an off-the-shelf green laser pointer. As we have described in previous work, the robot can detect this click and estimate its 3D location using an omnidirectional camera and a pan/tilt stereo camera. In this paper, we show that the robot can select the appropriate behavior to execute using the 3D location of the click, the context around this 3D location, and its own state. For this work, the robot performs this selection process using a cascade of classifiers. We demonstrate the efficacy of this approach with an assistive object-fetching application. Through empirical evaluation, we show that the 3D location of the click, the state of the robot, and the surrounding context are sufficient for the robot to choose the correct behavior from a set of behaviors and perform the following tasks: pick up a designated object from a floor or table, deliver an object to a designated person, place an object on a designated table, go to a designated location, and touch a designated location with its end effector. [en_US]
dc.language.iso: en_US [en_US]
dc.publisher: Georgia Institute of Technology [en_US]
dc.subject: End effectors [en_US]
dc.subject: Human-robot interaction [en_US]
dc.subject: Mobile robots [en_US]
dc.title: A Clickable World: Behavior Selection Through Pointing and Context for Mobile Manipulation [en_US]
dc.type: Proceedings [en_US]
dc.type: Post-print [en_US]
dc.contributor.corporatename: Georgia Institute of Technology. Dept. of Biomedical Engineering [en_US]
dc.contributor.corporatename: Emory University. Dept. of Biomedical Engineering [en_US]
dc.contributor.corporatename: Georgia Institute of Technology. Center for Robotics and Intelligent Machines [en_US]
dc.publisher.original: Institute of Electrical and Electronics Engineers [en_US]
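The abstract describes behavior selection as a cascade of classifiers that maps the clicked 3D location, the context around it, and the robot's own state to one of the listed behaviors. The following Python is a minimal illustrative sketch of that idea only: the feature names (surface_height, near_person), thresholds, and hand-written rule stages are assumptions for exposition, not the classifiers trained in the paper.

    # Hypothetical sketch of behavior selection via a cascade of stages.
    # All names and thresholds are illustrative assumptions, not the
    # authors' implementation.
    from dataclasses import dataclass

    @dataclass
    class Click:
        xyz: tuple             # estimated 3D location of the laser click (meters)
        surface_height: float  # height of the supporting surface at the click
        near_person: bool      # whether a person was detected near the click

    @dataclass
    class RobotState:
        holding_object: bool   # whether the gripper currently holds an object

    def select_behavior(click: Click, state: RobotState) -> str:
        """Run a fixed cascade: each stage either commits to a behavior
        or passes the click on to a later, more general stage."""
        if state.holding_object:
            # Holding an object: clicks mean "hand it over" or "put it down".
            if click.near_person:
                return "deliver_object_to_person"
            if 0.5 < click.surface_height < 1.2:
                return "place_object_on_table"
            return "go_to_location"
        # Hands free: clicks mean "fetch" or "touch".
        if click.surface_height < 0.1:
            return "pick_up_object_from_floor"
        if 0.5 < click.surface_height < 1.2:
            return "pick_up_object_from_table"
        return "touch_location_with_end_effector"

    if __name__ == "__main__":
        click = Click(xyz=(1.2, 0.3, 0.74), surface_height=0.74, near_person=False)
        state = RobotState(holding_object=False)
        print(select_behavior(click, state))  # -> pick_up_object_from_table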

