dc.contributor.advisor: Thomaz, Andrea L.
dc.contributor.advisor: Chernova, Sonia
dc.contributor.author: Chu, Vivian
dc.date.accessioned: 2018-05-31T18:12:29Z
dc.date.available: 2018-05-31T18:12:29Z
dc.date.created: 2018-05
dc.date.issued: 2018-01-09
dc.date.submitted: May 2018
dc.identifier.uri: http://hdl.handle.net/1853/59839
dc.description.abstract: The real world is complex, unstructured, and contains high levels of uncertainty. Although past work shows that robots can operate successfully in situations where a single skill is needed, they will need a framework that enables them to reason and learn continuously in order to operate effectively in human-centric environments. One approach that allows robots to aggregate a library of skills is to model the world using affordances. In this thesis, we model affordances as the relationship between a robot's actions on its environment and the effects of those actions. By modeling the world with affordances, robots can reason about which actions they need to take to achieve a goal. This thesis provides a framework that allows robots to learn affordance models through interaction and human guidance. Work on robot affordance learning has focused largely on visual skill representations because getting robots to interact with the environment is difficult. Furthermore, using additional modalities (e.g., touch and sound) introduces challenges such as differing sampling rates and data resolutions. This thesis addresses these challenges by contributing a human-centered framework for robot affordance learning in which human teachers guide the robot throughout the entire modeling pipeline. We introduce several novel human-guided robot self-exploration algorithms that enable robots to efficiently explore the environment and learn affordance models for a diverse range of manipulation tasks. The work contributes a multisensory affordance model that integrates visual, haptic, and audio input, and a novel control framework that enables adaptive object manipulation using multisensory affordances.
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.publisher: Georgia Institute of Technology
dc.subject: Robotics
dc.subject: Robot learning
dc.subject: Affordance learning
dc.subject: Human robot interaction
dc.subject: Multisensory data
dc.subject: Robot object manipulation
dc.subject: Human-guided robot exploration
dc.subject: Machine learning
dc.subject: Artificial intelligence
dc.subject: Haptics
dc.subject: Adaptable controllers
dc.subject: Multisensory robot control
dc.subject: Human-guided affordance learning
dc.subject: Interactive multisensory perception
dc.subject: Multimodal data
dc.subject: Sensor fusion
dc.title: Teaching robots about human environments: Leveraging human interaction to efficiently learn and use multisensory object affordances
dc.type: Dissertation
dc.description.degree: Ph.D.
dc.contributor.department: Interactive Computing
thesis.degree.level: Doctoral
dc.contributor.committeeMember: Christensen, Henrik I.
dc.contributor.committeeMember: Kemp, Charles C.
dc.contributor.committeeMember: Srinivasa, Siddhartha
dc.date.updated: 2018-05-31T18:12:29Z
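
The abstract frames an affordance as a learned mapping from a robot's action, together with its multisensory observations of an object, to the effect that action produces. The short Python sketch below illustrates that formulation only: the class name, feature dimensions, fusion by concatenation, and the random-forest learner are assumptions made for illustration and do not come from the dissertation.

    # Minimal sketch (not the dissertation's code): an affordance model as a
    # learned mapping (action, multisensory features) -> predicted effect.
    # Feature sizes, labels, and the random-forest learner are assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    class MultisensoryAffordanceModel:
        """Predicts the effect of applying an action to an object, given
        visual, haptic, and audio features observed during interaction."""

        def __init__(self):
            self.clf = RandomForestClassifier(n_estimators=100)

        @staticmethod
        def _featurize(action_id, visual, haptic, audio):
            # Fuse modalities by simple concatenation; a real system must also
            # handle differing sampling rates and resolutions (see abstract).
            return np.concatenate(([action_id], visual, haptic, audio))

        def fit(self, interactions, effects):
            # interactions: list of (action_id, visual, haptic, audio) tuples
            # effects: the effect label observed for each interaction
            X = np.array([self._featurize(*i) for i in interactions])
            self.clf.fit(X, effects)

        def predict_effect(self, action_id, visual, haptic, audio):
            x = self._featurize(action_id, visual, haptic, audio).reshape(1, -1)
            return self.clf.predict(x)[0]

    # Toy usage with synthetic data: two actions, 4-D visual, 3-D haptic,
    # 2-D audio features; the "effect" here trivially depends on the action.
    rng = np.random.default_rng(0)
    interactions = [(a, rng.random(4), rng.random(3), rng.random(2))
                    for a in rng.integers(0, 2, size=50)]
    effects = [i[0] for i in interactions]
    model = MultisensoryAffordanceModel()
    model.fit(interactions, effects)
    print(model.predict_effect(1, rng.random(4), rng.random(3), rng.random(2)))

Concatenation is the simplest possible fusion step; the abstract's point about differing sampling rates and data resolutions is precisely why a real multisensory affordance model needs a more careful alignment and fusion strategy.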

