
    Modeling Human and Robot Behavior During Dressing Tasks

    View/Open
    Learning to Navigate Cloth using Haptics.mp4 (4.132Mb)
    Learning to Dress.mp4 (96.33Mb)
    Animating Human Dressing.mp4 (16.60Mb)
    Learning_to_assist_with_dressing.mp4 (312.9Mb)
    Author
    Clegg, Alexander William
    Abstract
    Human dressing assistance tasks raise privacy, safety, and independence concerns in the daily lives of a vast number of individuals worldwide, providing strong motivation for applying assistive robotics to these tasks. Additionally, the difficulty of manually animating virtual characters that interact with animated or simulated garments has led to the noticeable absence of such scenes in existing video games and animated films, motivating the application of automated motion synthesis techniques to this domain. However, cloth dynamics are complex, and predicting the outcome of a planned interaction with a garment is challenging; this makes manual controller design difficult and makes feedback control strategies an attractive alternative. The focus of this thesis is the development of a set of techniques for behavior modeling and motion synthesis in the space of human dressing. We first consider motion synthesis primarily in the space of self-dressing. We propose a kinematic motion synthesis technique that automatically computes the motion of a virtual character as it successfully executes a dressing task with a simulated garment. Next, we explore the impact of haptic (touch) observation modes on the self-dressing task and present a deep reinforcement learning (DRL) approach to navigating simulated garments. We then present a unified DRL approach to self-dressing motion synthesis in which neural network controllers are trained via Trust Region Policy Optimization (TRPO). Finally, we investigate the extension of haptic-aware feedback control and DRL to robot-assisted dressing. We present a universal policy method for modeling human dressing behavior under variations in capability, including muscle weakness, dyskinesia, and limited range of motion. Using this method and behavior model, we demonstrate the discovery of successful strategies for a robot to assist humans with a variety of capability limitations.
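    The haptic-aware DRL setup described in the abstract can be illustrated with a deliberately minimal toy: the only observation is a simulated haptic reading (the signed offset of a limb from a sleeve opening), and a two-action softmax policy learns which way to move. This sketch uses plain REINFORCE for brevity; the thesis itself trains neural network controllers with TRPO, which additionally constrains each policy update to a trust region. All names and the reward design here are illustrative assumptions, not the thesis's actual environments.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def policy_probs(w, obs):
        """Softmax over two actions: 0 = move right, 1 = move left."""
        logits = w * np.array([-obs, obs])
        z = np.exp(logits - logits.max())
        return z / z.sum()

    def train(iters=2000, lr=0.1):
        """Train a one-parameter policy with REINFORCE on a haptic toy task."""
        w = 0.0
        for _ in range(iters):
            obs = rng.uniform(-1.0, 1.0)      # haptic reading: signed offset
            p = policy_probs(w, obs)
            a = rng.choice(2, p=p)
            correct = 0 if obs < 0 else 1     # move toward the opening
            r = 1.0 if a == correct else -1.0
            feats = np.array([-obs, obs])
            # REINFORCE update: w += lr * r * d/dw log pi(a | obs)
            w += lr * r * (feats[a] - p @ feats)
        return w

    def accuracy(w, n=500):
        """Fraction of random haptic readings answered with the correct move."""
        obs = rng.uniform(-1.0, 1.0, size=n)
        acts = np.array([np.argmax(policy_probs(w, o)) for o in obs])
        return float(np.mean(acts == (obs >= 0)))
    ```

    In this simplified setting the policy reliably learns to move toward the opening; the trust-region constraint that TRPO adds matters once the controller is a deep network and a single large policy-gradient step can collapse performance.
    
    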
    URI
    http://hdl.handle.net/1853/63506
    Collections
    • College of Computing Theses and Dissertations [1156]
    • Georgia Tech Theses and Dissertations [23403]
    • School of Interactive Computing Theses and Dissertations [130]

    © 2020 Georgia Institute of Technology