
    • Anticipation in Robot Motion 

      Gielniak, Michael J.; Thomaz, Andrea L. (Georgia Institute of Technology; Institute of Electrical and Electronics Engineers, 2011)
      Robots that display anticipatory motion provide their human partners with greater time to respond in interactive tasks because human partners are aware of robot intent earlier. We create anticipatory motion autonomously ...
    • Automatic Task Decomposition and State Abstraction from Demonstration 

      Cobo, Luis C.; Isbell, Charles L., Jr.; Thomaz, Andrea L. (Georgia Institute of Technology; International Foundation for Autonomous Agents and Multiagent Systems, 2012-06)
      Both Learning from Demonstration (LfD) and Reinforcement Learning (RL) are popular approaches for building decision-making agents. LfD applies supervised learning to a set of human demonstrations to infer and imitate the ...
    • Batch versus Interactive Learning by Demonstration 

      Zang, Peng; Tian, Runhe; Thomaz, Andrea L.; Isbell, Charles L. (Georgia Institute of Technology; Institute of Electrical and Electronics Engineers, 2010)
      Agents that operate in human environments will need to be able to learn new skills from everyday people. Learning from demonstration (LfD) is a popular paradigm for this. Drawing from our interest in Socially Guided ...
    • Combining function approximation, human teachers, and training regimens for real-world RL 

      Zang, Peng; Irani, Arya; Zhou, Peng; Isbell, Charles L.; Thomaz, Andrea L. (Georgia Institute of Technology; International Foundation for Autonomous Agents and Multiagent Systems, 2010)
    • Computational Benefits of Social Learning Mechanisms: Stimulus Enhancement and Emulation 

      Cakmak, Maya; DePalma, Nick; Arriaga, Rosa; Thomaz, Andrea L. (Georgia Institute of Technology; Institute of Electrical and Electronics Engineers, 2009)
      Social learning in robotics has largely focused on imitation learning. In this work, we take a broader view of social learning and are interested in the multifaceted ways that a social partner can influence the learning ...
    • Controlling Social Dynamics with a Parametrized Model of Floor Regulation 

      Chao, Crystal; Thomaz, Andrea L. (Georgia Institute of Technology; Brigham Young University, 2012)
      Turn-taking is ubiquitous in human communication, yet turn-taking between humans and robots continues to be stilted and awkward for human users. The goal of our work is to build autonomous robot controllers for successfully ...
    • Designing Interactions for Robot Active Learners 

      Cakmak, Maya; Chao, Crystal; Thomaz, Andrea L. (Georgia Institute of Technology; Institute of Electrical and Electronics Engineers, 2010-06)
      This paper addresses some of the problems that arise when applying active learning to the context of human–robot interaction (HRI). Active learning is an attractive strategy for robot learners because it has the potential ...
    • Effective robot task learning by focusing on task-relevant objects 

      Lee, Kyu Hwa; Lee, Jinhan; Thomaz, Andrea L.; Bobick, Aaron F. (Georgia Institute of Technology; Institute of Electrical and Electronics Engineers, 2009-10)
      In a robot learning from demonstration framework involving environments with many objects, one of the key problems is to decide which objects are relevant to a given task. In this paper, we analyze this problem and propose ...
    • Effects of Social Exploration Mechanisms on Robot Learning 

      Cakmak, Maya; DePalma, Nick; Thomaz, Andrea L.; Arriaga, Rosa (Georgia Institute of Technology; Institute of Electrical and Electronics Engineers, 2009)
      Social learning in robotics has largely focused on imitation learning. Here we take a broader view and are interested in the multifaceted ways that a social partner can influence the learning process. We implement four ...
    • Enhancing Interaction Through Exaggerated Motion Synthesis 

      Gielniak, Michael J.; Thomaz, Andrea L. (Georgia Institute of Technology, 2012-03)
      Other than eye gaze and referential gestures (e.g. pointing), the relationship between robot motion and observer attention is not well understood. We explore this relationship to achieve social goals, such as influencing ...
    • Exploiting social partners in robot learning 

      Cakmak, Maya; DePalma, Nick; Arriaga, Rosa; Thomaz, Andrea L. (Georgia Institute of Technology; Springer, 2010)
      Social learning in robotics has largely focused on imitation learning. Here we take a broader view and are interested in the multifaceted ways that a social partner can influence the learning process. We implement four ...
    • Generating Human-like Motion for Robots 

      Gielniak, Michael J.; Liu, C. Karen; Thomaz, Andrea L. (Georgia Institute of Technology; Sage Publications, 2013-07)
      Action prediction and fluidity are key elements of human-robot teamwork. If a robot’s actions are hard to understand, it can impede fluid HRI. Our goal is to improve the clarity of robot motion by making it more humanlike. ...
    • Human-like Action Segmentation for Option Learning 

      Shim, Jaeeun; Thomaz, Andrea L. (Georgia Institute of Technology; Institute of Electrical and Electronics Engineers, 2011)
      Robots learning interactively with a human partner raise several open questions, one of which is increasing the efficiency of learning. One approach to this problem in the Reinforcement Learning domain is to use options, ...
    • Keyframe-based Learning from Demonstration Method and Evaluation 

      Akgun, Baris; Cakmak, Maya; Jiang, Karl; Thomaz, Andrea L. (Georgia Institute of Technology; Springer, 2012-06)
      We present a framework for learning skills from novel types of demonstrations that have been shown to be desirable from a human-robot interaction perspective. Our approach, Keyframe-based Learning from Demonstration (KLfD), ...
    • Learning about Objects with Human Teachers 

      Thomaz, Andrea L.; Cakmak, Maya (Georgia Institute of Technology; Association for Computing Machinery, 2009)
      A general learning task for a robot in a new environment is to learn about objects and what actions/effects they afford. To approach this, we look at ways that a human partner can intuitively help the robot learn, ...
    • Multi-Cue Contingency Detection 

      Lee, Jinhan; Chao, Crystal; Bobick, Aaron F.; Thomaz, Andrea L. (Georgia Institute of Technology; Springer, 2012-04)
      The ability to detect a human's contingent response is an essential skill for a social robot attempting to engage new interaction partners or maintain ongoing turn-taking interactions. Prior work on contingency detection ...
    • Multimodal Real-Time Contingency Detection for HRI 

      Chu, Vivian; Bullard, Kalesha; Thomaz, Andrea L. (Georgia Institute of Technology; Institute of Electrical and Electronics Engineers, 2014-09)
      Our goal is to develop robots that naturally engage people in social exchanges. In this paper, we focus on the problem of recognizing that a person is responsive to a robot’s request for interaction. Inspired by human ...
    • Object Focused Q-Learning for Autonomous Agents 

      Cobo, Luis C.; Isbell, Charles L., Jr.; Thomaz, Andrea L. (Georgia Institute of Technology; ACM Press, 2013)
      We present Object Focused Q-learning (OF-Q), a novel reinforcement learning algorithm that can offer exponential speed-ups over classic Q-learning on domains composed of independent objects. An OF-Q agent treats the state ...
    • Optimality of Human Teachers for Robot Learners 

      Cakmak, Maya; Thomaz, Andrea L. (Georgia Institute of Technology; Institute of Electrical and Electronics Engineers, 2010)
      In this paper we address the question of how closely everyday human teachers match a theoretically optimal teacher. We present two experiments in which subjects teach a concept to our robot in a supervised fashion. In ...
    • Policy Shaping: Integrating Human Feedback with Reinforcement Learning 

      Griffith, Shane; Subramanian, Kaushik; Scholz, Jonathan; Isbell, Charles L.; Thomaz, Andrea L. (Georgia Institute of Technology; Neural Information Processing Systems, 2013)
      A long-term goal of Interactive Reinforcement Learning is to incorporate non-expert human feedback to solve complex tasks. Some state-of-the-art methods have approached this problem by mapping human information to ...