
    The Manipulation Action Grammar: A Key to Intelligent Robots

    Files
    aloimonos.mp4 (538.3 MB)
    aloimonos_videostream.html (985 bytes)
    Transcription.txt (43.08 KB)
    Date
    2016-04-06
    Author
    Aloimonos, Yiannis
    Abstract
    Humanoid robots will need to learn the actions that humans perform: they will need to recognize these actions when they see them, and they will need to perform these actions themselves. This presentation proposes that this learning task can be achieved using a manipulation grammar. Context-free grammars have long been favored in linguistics because they provide a simple and precise mechanism for describing how phrases in a natural language are built from smaller blocks. They also capture exactly the basic recursive structure of natural language: the way clauses nest inside other clauses, and the way lists of adjectives and adverbs are followed by nouns and verbs. Similarly, in manipulation, every complex activity is built from smaller blocks involving hands and their movements, as well as objects, tools, and the monitoring of their state. Thus, interpreting a “seen” action is like understanding language, and executing an action from knowledge in memory is like producing language. Several experiments will be shown that interpret human actions in the arts-and-crafts and assembly domains by parsing the visual input on the basis of the manipulation grammar. Realizing this parsing requires a network of visual processes that attend to objects and tools, segment and recognize them, track the moving objects and hands, and monitor the state of objects to calculate goal completion. These processes will also be explained, and the presentation will conclude with demonstrations of robots learning how to perform tasks by watching videos of the relevant human activities.
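    To make the language analogy concrete, here is a minimal sketch of parsing an observed activity with a toy context-free manipulation grammar. The rules, token names, and observed symbol stream are hypothetical placeholders for what a vision front end might produce; this is an illustration of the idea, not the speaker's actual grammar.

    # Toy context-free manipulation grammar (hypothetical rules).
    # Nonterminals are uppercase keys; terminals are lowercase symbols
    # assumed to come from visual detection, tracking, and state monitoring.
    GRAMMAR = {
        "ACTION": [["HAND", "MANIP"]],
        "MANIP":  [["VERB", "OBJECT", "MANIP"],   # actions nest recursively,
                   ["VERB", "OBJECT"]],           # like clauses inside clauses
        "HAND":   [["hand"]],
        "VERB":   [["grasp"], ["cut"], ["release"]],
        "OBJECT": [["knife"], ["bread"]],
    }

    def parse(symbol, tokens, pos):
        """Recursive-descent parse: return (parse_tree, next_position) or None."""
        if symbol not in GRAMMAR:  # terminal: must match the observed token
            if pos < len(tokens) and tokens[pos] == symbol:
                return symbol, pos + 1
            return None
        for production in GRAMMAR[symbol]:  # try each alternative in order
            children, p = [], pos
            for sym in production:
                result = parse(sym, tokens, p)
                if result is None:
                    break
                subtree, p = result
                children.append(subtree)
            else:  # every symbol in this production matched
                return (symbol, children), p
        return None

    # A "seen" activity, already segmented into discrete symbols by the
    # (assumed) vision modules: a hand grasps a knife, cuts bread, releases.
    observed = ["hand", "grasp", "knife", "cut", "bread", "release", "knife"]

    result = parse("ACTION", observed, 0)
    assert result is not None and result[1] == len(observed)
    print(result[0])  # nested action tree, analogous to a sentence parse

    Executing an action would run the same structure in reverse: traverse the tree and emit motor primitives at the leaves, mirroring the production side of the language analogy.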
    URI
    http://hdl.handle.net/1853/54719
    Collections
    • IRIM Seminar Series [126]
