Manipulating state space distributions for sample-efficient imitation-learning
Schroecker, Yannick Karl Daniel
Imitation learning has emerged as one of the most effective approaches to training agents to act intelligently in unstructured and unknown domains. On its own or in combination with reinforcement learning, it enables agents to reproduce an expert's behavior and to solve complex, long-term decision-making problems. However, to use demonstrations effectively and learn from a finite amount of data, the agent needs to develop an understanding of its environment. This thesis investigates estimators of the state-distribution gradient as a means to influence which states the agent will visit and thereby guide it toward imitating the expert's behavior. Furthermore, this thesis shows that approaches which reason over future states in this way can learn from sparse signals and thus provide an effective way to program agents. Specifically, this dissertation aims to validate the following thesis statement: Exploiting inherent structure in Markov chain stationary distributions allows learning agents to reason about likely future observations, and enables robust and efficient imitation learning, providing an effective and interactive way to teach agents from minimal demonstrations.
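The "inherent structure" the thesis statement refers to can be illustrated with a minimal sketch: for a Markov chain with transition matrix P, the stationary distribution d satisfies d = dP, i.e. it is a left eigenvector of P with eigenvalue 1, and it describes the long-run frequency with which each state is visited. The three-state transition matrix below is a hypothetical example, not taken from the thesis.

```python
import numpy as np

# Illustrative transition matrix of a small, irreducible, aperiodic
# Markov chain (rows sum to 1); not taken from the dissertation.
P = np.array([
    [0.9, 0.1, 0.0],
    [0.0, 0.5, 0.5],
    [0.3, 0.0, 0.7],
])

# The stationary distribution d satisfies d = d P, so it is the left
# eigenvector of P (equivalently, the eigenvector of P.T) associated
# with eigenvalue 1, the largest eigenvalue of a stochastic matrix.
eigvals, eigvecs = np.linalg.eig(P.T)
d = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
d = d / d.sum()  # normalize to a probability distribution

# d now gives the long-run state-visitation frequencies; an agent that
# can estimate how d (or its gradient) changes with its policy can
# reason about which states it is likely to observe in the future.
print(d)
```

A learner that differentiates this visitation distribution with respect to its policy parameters can steer the states it will encounter, which is the mechanism the abstract invokes for matching an expert's behavior.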
Showing items related by title, author, creator and subject.
Mehta, Nishant A. (Georgia Institute of Technology, 2013-05-15) Given the "right" representation, learning is easy. This thesis studies representation learning and meta-learning, with a special focus on sparse representations. Meta-learning is fundamental to machine learning, and it ...
Inghilleri, Niccolo (Georgia Institute of Technology, 2021-05-05) This study aims to assess the impact on skill development of a hands-on experimentation and learning device within the undergraduate aerospace control analysis curriculum at Georgia Institute of Technology. The Transportable ...
Berlind, Christopher (Georgia Institute of Technology, 2015-07-22) Traditional supervised machine learning algorithms are expected to have access to a large corpus of labeled examples, but the massive amount of data available in the modern world has made unlabeled data much easier to ...