
dc.contributor.advisor: Isbell, Charles L.
dc.contributor.author: Simpkins, Christopher Lee
dc.date.accessioned: 2017-08-17T18:59:47Z
dc.date.available: 2017-08-17T18:59:47Z
dc.date.created: 2017-08
dc.date.issued: 2017-06-26
dc.date.submitted: August 2017
dc.identifier.uri: http://hdl.handle.net/1853/58683
dc.description.abstract: Reinforcement learning is a promising solution to the intelligent agent problem: given the state of the world, which action should an agent take to maximize goal attainment? However, reinforcement learning algorithms are slow to converge for large state spaces, and using reinforcement learning in agent programs requires detailed knowledge of reinforcement learning algorithms. One approach to solving the curse of dimensionality in reinforcement learning is decomposition. Modular reinforcement learning, as it is called in the literature, decomposes an agent into concurrently running reinforcement learning modules that each learn a "selfish" solution to a subset of the original problem. For example, a bunny agent might be decomposed into a module that avoids predators and a module that finds food. Current approaches to modular reinforcement learning support decomposition but, because the reward scales of the modules must be comparable, they are not composable: a module written for one agent cannot be reused in another agent without modifying its reward function. This dissertation makes two contributions: (1) a command arbitration algorithm for modular reinforcement learning that enables composability by decoupling the reward scales of reinforcement learning modules, and (2) a Scala-embedded domain-specific language, AFABL (A Friendly Adaptive Behavior Language), that integrates modular reinforcement learning in a way that allows programmers to use reinforcement learning without knowing much about reinforcement learning algorithms. We empirically demonstrate the reward comparability problem and show that our command arbitration algorithm solves it, and we present the results of a study in which programmers used AFABL and traditional programming to write a simple agent and adapt it to a new domain, demonstrating the promise of language-integrated reinforcement learning for practical agent software engineering. (An illustrative sketch of the modular decomposition described here follows this record.)
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.publisher: Georgia Institute of Technology
dc.subject: Machine learning
dc.subject: Reinforcement learning
dc.subject: Modular reinforcement learning
dc.subject: Programming languages
dc.subject: Domain specific languages
dc.subject: Software engineering
dc.subject: Artificial intelligence
dc.subject: Intelligent agents
dc.title: Integrating reinforcement learning into a programming language
dc.type: Dissertation
dc.description.degree: Ph.D.
dc.contributor.department: Computer Science
thesis.degree.level: Doctoral
dc.contributor.committeeMember: Bodner, Douglas
dc.contributor.committeeMember: Riedl, Mark
dc.contributor.committeeMember: Rugaber, Spencer
dc.contributor.committeeMember: Thomaz, Andrea
dc.date.updated: 2017-08-17T18:59:47Z
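
The abstract describes decomposing an agent (the bunny example) into "selfish" reinforcement learning modules whose per-action preferences an arbitrator combines. Below is a minimal, hypothetical Scala sketch of that decomposition. The names (Module, arbitrate, the grid-world bunny domain) are illustrative only, not AFABL's actual API, and learned Q-values are stood in for by distance heuristics so the example runs stand-alone. The naive preference-sum arbitration shown here is precisely where comparable reward scales are assumed; the dissertation's command arbitration algorithm removes that assumption.

object ModularBunnySketch {

  // Full world state; in modular RL each module would in principle see
  // only its own abstraction of this state.
  case class WorldState(bunny: (Int, Int), food: (Int, Int), predator: (Int, Int))

  sealed trait Action
  case object Up extends Action
  case object Down extends Action
  case object Left extends Action
  case object Right extends Action
  val actions: List[Action] = List(Up, Down, Left, Right)

  private def move(p: (Int, Int), a: Action): (Int, Int) = a match {
    case Up    => (p._1, p._2 + 1)
    case Down  => (p._1, p._2 - 1)
    case Left  => (p._1 - 1, p._2)
    case Right => (p._1 + 1, p._2)
  }

  // Manhattan distance on the grid.
  private def dist(p: (Int, Int), q: (Int, Int)): Double =
    (math.abs(p._1 - q._1) + math.abs(p._2 - q._2)).toDouble

  // A module reports a preference for each action. In a real modular RL
  // system these would be Q-values learned from the module's own reward.
  trait Module {
    def preference(w: WorldState, a: Action): Double
  }

  object FindFood extends Module {
    def preference(w: WorldState, a: Action): Double =
      -dist(move(w.bunny, a), w.food) // closer to food is better
  }

  object AvoidPredator extends Module {
    def preference(w: WorldState, a: Action): Double =
      dist(move(w.bunny, a), w.predator) // farther from the predator is better
  }

  // Naive arbitration: choose the action with the greatest summed preference.
  // Summing is only meaningful if the modules' reward scales are comparable,
  // which is the composability problem the dissertation's command arbitration
  // algorithm addresses.
  def arbitrate(modules: List[Module], w: WorldState): Action =
    actions.maxBy(a => modules.map(_.preference(w, a)).sum)

  def main(args: Array[String]): Unit = {
    val w = WorldState(bunny = (0, 0), food = (3, 0), predator = (-1, 0))
    println(arbitrate(List(FindFood, AvoidPredator), w)) // prints Right
  }
}

Note that neither module knows about the other: each could be reused in a different agent unchanged, provided the arbitrator does not require their rewards to share a scale, which is exactly the property the dissertation's arbitration algorithm provides.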

