
dc.contributor.advisor: Erera, Alan L.
dc.contributor.advisor: White III, Chelsea C.
dc.contributor.author: Chang, Yanling
dc.date.accessioned: 2016-01-07T17:36:06Z
dc.date.available: 2016-01-07T17:36:06Z
dc.date.created: 2015-12
dc.date.issued: 2015-11-10
dc.date.submitted: December 2015
dc.identifier.uri: http://hdl.handle.net/1853/54407
dc.description.abstract: The intent of this dissertation is to generate a set of non-dominated finite-memory policies from which one of two agents (the leader) can select a most preferred policy to control a dynamic system that is also affected by the control decisions of the other agent (the follower). The problem is described by an infinite horizon, total discounted reward, partially observed Markov game (POMG). Each agent’s policy assumes that the agent knows its current and recent state values, its recent actions, and the current and recent, possibly inaccurate, observations of the other agent’s state. For each candidate finite-memory leader policy, we assume the follower, fully aware of the leader policy, determines a policy that optimizes the follower’s criterion. The leader-follower assumption allows the POMG to be transformed into a specially structured, partially observed Markov decision process that we use to determine the follower’s best response policy for a given leader policy. We then present a value determination procedure to evaluate the leader’s performance for a given leader policy, from which a non-dominated set of leader policies can be selected by existing heuristic approaches. We then analyze how the value of the leader’s criterion changes due to changes in the leader’s quality of observation of the follower. We give conditions that ensure improved observation quality will improve the leader’s value function, assuming that changes in the observation quality do not cause the follower to change its policy. We show that discontinuities in the value of the leader’s criterion, as a function of observation quality, can occur when the change in observation quality is significant enough for the follower to change its policy. We present conditions that determine when a discontinuity may occur and conditions that guarantee a discontinuity will not degrade the leader’s performance. This framework has been used to develop a dynamic risk analysis approach for U.S. food supply chains and to create and compare supply chain designs and sequential control strategies for risk mitigation.
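To make the leader-follower evaluation idea in the abstract concrete, the following is a minimal, illustrative sketch only, not the dissertation's POMG procedure: it uses a small, fully observed two-agent Markov game with hypothetical states, actions, rewards, and transitions, fixes a memoryless leader policy, computes the follower's best response by value iteration on the induced decision process, and then performs value determination for the leader under the resulting joint policy. All names and data below are assumptions made for illustration.

# Illustrative sketch (fully observed simplification, hypothetical data):
# fix a leader policy, compute the follower's best response, then evaluate
# the leader's infinite-horizon discounted value under the joint policy.
import numpy as np

rng = np.random.default_rng(0)
nS, nA_l, nA_f, gamma = 4, 2, 3, 0.9

# P[s, a_l, a_f, s'] : joint transition kernel; R_l, R_f : per-agent rewards.
P = rng.random((nS, nA_l, nA_f, nS))
P /= P.sum(axis=-1, keepdims=True)
R_l = rng.random((nS, nA_l, nA_f))
R_f = rng.random((nS, nA_l, nA_f))

# A memoryless, deterministic leader policy: state -> leader action.
leader_policy = rng.integers(nA_l, size=nS)

# Follower's best response: value iteration on the decision process induced
# by fixing the leader's action in every state.
P_f = P[np.arange(nS), leader_policy]            # shape (nS, nA_f, nS)
R_f_ind = R_f[np.arange(nS), leader_policy]      # shape (nS, nA_f)
V_f = np.zeros(nS)
for _ in range(1000):
    Q_f = R_f_ind + gamma * (P_f @ V_f)          # shape (nS, nA_f)
    V_new = Q_f.max(axis=1)
    if np.max(np.abs(V_new - V_f)) < 1e-10:
        break
    V_f = V_new
follower_policy = Q_f.argmax(axis=1)

# Value determination for the leader under the fixed joint policy:
# solve (I - gamma * P_pi) V_l = r_pi.
P_pi = P[np.arange(nS), leader_policy, follower_policy]    # (nS, nS)
r_pi = R_l[np.arange(nS), leader_policy, follower_policy]  # (nS,)
V_l = np.linalg.solve(np.eye(nS) - gamma * P_pi, r_pi)
print("Leader value by state:", V_l)

Repeating this evaluation over a set of candidate leader policies is what would allow a non-dominated subset to be identified; handling partial observability and finite-memory policies, as in the dissertation, requires the specially structured POMDP transformation described in the abstract rather than this fully observed shortcut.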
dc.format.mimetype: application/pdf
dc.publisher: Georgia Institute of Technology
dc.subject: Risk analysis
dc.subject: Markov decision process
dc.subject: Real-time decision making
dc.subject: Value of information
dc.title: A leader-follower partially observed Markov game
dc.type: Dissertation
dc.description.degree: Ph.D.
dc.contributor.department: Industrial and Systems Engineering
thesis.degree.level: Doctoral
dc.contributor.committeeMember: Ayer, Turgay
dc.contributor.committeeMember: Zhou, Enlu
dc.contributor.committeeMember: Dieci, Luca
dc.date.updated: 2016-01-07T17:36:06Z

