    Developing trust and managing uncertainty in partially observable sequential decision-making environments

    View/Open
    BISHOP-DISSERTATION-2019.pdf (4.622 MB)
    Date
    2019-10-28
    Author
    Bishop, Robert Reid
    Abstract
    This dissertation consists of three distinct, although conceptually related, papers that are unified in their focus on data-driven, stochastic sequential decision-making environments, but differentiated in their respective applications. In Chapter 2, we discuss a special class of partially observable Markov decision processes (POMDPs) in which the sources of uncertainty can be naturally separated into a hierarchy of effects — controllable, completely observable effects and exogenous, partially observable effects. For this class of POMDPs, we provide conditions under which value and policy function structural properties are inherited from an analogous class of MDPs, and discuss specialized solution procedures. In Chapter 3, we discuss an inventory control problem in which actions are time-lagged, and there are three explicit sources of demand uncertainty — the state of the macroeconomy, product-specific demand variability, and information quality. We prove that a base stock policy — defined with respect to pipeline inventory and a Bayesian belief distribution over states of the macroeconomy — is optimal, and demonstrate how to compute these base stock levels efficiently using support vector machines and Monte Carlo simulation. Further, we show how to use these results to determine how best to strategically allocate capital toward a better information infrastructure or a more agile supply chain. Finally, in Chapter 4, we consider how to generate trust in so-called development processes, such as supply chains, certain artificial intelligence systems, and maintenance processes, in which there can be adversarial manipulation and we must hedge against the risk of misapprehension of attacker objectives and resources. We show how to model dynamic agent interaction using a partially observable Markov game (POMG) framework, and present a heuristic solution procedure, based on self-training concepts, for determining a robust defender policy.
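
The Chapter 3 summary above describes a base stock policy driven by a Bayesian belief over macroeconomy states, with base stock levels computed via Monte Carlo simulation and support vector machines. The sketch below is an illustrative toy version of that idea, not code from the dissertation: the state names, transition matrix, Poisson demand rates, and service level are all assumed for the example, and the SVM step is replaced by a plain quantile estimate.

import numpy as np
from scipy.stats import poisson

# Illustrative (assumed) macroeconomy model: three hidden states with a
# Markov transition matrix and state-dependent Poisson demand rates.
STATES = ["recession", "steady", "boom"]
TRANSITION = np.array([[0.70, 0.25, 0.05],
                       [0.15, 0.70, 0.15],
                       [0.05, 0.25, 0.70]])
DEMAND_RATE = np.array([5.0, 10.0, 18.0])   # mean demand per period, by state

def update_belief(belief, observed_demand):
    # Bayes update from the observed demand, then propagate one period
    # forward through the assumed transition matrix.
    posterior = belief * poisson.pmf(observed_demand, DEMAND_RATE)
    posterior /= posterior.sum()
    return posterior @ TRANSITION

def base_stock_level(belief, service_level=0.95, n_samples=100_000, seed=0):
    # Monte Carlo estimate of the service-level demand quantile under the
    # current belief (a stand-in for the dissertation's SVM-based procedure).
    rng = np.random.default_rng(seed)
    states = rng.choice(len(STATES), size=n_samples, p=belief)
    demand = rng.poisson(DEMAND_RATE[states])
    return int(np.quantile(demand, service_level))

def order_quantity(belief, pipeline_inventory):
    # Order up to the belief-dependent base stock level, net of inventory
    # already on hand or in the pipeline.
    return max(0, base_stock_level(belief) - pipeline_inventory)

belief = np.array([0.2, 0.6, 0.2])                  # prior over hidden states
belief = update_belief(belief, observed_demand=14)  # high demand shifts weight toward "boom"
print("posterior belief:", np.round(belief, 3))
print("order quantity:", order_quantity(belief, pipeline_inventory=6))

In this toy setup the base stock level is recomputed each period as new demand observations shift the belief, which mirrors the belief-dependent policy structure described in the abstract.
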
    URI
    http://hdl.handle.net/1853/62302
    Collections
    • Georgia Tech Theses and Dissertations [23877]
    • School of Industrial and Systems Engineering Theses and Dissertations [1457]
