Stochastic optimal control - a forward and backward sampling approach
Stochastic optimal control has seen significant recent development, motivated by its success in a plethora of engineering applications, such as autonomous systems, robotics, neuroscience, and financial engineering. Despite the many theoretical and algorithmic advancements that made such success possible, several obstacles remain; most notable are (i) the mitigation of the curse of dimensionality inherent in optimal control problems, (ii) the design of efficient algorithms that allow for fast, online computation, and (iii) the expansion of the class of optimal control problems that can be addressed by algorithms in engineering practice. The aim of this dissertation is the development of a learning stochastic control framework that capitalizes on the innate relationship between certain nonlinear partial differential equations (PDEs) and forward and backward stochastic differential equations (FBSDEs), demonstrated by a nonlinear version of the Feynman-Kac lemma. By means of this lemma, we are able to obtain a probabilistic representation of the solution to the nonlinear Hamilton-Jacobi-Bellman PDE, expressed in the form of a system of decoupled FBSDEs. This system of FBSDEs can then be simulated by employing linear regression techniques. We present a novel discretization scheme for FBSDEs and enhance the resulting algorithm with importance sampling, thereby constructing an iterative scheme that is capable of learning the optimal control without an initial guess, even in systems with highly nonlinear, underactuated dynamics. The framework we develop within this dissertation addresses several classes of stochastic optimal control, such as L2 control, L1 control, and risk-sensitive control, as well as some classes of differential games, in both fixed-final-time and first-exit settings.
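To illustrate the kind of regression-based FBSDE simulation the abstract alludes to, the following is a minimal sketch (not the dissertation's algorithm): the forward SDE is sampled with Euler-Maruyama, and the backward process is recovered by stepping backward in time, approximating each conditional expectation by least-squares regression on a polynomial basis. All dynamics, the driver, and the parameters here are illustrative assumptions, chosen as a linear driver h(y) = -c*y with terminal cost g(x) = x^2 so that the exact value Y_0 = exp(-cT) * E[X_T^2] is known for checking.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, M = 1.0, 20, 5000          # horizon, time steps, Monte Carlo paths
sigma, c = 1.0, 1.0              # diffusion coefficient, driver gain (illustrative)
dt = T / N

# Forward pass: Euler-Maruyama for dX = sigma dW, X_0 = 0.
X = np.zeros((N + 1, M))
for k in range(N):
    X[k + 1] = X[k] + sigma * np.sqrt(dt) * rng.standard_normal(M)

# Backward pass: Y_k ≈ E[Y_{k+1} | X_k] + h(Y) dt with driver h(y) = -c*y.
# Each conditional expectation is a least-squares projection onto the basis
# [1, x, x^2] evaluated at X_k (least-squares Monte Carlo regression).
Y = X[N] ** 2                    # terminal condition g(x) = x^2
for k in range(N - 1, -1, -1):
    basis = np.vstack([np.ones(M), X[k], X[k] ** 2]).T
    coef, *_ = np.linalg.lstsq(basis, Y, rcond=None)
    cond_exp = basis @ coef      # regression estimate of E[Y_{k+1} | X_k]
    Y = cond_exp - c * cond_exp * dt   # explicit Euler step in the driver

y0 = Y.mean()
print(y0)  # should be close to exp(-c*T)*E[X_T^2] = exp(-1) ≈ 0.368
```

With this linear driver the scheme is exactly solvable, which makes it a convenient sanity check; the dissertation's framework targets the genuinely nonlinear HJB setting, where the driver depends on the state and control, and adds importance sampling so that the forward paths are iteratively steered toward the learned optimal control.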