Optimal covariance steering: Theory and its application to autonomous driving
Optimal control under uncertainty has been a central research topic in the control community for decades. While many theories have been developed to drive the system state from an initial state to a target state, in some situations it is preferable to compute control commands for an entire population of states that starts from an initial distribution and converges to a target distribution. This dissertation develops a stochastic optimal control theory that explicitly steers the state covariance in addition to the mean. Specifically, we focus on linear time-varying (LTV) systems with additive Gaussian noise. The task is to steer an initial Gaussian state distribution to a target Gaussian distribution while minimizing a quadratic cost in the state and control expectations, subject to probabilistic state constraints. Note that in such systems the state remains Gaussian-distributed at all times. Because a Gaussian distribution is fully described by its first two moments, the proposed optimal covariance steering (OCS) theory allows us to control the entire state distribution and to quantify the effect of uncertainty without resorting to Monte Carlo simulations. We propose a control policy that is an affine function of the filtered disturbances, which renders the problem a convex program that can be solved efficiently. After introducing the OCS theory for LTV systems, we extend it to vehicle path planning problems. While several path planning algorithms have been proposed, many of them deal with deterministic dynamics or with stochastic dynamics under open-loop uncertainty, i.e., the uncertainty of the system state is not controlled and typically grows with time due to exogenous disturbances, which may lead to unnecessarily conservative nominal paths.
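As a minimal, self-contained illustration of the moment-propagation property mentioned above (not the dissertation's OCS algorithm), the sketch below propagates the exact mean and covariance of a linear-Gaussian system under an assumed linear state-feedback law; all matrices and gains are invented for the example:

```python
import numpy as np

# Illustrative sketch: for x_{k+1} = A x_k + B u_k + w_k with
# w_k ~ N(0, W) and linear feedback u_k = K x_k, the state stays
# Gaussian, so its mean and covariance propagate in closed form --
# no Monte Carlo sampling is needed to quantify the uncertainty.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # double-integrator-like dynamics (assumed)
B = np.array([[0.005],
              [0.1]])
W = 0.01 * np.eye(2)                # process-noise covariance (assumed)
K = np.array([[-10.0, -5.0]])       # stabilizing feedback gain (assumed)

mean = np.array([1.0, 0.0])         # initial state distribution N(mean, cov)
cov = np.eye(2)
Acl = A + B @ K                     # closed-loop dynamics matrix

for _ in range(50):
    mean = Acl @ mean               # exact mean update
    cov = Acl @ cov @ Acl.T + W     # exact covariance update

# Under a stabilizing gain, cov converges to the fixed point of the
# discrete Lyapunov equation  X = Acl X Acl^T + W.
```

The same two update lines replace what would otherwise be thousands of sampled rollouts; full covariance steering additionally optimizes the feedforward and feedback terms so the final distribution matches a prescribed target.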
A typical approach to handling disturbances is to apply a lower-level local feedback controller after the nominal path has been computed. Because this dependence is unidirectional, i.e., the planner is unaware of the subsequent feedback control, the nominal path becomes unnecessarily conservative. The path-planning approach we develop from the OCS theory instead computes the nominal path based on the closed-loop evolution of the system uncertainty, simultaneously optimizing the feedforward and feedback control commands. We validate its performance in numerical simulations of single- and multiple-vehicle path planning problems. Furthermore, we introduce an optimal covariance steering controller for linear systems with hard input constraints. Since many real-world systems have input constraints (e.g., aircraft and spacecraft have minimum/maximum thrust), this formulation allows us to handle realistic scenarios. To incorporate hard input constraints into the OCS framework, we apply element-wise saturation functions that limit the effect of the disturbances on the control commands. We prove that the resulting formulation is a convex program and demonstrate its effectiveness on simple numerical examples. Finally, we develop an OCS-based stochastic model predictive control (CS-SMPC) theory for stochastic linear time-invariant (LTI) systems with additive Gaussian noise subject to state and control constraints. In addition to the conventional terminal cost and terminal mean constraints, we introduce terminal covariance constraints into the stochastic model predictive control formulation; the OCS theory efficiently computes control commands that satisfy them. The key benefit of the CS-SMPC algorithm is its ability to guarantee stability and recursive feasibility of the controlled system. Moreover, thanks to the efficiency of the OCS theory, CS-SMPC is computationally less demanding than previous SMPC approaches.
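The element-wise saturation idea used above for hard input constraints can be sketched in isolation as follows. This is a hypothetical toy, not the dissertation's controller: the fed-back disturbance and the total command are each clipped element-wise, so the hard input bound holds for any noise realization; all gains and bounds are invented for the example:

```python
import numpy as np

def saturate(x, bound):
    # Element-wise hard limit: each component is clipped to [-bound, bound].
    return np.clip(x, -bound, bound)

u_max = 1.0                          # hard input bound (assumed)
v = np.array([0.3, -0.2])            # feedforward command (assumed)
K = np.array([[0.2, 0.0],
              [0.0, 0.2]])           # disturbance-feedback gain (assumed)

rng = np.random.default_rng(0)
w = 10.0 * rng.normal(size=2)        # an arbitrarily large disturbance

# Saturating the fed-back disturbance bounds the feedback term, and
# saturating the sum bounds the applied command itself.
u = saturate(v + K @ saturate(w, 1.0), u_max)
assert np.all(np.abs(u) <= u_max)    # hard constraint holds for any w
```

The point of the construction is that the bound is enforced structurally rather than probabilistically, which is what lets the constrained problem remain a convex program rather than a chance-constrained one.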
To verify its effectiveness, the CS-SMPC approach is also applied to the problem of self-driving vehicle control under uncertainty.