Concurrent learning for convergence in adaptive control without persistency of excitation
Model Reference Adaptive Control (MRAC) is a widely studied adaptive control methodology that aims to ensure that a nonlinear plant with significant modeling uncertainty behaves like a chosen reference model. MRAC methods attempt to achieve this by representing the modeling uncertainty as a weighted combination of known nonlinear functions, and by using a weight update law that drives the weights to values that mitigate the effect of the uncertainty. If the adaptive weights arrive at the ideal values that best represent the uncertainty, significant performance and robustness gains can be realized. However, most MRAC adaptive laws use only instantaneous data for adaptation and can guarantee that the weights converge to these ideal values only if the plant states are Persistently Exciting (PE). The PE condition is restrictive and often infeasible to implement or monitor online. Consequently, parameter convergence cannot be guaranteed in practice for many adaptive control applications. Hence it is often observed that traditional adaptive controllers do not exhibit long-term learning or global uncertainty parametrization; that is, they show little performance gain even when the system tracks a repeated command. This thesis presents a novel approach to adaptive control that uses current and recorded data concurrently for adaptation. The thesis shows that for a concurrent learning adaptive controller, a verifiable condition on the linear independence of the recorded data is sufficient to guarantee that the weights arrive at their ideal values even when the system states are not PE. The thesis also shows that the same condition guarantees exponential convergence of both the tracking error and the weight error to zero, thereby allowing the adaptive controller to recover the desired transient response and robustness properties of the chosen reference model and to exhibit long-term learning.
This condition is less restrictive and easier to verify online than the condition on persistently exciting exogenous input required by traditional adaptive laws that use only instantaneous data for adaptation. The concept is explored for several adaptive control architectures, including neuro-adaptive flight control, where a neural network serves as the adaptive element. The performance gains are justified theoretically using Lyapunov-based arguments and demonstrated experimentally through flight testing on Unmanned Aerial Systems.
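The core idea above, adapting on recorded data whose regressors are linearly independent rather than requiring a persistently exciting input, can be illustrated with a minimal numerical sketch. The example below is an assumption-laden simplification, not the thesis's actual update law: it estimates the ideal weights of a structured uncertainty Delta(x) = W*^T phi(x) by iterating a gradient-style update over a small stored history whose regressor vectors span the weight space, so the estimate converges without any ongoing excitation. The basis phi, the true weights, and the recorded states are all invented for illustration.

```python
import numpy as np

def phi(x):
    # Known nonlinear basis functions (hypothetical choice for illustration)
    return np.array([1.0, x, x**2])

W_star = np.array([0.5, -1.2, 0.3])  # "ideal" weights, unknown to the controller

# Recorded data: states chosen so that the stacked regressors have full rank,
# i.e. the linear-independence condition on recorded data holds.
history = [-1.0, 0.0, 1.0]
Phi = np.stack([phi(x) for x in history])   # 3x3 regressor matrix, full rank
Delta = Phi @ W_star                        # recorded values of the uncertainty

W = np.zeros(3)   # adaptive weight estimate
gamma = 0.1       # adaptation gain (learning rate)
for _ in range(2000):
    # Concurrent-learning-style term: sum of prediction-error gradients
    # over the stored data points, applied at every step regardless of
    # whether the current input is exciting.
    eps = Phi @ W - Delta       # prediction errors on recorded points
    W -= gamma * Phi.T @ eps

print(np.round(W, 3))  # approaches W_star
```

Because the stacked regressor matrix is full rank, the error dynamics contract along every direction of the weight space, which is the discrete analogue of the exponential weight-error convergence argued for in the thesis.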