Publication - Professor Guido Herrmann

    Adaptive Optimal Control via Continuous-Time Q-Learning for Unknown Nonlinear Affine Systems

    Citation

    Chen, AS & Herrmann, G 2019, ‘Adaptive Optimal Control via Continuous-Time Q-Learning for Unknown Nonlinear Affine Systems’, in 2019 IEEE Conference on Decision and Control (CDC), Institute of Electrical and Electronics Engineers (IEEE).

    Abstract

    This paper proposes two novel adaptive optimal control algorithms for continuous-time nonlinear affine systems based on reinforcement learning: i) generalized policy iteration (GPI) and ii) Q-learning. Under GPI, a priori knowledge of the system drift f(x) is not needed, which yields a partially model-free, online solution. We then, for the first time, extend the idea of Q-learning to the nonlinear continuous-time optimal control problem in a noniterative manner. This leads to a completely model-free method in which neither the system drift f(x) nor the input gain g(x) is needed. In both methods, the adaptive critic and actor continuously and simultaneously update each other without iterative steps, which effectively avoids the hybrid structure and the need for an initial stabilizing control policy. Moreover, finite-time convergence is guaranteed by a sliding mode technique in the new adaptive approach, where the persistent excitation (PE) condition can be directly verified online. We also prove overall Lyapunov stability and demonstrate the effectiveness of the proposed algorithms on numerical examples.
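
    For context, the continuous-time optimal control problem referenced in the abstract is conventionally posed as follows. This is a standard formulation sketched for orientation, not quoted from the paper; the symbols q, R, and V* are generic choices of state cost, control weighting, and optimal value function.

    ```latex
    % Nonlinear affine system; both the drift f(x) and the input gain g(x)
    % are treated as unknown in the paper's Q-learning setting.
    \[ \dot{x} = f(x) + g(x)\,u, \qquad x \in \mathbb{R}^n, \; u \in \mathbb{R}^m \]

    % Infinite-horizon cost to be minimized (q(x) >= 0, R positive definite).
    \[ V\bigl(x(t)\bigr) = \int_t^{\infty} \Bigl( q\bigl(x(\tau)\bigr)
         + u^{\top}(\tau)\, R\, u(\tau) \Bigr)\, d\tau \]

    % Hamilton--Jacobi--Bellman equation satisfied by the optimal value V^*.
    \[ 0 = \min_{u} \Bigl[ q(x) + u^{\top} R\, u
         + \nabla V^{*}(x)^{\top} \bigl( f(x) + g(x)\,u \bigr) \Bigr] \]

    % The minimizing policy depends explicitly on the input gain g(x):
    \[ u^{*}(x) = -\tfrac{1}{2}\, R^{-1} g^{\top}(x)\, \nabla V^{*}(x) \]
    ```

    Written this way, the distinction drawn in the abstract is visible: computing u* from the value function requires g(x), so a critic alone yields at best a partially model-free scheme (as in the GPI algorithm), whereas a Q-function defined over state-action pairs can be minimized over u without knowledge of either f(x) or g(x).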