A novel observer-based reinforcement learning control scheme for uncertain
nonlinear systems with disturbances
Abstract
This study proposes an observer-based reinforcement learning (RL) control
scheme for uncertain nonlinear systems subject to various external
disturbances. The proposed approach regards the total uncertainty
estimated by the extended state observer (ESO) as potential model
information, which is incorporated into the known part of the system
dynamics. Based on the updated known dynamics, an RL structure is
constructed to approximate the optimal solution of the
Hamilton-Jacobi-Bellman (HJB) equation without the persistence of
excitation (PE) condition. The
convergence of the proposed policy to a neighborhood of the optimal
policy is proven, and the stability of the system states is guaranteed.
Comparative simulation results demonstrate that the developed controller
achieves improved performance at a significantly reduced cost, and its
sensitivity to the control gain in the input channel is also relaxed.
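To make the problem setting concrete, the following is a minimal sketch
under the standard affine-in-control assumption; the symbols $f_0$, $g$,
$\Delta$, and the observer gains $\beta_1$, $\beta_2$ are illustrative and
not taken from this paper. The system with lumped (total) uncertainty and
a linear ESO estimating it can be written as
\[
\dot{x} = f_0(x) + g(x)u + \Delta(x,t),
\]
\[
\dot{\hat{x}} = f_0(\hat{x}) + g(x)u + \hat{\Delta} + \beta_1(x - \hat{x}),
\qquad
\dot{\hat{\Delta}} = \beta_2(x - \hat{x}),
\]
where $f_0$ denotes the known nominal dynamics and $\Delta$ lumps the model
uncertainty and external disturbances. The ESO estimate $\hat{\Delta}$ is
then absorbed into the known part, giving the updated known dynamics
$\bar{f}(x) = f_0(x) + \hat{\Delta}$.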
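Likewise, assuming a standard quadratic cost with weights $Q \succeq 0$ and
$R \succ 0$ (again illustrative, not from this paper), the HJB equation
that the RL structure approximates over the updated dynamics reads
\[
0 = \min_{u}\Big[\, x^{\top} Q x + u^{\top} R u
+ \big(\nabla V^{*}(x)\big)^{\top}\big(\bar{f}(x) + g(x)u\big) \Big],
\]
whose minimizer is the optimal policy
\[
u^{*}(x) = -\tfrac{1}{2} R^{-1} g(x)^{\top} \nabla V^{*}(x).
\]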