The purpose of this paper is to apply adaptive dynamic programming (ADP) to an optimal consensus problem for double-integrator multi-agent systems with completely unknown dynamics. In such systems, flocking algorithms that neglect the agents' inertial effects can produce unstable group behavior. Although an inertia-independent protocol exists, its control gain still depends on the agents' dynamics and inertia, and inertia is difficult to measure accurately in practice. We therefore compute the control gain of the consensus protocol via adaptive dynamic programming, so that the double-integrator agents reach consensus even when their dynamics are entirely unknown. First, a representative example demonstrates how flocking algorithms that ignore inertial effects lead to unstable group behavior, and how, even though the protocol itself is inertia-independent, its control gain depends strongly on each agent's inertia and dynamics. To address these shortcomings, an online policy-iteration-based adaptive dynamic programming algorithm is then designed for double-integrator multi-agent systems with unknown dynamics. Finally, simulation results demonstrate the effectiveness of the proposed approach.
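The policy-iteration core of the abstract above can be sketched in model-based form. The sketch below uses Kleinman's iteration (alternating policy evaluation via a Lyapunov equation with policy improvement) on a single double-integrator agent; the inertia value, weights, and initial gain are illustrative assumptions, and the paper's actual method learns the gain online from data without the model used here.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Double-integrator agent: x1 = position, x2 = velocity, inertia m.
# (m is a hypothetical value; in the paper it is unknown and the gain
# is learned from online data rather than from the model as done here.)
m = 2.0
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0 / m]])
Q = np.eye(2)          # state weighting
R = np.array([[1.0]])  # control weighting

# Kleinman policy iteration: evaluate the current policy by solving a
# Lyapunov equation, then improve it, starting from a stabilizing gain.
K = m * np.array([[1.0, 1.0]])  # initial stabilizing feedback
for _ in range(20):
    Ak = A - B @ K
    # Policy evaluation: Ak' P + P Ak = -(Q + K' R K)
    P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
    K_new = np.linalg.solve(R, B.T @ P)  # policy improvement
    if np.linalg.norm(K_new - K) < 1e-10:
        K = K_new
        break
    K = K_new

# The iteration converges to the LQR-optimal gain from the Riccati equation
K_opt = np.linalg.solve(R, B.T @ solve_continuous_are(A, B, Q, R))
print(np.allclose(K, K_opt))
```

The point of the data-driven version in the paper is that the same fixed point can be reached without ever forming `A` or `B`, which is what removes the dependence on the hard-to-measure inertia.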
In this paper, an uncertain disturbance rejection control problem for affine systems with asymmetric input constraints is addressed using an event-triggered control method. The disturbance rejection problem is converted into an H∞ optimal control problem, and a zero-sum-game-based method is proposed to solve it. To deal with the input constraints, a new cost function is proposed. The event-triggered controller is updated only when the triggering condition is satisfied, which reduces the computational complexity. To obtain a controller that minimizes the performance index function under the worst-case disturbance, a critic-only network is used to solve the Hamilton-Jacobi-Isaacs (HJI) equation, and the critic network weights are tuned by a gradient descent method using historical state data. The stability of the closed-loop system and the uniform ultimate boundedness of the critic network parameters are proved by the Lyapunov method. Two numerical examples are provided to verify the effectiveness of the proposed method.
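The event-triggering mechanism described above can be illustrated with a minimal sketch: the control input is held constant between events and recomputed only when the gap between the last-sampled state and the current state violates a threshold. The plant, feedback gain, and relative triggering rule below are illustrative assumptions for a linear system, not the paper's HJI-based constrained design.

```python
import numpy as np

# Hypothetical open-loop-unstable second-order plant with a known
# stabilizing gain (stand-ins for the paper's affine system and
# critic-network controller).
A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[3.0, 3.0]])   # stabilizing feedback gain
sigma = 0.05                 # relative triggering threshold

dt, steps = 0.001, 10_000
x = np.array([1.0, 0.0])
x_held = x.copy()            # state sampled at the last triggering instant
events = 0

for _ in range(steps):
    # Triggering condition: update the controller only when the gap
    # between the held state and the current state grows too large.
    if np.linalg.norm(x_held - x) > sigma * np.linalg.norm(x):
        x_held = x.copy()
        events += 1
    u = -(K @ x_held)                # control held constant between events
    x = x + dt * (A @ x + B @ u)     # forward-Euler integration

print(events, "controller updates over", steps, "steps")
```

Because the update fires only at the event instants, the number of controller evaluations is far smaller than the number of simulation steps while the state still converges toward the origin, which is the computational saving the abstract refers to.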