Learning For Predictive Control: A Dual Gaussian Process Approach

An important issue in model-based control design is that an accurate dynamic model of the system is generally nonlinear, complex, and costly to obtain, which limits achievable control performance in practice. Gaussian process (GP) based estimation of system models is an effective tool for learning unknown dynamics directly from input/output data. However, conventional GP-based control methods often ignore the computational cost of accumulating data during system operation and do not address forgetting during continuous adaptation. In this paper, we present a novel Dual Gaussian Process (DGP) based model predictive control (MPC) strategy that enables efficient online learning-based predictive control without the danger of catastrophic forgetting. The bio-inspired DGP structure combines a long-term GP with a short-term GP: the long-term GP keeps the learned knowledge in memory, while the short-term GP rapidly compensates for unknown dynamics during online operation. Furthermore, a novel recursive online update strategy for the short-term GP is proposed to successively improve the learned model during online operation. The effectiveness of the proposed strategy is demonstrated via numerical simulations.
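To make the long-term/short-term split concrete, the sketch below shows one possible way such a dual GP model could be organized. It is an illustrative assumption, not the paper's method: the class name DualGP, the additive combination of a long-term GP with a short-term residual GP, the window_size parameter, and the sliding-window refit used in place of the paper's recursive online update are all choices made here for exposition, built on scikit-learn's GaussianProcessRegressor.

```python
# Illustrative sketch only: the additive long-term/short-term split and the
# sliding-window refit are assumptions for exposition, not the paper's
# exact recursive update rule.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel


class DualGP:
    """Long-term GP keeps offline knowledge; short-term GP tracks recent residuals."""

    def __init__(self, window_size=30):
        kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
        self.long_term = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        # Residuals are roughly zero-mean, so no target normalization here.
        self.short_term = GaussianProcessRegressor(kernel=kernel)
        self.window_size = window_size      # length of the short-term memory
        self.buffer_x, self.buffer_y = [], []

    def fit_long_term(self, X, y):
        """Train the long-term GP once, offline, on historical data."""
        self.long_term.fit(X, y)

    def update_short_term(self, x, y):
        """Online step: store the residual w.r.t. the long-term GP and refit
        the short-term GP on a sliding window (a simplified stand-in for the
        recursive update described in the paper)."""
        residual = y - self.long_term.predict(x.reshape(1, -1))[0]
        self.buffer_x.append(x)
        self.buffer_y.append(residual)
        if len(self.buffer_x) > self.window_size:
            self.buffer_x.pop(0)
            self.buffer_y.pop(0)
        self.short_term.fit(np.array(self.buffer_x), np.array(self.buffer_y))

    def predict(self, X):
        """Combine both models: long-term mean plus short-term correction."""
        mean_lt, std_lt = self.long_term.predict(X, return_std=True)
        if self.buffer_x:
            mean_st, std_st = self.short_term.predict(X, return_std=True)
        else:
            mean_st, std_st = np.zeros_like(mean_lt), np.zeros_like(std_lt)
        return mean_lt + mean_st, np.sqrt(std_lt**2 + std_st**2)


# Toy usage: learn a nominal map offline, then adapt to drifted dynamics online.
rng = np.random.default_rng(0)
X_off = rng.uniform(-3, 3, size=(60, 1))
y_off = np.sin(X_off[:, 0]) + 0.01 * rng.standard_normal(60)

model = DualGP()
model.fit_long_term(X_off, y_off)

for _ in range(40):                          # simulated online operation
    x = rng.uniform(-3, 3, size=1)
    y = np.sin(x[0]) + 0.5 * x[0]            # drifted dynamics unseen offline
    model.update_short_term(x, y)

mean, std = model.predict(np.array([[1.0]]))
print(f"prediction: {mean[0]:.3f} +/- {std[0]:.3f}")
```

In an MPC loop, the combined predict output would serve as the prediction model at each sampling instant, with update_short_term called on each newly observed state transition; the long-term GP is left untouched online, which is one way to avoid overwriting previously learned knowledge.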
