Efficient safe learning for controller tuning with experimental validation

26 Oct 2023 · Marta Zagorowska, Christopher König, Hanlin Yu, Efe C. Balta, Alisa Rupenyan, John Lygeros

Optimization-based controller tuning is challenging because it requires formulating optimization problems explicitly as functions of controller parameters. Safe learning algorithms overcome this challenge by creating surrogate models from measured data. To ensure safety, such data-driven algorithms often rely on exhaustive grid search, which is computationally inefficient. In this paper, we propose a novel approach to safe learning that formulates a series of optimization problems instead of a grid search. We also develop a method for initializing these optimization problems that guarantees feasibility when numerical solvers are used. The performance of the new method is first validated on a simulated precision motion system, demonstrating improved computational efficiency and illustrating how numerical solvers can be exploited to reach the desired precision. Experimental validation on an industrial-grade precision motion system confirms that the proposed algorithm achieves 30% better tracking at sub-micrometer precision than a state-of-the-art safe learning algorithm, improves on the default auto-tuning solution, and reduces the computational cost by a factor of seven compared to learning algorithms based on exhaustive search.
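The abstract contrasts grid-search-based safe learning with solver-based candidate selection over data-driven surrogates, initialized at a known safe point so the solver starts feasible. The snippet below is a minimal illustrative sketch of that general idea, not the paper's algorithm: the Gaussian-process surrogates, the lower-confidence-bound acquisition, the stand-in safety measurements, and the use of SciPy's SLSQP solver are all assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical measured data: controller parameters theta, cost J, and a
# safety metric g (g <= 0 means the experiment was safe). All synthetic.
rng = np.random.default_rng(0)
theta_data = rng.uniform(0.1, 2.0, size=(15, 2))
J_data = np.sum((theta_data - 1.0) ** 2, axis=1) + 0.01 * rng.standard_normal(15)
g_data = theta_data[:, 0] + theta_data[:, 1] - 3.0  # stand-in safety measurements

# Surrogate models fitted from the measured data (one for cost, one for safety).
kernel = ConstantKernel(1.0) * RBF(length_scale=0.5)
gp_cost = GaussianProcessRegressor(kernel=kernel, alpha=1e-4).fit(theta_data, J_data)
gp_safe = GaussianProcessRegressor(kernel=kernel, alpha=1e-4).fit(theta_data, g_data)

beta = 2.0  # confidence multiplier for optimistic/pessimistic bounds

def acquisition(theta):
    # Lower-confidence-bound objective on the cost surrogate (to be minimized).
    mu, sigma = gp_cost.predict(theta.reshape(1, -1), return_std=True)
    return float(mu[0] - beta * sigma[0])

def safety_margin(theta):
    # Pessimistic (upper-bound) estimate of the safety constraint; must stay <= 0.
    mu, sigma = gp_safe.predict(theta.reshape(1, -1), return_std=True)
    return float(mu[0] + beta * sigma[0])

# Initialize at the best parameters measured as safe so far, so the solver
# starts from a feasible point instead of sweeping a grid of candidates.
safe_mask = g_data <= 0.0
theta0 = theta_data[safe_mask][np.argmin(J_data[safe_mask])]

result = minimize(
    acquisition,
    x0=theta0,
    method="SLSQP",
    bounds=[(0.1, 2.0), (0.1, 2.0)],
    constraints=[{"type": "ineq", "fun": lambda th: -safety_margin(th)}],
)
print("next candidate controller parameters:", result.x)
```

In this sketch, one solver call replaces the evaluation of the surrogates over an exhaustive grid; in a real safe-learning loop the selected parameters would be applied to the plant, the new measurement appended to the data set, and the surrogates refitted before the next iteration.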
