Poisoning Attacks against Data-Driven Control Methods

10 Mar 2021 · Alessio Russo, Alexandre Proutiere

This paper investigates poisoning attacks against data-driven control methods. The work is motivated by recent results showing that, in supervised learning, slightly modifying the data in a malicious manner can drastically deteriorate the prediction ability of the trained model. We extend these analyses to the case of data-driven control methods. Specifically, we investigate how a malicious adversary can poison the data so as to minimize the performance of a controller trained using this data. We show that identifying the most impactful attack boils down to solving a bi-level non-convex optimization problem, and we provide theoretical insights into the attack. We present a generic algorithm that finds a local optimum of this problem and illustrate our analysis on a model-reference-based approach, the Virtual Reference Feedback Tuning technique, and on data-driven methods based on the lemma of Willems et al. Numerical experiments reveal that minimal but well-crafted changes in the dataset are sufficient to deteriorate the performance of data-driven control methods significantly, and even to make the closed-loop system unstable.
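The bi-level structure mentioned above can be sketched as follows; the notation is generic and illustrative, not the paper's exact formulation. The adversary selects a bounded perturbation $\Delta$ of the dataset $\mathcal{D}$; the inner problem is the data-driven design step applied to the poisoned data, and the outer problem maximizes the resulting closed-loop cost:

\[
\begin{aligned}
\max_{\Delta \,:\, \|\Delta\| \le \epsilon} \quad & J\!\left(K^{\star}(\mathcal{D}+\Delta)\right) \\
\text{s.t.} \quad & K^{\star}(\mathcal{D}+\Delta) \in \arg\min_{K}\ \mathcal{L}\!\left(K;\, \mathcal{D}+\Delta\right),
\end{aligned}
\]

where $J$ measures the degradation of closed-loop performance (e.g., a model-reference tracking error) and $\mathcal{L}$ is the tuning criterion of the chosen data-driven method (e.g., the VRFT least-squares cost). Both levels are non-convex in general, which is why the paper resorts to an algorithm that finds a local optimum.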
