Attack on Grid Event Cause Analysis: An Adversarial Machine Learning Approach

19 Nov 2019  ·  Iman Niazazari, Hanif Livani

With the ever-increasing reliance on data for data-driven applications in power grids, such as event cause analysis, the authenticity of data streams has become crucially important. The data are prone to stealthy adversarial attacks that manipulate measurements so that residual-based bad data detectors cannot flag them, while altering the perception of system operators or the output of event classifiers about the actual event. This paper investigates the impact of adversarial attacks on convolutional neural network-based event cause analysis frameworks. We verify that adversaries can maliciously cause events to be misclassified through stealthy data manipulations, and we assess the vulnerability with respect to the number of compromised measurements. Furthermore, a defense mechanism is proposed to robustify the event cause analysis framework. The effectiveness of adversarial attacks in changing the output of the framework is studied using data generated by a real-time digital simulator (RTDS) under different scenarios, such as the type of attack and the level of access to the data.
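The abstract does not spell out how the perturbations are constructed, but one common way to realize an access-limited attack of this kind against a CNN event classifier is a gradient-sign (FGSM-style) perturbation restricted to the compromised measurement channels. The sketch below illustrates only that general idea on a toy 1-D CNN; the EventCNN architecture, channel counts, and fgsm_attack helper are illustrative assumptions, not the paper's implementation or data.

# Minimal sketch (assumed setup, not the paper's): FGSM-style perturbation
# against a toy 1-D CNN event classifier, where the attacker may only modify
# a subset of measurement channels -- a proxy for the "number of compromised
# measurements" whose effect the paper studies.
import torch
import torch.nn as nn

class EventCNN(nn.Module):
    """Toy 1-D CNN over multichannel measurement windows (hypothetical)."""
    def __init__(self, n_channels=8, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).squeeze(-1))

def fgsm_attack(model, x, y, epsilon, compromised_channels):
    """Craft an FGSM perturbation limited to the compromised channels."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Mask the gradient so only the compromised channels are perturbed.
    mask = torch.zeros_like(x)
    mask[:, compromised_channels, :] = 1.0
    return (x + epsilon * x_adv.grad.sign() * mask).detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = EventCNN()
    x = torch.randn(1, 8, 128)    # one window: 8 channels x 128 samples
    y = torch.tensor([2])         # true event class (arbitrary label)
    x_adv = fgsm_attack(model, x, y, epsilon=0.05, compromised_channels=[0, 3])
    print("clean prediction:", model(x).argmax(1).item())
    print("adversarial prediction:", model(x_adv).argmax(1).item())

A defense in the spirit of what the abstract alludes to could retrain the classifier on such perturbed windows (adversarial training), though the paper's specific mechanism is not described on this page.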
