Cybersecurity Anomaly Detection in Adversarial Environments

14 May 2021 · David A. Bierbrauer, Alexander Chang, Will Kritzer, Nathaniel D. Bastian

The proliferation of interconnected battlefield information-sharing devices, known as the Internet of Battlefield Things (IoBT), has introduced several security challenges. Inherent to the IoBT operating environment is the practice of adversarial machine learning, which attempts to circumvent machine learning models. This work examines the feasibility of cost-effective unsupervised learning and graph-based methods for anomaly detection in the network intrusion detection system setting, and also leverages a supervised ensemble approach to the anomaly detection problem. We incorporate a realistic adversarial training mechanism when training supervised models to enable strong classification performance in adversarial environments. The results indicate that the two-level supervised stacking ensemble outperformed the unsupervised and graph-based methods at detecting anomalies (malicious activity). This model consists of three different classifiers in the first level, followed by either a Naive Bayes or Decision Tree classifier in the second level. The model maintains an F1-score above 0.97 for malicious samples across all tested level-two classifiers. Notably, Naive Bayes is the fastest level-two classifier, averaging 1.12 seconds, while Decision Tree achieves the highest AUC score of 0.98.
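For illustration, the sketch below shows how such a two-level stacking ensemble could be assembled with scikit-learn. The abstract does not name the three level-one classifiers, so the base estimators here are placeholders, not the paper's configuration; only the structure (three level-one models feeding a Naive Bayes or Decision Tree meta-classifier) follows the description above.

```python
# Minimal sketch of a two-level stacking ensemble for network anomaly
# detection, assuming scikit-learn. The three level-one classifiers are
# illustrative placeholders; the paper's abstract does not specify them.
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Level one: three heterogeneous base classifiers (placeholder choices).
level_one = [
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("lr", LogisticRegression(max_iter=1000)),
]

# Level two: either Naive Bayes (fastest in the reported experiments) or
# a Decision Tree (highest reported AUC) as the meta-classifier.
stack_nb = StackingClassifier(estimators=level_one, final_estimator=GaussianNB())
stack_dt = StackingClassifier(estimators=level_one, final_estimator=DecisionTreeClassifier())

# Usage, where X_* are network-flow feature matrices and y_* are
# binary benign/malicious labels:
# stack_nb.fit(X_train, y_train)
# y_pred = stack_nb.predict(X_test)
```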

