Try to Avoid Attacks: A Federated Data Sanitization Defense for Healthcare IoMT Systems

3 Nov 2022 · Chong Chen, Ying Gao, Leyu Shi, Siquan Huang

Healthcare Internet of Medical Things (IoMT) systems are becoming intelligent, miniaturized, and increasingly integrated into daily life. For the distributed devices in the IoMT, federated learning has become a topical approach to cloud-based training that preserves data security, since raw data never leaves the device. However, this distributed setting exposes IoMT systems to data poisoning attacks: poisoned samples can be fabricated by falsifying medical data, which calls for a dedicated defense. Because malicious samples carry no explicit labels, filtering them out is an unsupervised problem, and one of the main challenges is finding a filtering method that remains robust across diverse poisoning attacks. This paper introduces Federated Data Sanitization Defense, a novel approach to protect the system from data poisoning attacks. To address this unsupervised problem, we first use federated learning to project all the data into a subspace domain, establishing a unified feature mapping while the data remains stored locally. We then apply federated clustering to re-group the features and expose the poisoned data; the clustering is based on the consistent association between data and its semantics. Once the private data is clustered, each device performs data sanitization with a simple yet efficient strategy, enabling every device in the distributed IoMT to filter out malicious data. Extensive experiments evaluate the efficacy of the proposed defense against data poisoning attacks; across different poisoning ratios, our approach achieves high accuracy and a low attack success rate.
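The abstract describes a three-step pipeline: project local data into a shared feature space, cluster the features in a federated manner, then drop samples whose labels are inconsistent with their cluster. Below is a minimal Python sketch of that idea, assuming a FedAvg-style federated k-means over locally extracted features and a majority-label rule for sanitization. The function names (`federated_kmeans`, `sanitize`), the choice of plain k-means, and the synthetic data in the usage example are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def local_kmeans_stats(features, centroids):
    """One local k-means step: assign samples to the nearest centroid and
    return per-cluster feature sums and counts (the only statistics a
    client would share with the server)."""
    dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    assign = dists.argmin(axis=1)
    k, d = centroids.shape
    sums, counts = np.zeros((k, d)), np.zeros(k)
    for c in range(k):
        mask = assign == c
        sums[c] = features[mask].sum(axis=0)
        counts[c] = mask.sum()
    return sums, counts, assign

def federated_kmeans(client_features, k, rounds=10, seed=0):
    """Server aggregates client statistics each round (FedAvg-style k-means).
    Centroid initialization from pooled features is a simplification for
    this sketch; a real deployment would use a privacy-preserving scheme."""
    rng = np.random.default_rng(seed)
    pooled = np.vstack(client_features)
    centroids = pooled[rng.choice(len(pooled), k, replace=False)]
    for _ in range(rounds):
        total_sums, total_counts = np.zeros_like(centroids), np.zeros(k)
        for feats in client_features:
            sums, counts, _ = local_kmeans_stats(feats, centroids)
            total_sums += sums
            total_counts += counts
        nonempty = total_counts > 0
        centroids[nonempty] = total_sums[nonempty] / total_counts[nonempty, None]
    return centroids

def sanitize(features, labels, centroids):
    """Keep a sample only if its label matches the majority label of its
    cluster; mismatched samples are treated as likely poisoned. Here the
    majority is computed locally per client, another simplification."""
    _, _, assign = local_kmeans_stats(features, centroids)
    keep = np.zeros(len(labels), dtype=bool)
    for c in np.unique(assign):
        mask = assign == c
        majority = np.bincount(labels[mask]).argmax()
        keep[mask] = labels[mask] == majority
    return keep

# Usage with synthetic "features" (stand-ins for a shared extractor's output):
rng = np.random.default_rng(1)
clients = [rng.normal(c, 0.1, size=(50, 8)) for c in (0.0, 1.0)]
labels = [np.full(50, i) for i in range(2)]
labels[0][:5] = 1  # simulate a label-flipping poison on client 0
cents = federated_kmeans(clients, k=2)
keep = sanitize(clients[0], labels[0], cents)
print(f"kept {keep.sum()}/{len(keep)} samples on client 0")  # drops the 5 flipped
```

The design point the sketch captures is that only aggregate cluster statistics cross the network, so sanitization happens on-device while still benefiting from a globally consistent clustering.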
