ExIFFI and EIF+: Interpretability and Enhanced Generalizability to Extend the Extended Isolation Forest

9 Oct 2023 · Alessio Arcudi, Davide Frizzo, Chiara Masiero, Gian Antonio Susto

Anomaly Detection (AD) involves identifying unusual behaviors within complex datasets and systems. While Machine Learning algorithms and Decision Support Systems (DSSs) offer effective solutions for this task, simply pinpointing anomalies is often insufficient in real-world applications. Users require insights into the rationale behind these predictions to facilitate root-cause analysis and to foster trust in the model. However, the unsupervised nature of AD makes it challenging to develop interpretable tools. This paper addresses this challenge by introducing ExIFFI, a novel interpretability approach specifically designed to explain the predictions made by the Extended Isolation Forest (EIF). ExIFFI leverages feature importance to provide explanations at both the global and the local level. This work also introduces EIF+, an enhanced variant of the Extended Isolation Forest conceived to improve its generalization capabilities through a revised splitting-hyperplane design strategy. A comprehensive comparative analysis on both synthetic and real-world datasets evaluates various unsupervised AD approaches and demonstrates the effectiveness of ExIFFI in explaining AD predictions. Furthermore, the paper explores the utility of ExIFFI as a feature selection technique in unsupervised settings. Finally, this work contributes open-source code to the research community, facilitating further investigation and reproducibility.
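
The method itself is detailed in the paper and its open-source repository, but the core intuition behind a feature-importance explanation for isolation forests, namely that features whose splits isolate a point quickly should receive more explanatory credit, can be sketched with a depth-weighted attribution. The snippet below is a simplified, hypothetical analogue only: it uses scikit-learn's axis-parallel IsolationForest as a stand-in for the Extended Isolation Forest (whose splits are oblique hyperplanes), and the `local_feature_importance` helper with its 1/(1+depth) weighting is an illustrative choice, not the authors' ExIFFI algorithm.

```python
# Illustrative sketch only: an axis-parallel, depth-weighted feature
# attribution for isolation forests. This is NOT the authors' ExIFFI
# implementation, which operates on the oblique splits of the Extended
# Isolation Forest; it is an intuition aid under simplified assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

def local_feature_importance(forest, x):
    """Credit each feature by how early it appears on x's isolation paths.

    Splits near the root isolate x with few comparisons, so they receive
    more weight (1 / (1 + depth), an illustrative weighting choice).
    """
    importance = np.zeros(x.shape[0])
    for tree, feats in zip(forest.estimators_, forest.estimators_features_):
        t = tree.tree_
        node, depth = 0, 0
        while t.children_left[node] != -1:   # -1 marks a leaf node
            f = feats[t.feature[node]]       # map tree-local index to global
            importance[f] += 1.0 / (1.0 + depth)
            if x[f] <= t.threshold[node]:    # scikit-learn convention: <= goes left
                node = t.children_left[node]
            else:
                node = t.children_right[node]
            depth += 1
    return importance / importance.sum()

# Toy usage: one clear outlier along feature 0 in 2-D data; the normalized
# importance vector should concentrate on feature 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
outlier = np.array([8.0, 0.0])
forest = IsolationForest(random_state=0).fit(np.vstack([X, outlier]))
print(local_feature_importance(forest, outlier))
```

Averaging such local scores over the points a model flags as anomalous yields a global feature ranking in the same spirit as ExIFFI's global explanations, which also suggests how a feature-importance method can double as an unsupervised feature selector: keep the top-ranked features and retrain.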
