no code implementations • 17 May 2024 • Fabian Fumagalli, Maximilian Muschalik, Patrick Kolpaczki, Eyke Hüllermeier, Barbara Hammer
We propose KernelSHAP-IQ, a direct extension of KernelSHAP to the Shapley Interaction Index (SII), and demonstrate state-of-the-art performance for estimating feature interactions.
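For context, KernelSHAP recovers Shapley values as the solution of a weighted least-squares regression over feature coalitions; KernelSHAP-IQ extends this regression view to Shapley interactions. Below is a minimal sketch of the base KernelSHAP regression with exact coalition enumeration (all function names and the toy value function are our own illustration, not code from the paper):

```python
import itertools
import math
import numpy as np

def kernel_shap(value_fn, n):
    """Exact KernelSHAP: weighted least squares over all 2^n coalitions.

    value_fn maps a 0/1 coalition vector to the model's value for that
    coalition; returns (phi_0, per-feature Shapley values).
    """
    coalitions = list(itertools.product([0, 1], repeat=n))
    Z = np.array(coalitions, dtype=float)
    sizes = Z.sum(axis=1)
    # Shapley kernel weights; the empty and full coalitions get a large
    # weight to (approximately) enforce the efficiency constraint.
    w = np.array([
        1e7 if s in (0, n)
        else (n - 1) / (math.comb(n, int(s)) * s * (n - s))
        for s in sizes
    ])
    # Weighted linear regression with intercept phi_0: [1 | Z] beta ≈ v.
    v = np.array([value_fn(z) for z in coalitions])
    X = np.hstack([np.ones((len(Z), 1)), Z])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ v)
    return beta[0], beta[1:]
```

For a value function that is linear in the coalition vector, e.g. `v(z) = 2*z[0] + 3*z[1] - z[2]`, the regression recovers the exact Shapley values `[2, 3, -1]`, which makes the sketch easy to sanity-check.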
1 code implementation • 22 Jan 2024 • Maximilian Muschalik, Fabian Fumagalli, Barbara Hammer, Eyke Hüllermeier
While shallow decision trees may be interpretable, larger ensembles such as gradient-boosted trees, which often set the state of the art on tabular data, remain black-box models.
Explainable Artificial Intelligence (XAI)
1 code implementation • 13 Jun 2023 • Maximilian Muschalik, Fabian Fumagalli, Rohit Jagtani, Barbara Hammer, Eyke Hüllermeier
Post-hoc explanation techniques such as the well-established partial dependence plot (PDP), which investigates feature dependencies, are used in explainable artificial intelligence (XAI) to understand black-box machine learning models.
Explainable Artificial Intelligence (XAI)
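A PDP averages the model's predictions over the data while sweeping one feature across a grid of values. A minimal numpy-only sketch of this batch computation (the helper name and toy model are ours; scikit-learn provides an equivalent via `sklearn.inspection.partial_dependence`):

```python
import numpy as np

def partial_dependence_1d(predict, X, feature, grid):
    """1-D PDP: mean prediction as `feature` is swept over `grid`,
    holding all other feature columns at their observed values."""
    curve = []
    for g in grid:
        Xg = X.copy()
        Xg[:, feature] = g   # overwrite the feature for every sample
        curve.append(predict(Xg).mean())
    return np.array(curve)

# Toy black box: prediction is 2*x0 + x1^2, so the PDP of feature 0
# is a straight line with slope 2 (shifted by the mean of x1^2).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
grid = np.linspace(-1, 1, 5)
curve = partial_dependence_1d(lambda A: 2 * A[:, 0] + A[:, 1] ** 2,
                              X, feature=0, grid=grid)
```

Note that every grid point requires a full pass over the dataset, which is exactly what becomes problematic in the dynamic, streaming settings this line of work addresses.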
no code implementations • 2 Mar 2023 • Maximilian Muschalik, Fabian Fumagalli, Barbara Hammer, Eyke Hüllermeier
Existing methods for explainable artificial intelligence (XAI), including popular feature importance measures such as SAGE, are mostly restricted to the batch learning scenario.
Explainable Artificial Intelligence (XAI) +2
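SAGE itself is a Shapley-based global importance measure; as a simpler stand-in, the sketch below shows batch permutation feature importance (names and setup are ours), which illustrates why such measures assume access to the full dataset: each score requires shuffling a column and re-scoring every sample.

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=5, seed=0):
    """Batch permutation importance: average increase in MSE when one
    feature column is shuffled, breaking its link to the target."""
    rng = np.random.default_rng(seed)
    base_mse = np.mean((predict(X) - y) ** 2)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])          # destroy feature j's signal
            importances[j] += np.mean((predict(Xp) - y) ** 2) - base_mse
    return importances / n_repeats
```

A feature the model ignores scores exactly zero, while a feature the model relies on scores positive; an incremental variant would instead have to update these estimates one observation at a time.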
no code implementations • 5 Sep 2022 • Fabian Fumagalli, Maximilian Muschalik, Eyke Hüllermeier, Barbara Hammer
Explainable Artificial Intelligence (XAI) has so far focused mainly on static learning scenarios.
Explainable Artificial Intelligence (XAI) +1