1 code implementation • 12 Mar 2024 • Patrick Knab, Sascha Marton, Christian Bartelt
Explainable Artificial Intelligence is critical in unraveling decision-making processes in complex machine learning models.
no code implementations • 19 Feb 2024 • Jannik Brinkmann, Abhay Sheshadri, Victor Levoso, Paul Swoboda, Christian Bartelt
We anticipate that the motifs we identified in our synthetic setting can provide valuable insights into the broader operating principles of transformers and thus provide a basis for understanding more complex models.
2 code implementations • 29 Sep 2023 • Sascha Marton, Stefan Lüdtke, Christian Bartelt, Heiner Stuckenschmidt
Our method combines axis-aligned splits, a useful inductive bias for tabular data, with the flexibility of gradient-based optimization.
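The combination can be illustrated with a minimal sketch: a single axis-aligned split made differentiable by replacing the hard threshold with a sigmoid gate, so the threshold and leaf values can be fitted by gradient descent. All names, the temperature value, and the numerical-gradient training loop are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def soft_split(x, threshold, temperature=0.3):
    """Soft axis-aligned split: a sigmoid gate on a single feature.
    Returns each sample's probability of routing to the left leaf."""
    return 1.0 / (1.0 + np.exp(-(threshold - x) / temperature))

def stump_predict(x, params):
    threshold, left_value, right_value = params
    p_left = soft_split(x, threshold)
    return p_left * left_value + (1.0 - p_left) * right_value

def loss(x, y, params):
    return np.mean((stump_predict(x, params) - y) ** 2)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 200)
y = (x < 0.5).astype(float)            # target: a hard split at 0.5

params = np.array([0.2, 1.0, 0.0])     # [threshold, left_value, right_value]
initial_loss = loss(x, y, params)
lr, eps = 0.2, 1e-5
for _ in range(300):
    grad = np.zeros(3)
    for i in range(3):                  # numerical gradients, for brevity
        d = np.zeros(3)
        d[i] = eps
        grad[i] = (loss(x, y, params + d) - loss(x, y, params - d)) / (2 * eps)
    params -= lr * grad
final_loss = loss(x, y, params)
```

Because the gate is smooth, the split threshold receives a gradient signal while still favoring one feature per split, which is the axis-aligned inductive bias the snippet refers to.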
no code implementations • ICCV 2023 • Jannik Brinkmann, Paul Swoboda, Christian Bartelt
Therefore, we measure the impact of training data, model architecture, and training objectives on social biases in the learned representations of ViTs.
no code implementations • 27 Jun 2023 • Nils Wilken, Lea Cohausz, Christian Bartelt, Heiner Stuckenschmidt
In this paper, we show that using landmarks that are part of the initial state provides no benefit in a planning-landmark-based goal recognition approach.
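The intuition can be sketched with sets of ground facts: a landmark that already holds in the initial state is reached on the way to every goal hypothesis, so discarding it loses nothing. The fact names below are hypothetical, purely for illustration.

```python
def useful_landmarks(landmarks, initial_state):
    """Keep only landmarks not already satisfied in the initial state:
    those that hold initially are trivially 'achieved' under every goal
    hypothesis and carry no discriminative information."""
    return {lm for lm in landmarks if lm not in initial_state}

# Hypothetical ground facts from a cooking-style domain.
landmarks = {"holding(cup)", "at(kitchen)", "boiled(water)"}
initial_state = {"at(kitchen)", "empty(cup)"}
print(sorted(useful_landmarks(landmarks, initial_state)))
# ['boiled(water)', 'holding(cup)']
```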
1 code implementation • 5 May 2023 • Sascha Marton, Stefan Lüdtke, Christian Bartelt, Heiner Stuckenschmidt
Decision Trees (DTs) are commonly used for many machine learning tasks due to their high degree of interpretability.
no code implementations • 25 Jan 2023 • Nils Wilken, Lea Cohausz, Johannes Schaum, Stefan Lüdtke, Christian Bartelt, Heiner Stuckenschmidt
Furthermore, we show that the planning-landmark-based approach, which had so far only been evaluated on artificial benchmark domains, also achieves good recognition performance when applied to a real-world cooking scenario.
no code implementations • 18 Jul 2022 • Stefan Lüdtke, Christian Bartelt, Heiner Stuckenschmidt
Existing methods are based on beam search in the space of feature subsets.
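A generic beam search over feature subsets, as referenced here, keeps only the best-scoring subsets at each size and extends each by one feature. The scoring function, beam width, and feature values below are hypothetical toys, not the method from the paper.

```python
def beam_search_subsets(features, score, beam_width=2, max_size=3):
    """Beam search in the space of feature subsets: at each size,
    retain the beam_width highest-scoring subsets and grow each
    by a single additional feature."""
    beam = [frozenset()]
    best_score, best_set = score(frozenset()), frozenset()
    for _ in range(max_size):
        candidates = {s | {f} for s in beam for f in features if f not in s}
        beam = sorted(candidates, key=score, reverse=True)[:beam_width]
        if beam and score(beam[0]) > best_score:
            best_score, best_set = score(beam[0]), beam[0]
    return best_score, best_set

# Toy score: per-feature value minus a quadratic redundancy penalty.
value = {"a": 3.0, "b": 2.0, "c": 1.5, "d": 0.5}
score = lambda s: sum(value[f] for f in s) - 0.6 * len(s) ** 2
best_score, best_set = beam_search_subsets(list(value), score)
# best_set == frozenset({'a', 'b'}) with score 2.6
```

Because the beam prunes to a fixed width per size, the search is greedy and can miss the global optimum, which is the limitation such snippets typically contrast against exact or probabilistic alternatives.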
1 code implementation • 10 Jun 2022 • Sascha Marton, Stefan Lüdtke, Christian Bartelt, Andrej Tschalzev, Heiner Stuckenschmidt
We consider generating explanations for neural networks in cases where the network's training data is not accessible, for instance due to privacy or safety issues.
1 code implementation • 11 Oct 2021 • Stefan Lüdtke, Christian Bartelt, Heiner Stuckenschmidt
On the other hand, mixtures of exchangeable variable models (MEVMs) are a class of tractable probabilistic models that make use of exchangeability of discrete random variables to render inference tractable.
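The exchangeability property underlying such models can be sketched for binary variables: the joint probability depends only on how many variables are 1, not which ones, so a distribution over n variables is summarized by n + 1 numbers. The weights below are an invented toy distribution, and this is a simplified illustration of exchangeability rather than the MEVM construction itself.

```python
from math import comb

def evm_prob(x, count_weights):
    """Exchangeable distribution over binary x: P(x) depends only on
    sum(x). count_weights[k] is the total mass of all assignments with
    k ones; each of the comb(n, k) such assignments shares it equally."""
    n, k = len(x), sum(x)
    return count_weights[k] / comb(n, k)

# Hypothetical distribution over 3 binary variables; mass per count k = 0..3.
w = [0.1, 0.3, 0.4, 0.2]
p1 = evm_prob([1, 0, 0], w)   # 0.3 / 3
p2 = evm_prob([0, 0, 1], w)   # same count of ones -> same probability
```

Queries over the count statistic (e.g. the probability that exactly k variables are 1) read off directly from the weight vector, which is the sense in which exchangeability renders inference tractable.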