no code implementations • ECCV 2020 • Matthias Tangemann, Matthias Kümmerer, Thomas S. A. Wallis, Matthias Bethge
Where people look when watching videos is believed to be heavily influenced by temporal patterns.
no code implementations • 18 Mar 2024 • Lars C. Reining, Thomas S. A. Wallis
Therefore, in this study we investigated how the choice of thresholding technique and the amount of smoothing affect the interpretability of Mooney images for human participants.
1 code implementation • NeurIPS 2021 • Roland S. Zimmermann, Judy Borowski, Robert Geirhos, Matthias Bethge, Thomas S. A. Wallis, Wieland Brendel
A precise understanding of why units in an artificial network respond to certain stimuli would constitute a big step towards explainable artificial intelligence.
no code implementations • ICLR 2021 • Judy Borowski, Roland Simon Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel
Using a well-controlled psychophysical paradigm, we compare the informativeness of synthetic images (Olah et al., 2017) with a simple baseline visualization, namely exemplary natural images that also strongly activate a specific feature map.
1 code implementation • 23 Oct 2020 • Judy Borowski, Roland S. Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel
Even if only a single reference image is given, synthetic images provide less information than natural images (65±5% vs. 73±4%).
no code implementations • NeurIPS Workshop SVRHM 2020 • Judy Borowski, Roland Simon Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel
Using a well-controlled psychophysical paradigm, we compare the informativeness of synthetic images by Olah et al. [45] with a simple baseline visualization, namely natural images that also strongly activate a specific feature map.
1 code implementation • 20 Apr 2020 • Christina M. Funke, Judy Borowski, Karolina Stosio, Wieland Brendel, Thomas S. A. Wallis, Matthias Bethge
In the second case study, we highlight the difference between necessary and sufficient mechanisms in visual reasoning tasks.
no code implementations • 18 Dec 2017 • Leon A. Gatys, Matthias Kümmerer, Thomas S. A. Wallis, Matthias Bethge
Thus, manipulating fixation patterns to guide human attention is an exciting challenge in digital image processing.
no code implementations • ICCV 2017 • Matthias Kümmerer, Thomas S. A. Wallis, Leon A. Gatys, Matthias Bethge
This model achieves better performance than all models not using features pre-trained on object recognition, making it a strong baseline to assess the utility of high-level features.
no code implementations • ECCV 2018 • Matthias Kümmerer, Thomas S. A. Wallis, Matthias Bethge
Here we show that no single saliency map can perform well under all metrics.
no code implementations • 5 Oct 2016 • Matthias Kümmerer, Thomas S. A. Wallis, Matthias Bethge
Here we present DeepGaze II, a model that predicts where people look in images.