Search Results for author: Thomas S. A. Wallis

Found 11 papers, 3 papers with code

A psychophysical evaluation of techniques for Mooney image generation

no code implementations • 18 Mar 2024 Lars C. Reining, Thomas S. A. Wallis

In this study, we investigated how the choice of thresholding technique and the amount of smoothing affect the interpretability of Mooney images for human participants.

Image Generation
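
The paper above studies how thresholding and smoothing choices shape Mooney images. As a rough illustration only, the sketch below turns a photograph into a two-tone Mooney-style image; the specific pipeline (Gaussian blur followed by Otsu thresholding) and the function name are assumptions for demonstration, not the authors' exact procedure.

```python
# Minimal Mooney-style image generation sketch (assumed pipeline: blur + Otsu threshold).
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter
from skimage.filters import threshold_otsu

def make_mooney(path: str, sigma: float = 4.0) -> np.ndarray:
    """Convert a photograph into a binary (black/white) Mooney-style image."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=float)
    smoothed = gaussian_filter(gray, sigma=sigma)   # amount of smoothing
    t = threshold_otsu(smoothed)                    # one possible thresholding technique
    return (smoothed > t).astype(np.uint8) * 255    # two-tone output

# Example usage: mooney = make_mooney("face.jpg", sigma=6.0)
```

Varying `sigma` and swapping the thresholding rule (e.g. a fixed percentile instead of Otsu) are exactly the kinds of manipulations whose perceptual consequences the study evaluates.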

How Well do Feature Visualizations Support Causal Understanding of CNN Activations?

1 code implementation NeurIPS 2021 Roland S. Zimmermann, Judy Borowski, Robert Geirhos, Matthias Bethge, Thomas S. A. Wallis, Wieland Brendel

A precise understanding of why units in an artificial network respond to certain stimuli would constitute a big step towards explainable artificial intelligence.

Explainable artificial intelligence

Exemplary natural images explain CNN activations better than synthetic feature visualizations

no code implementations ICLR 2021 Judy Borowski, Roland Simon Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel

Using a well-controlled psychophysical paradigm, we compare the informativeness of synthetic images (Olah et al., 2017) with a simple baseline visualization, namely exemplary natural images that also strongly activate a specific feature map.

Informativeness
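
The baseline in the two papers above is built from natural images that strongly activate a chosen feature map. The sketch below shows one plausible way to score images for that purpose with a pre-trained CNN; the model, layer index, channel, and the mean-activation ranking criterion are illustrative assumptions, not the exact setup reported in the papers.

```python
# Hedged sketch: rank natural images by how strongly they activate one feature map.
import torch
import torchvision.models as models

model = models.vgg16(weights="IMAGENET1K_V1").features.eval()
layer_idx, channel = 17, 42  # hypothetical layer and feature-map indices

activations = {}
def hook(_module, _inp, out):
    activations["out"] = out
model[layer_idx].register_forward_hook(hook)

@torch.no_grad()
def feature_map_score(batch: torch.Tensor) -> torch.Tensor:
    """Mean activation of the chosen channel per image; higher = stronger exemplar."""
    model(batch)
    return activations["out"][:, channel].mean(dim=(1, 2))

# Example: scores = feature_map_score(images); exemplars = scores.topk(9).indices
```

The top-ranked images would then serve as the "exemplary natural images" shown to participants alongside synthetic feature visualizations.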

Natural Images are More Informative for Interpreting CNN Activations than State-of-the-Art Synthetic Feature Visualizations

no code implementations NeurIPS Workshop SVRHM 2020 Judy Borowski, Roland Simon Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel

Using a well-controlled psychophysical paradigm, we compare the informativeness of synthetic images by Olah et al. [45] with a simple baseline visualization, namely natural images that also strongly activate a specific feature map.

Informativeness

Guiding human gaze with convolutional neural networks

no code implementations • 18 Dec 2017 Leon A. Gatys, Matthias Kümmerer, Thomas S. A. Wallis, Matthias Bethge

Thus, manipulating fixation patterns to guide human attention is an exciting challenge in digital image processing.

Understanding Low- and High-Level Contributions to Fixation Prediction

no code implementations ICCV 2017 Matthias Kümmerer, Thomas S. A. Wallis, Leon A. Gatys, Matthias Bethge

This model achieves better performance than all models not using features pre-trained on object recognition, making it a strong baseline to assess the utility of high-level features.

Object Recognition, Saliency Prediction +1
