no code implementations • 16 Jun 2020 • Shailja Thakur, Sebastian Fischmeister
To fully exploit the capabilities of complex neural networks, we propose a non-intrusive interpretability technique that uses the input and output of the model to generate a saliency map.
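The abstract describes a non-intrusive (black-box) saliency technique that uses only the model's input and output. The paper's exact method is not given here, so the sketch below uses a generic occlusion-based approach as a stand-in assumption: slide a masking patch over the input and measure how much the prediction changes, touching nothing inside the model.

```python
import numpy as np

def occlusion_saliency(predict, x, patch=1, baseline=0.0):
    """Black-box saliency sketch (assumed occlusion method, not the
    paper's algorithm): occlude a sliding window of the input with a
    baseline value and record the change in the model's scalar output.
    Only predict(x) is called, so model internals are never accessed."""
    base = predict(x)
    sal = np.zeros_like(x, dtype=float)
    counts = np.zeros_like(x, dtype=float)
    for i in range(0, x.shape[0] - patch + 1):
        x_occ = x.copy()
        x_occ[i:i + patch] = baseline       # mask this window
        sal[i:i + patch] += abs(base - predict(x_occ))
        counts[i:i + patch] += 1
    return sal / np.maximum(counts, 1)      # average over overlapping windows

# Toy black-box model whose output depends only on positions 3 and 4.
model = lambda v: v[3] + v[4]
x = np.arange(8, dtype=float)
s = occlusion_saliency(model, x)            # saliency peaks at indices 3 and 4
```

Because the method only queries `predict`, it works unchanged for any model exposed as a callable, which is the sense in which such techniques are "non-intrusive".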
no code implementations • 10 Apr 2019 • Ilia Sucholutsky, Apurva Narayan, Matthias Schonlau, Sebastian Fischmeister
The model's output is a close reconstruction of the true data and can be fed to algorithms that rely on clean data.