A Differentiable Gaussian Prototype Layer for Explainable Segmentation

25 Jun 2023  ·  Michael Gerstenberger, Steffen Maaß, Peter Eisert, Sebastian Bosse

We introduce a Gaussian Prototype Layer for gradient-based prototype learning and demonstrate two novel network architectures for explainable segmentation, one of which relies on region proposals. Both models are evaluated on agricultural datasets. While Gaussian Mixture Models (GMMs) have been used to model the latent distributions of neural networks before, they are typically fitted using the EM algorithm. The proposed prototype layer instead relies on gradient-based optimization and hence allows for end-to-end training. This simplifies development and exploits the full potential of a trainable deep feature extractor. We show that it can serve as a novel building block for explainable neural networks. We employ our Gaussian Prototype Layer in (1) a model where prototypes are detected in the latent grid and (2) a model inspired by Fast R-CNN with SLIC superpixels as region proposals. The former achieves performance similar to the state of the art, while the latter offers more precise prototype localization at the cost of slightly lower accuracy. By introducing a gradient-based GMM layer we combine the benefits of end-to-end training with the simplicity and theoretical foundation of GMMs, which will allow existing semi-supervised learning strategies to be adapted for prototypical part models in the future.
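The central idea, fitting Gaussian prototypes by gradient descent rather than EM, can be illustrated with a short sketch. The PyTorch snippet below is our own minimal illustration, not the authors' implementation: the class name, the diagonal-covariance parameterization, and the softmax responsibility map are assumptions made for clarity. Each prototype is a Gaussian with a learnable mean and variance in latent space, and responsibilities over the latent grid are computed in closed form, so the layer trains end-to-end together with the feature extractor.

```python
# Minimal sketch (not the paper's code): a differentiable Gaussian prototype layer.
# Each prototype is a Gaussian with learnable mean and diagonal (log-)variance;
# per-location responsibilities are computed in closed form, so the layer is
# trained by backpropagation instead of EM.
import torch
import torch.nn as nn


class GaussianPrototypeLayer(nn.Module):
    def __init__(self, num_prototypes: int, feat_dim: int):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(num_prototypes, feat_dim))       # prototype means
        self.log_var = nn.Parameter(torch.zeros(num_prototypes, feat_dim))  # diagonal log-variances
        self.log_pi = nn.Parameter(torch.zeros(num_prototypes))             # mixture-weight logits

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) latent grid from the feature extractor
        B, C, H, W = feats.shape
        x = feats.permute(0, 2, 3, 1).reshape(B, H * W, 1, C)               # (B, HW, 1, C)
        var = self.log_var.exp()                                            # (K, C)
        # log N(x | mu_k, diag(var_k)) up to an additive constant, per location and prototype
        log_prob = -0.5 * (((x - self.mu) ** 2) / var + self.log_var).sum(-1)   # (B, HW, K)
        log_joint = log_prob + torch.log_softmax(self.log_pi, dim=0)        # add log mixture weights
        resp = torch.softmax(log_joint, dim=-1)                             # responsibilities over prototypes
        return resp.reshape(B, H, W, -1).permute(0, 3, 1, 2)                # (B, K, H, W) prototype maps


# Usage: the responsibility maps act as soft prototype activations that a small
# segmentation head can consume; gradients flow into mu, log_var and the backbone.
layer = GaussianPrototypeLayer(num_prototypes=8, feat_dim=64)
maps = layer(torch.randn(2, 64, 32, 32))   # -> torch.Size([2, 8, 32, 32])
```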
