SCA-Net: A Self-Correcting Two-Layer Autoencoder for Hyper-spectral Unmixing

10 Feb 2021  ·  Gurpreet Singh, Soumyajit Gupta, Clint Dawson ·

Hyperspectral unmixing involves separating a pixel into a weighted combination of its constituent endmembers and corresponding fractional abundances, with current state-of-the-art results achieved by neural models on benchmark datasets. However, these networks are severely over-parameterized, and consequently the invariant endmember spectra extracted as decoder weights exhibit high variance over multiple runs. These approaches also perform substantial post-processing while requiring an exact specification of the number of endmembers and specialized weight initialization from other algorithms such as VCA. We show for the first time that a two-layer autoencoder (SCA), with $2FK$ parameters ($F$ features, $K$ endmembers), achieves error metrics that are orders of magnitude lower ($10^{-5}$) than previously reported values ($10^{-2}$). SCA converges to this low-error solution starting from a random initialization of weights. We also show that SCA, based on a bi-orthogonal representation, performs a self-correction when the number of endmembers is over-specified. Numerical experiments on the Samson, Jasper, and Urban datasets demonstrate that SCA outperforms previously reported error metrics in all cases while being robust to noise and outliers.
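The abstract describes a two-layer autoencoder with $2FK$ parameters: an encoder mapping $F$ spectral bands to $K$ abundances, and a decoder whose $K \times F$ weight matrix holds the endmember spectra, so that each pixel is reconstructed as an abundance-weighted mix of endmembers. The following is a minimal NumPy sketch of that structure; the softmax abundance constraint, dimensions, and initialization are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Illustrative dimensions (e.g. Samson has 156 bands and 3 endmembers).
F, K = 156, 3
rng = np.random.default_rng(0)

# Two weight matrices only: F*K (encoder) + K*F (decoder) = 2*F*K parameters.
W_enc = rng.standard_normal((F, K)) * 0.01  # encoder: bands -> abundances
W_dec = rng.standard_normal((K, F)) * 0.01  # decoder rows = endmember spectra

def softmax(z, axis=-1):
    # Numerically stable softmax; enforces non-negative, sum-to-one abundances
    # (an assumed constraint for this sketch).
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def forward(X):
    """X: (n_pixels, F) -> abundances (n_pixels, K), reconstruction (n_pixels, F)."""
    A = softmax(X @ W_enc)  # fractional abundances per pixel
    X_hat = A @ W_dec       # pixel = weighted combination of endmember spectra
    return A, X_hat

n_params = W_enc.size + W_dec.size  # = 2*F*K
```

After training such a model to minimize reconstruction error, the decoder weights would be read off as the extracted endmember spectra and the encoder outputs as the per-pixel abundance maps.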
