Learning Stable Graph Neural Networks via Spectral Regularization

13 Nov 2022 · Zhan Gao, Elvin Isufi

Stability of graph neural networks (GNNs) characterizes how GNNs react to graph perturbations and provides guarantees for architecture performance in noisy scenarios. This paper develops a self-regularized graph neural network (SR-GNN) solution that improves architectural stability by regularizing the filter frequency responses in the graph spectral domain. The SR-GNN takes as input not only the graph signal but also the eigenvectors of the underlying graph: the signal is processed to generate task-relevant features, while the eigenvectors are used to characterize the frequency responses at each layer. We train the SR-GNN by minimizing the cost function while regularizing the maximal frequency response to be close to one. The former improves architectural performance, while the latter tightens perturbation stability and alleviates information loss through multi-layer propagation. We further show that the SR-GNN preserves permutation equivariance, which allows it to exploit the internal symmetries of graph signals and to transfer across similar graph structures. Numerical results on source localization and movie recommendation corroborate our findings and show that the SR-GNN achieves performance comparable to the vanilla GNN on the unperturbed graph while substantially improving stability.
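To make the regularization idea concrete, below is a minimal PyTorch sketch, not the authors' implementation. It assumes a polynomial graph filter h(S) = Σ_k h_k S^k, whose frequency response at an eigenvalue λ_i of the shift operator S is Σ_k h_k λ_i^k, and adds a penalty pulling the maximal response magnitude toward one. The paper's full architecture also propagates the eigenvectors through the layers, which this simplified sketch omits; the names `frequency_response`, `sr_objective`, and the weight `mu` are illustrative, not from the paper.

```python
import torch

def frequency_response(h, lam):
    """Frequency response of a polynomial graph filter at the graph eigenvalues.

    h:   (K,) tensor of filter taps h_k for h(S) = sum_k h_k S^k.
    lam: (N,) tensor of eigenvalues of the graph shift operator S.
    Returns the (N,) vector with entries sum_k h_k * lam_i**k.
    """
    powers = torch.stack([lam**k for k in range(h.numel())])  # (K, N)
    return h @ powers                                         # (N,)

def sr_objective(task_loss, filter_taps, lam, mu=0.1):
    """Task loss plus a penalty keeping the maximal frequency response near one.

    mu is a hypothetical regularization weight, not a value from the paper.
    """
    reg = torch.zeros((), dtype=lam.dtype)
    for h in filter_taps:
        resp = frequency_response(h, lam)
        reg = reg + (resp.abs().max() - 1.0) ** 2
    return task_loss + mu * reg

# Toy usage: S is a symmetric shift operator (here a 3-node path graph);
# only its eigenvalues are needed for this simplified penalty.
S = torch.tensor([[0., 1., 0.],
                  [1., 0., 1.],
                  [0., 1., 0.]])
lam = torch.linalg.eigvalsh(S)
taps = [torch.randn(4, requires_grad=True)]   # one filter with K = 4 taps
loss = sr_objective(torch.tensor(0.5), taps, lam, mu=0.1)
loss.backward()                                # gradients flow into the taps
```

In this form the penalty enters the training objective exactly like any other regularizer: the task loss drives performance while the spectral term bounds how much the filters can amplify any graph frequency, which is the mechanism the abstract ties to perturbation stability.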
