Adaptive Filters for Low-Latency and Memory-Efficient Graph Neural Networks

Scaling and deploying graph neural networks (GNNs) remains difficult due to their high memory consumption and inference latency. In this work we present a new type of GNN architecture that achieves state-of-the-art performance with lower memory consumption and latency, along with characteristics well suited to accelerator implementation. Our proposal uses memory proportional to the number of vertices in the graph ($\mathcal{O}(V)$), in contrast to competing methods which require memory proportional to the number of edges ($\mathcal{O}(E)$); surprisingly, we find that our efficient approach achieves higher accuracy than strong competing baselines across six large and varied datasets. We achieve these results with a novel \textit{adaptive filtering} approach, which can be interpreted as enabling each vertex to have its own weight matrix, and which is not directly related to attention. In line with our focus on efficient hardware usage, we demonstrate that our method achieves lower latency and memory consumption than competing approaches at the same accuracy.
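The abstract does not specify the architecture, but the core idea it describes — each vertex effectively receiving its own weight matrix, with only $\mathcal{O}(V)$ state — can be sketched as follows. This is an illustrative reconstruction, not the paper's actual method: the filter-generating network, the element-wise modulation, and all names (`W_gen`, `W_out`, `filters`) are assumptions. The key property it demonstrates is that the adaptive state is one filter vector per vertex ($\mathcal{O}(V \cdot d)$ memory), with no per-edge tensors, unlike attention-based GNNs which materialise an $\mathcal{O}(E)$ coefficient per edge.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: V = 4 vertices in a cycle, d = 8 feature channels.
V, d = 4, 8
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
A = np.zeros((V, V))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
A += np.eye(V)                      # add self-loops
A /= A.sum(axis=1, keepdims=True)   # row-normalise the adjacency

X = rng.standard_normal((V, d))     # node features

# Hypothetical adaptive filter: a shared weight matrix maps each
# vertex's features to a per-vertex filter vector, which modulates
# that vertex element-wise before a shared linear transform. Storing
# one d-vector per vertex costs O(V*d) memory; no per-edge state
# (such as attention coefficients) is ever materialised.
W_gen = rng.standard_normal((d, d)) * 0.1   # filter-generating weights (shared)
W_out = rng.standard_normal((d, d)) * 0.1   # output weights (shared)

filters = np.tanh(X @ W_gen)        # (V, d): one filter per vertex
H = (filters * X) @ W_out           # per-vertex modulation, then shared mix
H = A @ H                           # neighbourhood aggregation
H = np.maximum(H, 0.0)              # ReLU

print(H.shape)
```

Because `filters * X` followed by the shared `W_out` scales each vertex's input differently before the common transform, the composition acts like a distinct effective weight matrix per vertex, while the learned parameters (`W_gen`, `W_out`) remain shared and fixed in size.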
