Laplacian eigenvectors are a natural generalization of Transformer positional encodings (PE) to graphs: on the discrete line graph (the chain underlying NLP sequences), the Laplacian eigenvectors are exactly the sine and cosine functions. They encode distance-aware information, i.e., nearby nodes receive similar positional features while distant nodes receive dissimilar ones.
Hence, Laplacian Positional Encoding (PE) is a general method for encoding node positions in a graph. Each node's Laplacian PE is given by its components in the k eigenvectors with the smallest non-trivial eigenvalues.
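The construction above can be sketched as follows. This is a minimal illustration, not the reference implementation: it assumes a dense adjacency matrix, uses the symmetric normalized Laplacian, and the helper name `laplacian_pe` is hypothetical.

```python
import numpy as np

def laplacian_pe(adj, k):
    """Sketch: the k smallest non-trivial Laplacian eigenvectors as
    positional encodings. `adj` is a dense (n, n) adjacency matrix.
    Hypothetical helper, not from the source."""
    deg = adj.sum(axis=1)
    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    L = np.eye(adj.shape[0]) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    # eigh returns eigenvalues in ascending order with orthonormal eigenvectors
    eigvals, eigvecs = np.linalg.eigh(L)
    # Skip the trivial eigenvector (eigenvalue 0); keep the next k columns.
    # Row i of the result is node i's k-dimensional positional encoding.
    return eigvecs[:, 1:k + 1]

# Example: a 4-node cycle graph; adjacent nodes get similar encodings
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
pe = laplacian_pe(A, 2)
print(pe.shape)  # (4, 2): one 2-dimensional encoding per node
```

Note that eigenvectors are only defined up to sign, so implementations typically randomize the sign of each eigenvector during training so the model does not overfit to an arbitrary sign choice.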
Source: Benchmarking Graph Neural Networks
| Task | Papers | Share |
|---|---|---|
| Node Classification | 27 | 6.92% |
| Graph Learning | 20 | 5.13% |
| Graph Representation Learning | 19 | 4.87% |
| Graph Classification | 16 | 4.10% |
| Link Prediction | 14 | 3.59% |
| Graph Regression | 14 | 3.59% |
| Molecular Property Prediction | 9 | 2.31% |
| Drug Discovery | 9 | 2.31% |
| Property Prediction | 9 | 2.31% |