1 code implementation • 5 Jun 2024 • Dexiong Chen, Till Hendrik Schulz, Karsten Borgwardt
Message-passing graph neural networks (GNNs), while excelling at capturing local relationships, often struggle with long-range dependencies on graphs.
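To see why locality is a bottleneck, here is a minimal sketch (not the paper's method) of sum-aggregation message passing on a path graph: each layer propagates information only one hop, so distant nodes stay invisible for many layers.

```python
import numpy as np

def message_passing_layer(adj, features):
    """One round of sum-aggregation message passing: each node's new
    feature is its own feature plus the sum of its neighbors' features."""
    return features + adj @ features

# Path graph 0-1-2-3: information travels one hop per layer,
# so node 0 receives nothing from node 3 until the third layer.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
h = np.eye(4)  # one-hot node features
for _ in range(2):
    h = message_passing_layer(adj, h)
# After 2 layers, node 0's embedding still has no contribution from node 3.
```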
Ranked #1 on Graph Classification on MNIST
1 code implementation • 1 Mar 2024 • Yuting Li, Yingyi Chen, Xuanlong Yu, Dexiong Chen, Xi Shen
In this paper, we revisit techniques for uncertainty estimation in deep neural networks and consolidate a suite of them to enhance their reliability.
Ranked #1 on Learning with noisy labels on ANIMAL
1 code implementation • 26 Jan 2024 • Dexiong Chen, Philip Hartout, Paolo Pellizzoni, Carlos Oliver, Karsten Borgwardt
Drawing from recent advances in graph transformers, our approach refines the self-attention mechanisms of pretrained language transformers by integrating structural information with structure extractor modules.
1 code implementation • 12 May 2023 • Dexiong Chen, Paolo Pellizzoni, Karsten Borgwardt
Attention-based graph neural networks (GNNs), such as graph attention networks (GATs), have become popular neural architectures for processing graph-structured data and learning node embeddings.
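The core GAT-style operation can be sketched as follows; the scoring vector `a` below is a toy fixed vector standing in for learned parameters, and the nonlinearity is omitted for brevity.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gat_node_update(h_i, neighbors, a):
    """Attention-based aggregation in the spirit of GAT: score each
    neighbor against the center node with a vector `a`, normalize the
    scores with softmax, and return the attention-weighted sum of
    neighbor features (a convex combination of them)."""
    scores = np.array([a @ np.concatenate([h_i, h_j]) for h_j in neighbors])
    alpha = softmax(scores)  # attention coefficients, sum to 1
    return sum(w * h_j for w, h_j in zip(alpha, neighbors))

h_i = np.array([1.0, 0.0])
neighbors = [np.array([0.0, 1.0]), np.array([2.0, 2.0])]
a = np.ones(4)
out = gat_node_update(h_i, neighbors, a)
```

Because the coefficients sum to one, the update always lands inside the convex hull of the neighbor features.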
1 code implementation • 6 Jul 2022 • Dexiong Chen, Bowen Fan, Carlos Oliver, Karsten Borgwardt
Our approach integrates Multidimensional Scaling (MDS) and Wasserstein Procrustes analysis into a joint optimization problem that simultaneously generates isometric embeddings of the data and learns correspondences between instances from two different datasets, while requiring only intra-dataset pairwise dissimilarities as input.
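The MDS ingredient alone can be illustrated with classical MDS, which recovers coordinates whose Euclidean distances reproduce a given dissimilarity matrix; this sketch shows only that piece, not the joint objective with Wasserstein Procrustes alignment.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical multidimensional scaling: embed points so that their
    pairwise Euclidean distances approximate the dissimilarities in D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]       # top eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Four points on a line with distances |i - j|: exactly recoverable in 1-D.
D = np.abs(np.subtract.outer(np.arange(4.0), np.arange(4.0)))
X = classical_mds(D, dim=1)
recovered = np.abs(np.subtract.outer(X[:, 0], X[:, 0]))
```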
1 code implementation • 2 Jun 2022 • Carlos Oliver, Dexiong Chen, Vincent Mallet, Pericles Philippopoulos, Karsten Borgwardt
Frequent and structurally related subgraphs, also known as network motifs, are valuable features of many graph datasets.
3 code implementations • 7 Feb 2022 • Dexiong Chen, Leslie O'Bray, Karsten Borgwardt
Here, we show that the node representations generated by a Transformer with positional encodings do not necessarily capture the structural similarity between nodes.
Ranked #4 on Graph Property Prediction on ogbg-code2
1 code implementation • 10 Jun 2021 • Grégoire Mialon, Dexiong Chen, Margot Selosse, Julien Mairal
We show that viewing graphs as sets of node features and incorporating structural and positional information into a transformer architecture yields representations that outperform those learned with classical graph neural networks (GNNs).
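One common way to inject positional information about graph nodes into a transformer, sketched here as an illustration rather than the paper's exact construction, is to use Laplacian eigenvector positional encodings:

```python
import numpy as np

def laplacian_pe(adj, k=2):
    """Laplacian eigenvector positional encodings: take the k eigenvectors
    of the graph Laplacian L = D - A with smallest nonzero eigenvalues,
    giving each node a k-dimensional position that is then concatenated
    or added to its feature vector before the transformer."""
    deg = np.diag(adj.sum(axis=1))
    L = deg - adj
    w, V = np.linalg.eigh(L)   # eigenvalues in ascending order
    return V[:, 1:k + 1]       # skip the trivial constant eigenvector

adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
pe = laplacian_pe(adj, k=2)    # one 2-D position per node
```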
1 code implementation • ICLR 2021 • Grégoire Mialon, Dexiong Chen, Alexandre d'Aspremont, Julien Mairal
We address the problem of learning on sets of features, motivated by the need to perform pooling operations on long biological sequences of varying sizes, with long-range dependencies and possibly few labeled samples.
1 code implementation • ICML 2020 • Dexiong Chen, Laurent Jacob, Julien Mairal
Our model can also be trained end-to-end on large-scale data, leading to new types of graph convolutional neural networks.
no code implementations • 27 Aug 2019 • Zhijun Mai, Guosheng Hu, Dexiong Chen, Fumin Shen, Heng Tao Shen
Since deep networks can memorize the entire training set, corrupted samples generated by vanilla MixUp under a badly chosen interpolation policy will degrade network performance.
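Vanilla MixUp itself is a two-line interpolation; a minimal sketch (the Beta parameter `alpha` is the interpolation policy the entry refers to):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Vanilla MixUp: convexly interpolate two samples and their one-hot
    labels with a coefficient drawn from Beta(alpha, alpha). A badly
    chosen alpha produces implausible, corrupted training samples."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

x_mix, y_mix = mixup(np.zeros(3), np.array([1.0, 0.0]),
                     np.ones(3), np.array([0.0, 1.0]))
```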
1 code implementation • NeurIPS 2019 • Dexiong Chen, Laurent Jacob, Julien Mairal
Substring kernels are classical tools for representing biological sequences or text.
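The simplest member of this family is the k-spectrum kernel, which counts shared length-k substrings; the paper's kernels generalize this with gaps and mismatches, but the exact-match special case is a few lines:

```python
from collections import Counter

def spectrum_kernel(s, t, k=3):
    """k-spectrum kernel: inner product of the k-mer count vectors of
    two sequences, i.e. the number of shared length-k substrings
    (counted with multiplicity)."""
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(cs[u] * ct[u] for u in cs)

# "GATTACA" and "ATTAC" share the 3-mers ATT, TTA, TAC -> kernel value 3
value = spectrum_kernel("GATTACA", "ATTAC", k=3)
```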
1 code implementation • 30 Sep 2018 • Alberto Bietti, Grégoire Mialon, Dexiong Chen, Julien Mairal
We propose a new point of view for regularizing deep neural networks by using the norm of a reproducing kernel Hilbert space (RKHS).