Search Results for author: Margot Selosse

Found 3 papers, 3 papers with code

Self-Attention in Colors: Another Take on Encoding Graph Structure in Transformers

1 code implementation • 21 Apr 2023 • Romain Menegaux, Emmanuel Jehanno, Margot Selosse, Julien Mairal

We introduce a novel self-attention mechanism, CSA (Chromatic Self-Attention), which extends the notion of attention scores to attention _filters_ that independently modulate the feature channels.

Graph Regression
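The abstract's core idea, replacing a single attention score per node pair with a per-channel attention filter, can be illustrated with a minimal numpy sketch. This is not the paper's implementation; the shapes, the edge term `E`, and all values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 8  # number of nodes, feature channels

Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
# Hypothetical per-channel edge contribution; in standard attention the
# score between nodes i and j is a single scalar, not a d-vector.
E = rng.standard_normal((n, n, d))

scores = Q @ K.T / np.sqrt(d)                  # (n, n) scalar scores
filters = scores[:, :, None] + E               # (n, n, d) channel-wise filters
weights = np.exp(filters)
weights /= weights.sum(axis=1, keepdims=True)  # softmax over source nodes, per channel
out = np.einsum("ijd,jd->id", weights, V)      # each channel aggregated with its own weights
```

The key difference from vanilla attention is the extra channel axis in `weights`: each feature channel gets its own normalized attention pattern instead of sharing one.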

GraphiT: Encoding Graph Structure in Transformers

1 code implementation • 10 Jun 2021 • Grégoire Mialon, Dexiong Chen, Margot Selosse, Julien Mairal

We show that viewing graphs as sets of node features, and incorporating structural and positional information into a transformer architecture, yields representations that outperform those learned with classical graph neural networks (GNNs).
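One way to inject graph structure into transformer attention, in the spirit of the abstract, is to modulate attention weights with a kernel derived from the graph Laplacian. The sketch below uses a heat/diffusion kernel on a toy 4-node path graph; the specific kernel, `beta`, and dimensions are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

# Toy adjacency for a 4-node path graph (hypothetical example).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A   # combinatorial graph Laplacian

# Heat kernel K = exp(-beta * L), computed via eigendecomposition.
beta = 1.0
w, U = np.linalg.eigh(L)
Kpos = U @ np.diag(np.exp(-beta * w)) @ U.T  # (n, n) structural similarity

n, d = 4, 8
rng = np.random.default_rng(0)
X = rng.standard_normal((n, d))
scores = X @ X.T / np.sqrt(d)

# Multiply the exponentiated content scores by the structural kernel,
# then renormalize: nearby nodes in the graph attend more to each other.
weights = Kpos * np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)
out = weights @ X
```

The design choice here is that structure enters as a multiplicative bias on attention rather than as extra input features, so the same mechanism works for any positive-definite graph kernel.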

A bumpy journey: exploring deep Gaussian mixture models

1 code implementation • NeurIPS Workshop ICBINB 2020 • Margot Selosse, Claire Gormley, Julien Jacques, Christophe Biernacki

The DGMM consists of stacking MFA layers, in the sense that the latent scores are no longer assumed to be drawn from a standard Gaussian, but rather from a mixture of factor analysers.
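The stacking described above can be sketched as a two-layer generative model: the deepest latent scores follow a Gaussian mixture instead of a standard normal, and the observed data is a factor-analyser transform of those scores. All mixture weights, loadings, and dimensions below are toy assumptions, not fitted parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# --- Layer 2 (deepest): latent scores drawn from a mixture, not N(0, 1). ---
pi2 = np.array([0.5, 0.5])        # mixture weights (toy values)
mu2 = np.array([[-2.0], [2.0]])   # component means in a 1-D latent space
k2 = rng.choice(2, size=n, p=pi2)
z2 = rng.standard_normal((n, 1))  # innermost standard-normal scores
z1 = mu2[k2] + 0.5 * z2           # latent scores: a 1-D Gaussian mixture

# --- Layer 1: observed data is a factor-analyser transform of z1. ---
Lam = np.array([[1.0], [0.7]])            # 2x1 factor loadings
eps = 0.1 * rng.standard_normal((n, 2))   # diagonal (isotropic here) noise
x = z1 @ Lam.T + eps                      # observed data, shape (n, 2)
```

Because `z1` is itself multimodal, the marginal distribution of `x` is a mixture of elongated Gaussian clusters, which is exactly the extra flexibility the deep construction buys over a single MFA layer.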
