1 code implementation • 9 Feb 2024 • Tara Akhound-Sadegh, Jarrid Rector-Brooks, Avishek Joey Bose, Sarthak Mittal, Pablo Lemos, Cheng-Hao Liu, Marcin Sendera, Siamak Ravanbakhsh, Gauthier Gidel, Yoshua Bengio, Nikolay Malkin, Alexander Tong
Efficiently generating statistically independent samples from an unnormalized probability distribution, such as equilibrium samples of many-body systems, is a foundational problem in science.
1 code implementation • 7 Feb 2024 • Thuan Trang, Nhat Khang Ngo, Daniel Levy, Thieu N. Vo, Siamak Ravanbakhsh, Truong Son Hy
Triangular meshes are widely used to represent three-dimensional objects.
no code implementations • 14 Dec 2023 • Sékou-Oumar Kaba, Siamak Ravanbakhsh
Using symmetry as an inductive bias in deep learning has been proven to be a principled approach for sample-efficient model design.
no code implementations • 7 Nov 2023 • Tara Akhound-Sadegh, Laurence Perreault-Levasseur, Johannes Brandstetter, Max Welling, Siamak Ravanbakhsh
Symmetries have been leveraged to improve the generalization of neural networks through mechanisms ranging from data augmentation to equivariant architectures.
1 code implementation • 6 Nov 2023 • Mehran Shakerinava, Motahareh Sohrabi, Siamak Ravanbakhsh, Simon Lacoste-Julien
Weight-sharing is ubiquitous in deep learning.
no code implementations • 4 Oct 2023 • Vineet Jain, Siamak Ravanbakhsh
We present a novel perspective on goal-conditioned reinforcement learning by framing it within the context of denoising diffusion models.
no code implementations • 6 Sep 2023 • Daniel Levy, Sékou-Oumar Kaba, Carmelo Gonzales, Santiago Miret, Siamak Ravanbakhsh
We present a natural extension to E(n)-equivariant graph neural networks that uses multiple equivariant vectors per node.
no code implementations • 20 Jun 2023 • Arnab Kumar Mondal, Siba Smarak Panigrahi, Sai Rajeswar, Kaleem Siddiqi, Siamak Ravanbakhsh
We approach this problem from the lens of Koopman theory, where the nonlinear dynamics of the environment can be linearized in a high-dimensional latent space.
1 code implementation • 29 May 2023 • Victor Livernoche, Vineet Jain, Yashar Hezaveh, Siamak Ravanbakhsh
By simplifying DDPM in application to anomaly detection, we are naturally led to an alternative approach called Diffusion Time Estimation (DTE).
no code implementations • 15 Nov 2022 • Sékou-Oumar Kaba, Siamak Ravanbakhsh
Supervised learning with deep models has tremendous potential for applications in materials science.
no code implementations • 11 Nov 2022 • Sékou-Oumar Kaba, Arnab Kumar Mondal, Yan Zhang, Yoshua Bengio, Siamak Ravanbakhsh
Symmetry-based neural networks often constrain the architecture in order to achieve invariance or equivariance to a group of transformations.
no code implementations • 27 Jun 2022 • Mehran Shakerinava, Siamak Ravanbakhsh
An even stronger constraint simplifies the utility function for goal-seeking agents to a difference of some function of states, which we call a potential function.
1 code implementation • 25 Mar 2022 • Christopher Morris, Gaurav Rattan, Sandra Kiefer, Siamak Ravanbakhsh
While (message-passing) graph neural networks have clear limitations in approximating permutation-equivariant functions over graphs or general relational data, more expressive, higher-order graph neural networks do not scale to large graphs.
no code implementations • 19 Feb 2022 • Mehran Shakerinava, Arnab Kumar Mondal, Siamak Ravanbakhsh
We present a simple non-generative approach to deep representation learning that seeks equivariant deep embedding through simple objectives.
no code implementations • 29 Sep 2021 • Daniel Levy, Siamak Ravanbakhsh
Many real-world datasets include multiple distinct types of entities and relations, and so they are naturally best represented by heterogeneous graphs.
no code implementations • 29 Sep 2021 • Arnab Kumar Mondal, Vineet Jain, Kaleem Siddiqi, Siamak Ravanbakhsh
We study different notions of equivariance as an inductive bias in Reinforcement Learning (RL) and propose new mechanisms for recovering representations that are equivariant to both an agent's action and symmetry transformations of the state-action pairs.
no code implementations • 12 Jun 2021 • Mehran Shakerinava, Siamak Ravanbakhsh
We show how to model this interplay using ideas from group theory, identify the equivariant linear maps, and introduce equivariant padding that respects these symmetries.
no code implementations • NeurIPS 2020 • Renhao Wang, Marjan Albooyeh, Siamak Ravanbakhsh
While invariant and equivariant maps make it possible to apply deep learning to a range of primitive data structures, a formalism for dealing with hierarchy is lacking.
no code implementations • ICML 2020 • Siamak Ravanbakhsh
Group invariant and equivariant Multilayer Perceptrons (MLP), also known as Equivariant Networks, have achieved remarkable success in learning on a variety of data structures, such as sequences, images, sets, and graphs.
no code implementations • ICML 2020 • Marjan Albooyeh, Daniele Bertolini, Siamak Ravanbakhsh
Sparse incidence tensors can represent a variety of structured data.
1 code implementation • 21 Mar 2019 • Devon Graham, Junhao Wang, Siamak Ravanbakhsh
In this paper, we propose the Equivariant Entity-Relationship Network (EERN), which is a Multilayer Perceptron equivariant to the symmetry transformations of the Entity-Relationship model.
no code implementations • 7 Dec 2018 • Bahare Fatemi, Siamak Ravanbakhsh, David Poole
Knowledge graphs are used to represent relational information in terms of triples.
1 code implementation • 15 Nov 2018 • Siyu He, Yin Li, Yu Feng, Shirley Ho, Siamak Ravanbakhsh, Wei Chen, Barnabás Póczos
We build a deep neural network, the Deep Density Displacement Model (hereafter D$^3$M), to predict the non-linear structure formation of the Universe from simple linear perturbation theory.
1 code implementation • 28 Jun 2018 • Sumedha Singla, Mingming Gong, Siamak Ravanbakhsh, Frank Sciurba, Barnabas Poczos, Kayhan N. Batmanghelich
Our model consists of three mutually dependent modules which regulate each other: (1) a discriminative network that learns a fixed-length representation from local features and maps them to disease severity; (2) an attention mechanism that provides interpretability by focusing on the areas of the anatomy that contribute the most to the prediction task; and (3) a generative network that encourages the diversity of the local latent features.
1 code implementation • ICML 2018 • Jason Hartford, Devon R Graham, Kevin Leyton-Brown, Siamak Ravanbakhsh
In experiments, our models achieved surprisingly good generalization performance on this matrix extrapolation task, both within domains (e.g., new users and new movies drawn from the same distribution used for training) and even across domains (e.g., predicting music ratings after training on movies).
Ranked #3 on Recommendation Systems on YahooMusic (Monti)
no code implementations • NeurIPS 2017 • Christopher Srinivasa, Inmar Givoni, Siamak Ravanbakhsh, Brendan J. Frey
We study the application of min-max propagation, a variation of belief propagation, for approximate min-max inference in factor graphs.
no code implementations • 6 Nov 2017 • Siamak Ravanbakhsh, Junier Oliva, Sebastien Fromenteau, Layne C. Price, Shirley Ho, Jeff Schneider, Barnabas Poczos
A major approach to estimating the cosmological parameters is to use the large-scale matter distribution of the Universe.
5 code implementations • NeurIPS 2017 • Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Ruslan Salakhutdinov, Alexander Smola
Our main theorem characterizes the permutation invariant functions and provides a family of functions to which any permutation invariant objective function must belong.
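The characterization referred to here is the sum-decomposition f(X) = ρ(Σₓ φ(x)): any permutation-invariant set function can be written as an element-wise encoder φ followed by sum pooling and a decoder ρ. A minimal sketch, with fixed random linear maps standing in for the learned networks (all sizes are illustrative assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# phi: per-element encoder; rho: decoder applied to the pooled sum.
W_phi = rng.normal(size=(3, 8))
W_rho = rng.normal(size=(8, 1))

def deep_set(X):
    """Permutation-invariant f(X) = rho(sum_x phi(x))."""
    phi = np.tanh(X @ W_phi)      # encode each element independently
    pooled = phi.sum(axis=0)      # sum pooling: element order cannot matter
    return float(np.tanh(pooled) @ W_rho)

X = rng.normal(size=(5, 3))       # a set of 5 elements in R^3
perm = rng.permutation(5)
# Invariance check: shuffling the set leaves the output unchanged.
assert np.isclose(deep_set(X), deep_set(X[perm]))
```

Because the pooling step is a plain sum, the output is invariant by construction; the expressiveness claim of the theorem is that this simple form loses nothing.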
1 code implementation • 8 Mar 2017 • Francois Lanusse, Quanbin Ma, Nan Li, Thomas E. Collett, Chun-Liang Li, Siamak Ravanbakhsh, Rachel Mandelbaum, Barnabas Poczos
We find on our simulated data set that for a rejection rate of non-lenses of 99%, a completeness of 90% can be achieved for lenses with Einstein radii larger than 1.4" and S/N larger than 20 on individual $g$-band LSST exposures.
Instrumentation and Methods for Astrophysics • Cosmology and Nongalactic Astrophysics • Astrophysics of Galaxies
1 code implementation • ICML 2017 • Siamak Ravanbakhsh, Jeff Schneider, Barnabas Poczos
We propose to study equivariance in deep neural networks through parameter symmetries.
no code implementations • 14 Nov 2016 • Siamak Ravanbakhsh, Jeff Schneider, Barnabas Poczos
We introduce a simple permutation equivariant layer for deep learning with set structure. This type of layer, obtained by parameter-sharing, has a simple implementation and linear-time complexity in the size of each set.
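The parameter-sharing construction can be sketched in a few lines: each element is transformed by one shared weight matrix, and a pooled summary of the whole set is transformed by another. The pooling choice and nonlinearity below are illustrative assumptions, not the paper's exact layer:

```python
import numpy as np

rng = np.random.default_rng(1)

# Shared parameters: Lam acts on each element, Gam on the pooled summary.
n, d_in, d_out = 7, 4, 6
Lam = rng.normal(size=(d_in, d_out))
Gam = rng.normal(size=(d_in, d_out))

def equivariant_layer(X):
    """f(X) = tanh(X @ Lam + maxpool(X) @ Gam): permuting the rows of X
    permutes the rows of the output in exactly the same way."""
    pooled = X.max(axis=0, keepdims=True)   # pooling is order-insensitive
    return np.tanh(X @ Lam + pooled @ Gam)  # cost is linear in the set size

X = rng.normal(size=(n, d_in))
perm = rng.permutation(n)
# Equivariance check: permute-then-apply equals apply-then-permute.
assert np.allclose(equivariant_layer(X[perm]), equivariant_layer(X)[perm])
```

Only two weight matrices are used regardless of the set size, which is the parameter-sharing at work, and the single pooling pass gives the linear-time complexity mentioned above.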
no code implementations • 11 Nov 2016 • Chun-Liang Li, Siamak Ravanbakhsh, Barnabas Poczos
Due to the numerical stability and quantifiability of the likelihood, the RBM is commonly used with Bernoulli units.
no code implementations • 19 Sep 2016 • Siamak Ravanbakhsh, Francois Lanusse, Rachel Mandelbaum, Jeff Schneider, Barnabas Poczos
To this end, we study the application of deep conditional generative models in generating realistic galaxy images.
no code implementations • 1 Jan 2016 • Siamak Ravanbakhsh, Barnabas Poczos, Jeff Schneider, Dale Schuurmans, Russell Greiner
We propose a Laplace approximation that creates a stochastic unit from any smooth monotonic activation function, using only Gaussian noise.
no code implementations • NeurIPS 2015 • Farzaneh Mirzazadeh, Siamak Ravanbakhsh, Nan Ding, Dale Schuurmans
A key bottleneck in structured output prediction is the need for inference during training and testing, usually requiring some form of dynamic programming.
no code implementations • 28 Sep 2015 • Siamak Ravanbakhsh, Barnabas Poczos, Russell Greiner
Boolean matrix factorization and Boolean matrix completion from noisy observations are desirable unsupervised data-analysis methods due to their interpretability, but are hard to perform due to their NP-hardness.
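In Boolean matrix factorization, a binary matrix is approximated by the Boolean (OR-of-ANDs) product of two binary factors. A minimal illustration of that product, with hand-picked factors chosen purely for the example:

```python
import numpy as np

def boolean_product(X, Y):
    """Boolean matrix product: Z[i, j] = OR_k (X[i, k] AND Y[k, j]).
    An integer product followed by thresholding implements the OR of ANDs."""
    return (X @ Y > 0).astype(int)

# Hand-picked 3x2 and 2x3 binary factors for illustration.
X = np.array([[1, 0],
              [1, 1],
              [0, 1]])
Y = np.array([[1, 1, 0],
              [0, 1, 1]])
Z = boolean_product(X, Y)
assert (Z == np.array([[1, 1, 0],
                       [1, 1, 1],
                       [0, 1, 1]])).all()
```

Factorization is the inverse problem: given a noisy Z, recover low-rank binary X and Y, which is what makes the task NP-hard.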
no code implementations • 20 Aug 2015 • Siamak Ravanbakhsh
We contribute to three classes of approximations that improve BP for loopy graphs: A) loop correction techniques; B) survey propagation, another message-passing technique that surpasses BP in some settings; and C) hybrid methods that interpolate between deterministic message passing and Markov chain Monte Carlo inference.
no code implementations • 25 Sep 2014 • Siamak Ravanbakhsh, Russell Greiner
This paper studies the form and complexity of inference in graphical models using the abstraction offered by algebraic structures.
no code implementations • 4 Sep 2014 • Siamak Ravanbakhsh, Philip Liu, Trent Bjorndahl, Rupasri Mandal, Jason R. Grant, Michael Wilson, Roman Eisner, Igor Sinelnikov, Xiaoyu Hu, Claudio Luchinat, Russell Greiner, David S. Wishart
This information can be extracted from a biofluid's NMR spectrum.
no code implementations • NeurIPS 2014 • Siamak Ravanbakhsh, Reihaneh Rabbany, Russell Greiner
The cutting plane method is an augmentative constrained optimization procedure that is often used with continuous-domain optimization techniques such as linear and convex programs.
no code implementations • 6 May 2014 • Siamak Ravanbakhsh, Russell Greiner, Brendan Frey
During learning, to produce a sample from the current model, we start from a training example and descend in the energy landscape of the "perturbed model" for a fixed number of steps, or until a local optimum is reached.
no code implementations • 26 Jan 2014 • Siamak Ravanbakhsh, Russell Greiner
We introduce an efficient message passing scheme for solving Constraint Satisfaction Problems (CSPs), which uses stochastic perturbation of Belief Propagation (BP) and Survey Propagation (SP) messages to bypass decimation and directly produce a single satisfying assignment.