Search Results for author: Luc Rey-Bellet

Found 10 papers, 3 papers with code

Nonlinear denoising score matching for enhanced learning of structured distributions

no code implementations • 24 May 2024 • Jeremiah Birrell, Markos A. Katsoulakis, Luc Rey-Bellet, Benjamin Zhang, Wei Zhu

We present a novel method for training score-based generative models which uses nonlinear noising dynamics to improve learning of structured distributions.
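
As background for how such models are trained, the sketch below records the generic denoising score matching objective for a forward noising SDE; this is the standard formula, not the paper's specific nonlinear construction, which is not reproduced here.

```latex
% Generic denoising score matching (DSM) objective for a forward SDE
%   dX_t = f(X_t, t) dt + g(t) dW_t,   X_0 ~ p_data.
% With a linear drift f, the conditional p_t(x_t | x_0) is Gaussian and its
% score is closed-form; "nonlinear noising dynamics" refers to nonlinear f.
\mathcal{L}(\theta) = \mathbb{E}_{t}\, \mathbb{E}_{x_0 \sim p_{\mathrm{data}}}\,
  \mathbb{E}_{x_t \sim p_t(\cdot \mid x_0)}
  \Big[ \lambda(t) \big\| s_\theta(x_t, t)
        - \nabla_{x_t} \log p_t(x_t \mid x_0) \big\|^2 \Big]
```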

Learning heavy-tailed distributions with Wasserstein-proximal-regularized $\alpha$-divergences

no code implementations • 22 May 2024 • Ziyu Chen, Hyemin Gu, Markos A. Katsoulakis, Luc Rey-Bellet, Wei Zhu

In this paper, we propose Wasserstein proximals of $\alpha$-divergences as suitable objective functionals for learning heavy-tailed distributions in a stable manner.
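
A minimal sketch of the proximal construction, assuming a squared-$W_2$ infimal convolution; the paper's exact choice of Wasserstein metric and scaling may differ. Roughly, the transport term can keep the objective finite and well-behaved even when the raw divergence between a heavy-tailed target and the model is not.

```latex
% Wasserstein proximal (Moreau-Yosida style regularization) of an
% alpha-divergence: compare P to Q through an intermediate measure R.
D_\alpha^{\lambda}(P \,\|\, Q) =
  \inf_{R} \Big\{ D_\alpha(R \,\|\, Q) + \tfrac{1}{2\lambda} W_2^2(P, R) \Big\}
```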

Statistical Guarantees of Group-Invariant GANs

no code implementations • 22 May 2023 • Ziyu Chen, Markos A. Katsoulakis, Luc Rey-Bellet, Wei Zhu

Group-invariant generative adversarial networks (GANs) are GANs in which the generator and discriminator are hardwired with group symmetries.
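
A minimal sketch of one standard way to hardwire such a symmetry (assumed for illustration, not the paper's code): average a discriminator over a finite group orbit, here the C4 rotations of an image.

```python
# Minimal sketch: a C4-invariant discriminator obtained by group averaging.
# `base` is any discriminator mapping (N, C, H, W) images to scores; the
# averaged output is exactly invariant under 90-degree rotations.
import torch
import torch.nn as nn

class C4InvariantDiscriminator(nn.Module):
    def __init__(self, base: nn.Module):
        super().__init__()
        self.base = base

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Average over the orbit {x, gx, g^2 x, g^3 x} of the rotation group C4.
        outs = [self.base(torch.rot90(x, k, dims=(2, 3))) for k in range(4)]
        return torch.stack(outs, dim=0).mean(dim=0)
```

An equivariant generator plays the dual role; group averaging is only one of several ways to build the symmetry into the architecture.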

Sample Complexity of Probability Divergences under Group Symmetry

no code implementations • 3 Feb 2023 • Ziyu Chen, Markos A. Katsoulakis, Luc Rey-Bellet, Wei Zhu

We rigorously quantify the improvement in the sample complexity of variational divergence estimation for group-invariant distributions.
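
The mechanism behind the gain can be sketched for IPMs (the paper covers a broader class of variational divergences): when both distributions are G-invariant, the variational problem can be restricted to the smaller class of symmetrized test functions without changing its value, which shrinks the complexity of the estimator.

```latex
% For G-invariant P, Q and a class Gamma closed under group averaging,
% restricting to symmetrized test functions Gamma^G loses nothing:
d_\Gamma(P, Q)
  = \sup_{\phi \in \Gamma} \big\{ \mathbb{E}_P[\phi] - \mathbb{E}_Q[\phi] \big\}
  = \sup_{\phi \in \Gamma^G} \big\{ \mathbb{E}_P[\phi] - \mathbb{E}_Q[\phi] \big\},
\quad
\Gamma^G = \Big\{ \tfrac{1}{|G|} \sum_{g \in G} \phi \circ g \,:\, \phi \in \Gamma \Big\}.
```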

Lipschitz-regularized gradient flows and generative particle algorithms for high-dimensional scarce data

1 code implementation • 31 Oct 2022 • Hyemin Gu, Panagiota Birmpa, Yannis Pantazis, Luc Rey-Bellet, Markos A. Katsoulakis

We build a new class of generative algorithms capable of efficiently learning an arbitrary target distribution from possibly scarce, high-dimensional data and subsequently generating new samples.

Tasks: Data Integration, Representation Learning
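
A schematic of the resulting particle algorithm (an assumed minimal form, not the released implementation): particles are transported along the gradient of a learned, Lipschitz-regularized witness function.

```python
# Schematic generative-particle step: move particles Y along the gradient
# of a learned witness function phi (the maximizer of a Lipschitz-regularized
# variational divergence between target samples and current particles).
# Sign convention depends on how phi is oriented; descent direction assumed.
import torch

def particle_step(Y: torch.Tensor, phi: torch.nn.Module, dt: float = 0.1):
    """One explicit-Euler step of the flow dY/dt = -grad phi(Y).
    phi maps a batch of particles (n, d) to per-particle scalars (n,)."""
    Y = Y.detach().requires_grad_(True)
    grad = torch.autograd.grad(phi(Y).sum(), Y)[0]  # per-particle gradients
    return (Y - dt * grad).detach()
```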

Function-space regularized Rényi divergences

1 code implementation • 10 Oct 2022 • Jeremiah Birrell, Yannis Pantazis, Paul Dupuis, Markos A. Katsoulakis, Luc Rey-Bellet

We propose a new family of regularized Rényi divergences parametrized not only by the order $\alpha$ but also by a variational function space.
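
For orientation, one standard Rényi variational formula is sketched below, with the normalization $R_\alpha(P\|Q) = \frac{1}{\alpha-1}\log \mathbb{E}_Q[(dP/dQ)^\alpha]$ and shown for $\alpha > 1$; the regularized family is obtained, roughly, by taking the sup over a chosen function space $\Gamma$ rather than over all bounded measurable functions. The paper's precise definitions may differ.

```latex
% Regularized Rényi divergence (sketch): restrict the variational sup
% to a function space Gamma, e.g. Lipschitz functions.
R_\alpha^{\Gamma}(P \,\|\, Q) =
  \sup_{\phi \in \Gamma}
  \Big\{ \tfrac{\alpha}{\alpha-1} \log \mathbb{E}_P\big[e^{(\alpha-1)\phi}\big]
         - \log \mathbb{E}_Q\big[e^{\alpha\phi}\big] \Big\}
```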

Structure-preserving GANs

no code implementations • 2 Feb 2022 • Jeremiah Birrell, Markos A. Katsoulakis, Luc Rey-Bellet, Wei Zhu

Generative adversarial networks (GANs), a class of distribution-learning methods based on a two-player game between a generator and a discriminator, can generally be formulated as a min-max problem based on the variational representation of a divergence between the unknown and the generated distributions.
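
Schematically, the min-max problem referred to here has the following form, with $\Lambda_Q$ a convex functional determined by the chosen divergence (e.g. $\Lambda_Q[\phi]=\mathbb{E}_Q[\phi]$ for IPMs, $\Lambda_Q[\phi]=\log\mathbb{E}_Q[e^{\phi}]$ for the KL/Donsker-Varadhan case):

```latex
% Divergence GANs as a min-max problem: fit the generator distribution
% P_theta by minimizing a variational (dual) representation of a divergence.
\min_{\theta} D_\Gamma(P \,\|\, P_\theta), \qquad
D_\Gamma(P \,\|\, Q) = \sup_{\phi \in \Gamma}
  \big\{ \mathbb{E}_P[\phi] - \Lambda_Q[\phi] \big\}
```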

Model Uncertainty and Correctability for Directed Graphical Models

no code implementations • 17 Jul 2021 • Panagiota Birmpa, Jinchao Feng, Markos A. Katsoulakis, Luc Rey-Bellet

Probabilistic graphical models are a fundamental tool in probabilistic modeling, machine learning and artificial intelligence.

Tasks: BIG-bench Machine Learning, Materials Screening +1

$(f,\Gamma)$-Divergences: Interpolating between $f$-Divergences and Integral Probability Metrics

no code implementations • 11 Nov 2020 • Jeremiah Birrell, Paul Dupuis, Markos A. Katsoulakis, Yannis Pantazis, Luc Rey-Bellet

We develop a rigorous and general framework for constructing information-theoretic divergences that subsume both $f$-divergences and integral probability metrics (IPMs), such as the $1$-Wasserstein distance.

Tasks: Image Generation, Uncertainty Quantification
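
A sketch of the interpolating family, modulo the paper's exact conventions: an $f$-divergence variational objective whose test functions are restricted to a class $\Gamma$, with an extra scalar shift $\nu$ that tightens the bound.

```latex
% (f,Gamma)-divergence (sketch): Gamma = all bounded measurable functions
% recovers the f-divergence D_f; suitable choices of f and Gamma (e.g.
% 1-Lipschitz functions) recover IPMs such as the 1-Wasserstein distance.
D_f^{\Gamma}(P \,\|\, Q) =
  \sup_{\phi \in \Gamma}
  \Big\{ \mathbb{E}_P[\phi]
         - \inf_{\nu \in \mathbb{R}}
           \big\{ \nu + \mathbb{E}_Q[f^{*}(\phi - \nu)] \big\} \Big\}
```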

Variational Representations and Neural Network Estimation of Rényi Divergences

1 code implementation • 7 Jul 2020 • Jeremiah Birrell, Paul Dupuis, Markos A. Katsoulakis, Luc Rey-Bellet, Jie Wang

We further show that this Rényi variational formula holds over a range of function spaces; this leads to a formula for the optimizer under very weak assumptions and is also key in our development of a consistency theory for Rényi divergence estimators.
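
A minimal sketch of a sample-based neural estimator built on the variational formula sketched earlier (same normalization, $\alpha > 1$; assumed code, not the released implementation): maximize the empirical objective over the parameters of a network $\phi$.

```python
# Empirical Rényi variational objective; for any phi the population value
# lower-bounds R_alpha (alpha > 1), so maximizing over a neural phi yields
# an estimate from samples xp ~ P and xq ~ Q.
import math
import torch

def log_mean_exp(t: torch.Tensor) -> torch.Tensor:
    return torch.logsumexp(t, dim=0) - math.log(t.shape[0])

def renyi_objective(phi, xp, xq, alpha: float) -> torch.Tensor:
    """(alpha/(alpha-1)) log E_P[e^{(alpha-1) phi}] - log E_Q[e^{alpha phi}],
    with expectations replaced by sample averages; phi returns shape (n,)."""
    a = alpha
    return (a / (a - 1.0)) * log_mean_exp((a - 1.0) * phi(xp)) \
        - log_mean_exp(a * phi(xq))
```

In practice one runs a stochastic optimizer on the negated objective over $\phi$'s parameters; the consistency theory referenced above concerns when such estimators converge to the true divergence.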
