no code implementations • ICML 2020 • Yiping Lu, Chao Ma, Yulong Lu, Jianfeng Lu, Lexing Ying
Specifically, we propose a new continuum limit of deep residual networks, which enjoys a good landscape in the sense that every local minimizer is global.
no code implementations • 12 Feb 2024 • Frank Cole, Yulong Lu
While score-based generative models (SGMs) have achieved remarkable success across a wide range of image generation tasks, their mathematical foundations are still limited.
no code implementations • 8 Nov 2023 • Yahong Yang, Yulong Lu
This paper establishes the nearly optimal rate of approximation for deep neural networks (DNNs) when applied to Korobov functions, effectively overcoming the curse of dimensionality.
no code implementations • 17 Dec 2022 • Yulong Lu
Finding the mixed Nash equilibria (MNE) of a two-player zero-sum continuous game is an important and challenging problem in machine learning.
1 code implementation • 9 Dec 2022 • Wuzhe Xu, Yulong Lu, Li Wang
Deep operator network (DeepONet) has demonstrated great success in various learning tasks, including learning solution operators of partial differential equations.
no code implementations • 1 Nov 2022 • Yulong Lu, Dejan Slepčev, Lihan Wang
Motivated by the challenge of sampling Gibbs measures with nonconvex potentials, we study a continuum birth-death dynamics.
no code implementations • 25 Jan 2022 • Ziang Chen, Jianfeng Lu, Yulong Lu, Shengxuan Zhou
Spectral Barron spaces have received considerable interest recently, as they are the natural function spaces for the approximation theory of two-layer neural networks with a dimension-free convergence rate.
no code implementations • NeurIPS 2021 • Ziang Chen, Jianfeng Lu, Yulong Lu
Numerical solutions to high-dimensional partial differential equations (PDEs) based on neural networks have seen exciting developments.
no code implementations • 4 May 2021 • Jianfeng Lu, Yulong Lu
We prove that the convergence rate of the generalization error is independent of the dimension $d$, under the a priori assumption that the ground state lies in a spectral Barron space.
no code implementations • 5 Jan 2021 • Jianfeng Lu, Yulong Lu, Min Wang
This paper concerns the a priori generalization analysis of the Deep Ritz Method (DRM) [W. E and B. Yu, 2017], a popular neural-network-based method for solving high-dimensional partial differential equations.
no code implementations • NeurIPS 2020 • Yulong Lu, Jianfeng Lu
In particular, the size of the neural network can grow exponentially in $d$ when the $1$-Wasserstein distance is used as the discrepancy, whereas for both MMD and KSD the size of the neural network depends on $d$ at most polynomially.
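As background for the discrepancies compared above, here is a minimal sketch of the standard unbiased estimator of squared maximum mean discrepancy (MMD) between two samples; the Gaussian kernel and fixed bandwidth `h` are illustrative choices, not taken from the paper, and KSD is analogous but omitted.

```python
import numpy as np

def mmd2_unbiased(x, y, h=1.0):
    """Unbiased estimate of squared MMD between samples x (n, d) and y (m, d),
    using a Gaussian kernel k(a, b) = exp(-||a - b||^2 / h)."""
    def k(a, b):
        # pairwise Gaussian kernel matrix via broadcasting
        return np.exp(-np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1) / h)
    n, m = len(x), len(y)
    kxx, kyy, kxy = k(x, x), k(y, y), k(x, y)
    # drop diagonal terms to remove the estimator's bias
    term_x = (kxx.sum() - np.trace(kxx)) / (n * (n - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
    return term_x + term_y - 2.0 * kxy.mean()
```

For two samples from the same distribution the estimate is close to zero (it can be slightly negative, since the estimator is unbiased), while a shift between the distributions yields a clearly positive value.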
no code implementations • 11 Mar 2020 • Yiping Lu, Chao Ma, Yulong Lu, Jianfeng Lu, Lexing Ying
Specifically, we propose a new continuum limit of deep residual networks, which enjoys a good landscape in the sense that every local minimizer is global.
no code implementations • ICLR Workshop DeepDiffEq 2019 • Yiping Lu, Chao Ma, Yulong Lu, Jianfeng Lu, Lexing Ying
Specifically, we propose a new continuum limit of deep residual networks, which enjoys a good landscape in the sense that every local minimizer is global.
no code implementations • 23 May 2019 • Yulong Lu, Jianfeng Lu, James Nolen
A fundamental problem in Bayesian inference and statistical machine learning is to efficiently sample from multimodal distributions.
no code implementations • 2 Feb 2019 • Yuanyuan Feng, Tingran Gao, Lei Li, Jian-Guo Liu, Yulong Lu
Diffusion approximation provides a weak approximation of stochastic gradient descent algorithms over a finite time horizon.
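To illustrate the flavor of this correspondence (a toy sketch, not the paper's analysis): SGD on the quadratic $f(x) = x^2/2$ with additive Gaussian gradient noise is exactly the Euler-Maruyama discretization, with step $dt = \eta$, of the Ornstein-Uhlenbeck SDE $dX = -X\,dt + \sqrt{\eta}\,\sigma\,dW$, so the long-run variance of the iterates is close to the SDE's stationary variance $\eta\sigma^2/2$. All constants below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

eta, sigma = 0.05, 1.0   # learning rate and gradient-noise scale (illustrative)
x, samples = 0.0, []
for k in range(200_000):
    grad = x + sigma * rng.standard_normal()  # noisy gradient of f(x) = x^2 / 2
    x -= eta * grad                           # SGD step == Euler-Maruyama with dt = eta
    if k > 10_000:                            # discard burn-in before sampling
        samples.append(x)

empirical_var = np.var(samples)
sde_var = eta * sigma**2 / 2  # stationary variance of dX = -X dt + sqrt(eta)*sigma dW
```

The exact stationary variance of the discrete chain is $\eta\sigma^2/(2-\eta)$, which matches $\eta\sigma^2/2$ up to $O(\eta)$, consistent with a weak approximation over finite horizons.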
no code implementations • 10 May 2018 • Jianfeng Lu, Yulong Lu, James Nolen
We study an interacting particle system in $\mathbf{R}^d$ motivated by Stein variational gradient descent [Q. Liu and D. Wang, NIPS 2016], a deterministic algorithm for sampling from a given probability density with unknown normalization.
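The SVGD update driving this interacting particle system can be sketched in a few lines; the RBF kernel and fixed bandwidth below are illustrative choices (practical implementations often set the bandwidth by a median heuristic).

```python
import numpy as np

def rbf_kernel(x, h):
    """Gaussian kernel matrix k[i, j] = exp(-||x_i - x_j||^2 / h) and its
    gradient grad_k[i, j] = d/dx_i k(x_i, x_j), for particles x of shape (n, d)."""
    diff = x[:, None, :] - x[None, :, :]
    k = np.exp(-np.sum(diff ** 2, axis=-1) / h)
    grad_k = -2.0 / h * diff * k[:, :, None]
    return k, grad_k

def svgd_step(x, grad_log_p, step=0.1, h=1.0):
    """One Stein variational gradient descent update on particles x (n, d):
    phi_i = (1/n) sum_j [ k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]."""
    k, grad_k = rbf_kernel(x, h)
    # first term pulls particles toward high density, second term repels them apart
    phi = (k @ grad_log_p(x) + grad_k.sum(axis=0)) / x.shape[0]
    return x + step * phi
```

For a standard Gaussian target, `grad_log_p = lambda z: -z`; iterating `svgd_step` transports a particle cloud toward the target while the kernel-gradient term keeps the particles spread out.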