no code implementations • ICML 2020 • Yiping Lu, Chao Ma, Yulong Lu, Jianfeng Lu, Lexing Ying
Specifically, we propose a new continuum limit of deep residual networks, which enjoys a good landscape in the sense that every local minimizer is global.
no code implementations • 29 Apr 2024 • Kaizhao Liu, Jose Blanchet, Lexing Ying, Yiping Lu
Bootstrap is a popular methodology for simulating input uncertainty.
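As context for the entry above, the nonparametric bootstrap it builds on can be sketched in a few lines. The function name and toy data below are illustrative only, not taken from the paper:

```python
import numpy as np

def bootstrap_se(data, statistic, n_resamples=2000, seed=0):
    """Estimate the standard error of `statistic` by resampling the
    data with replacement (the nonparametric bootstrap)."""
    rng = np.random.default_rng(seed)
    n = len(data)
    replicates = np.array([
        statistic(data[rng.integers(0, n, size=n)])
        for _ in range(n_resamples)
    ])
    return replicates.std(ddof=1)

# Toy usage: uncertainty of the sample mean of 100 standard normals;
# the bootstrap estimate should be near the analytic value 1/sqrt(100) = 0.1.
rng = np.random.default_rng(1)
sample = rng.normal(size=100)
se = bootstrap_se(sample, np.mean)
```

Each resample simulates a draw of the input distribution, which is the "input uncertainty" the entry refers to.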
no code implementations • 14 Mar 2024 • Yihang Chen, Fanghui Liu, Yiping Lu, Grigorios G. Chrysos, Volkan Cevher
To derive the generalization bounds under this setting, our analysis necessitates a shift from the conventional time-invariant Gram matrix employed in the lazy training regime to a time-variant, distribution-dependent version.
no code implementations • 10 Dec 2023 • Yinuo Ren, Yiping Lu, Lexing Ying, Grant M. Rotskoff
Inferring a diffusion equation from discretely observed measurements is a statistical challenge of significant importance in a variety of fields, from single-molecule tracking in biophysical systems to modeling financial instruments.
no code implementations • 25 May 2023 • Jose Blanchet, Haoxuan Chen, Yiping Lu, Lexing Ying
We demonstrate that this kind of quadrature rule can improve the Monte Carlo rate and achieve the minimax optimal rate under a sufficient smoothness assumption.
no code implementations • 28 Nov 2022 • Yiping Lu, Jiajin Li, Lexing Ying, Jose Blanchet
The optimal design of experiments typically involves solving an NP-hard combinatorial optimization problem.
no code implementations • 28 Sep 2022 • Jikai Jin, Yiping Lu, Jose Blanchet, Lexing Ying
Learning mappings between infinite-dimensional function spaces has achieved empirical success in many disciplines of machine learning, including generative modeling, functional data analysis, causal inference, and multi-agent reinforcement learning.
no code implementations • 19 Sep 2022 • Yiping Lu, Wenlong Ji, Zachary Izzo, Lexing Ying
In this paper, we propose importance tempering to improve the decision boundary and achieve consistently better results for overparameterized models.
no code implementations • 9 Jun 2022 • Huishuai Zhang, Da Yu, Yiping Lu, Di He
Adversarial examples, which are usually generated for specific inputs with a specific model, are ubiquitous for neural networks.
no code implementations • 15 May 2022 • Yiping Lu, Jose Blanchet, Lexing Ying
In this paper, we study the statistical limits, in terms of Sobolev norms, of gradient descent for solving an inverse problem from randomly sampled noisy observations using a general class of objective functions.
no code implementations • ICLR 2022 • Yiping Lu, Haoxuan Chen, Jianfeng Lu, Lexing Ying, Jose Blanchet
In this paper, we study the statistical limits of deep learning techniques for solving elliptic partial differential equations (PDEs) from random samples using the Deep Ritz Method (DRM) and Physics-Informed Neural Networks (PINNs).
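For orientation, the two objectives named in this entry can be written down for a model elliptic problem. The formulation below is the standard one for $-\Delta u = f$ on a domain $\Omega$ with zero boundary data and uniformly sampled collocation points $X_i$; the paper's precise setting may differ:

```latex
% Deep Ritz: minimize the empirical variational energy of u_\theta
\mathcal{E}_{\mathrm{DRM}}(u_\theta)
  = \frac{|\Omega|}{n}\sum_{i=1}^{n}
    \Big(\tfrac{1}{2}\,\|\nabla u_\theta(X_i)\|^2 - f(X_i)\,u_\theta(X_i)\Big),
% PINN: minimize the empirical squared PDE residual
\qquad
\mathcal{E}_{\mathrm{PINN}}(u_\theta)
  = \frac{|\Omega|}{n}\sum_{i=1}^{n}
    \big(\Delta u_\theta(X_i) + f(X_i)\big)^2 .
```

Both are Monte Carlo estimates of integral functionals whose minimizer is the PDE solution, which is what makes a statistical (sample-complexity) analysis natural.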
no code implementations • ICLR 2022 • Wenlong Ji, Yiping Lu, Yiliang Zhang, Zhun Deng, Weijie J. Su
We prove that gradient flow on this model converges to critical points of a minimum-norm separation problem exhibiting neural collapse in its global minimizer.
no code implementations • NeurIPS Workshop DLDE 2021 • Yiping Lu, Haoxuan Chen, Jianfeng Lu, Lexing Ying, Jose Blanchet
In this paper, we study the statistical limits of deep learning techniques for solving elliptic partial differential equations (PDEs) from random samples using the Deep Ritz Method (DRM) and Physics-Informed Neural Networks (PINNs).
no code implementations • NeurIPS 2021 • Wenlong Ji, Yiping Lu, Yiliang Zhang, Zhun Deng, Weijie J Su
In this paper, we derive a landscape analysis of the surrogate model to study the inductive bias of the neural features and parameters of neural networks trained with the cross-entropy loss.
no code implementations • 11 Mar 2020 • Yiping Lu, Chao Ma, Yulong Lu, Jianfeng Lu, Lexing Ying
Specifically, we propose a new continuum limit of deep residual networks, which enjoys a good landscape in the sense that every local minimizer is global.
no code implementations • ICLR Workshop DeepDiffEq 2019 • Yiping Lu, Chao Ma, Yulong Lu, Jianfeng Lu, Lexing Ying
Specifically, we propose a new continuum limit of deep residual networks, which enjoys a good landscape in the sense that every local minimizer is global.
1 code implementation • 2 Oct 2019 • Bin Dong, Jikai Hou, Yiping Lu, Zhihua Zhang
Assuming that the teacher network is overparameterized, we argue that the teacher network is essentially harvesting dark knowledge from the data via early stopping.
no code implementations • 25 Sep 2019 • Bin Dong, Jikai Hou, Yiping Lu, Zhihua Zhang
Assuming that the teacher network is overparameterized, we argue that the teacher network is essentially harvesting dark knowledge from the data via early stopping.
2 code implementations • ICLR 2020 • Yiping Lu, Zhuohan Li, Di He, Zhiqing Sun, Bin Dong, Tao Qin, Li-Wei Wang, Tie-Yan Liu
In this paper, we provide a novel perspective towards understanding the architecture: we show that the Transformer can be mathematically interpreted as a numerical Ordinary Differential Equation (ODE) solver for a convection-diffusion equation in a multi-particle dynamic system.
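The correspondence this entry describes can be made schematic. The notation below is ours, not lifted from the paper: particles $x_i$ (token representations) evolve under an interaction (convection) term $F$ and a per-particle (diffusion) term $G$:

```latex
% Multi-particle convection-diffusion dynamics
\frac{d x_i(t)}{dt}
  = F\big(x_i(t), [x_j(t)]_{j=1}^{n}, t\big) + G\big(x_i(t), t\big).
% A Lie--Trotter splitting step of size 1 applies the two terms in turn:
\tilde{x}_i = x_i + F\big(x_i, [x_j]_{j=1}^{n}, t\big),
\qquad
x_i' = \tilde{x}_i + G(\tilde{x}_i, t),
```

which matches the structure of a Transformer layer with $F$ playing the role of self-attention and $G$ the position-wise feed-forward sub-layer (residual connections are the "$x + \cdots$" updates).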
2 code implementations • NeurIPS 2019 • Dinghuai Zhang, Tianyuan Zhang, Yiping Lu, Zhanxing Zhu, Bin Dong
Adversarial training, typically formulated as a robust optimization problem, is an effective way of improving the robustness of deep networks.
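As background, the robust-optimization formulation mentioned here is $\min_w \mathbb{E}\,[\max_{\|\delta\|_\infty \le \epsilon} L(w; x+\delta, y)]$. The sketch below instantiates it for a linear logistic model, where the inner maximization has a closed form; the data and names are illustrative and say nothing about the paper's own (acceleration-focused) method:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train(X, y, eps=0.1, lr=0.1, epochs=200, seed=0):
    """min_w E[ max_{||delta||_inf <= eps} log(1 + exp(-y w.(x+delta))) ].
    For a linear model the inner max is exact: delta = -eps*y*sign(w)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=X.shape[1])
    for _ in range(epochs):
        # Worst-case perturbation per sample (closed form for linear models).
        X_adv = X - eps * y[:, None] * np.sign(w)[None, :]
        # Gradient of the logistic loss at the perturbed inputs.
        p = sigmoid(-y * (X_adv @ w))
        grad = -(X_adv * (p * y)[:, None]).mean(axis=0)
        w -= lr * grad
    return w

# Toy usage: well-separated two-class data with margin larger than eps.
rng = np.random.default_rng(1)
y = rng.choice([-1.0, 1.0], size=200)
X = y[:, None] * np.array([2.0, 0.0]) + 0.3 * rng.normal(size=(200, 2))
w = adversarial_train(X, y, eps=0.5)
acc = np.mean(np.sign(X @ w) == y)
```

For deep networks the inner maximization has no closed form and is approximated iteratively (e.g. by projected gradient ascent), which is exactly the per-step cost the paper's method aims to reduce.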
no code implementations • 28 Jan 2019 • Bin Dong, Haocheng Ju, Yiping Lu, Zuoqiang Shi
For that, we introduce a new regularization by combining the low dimension manifold regularization with a higher order Curvature Regularization, and we call this new regularization CURE for short.
2 code implementations • 30 Nov 2018 • Zichao Long, Yiping Lu, Bin Dong
Numerical experiments show that PDE-Net 2.0 has the potential to uncover the hidden PDE of the observed dynamics, and predict the dynamical behavior for a relatively long time, even in a noisy environment.
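To illustrate the underlying idea of uncovering a hidden PDE from snapshots, the stripped-down sketch below recovers the coefficients of a noiseless advection-diffusion equation by least squares on finite-difference features. PDE-Net 2.0 itself uses trainable convolution filters and a symbolic network; none of that is reproduced here:

```python
import numpy as np

# Simulate u_t = a*u_x + b*u_xx on a periodic grid with forward Euler.
a_true, b_true = -1.0, 0.1
nx, nt = 128, 400
dx, dt = 2 * np.pi / nx, 1e-3
u = np.exp(np.sin(np.arange(nx) * dx))
snaps = [u.copy()]
for _ in range(nt):
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u = u + dt * (a_true * ux + b_true * uxx)
    snaps.append(u.copy())
U = np.array(snaps)                      # shape (nt + 1, nx)

# Regress the time derivative on spatial-derivative features.
V = U[:-1]
ux = (np.roll(V, -1, axis=1) - np.roll(V, 1, axis=1)) / (2 * dx)
uxx = (np.roll(V, -1, axis=1) - 2 * V + np.roll(V, 1, axis=1)) / dx**2
ut = (U[1:] - U[:-1]) / dt
A = np.column_stack([ux.ravel(), uxx.ravel()])
coef, *_ = np.linalg.lstsq(A, ut.ravel(), rcond=None)
a_hat, b_hat = coef
```

With noiseless data and a known feature dictionary the regression is exact; PDE-Net's contribution is handling the realistic case where the filters and the symbolic form of the PDE must themselves be learned.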
no code implementations • ICLR 2019 • Xiaoshuai Zhang, Yiping Lu, Jiaying Liu, Bin Dong
In this paper, we propose a new control framework, called moving endpoint control, to restore images corrupted at different degradation levels with a single model.
no code implementations • ICML 2018 • Yiping Lu, Aoxiao Zhong, Quanzheng Li, Bin Dong
We show that many effective networks, such as ResNet, PolyNet, FractalNet and RevNet, can be interpreted as different numerical discretizations of differential equations.
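The simplest instance of this interpretation is that a stack of residual blocks $x_{n+1} = x_n + h\,f(x_n)$ is exactly the forward-Euler discretization of the ODE $\dot{x} = f(x)$. The vector field and numbers below are a toy illustration, not from the paper:

```python
import numpy as np

def f(x):
    """A fixed 'layer' vector field: the generator of a planar rotation."""
    W = np.array([[0.0, -1.0], [1.0, 0.0]])
    return W @ x

def resnet_forward(x0, depth, h):
    """Residual blocks x_{n+1} = x_n + h*f(x_n): forward Euler for dx/dt = f(x)."""
    x = x0.copy()
    for _ in range(depth):
        x = x + h * f(x)
    return x

# With depth * h = T fixed, deepening the network refines the Euler mesh,
# so the output approaches the time-T flow of the ODE (rotation by angle T).
T = 1.0
out = resnet_forward(np.array([1.0, 0.0]), depth=1000, h=T / 1000)
exact = np.array([np.cos(T), np.sin(T)])
```

Other architectures then correspond to other schemes, e.g. PolyNet to a backward-Euler approximation and FractalNet to a Runge-Kutta-like composition, which is the dictionary the paper develops.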
5 code implementations • ICML 2018 • Zichao Long, Yiping Lu, Xianzhong Ma, Bin Dong
In this paper, we present an initial attempt to learn evolution PDEs from data.