1 code implementation • 2 Mar 2023 • Amirhossein Reisizadeh, Haochuan Li, Subhro Das, Ali Jadbabaie
This is in clear contrast to the well-established assumption in folklore non-convex optimization, a.k.a. $L$-smoothness (Lipschitz-continuous gradients).
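A minimal numerical sketch of the distinction (toy objective and step size chosen here, not from the paper): under classical $L$-smoothness a finite-difference estimate of the local smoothness constant stays bounded along the trajectory, whereas for an objective like the one below it grows with the gradient norm.

```python
import numpy as np

# Toy objective f(x) = cosh(x): its curvature f''(x) = cosh(x) grows with the
# gradient magnitude |f'(x)| = |sinh(x)|, so it violates global L-smoothness
# while satisfying a generalized (gradient-dependent) smoothness condition.
def grad(x):
    return np.sinh(x)

x, lr = 3.0, 0.01
for step in range(5):
    g = grad(x)
    x_next = x - lr * g
    # Finite-difference estimate of the local smoothness constant L(x).
    L_local = abs(grad(x_next) - g) / abs(x_next - x)
    print(f"step {step}: |grad| = {abs(g):7.3f}, local L = {L_local:7.3f}")
    x = x_next
```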
no code implementations • 17 Oct 2022 • Pouria Mahdavinia, Yuyang Deng, Haochuan Li, Mehrdad Mahdavi
Despite the established convergence theory of the Optimistic Gradient Descent Ascent (OGDA) and Extragradient (EG) methods for convex-concave minimax problems, little is known about the theoretical guarantees of these methods in nonconvex settings.
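For reference, a minimal sketch of the EG update the abstract refers to (the bilinear objective $f(x, y) = xy$ and the step size are illustrative choices, not taken from the paper):

```python
import numpy as np

# Toy bilinear minimax problem: min_x max_y f(x, y) = x * y.
# Plain gradient descent ascent spirals outward here; EG's lookahead step converges.
def grad_x(x, y): return y   # df/dx
def grad_y(x, y): return x   # df/dy

x, y, eta = 1.0, 1.0, 0.1
for _ in range(2000):
    # Extragradient: first extrapolate to a lookahead point...
    x_half = x - eta * grad_x(x, y)
    y_half = y + eta * grad_y(x, y)
    # ...then update the actual iterate with the lookahead gradients.
    x, y = x - eta * grad_x(x_half, y_half), y + eta * grad_y(x_half, y_half)

print(f"EG iterate after 2000 steps: ({x:.2e}, {y:.2e})")  # -> near the saddle (0, 0)
```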
no code implementations • 3 Jul 2022 • Haochuan Li, Farzan Farnia, Subhro Das, Ali Jadbabaie
In this paper, we aim to bridge this gap by analyzing the \emph{local convergence} of general \emph{nonconvex-nonconcave} minimax problems.
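One way to probe local convergence empirically, sketched below with a made-up nonconvex-nonconcave toy objective (not an example from the paper): perturb a local min-max point slightly and check whether gradient descent ascent returns to it.

```python
import numpy as np

# f(x, y) = x**2 - x**4 - y**2 + y**4 is nonconvex in x and nonconcave in y
# away from the origin, but behaves like x**2 - y**2 near its critical point (0, 0).
def grad_x(x, y): return 2 * x - 4 * x ** 3
def grad_y(x, y): return -2 * y + 4 * y ** 3

rng = np.random.default_rng(0)
x, y = 0.1 * rng.standard_normal(2)   # small perturbation of the critical point
eta = 0.05
for _ in range(500):
    x, y = x - eta * grad_x(x, y), y + eta * grad_y(x, y)

print(f"GDA iterate: ({x:.1e}, {y:.1e})")  # returns to the origin: locally stable
```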
no code implementations • 3 Apr 2022 • Ali Jadbabaie, Haochuan Li, Jian Qian, Yi Tian
In this paper, we study a linear bandit optimization problem in a federated setting where a large collection of distributed agents collaboratively learn a common linear bandit model.
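As context, a single-agent LinUCB-style sketch of the underlying linear bandit update (a hypothetical toy instance; in a federated variant each agent would hold such statistics locally and periodically communicate them to learn the common model, but the paper's actual protocol is not reproduced here):

```python
import numpy as np

# Linear bandit: reward r = <theta_star, a> + noise. Each round, play the arm
# with the highest optimistic estimate, then update ridge statistics (A, b).
rng = np.random.default_rng(1)
d, n_arms, T, alpha = 5, 10, 2000, 1.0
theta_star = rng.standard_normal(d)
arms = rng.standard_normal((n_arms, d))

A = np.eye(d)      # ridge-regularized Gram matrix
b = np.zeros(d)    # accumulated reward-weighted features
for _ in range(T):
    theta_hat = np.linalg.solve(A, b)
    A_inv = np.linalg.inv(A)
    # Upper confidence bound per arm: estimate plus exploration bonus.
    ucb = arms @ theta_hat + alpha * np.sqrt(np.einsum("id,dk,ik->i", arms, A_inv, arms))
    a = arms[np.argmax(ucb)]
    A += np.outer(a, a)
    b += (a @ theta_star + 0.1 * rng.standard_normal()) * a

print("estimation error:", np.linalg.norm(np.linalg.solve(A, b) - theta_star))
```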
no code implementations • 12 Oct 2021 • Jingzhao Zhang, Haochuan Li, Suvrit Sra, Ali Jadbabaie
This work examines the deep disconnect between existing theoretical analyses of gradient-based algorithms and the practice of training deep neural networks.
no code implementations • NeurIPS 2021 • Haochuan Li, Yi Tian, Jingzhao Zhang, Ali Jadbabaie
We provide a first-order oracle complexity lower bound for finding stationary points of min-max optimization problems where the objective function is smooth, nonconvex in the minimization variable, and strongly concave in the maximization variable.
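To make the setting concrete, here is a sketch (toy objective and constants invented for illustration) of the object such lower bounds concern: driving $\nabla \Phi(x)$ to zero for $\Phi(x) = \max_y f(x, y)$, with every gradient evaluation counted as one first-order oracle query.

```python
import numpy as np

# f(x, y) = cos(x) + x*y - y**2: nonconvex in x, 2-strongly concave in y.
# The inner maximum is attained at y*(x) = x / 2, so
# Phi(x) = max_y f(x, y) = cos(x) + x**2 / 4 and Phi'(x) = -sin(x) + x / 2.
def grad_x(x, y): return -np.sin(x) + y
def grad_y(x, y): return x - 2 * y

x, y, eta, queries = 2.0, 0.0, 0.1, 0
for _ in range(100_000):
    gx, gy = grad_x(x, y), grad_y(x, y)   # two first-order oracle queries
    queries += 2
    x, y = x - eta * gx, y + eta * gy
    if abs(-np.sin(x) + x / 2) < 1e-4:    # |Phi'(x)|: stationarity of Phi
        break

print(f"approximate stationary point of Phi at x = {x:.4f} after {queries} queries")
```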
no code implementations • 18 Jan 2021 • Haochuan Li, Lawrence M. Widrow
We develop a novel method to simultaneously determine the vertical potential, force, and stellar $z$-$v_z$ phase-space distribution function (DF) in our local patch of the Galaxy.
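For orientation only (a textbook isothermal special case, not the paper's method; units and constants below are made up): in one-dimensional vertical motion the DF ties the potential to the observed $z$-$v_z$ density through the vertical energy $E_z = v_z^2/2 + \Phi(z)$.

```python
import numpy as np

# Isothermal toy model: harmonic vertical potential Phi(z) = 0.5 * nu**2 * z**2
# and DF f(E_z) proportional to exp(-E_z / sigma**2), E_z = v_z**2 / 2 + Phi(z).
nu, sigma = 0.07, 20.0   # illustrative values (roughly km/s/pc and km/s)

def phi(z):
    return 0.5 * nu ** 2 * z ** 2

def df(z, vz):
    return np.exp(-(0.5 * vz ** 2 + phi(z)) / sigma ** 2)

# Predicted stellar density on a grid in the z-v_z plane.
z = np.linspace(-400.0, 400.0, 81)    # pc
vz = np.linspace(-60.0, 60.0, 61)     # km/s
density = df(z[:, None], vz[None, :])
print("density peaks at z = v_z = 0:", density.max() == density[40, 30])
```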
Astrophysics of Galaxies
no code implementations • NeurIPS 2019 • Ruiqi Gao, Tianle Cai, Haochuan Li, Li-Wei Wang, Cho-Jui Hsieh, Jason D. Lee
Neural networks are vulnerable to adversarial examples, i.e., inputs that are imperceptibly perturbed from natural data and yet incorrectly classified by the network.
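A standard illustration of such perturbations is the fast gradient sign method, a common baseline rather than this paper's construction; the tiny model and epsilon below are toy choices.

```python
import numpy as np

# Logistic-regression "network" on a 2D input, attacked with FGSM:
# x_adv = x + eps * sign(grad_x loss(x, y_true)).
w, b = np.array([2.0, -1.0]), 0.1

def predict(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))   # P(label = 1)

x, y_true, eps = np.array([0.5, 0.2]), 1, 0.3
# Gradient of the cross-entropy loss w.r.t. the input is (p - y_true) * w.
grad_input = (predict(x) - y_true) * w
x_adv = x + eps * np.sign(grad_input)  # nudge each coordinate to raise the loss

print(f"clean P(y=1) = {predict(x):.3f}, adversarial P(y=1) = {predict(x_adv):.3f}")
```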
no code implementations • 9 Nov 2018 • Simon S. Du, Jason D. Lee, Haochuan Li, Li-Wei Wang, Xiyu Zhai
Gradient descent finds a global minimum in training deep neural networks despite the objective function being non-convex.
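A small empirical sketch of that phenomenon (width, data, and learning rate are ad-hoc choices for a runnable demo): full-batch gradient descent on a wide two-layer ReLU network with a fixed second layer drives the non-convex training loss to near zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 10, 20, 512                           # samples, input dim, hidden width
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
W = rng.standard_normal((m, d)) / np.sqrt(d)    # trained first layer
a = rng.choice([-1.0, 1.0], m) / np.sqrt(m)     # fixed second layer

lr = 0.02
for step in range(3001):
    H = np.maximum(X @ W.T, 0.0)                # ReLU features, shape (n, m)
    pred = H @ a
    if step % 1000 == 0:
        print(f"step {step}: loss = {0.5 * np.sum((pred - y) ** 2):.6f}")
    # Gradient of 0.5 * sum((pred - y)**2) w.r.t. W, backpropagated through ReLU.
    delta = (pred - y)[:, None] * (H > 0) * a[None, :]   # shape (n, m)
    W -= lr * delta.T @ X
```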
no code implementations • 2 Apr 2017 • Kun He, Jingbo Wang, Haochuan Li, Yao Shu, Mengxiao Zhang, Man Zhu, Li-Wei Wang, John E. Hopcroft
Toward a deeper understanding of the inner workings of deep neural networks, we investigate CNNs (convolutional neural networks) using DCNs (deconvolutional networks) and a randomization technique, and gain new insights into the intrinsic properties of this network architecture.
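A schematic one-layer sketch of the deconvolution idea (simplified; filter values and signal are invented, and this is not the paper's exact procedure): run a conv + ReLU forward pass, keep only the strongest activation, and map it back to input space with the flipped filter.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(32)
kernel = np.array([1.0, -2.0, 1.0])   # stand-in for a learned filter

# Forward pass of a one-layer toy "CNN": convolution followed by ReLU.
activation = np.maximum(np.convolve(signal, kernel, mode="valid"), 0.0)

projected = np.zeros_like(activation)
projected[np.argmax(activation)] = activation.max()   # isolate one feature

# Deconvnet-style reversal: "full" convolution with the reversed (transposed)
# kernel restores the input length and localizes the feature in input space.
reconstruction = np.convolve(projected, kernel[::-1], mode="full")
print("input positions activated:", np.flatnonzero(reconstruction))

# Randomization probe in the same spirit: swap `kernel` for random weights and
# compare reconstructions to see what the architecture alone contributes.
```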