no code implementations • 21 Oct 2023 • Michael I. Jordan, Tianyi Lin, Zhengyuan Zhou
Online gradient descent (OGD) is well known to be doubly optimal under strong convexity or monotonicity assumptions: (1) in the single-agent setting, it achieves an optimal regret of $\Theta(\log T)$ for strongly convex cost functions; and (2) in the multi-agent setting of strongly monotone games, with each agent employing OGD, we obtain last-iterate convergence of the joint action to a unique Nash equilibrium at an optimal rate of $\Theta(\frac{1}{T})$.
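To make the single-agent claim concrete, below is a minimal sketch of OGD with the classical step size $\eta_t = 1/(\mu t)$, the schedule under which the $O(\log T)$ regret bound for $\mu$-strongly convex losses is attained (the function names and toy losses here are illustrative, not from the paper):

```python
import numpy as np

def ogd_strongly_convex(grad, x0, mu, T, project=lambda x: x):
    """Online gradient descent with step size eta_t = 1/(mu*t), the
    classical schedule behind the O(log T) regret bound for mu-strongly
    convex losses."""
    x = np.asarray(x0, dtype=float)
    iterates = [x.copy()]
    for t in range(1, T + 1):
        x = project(x - grad(x, t) / (mu * t))  # eta_t = 1/(mu*t)
        iterates.append(x.copy())
    return iterates

# Toy usage: losses f_t(x) = 0.5*mu*||x - z_t||^2 with drifting targets z_t.
mu = 1.0
zs = [np.array([np.sin(t / 10.0)]) for t in range(1, 101)]
xs = ogd_strongly_convex(lambda x, t: mu * (x - zs[t - 1]),
                         x0=np.zeros(1), mu=mu, T=100)
```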
no code implementations • 21 Oct 2023 • Tianyi Lin, Marco Cuturi, Michael I. Jordan
Kernel-based optimal transport (OT) estimators offer an alternative, functional estimation procedure to address OT problems from samples.
no code implementations • 29 Jun 2023 • Yang Cai, Michael I. Jordan, Tianyi Lin, Argyris Oikonomou, Emmanouil-Vasileios Vlatakis-Gkaragkounis
Numerous applications in machine learning and data analytics can be formulated as equilibrium computation over Riemannian manifolds.
no code implementations • 16 Feb 2023 • Michael I. Jordan, Guy Kornowski, Tianyi Lin, Ohad Shamir, Manolis Zampetakis
In particular, we prove a lower bound of $\Omega(d)$ for any deterministic algorithm.
no code implementations • 23 Oct 2022 • Tianyi Lin, Panayotis Mertikopoulos, Michael I. Jordan
Specifically, we show that the proposed methods generate iterates that remain within a bounded set and that the averaged iterates converge to an $\epsilon$-saddle point within $O(\epsilon^{-2/3})$ iterations in terms of a restricted gap function.
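The paper's methods are second-order, but the underlying template of an extrapolation (leading) step followed by iterate averaging can be illustrated with a plain first-order extragradient sketch on a monotone operator (a simplified stand-in that does not attain the $O(\epsilon^{-2/3})$ rate; all names are illustrative):

```python
import numpy as np

def extragradient_avg(F, z0, eta, T):
    """First-order extragradient with averaging for a monotone operator F,
    e.g. F(z) = (grad_x f(x, y), -grad_y f(x, y)) for a saddle-point
    problem. Illustrates the extrapolation-plus-averaging template only."""
    z = np.asarray(z0, dtype=float)
    z_sum = np.zeros_like(z)
    for _ in range(T):
        z_lead = z - eta * F(z)      # extrapolation (leading) step
        z = z - eta * F(z_lead)      # update using the leading point
        z_sum += z_lead
    return z_sum / T                 # averaged iterate

# Toy usage: bilinear saddle point f(x, y) = x*y, so F(z) = (y, -x).
F = lambda z: np.array([z[1], -z[0]])
z_bar = extragradient_avg(F, z0=np.array([1.0, 1.0]), eta=0.5, T=2000)
```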
no code implementations • 12 Sep 2022 • Tianyi Lin, Zeyu Zheng, Michael I. Jordan
Nonsmooth nonconvex optimization problems arise broadly in machine learning and business decision making, yet two core challenges impede the development of efficient solution methods with finite-time convergence guarantees: the lack of a computationally tractable optimality criterion and the lack of computationally powerful oracles.
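As a sketch of the kind of oracle such gradient-free methods rely on, here is the standard two-point randomized-smoothing gradient estimator, which queries only function values (a generic construction, not the paper's exact scheme):

```python
import numpy as np

def two_point_gradient_estimator(f, x, delta, rng):
    """Two-point estimator of a gradient of the randomized smoothing of f
    at scale delta: it queries only function values, so it applies even
    when f is nonsmooth and nonconvex."""
    w = rng.standard_normal(x.shape)
    w /= np.linalg.norm(w)  # uniform random direction on the sphere
    return (x.size / (2.0 * delta)) * (f(x + delta * w) - f(x - delta * w)) * w

# Toy usage: a nonsmooth nonconvex objective f(x) = ||x||_1 - ||x||_2.
rng = np.random.default_rng(0)
f = lambda x: np.abs(x).sum() - np.linalg.norm(x)
g = two_point_gradient_estimator(f, x=np.ones(5), delta=1e-3, rng=rng)
```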
3 code implementations • 4 Jul 2022 • Leonid Boytsov, David Akinpelu, Tianyi Lin, Fangwei Gao, Yutian Zhao, Jeffrey Huang, Eric Nyberg
Most other models had poor zero-shot performance (sometimes at a random-baseline level) but outstripped MaxP by as much as 13-28% after fine-tuning.
no code implementations • 4 Jun 2022 • Michael I. Jordan, Tianyi Lin, Emmanouil-Vasileios Vlatakis-Gkaragkounis
From optimal transport to robust dimensionality reduction, a plethora of machine learning applications can be cast into the min-max optimization problems over Riemannian manifolds.
no code implementations • 15 May 2022 • Tianyi Lin, Aldo Pacchiano, Yaodong Yu, Michael I. Jordan
Motivated by applications to online learning in sparse estimation and Bayesian optimization, we consider the problem of online unconstrained nonsubmodular minimization with delayed costs in both full information and bandit feedback settings.
no code implementations • 6 May 2022 • Tianyi Lin, Michael I. Jordan
Our method with restarting attains a linear rate for smooth and uniformly monotone VIs and a local superlinear rate for smooth and strongly monotone VIs.
no code implementations • 7 Apr 2022 • Michael I. Jordan, Tianyi Lin, Manolis Zampetakis
We consider the problem of computing an equilibrium in a class of \textit{nonlinear generalized Nash equilibrium problems (NGNEPs)}, in which the strategy set of each player is defined by equality and inequality constraints that may depend on the choices of rival players.
1 code implementation • 6 Dec 2021 • Wenjia Ba, Tianyi Lin, Jiawei Zhang, Zhengyuan Zhou
Leveraging self-concordant barrier functions, we first construct a new bandit learning algorithm and show that it achieves the single-agent optimal regret of $\tilde{\Theta}(n\sqrt{T})$ under smooth and strongly concave reward functions ($n \geq 1$ is the problem dimension).
no code implementations • 27 Apr 2021 • Yaodong Yu, Tianyi Lin, Eric Mazumdar, Michael I. Jordan
Distributionally robust supervised learning (DRSL) is emerging as a key paradigm for building reliable machine learning systems for real-world applications -- reflecting the need for classifiers and predictive models that are robust to the distribution shifts that arise from phenomena such as selection bias or nonstationarity.
no code implementations • 24 Mar 2021 • Wenshuo Guo, Michael I. Jordan, Tianyi Lin
Bayesian regression games are a special class of two-player general-sum Bayesian games in which the learner is partially informed about the adversary's objective through a Bayesian prior.
no code implementations • 22 Jun 2020 • Tianyi Lin, Zeyu Zheng, Elynn Y. Chen, Marco Cuturi, Michael I. Jordan
Yet, the behavior of minimum Wasserstein estimators is poorly understood, notably in high-dimensional regimes or under model misspecification.
no code implementations • NeurIPS 2020 • Tianyi Lin, Chenyou Fan, Nhat Ho, Marco Cuturi, Michael I. Jordan
Projection robust Wasserstein (PRW) distance, or Wasserstein projection pursuit (WPP), is a robust variant of the Wasserstein distance.
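For intuition, PRW with a one-dimensional projection amounts to maximizing the projected 1-D Wasserstein distance over directions; the crude random-search sketch below illustrates that $k = 1$ case only (the paper instead optimizes the projection over the Stiefel manifold with Riemannian methods; all names here are illustrative):

```python
import numpy as np

def w2_1d(u, v):
    """Squared 2-Wasserstein distance between two equal-size 1-D samples:
    sort both and match order statistics."""
    return np.mean((np.sort(u) - np.sort(v)) ** 2)

def prw_k1_random_search(X, Y, n_dirs=500, seed=0):
    """Crude Monte Carlo lower bound on the k = 1 PRW distance: maximize
    the projected 1-D distance over random unit directions."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(n_dirs):
        theta = rng.standard_normal(X.shape[1])
        theta /= np.linalg.norm(theta)
        best = max(best, w2_1d(X @ theta, Y @ theta))
    return best

# Toy usage: two Gaussian samples that differ along a single coordinate.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
Y = rng.standard_normal((200, 10))
Y[:, 0] += 2.0
print(prw_k1_random_search(X, Y))
```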
no code implementations • ICML 2020 • Tianyi Lin, Zhengyuan Zhou, Panayotis Mertikopoulos, Michael I. Jordan
In this paper, we consider multi-agent learning via online gradient descent in a class of games called $\lambda$-cocoercive games, a fairly broad class of games that admits many Nash equilibria and that properly includes unconstrained strongly monotone games.
no code implementations • NeurIPS 2020 • Tianyi Lin, Nhat Ho, Xi Chen, Marco Cuturi, Michael I. Jordan
We study the fixed-support Wasserstein barycenter problem (FS-WBP), which consists of computing the Wasserstein barycenter of $m$ discrete probability measures supported on a finite metric space of size $n$.
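A classical entropic-regularized baseline for FS-WBP is the iterative Bregman projection (IBP) scheme of Benamou et al.; the sketch below implements that standard baseline, not the paper's accelerated variant (toy grid and bump measures are illustrative):

```python
import numpy as np

def ibp_barycenter(A, M, reg=0.02, weights=None, n_iter=500):
    """Entropic-regularized FS-WBP via iterative Bregman projections.
    A: (m, n) array of m measures on n shared support points;
    M: (n, n) ground cost matrix; weights: barycentric weights."""
    m, n = A.shape
    w = np.full(m, 1.0 / m) if weights is None else weights
    K = np.exp(-M / reg)
    V = np.ones((m, n))
    b = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        U = A / (V @ K.T)            # u_i = a_i / (K v_i), stacked as rows
        KtU = U @ K                  # rows are K^T u_i
        b = np.exp(w @ np.log(KtU))  # weighted geometric mean of marginals
        V = b / KtU                  # v_i = b / (K^T u_i)
    return b

# Toy usage: barycenter of two bumps on a 1-D grid.
n = 60
x = np.linspace(0.0, 1.0, n)
M = (x[:, None] - x[None, :]) ** 2
a1 = np.exp(-((x - 0.25) ** 2) / 0.01); a1 /= a1.sum()
a2 = np.exp(-((x - 0.75) ** 2) / 0.01); a2 /= a2.sum()
bary = ibp_barycenter(np.vstack([a1, a2]), M)
```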
no code implementations • 5 Feb 2020 • Tianyi Lin, Chi Jin, Michael I. Jordan
This paper presents the first algorithm with $\tilde{O}(\sqrt{\kappa_{\mathbf x}\kappa_{\mathbf y}})$ gradient complexity, matching the lower bound up to logarithmic factors.
no code implementations • 30 Sep 2019 • Tianyi Lin, Nhat Ho, Marco Cuturi, Michael I. Jordan
This provides a first \textit{near-linear time} complexity guarantee for approximating the MOT problem and matches the best known complexity bound for the Sinkhorn algorithm in the classical OT setting when $m = 2$.
no code implementations • ICML 2020 • Tianyi Lin, Chi Jin, Michael I. Jordan
We consider nonconvex-concave minimax problems, $\min_{\mathbf{x}} \max_{\mathbf{y} \in \mathcal{Y}} f(\mathbf{x}, \mathbf{y})$, where $f$ is nonconvex in $\mathbf{x}$ but concave in $\mathbf{y}$ and $\mathcal{Y}$ is a convex and bounded set.
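A bare-bones sketch of the two-timescale gradient descent ascent template analyzed in this setting, with a hypothetical toy objective that is nonconvex in $\mathbf{x}$ and strongly concave in $\mathbf{y}$ (step sizes and the toy function are illustrative):

```python
import numpy as np

def two_timescale_gda(grad_x, grad_y, x0, y0, eta_x, eta_y, T,
                      project_y=lambda y: y):
    """Two-timescale gradient descent ascent: descend in x with a small
    step, ascend in y with a larger step, and project y back onto the
    bounded set Y."""
    x, y = np.asarray(x0, dtype=float), np.asarray(y0, dtype=float)
    for _ in range(T):
        x = x - eta_x * grad_x(x, y)
        y = project_y(y + eta_y * grad_y(x, y))
    return x, y

# Toy usage: f(x, y) = x^4/4 - x^2/2 + 0.1*x*y - y^2/2 with Y = [-1, 1]
# (a double well in x, hence nonconvex; strongly concave in y).
gx = lambda x, y: x ** 3 - x + 0.1 * y
gy = lambda x, y: 0.1 * x - y
x, y = two_timescale_gda(gx, gy, x0=2.0, y0=0.0, eta_x=0.01, eta_y=0.1,
                         T=5000, project_y=lambda y: np.clip(y, -1.0, 1.0))
```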
no code implementations • 1 Jun 2019 • Tianyi Lin, Nhat Ho, Michael I. Jordan
We prove that APDAMD achieves a complexity bound of $\widetilde{O}(n^2\sqrt{\delta}\varepsilon^{-1})$, where $\delta>0$ quantifies the regularity of $\phi$.
no code implementations • 16 Apr 2019 • Nhat Ho, Tianyi Lin, Michael I. Jordan
We also conduct experiments on real datasets and the numerical results demonstrate the effectiveness of our algorithms.
no code implementations • 19 Jan 2019 • Tianyi Lin, Nhat Ho, Michael I. Jordan
We show that a greedy variant of the classical Sinkhorn algorithm, known as the \emph{Greenkhorn algorithm}, achieves a complexity bound of $\widetilde{\mathcal{O}}(n^2\varepsilon^{-2})$, improving on the best known bound of $\widetilde{\mathcal{O}}(n^2\varepsilon^{-3})$.
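A compact sketch of the Greenkhorn idea, rescaling only the worst-violating row or column at each step (this simplified version scores violations by absolute value, whereas the paper's analysis uses a Bregman-divergence score, and it recomputes the full plan for clarity; an efficient implementation would update only the touched row or column):

```python
import numpy as np

def greenkhorn(a, b, M, reg, n_iter=5000):
    """Greedy Sinkhorn: rescale only the single row or column whose
    marginal currently violates its target the most."""
    K = np.exp(-M / reg)
    u, v = np.ones_like(a), np.ones_like(b)
    P = u[:, None] * K * v[None, :]
    for _ in range(n_iter):
        r, c = P.sum(axis=1), P.sum(axis=0)  # current marginals
        i, j = np.argmax(np.abs(r - a)), np.argmax(np.abs(c - b))
        if abs(r[i] - a[i]) >= abs(c[j] - b[j]):
            u[i] *= a[i] / r[i]              # fix the worst row
        else:
            v[j] *= b[j] / c[j]              # fix the worst column
        P = u[:, None] * K * v[None, :]
    return P

# Toy usage: random marginals, squared-distance cost on a 1-D grid.
rng = np.random.default_rng(0)
n = 50
a = rng.random(n); a /= a.sum()
b = rng.random(n); b /= b.sum()
x = np.linspace(0.0, 1.0, n)
P = greenkhorn(a, b, (x[:, None] - x[None, :]) ** 2, reg=0.05)
```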
Data Structures and Algorithms
no code implementations • 22 Oct 2018 • Tianyi Lin, Zhiyue Hu, Xin Guo
As topic sparsity of individual documents in online social media increases, so does the difficulty of analyzing the online text sources using traditional methods.
1 code implementation • 1 Jun 2018 • Tianyi Lin, Chenyou Fan, Mengdi Wang, Michael I. Jordan
Convex composition optimization is an emerging topic that covers a wide range of applications arising from stochastic optimal control, reinforcement learning and multi-stage stochastic programming.
1 code implementation • 31 May 2018 • Tianyi Lin, Shiqian Ma, Yinyu Ye, Shuzhong Zhang
Due to its connection to Newton's method, IPM is often classified as a second-order method -- a class associated with stability and accuracy at the expense of scalability.
Optimization and Control
no code implementations • 7 Feb 2018 • Tianyi Lin, Chenyou Fan, Mengdi Wang
We consider the nonsmooth convex composition optimization problem, where the objective is a composition of two finite-sum functions, and analyze stochastic compositional variance-reduced gradient (SCVRG) methods for solving it.
no code implementations • 22 Jan 2018 • Linbo Qiao, Tianyi Lin, Qi Qin, Xicheng Lu
In this paper, we propose a stochastic Primal-Dual Hybrid Gradient (PDHG) approach for solving a wide spectrum of regularized stochastic minimization problems, where the regularization term is composite with a linear function.
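The deterministic PDHG (Chambolle-Pock) template underlying such methods can be sketched on a fused-lasso-type instance, where the regularizer is composite with a linear difference map (the paper's contribution is a stochastic variant of this template; the toy problem and names are illustrative):

```python
import numpy as np

def pdhg(b, D, lam, tau=0.25, sigma=0.25, n_iter=2000):
    """Deterministic PDHG (Chambolle-Pock) for the fused-lasso-type
    problem min_x 0.5*||x - b||^2 + lam*||D x||_1, whose regularizer is
    composite with the linear map D. Step sizes must satisfy
    tau*sigma*||D||^2 <= 1."""
    x, x_bar = np.zeros(D.shape[1]), np.zeros(D.shape[1])
    y = np.zeros(D.shape[0])
    for _ in range(n_iter):
        # dual step: prox of the conjugate of lam*||.||_1 is a clip
        y = np.clip(y + sigma * D @ x_bar, -lam, lam)
        # primal step: prox of tau*0.5*||. - b||^2 shrinks toward b
        x_new = (x - tau * D.T @ y + tau * b) / (1.0 + tau)
        x_bar = 2.0 * x_new - x  # extrapolation
        x = x_new
    return x

# Toy usage: denoise a piecewise-constant signal with a 1-D difference map.
rng = np.random.default_rng(0)
n = 100
b = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(n)
D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]  # first-difference operator
x = pdhg(b, D, lam=0.5)
```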
no code implementations • 20 Aug 2017 • Tianyi Lin, Linbo Qiao, Teng Zhang, Jiashi Feng, Bofeng Zhang
This optimization model abstracts a number of important applications in artificial intelligence and machine learning, such as fused Lasso, fused logistic regression, and a class of graph-guided regularized minimization.
no code implementations • 19 May 2017 • Xin Guo, Johnny Hong, Tianyi Lin, Nan Yang
Wasserstein Generative Adversarial Networks (WGANs) provide a versatile class of models, which have attracted great attention in various applications.
no code implementations • 9 May 2016 • Bo Jiang, Tianyi Lin, Shiqian Ma, Shuzhong Zhang
In particular, we consider constrained nonconvex optimization models in block decision variables, with or without coupled affine constraints.
no code implementations • 16 May 2015 • Tianyi Lin, Shiqian Ma, Shuzhong Zhang
The alternating direction method of multipliers (ADMM) has been successfully applied to solve structured convex optimization problems due to its superior practical performance.
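As a reference point for the structured problems ADMM targets, here is a minimal two-block ADMM sketch for the lasso (a standard textbook instance, not the paper's specific setting; the data and names are illustrative):

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """Two-block ADMM for the lasso: min 0.5*||A x - b||^2 + lam*||z||_1
    subject to x = z, alternating an x-minimization (a linear solve), a
    soft-thresholding z-step, and a dual update."""
    n = A.shape[1]
    Atb = A.T @ b
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))  # factor once
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(n_iter):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))        # x-step
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
        u = u + x - z                                            # dual step
    return z

# Toy usage: sparse recovery from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = admm_lasso(A, b, lam=0.1)
```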
no code implementations • 27 Jan 2013 • Tianyi Lin, Shiqian Ma, Shuzhong Zhang
The classical alternating direction type methods usually assume that the two convex functions have relatively easy proximal mappings.