no code implementations • ICML 2020 • Yuanyu Wan, Wei-Wei Tu, Lijun Zhang
To deal with complicated constraints via locally light computation in distributed online learning, a recent study presented a projection-free algorithm called distributed online conditional gradient (D-OCG), which achieves an $O(T^{3/4})$ regret bound, where $T$ is the number of prediction rounds.
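As a rough illustration of the projection-free idea (a sketch, not the paper's exact D-OCG algorithm), the step below replaces projection with linear optimization, which conditional-gradient methods solve cheaply; the choice of an $\ell_1$-ball domain and all function names here are illustrative assumptions.

```python
import numpy as np

def lo_l1_ball(grad, radius=1.0):
    """Linear optimization over the l1 ball: argmin_{||v||_1 <= r} <grad, v>.
    The minimizer is a signed vertex, so no projection is ever needed."""
    i = int(np.argmax(np.abs(grad)))
    v = np.zeros_like(grad, dtype=float)
    v[i] = -radius * np.sign(grad[i])
    return v

def conditional_gradient_step(x, grad, t, radius=1.0):
    """One Frank-Wolfe style update: shift the iterate toward the LO solution."""
    v = lo_l1_ball(grad, radius)
    eta = 2.0 / (t + 2)  # standard decaying step size
    return (1 - eta) * x + eta * v
```

Because the update is a convex combination of points in the ball, the iterate stays feasible without any projection.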
no code implementations • 14 Feb 2024 • Yuanyu Wan, Chang Yao, Mingli Song, Lijun Zhang
Previous studies have established a regret bound of $O(T^{3/4}+d^{1/3}T^{2/3})$ for this problem, where $d$ is the maximum delay, by simply feeding delayed loss values to the classical bandit gradient descent (BGD) algorithm.
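A minimal sketch of the "simply feed delayed loss values to BGD" scheme described above, assuming a one-point gradient estimator, a Euclidean-ball domain, and a fixed delay $d$; all names, constants, and the estimator form are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

def bgd_with_delays(loss, T=100, d=3, dim=2, delta=0.1, eta=0.05, R=1.0):
    """Bandit gradient descent that only sees each loss value d rounds late.
    One-point estimate: g_t = (dim / delta) * f(x_t + delta * u_t) * u_t."""
    x = np.zeros(dim)
    pending = []                            # (arrival_round, gradient_estimate)
    for t in range(T):
        u = rng.standard_normal(dim)
        u /= np.linalg.norm(u)              # uniform direction on the sphere
        val = loss(x + delta * u)           # only a loss value is observed
        pending.append((t + d, (dim / delta) * val * u))
        arrived = [g for (s, g) in pending if s <= t]
        pending = [(s, g) for (s, g) in pending if s > t]
        for g in arrived:                   # apply estimates whose delayed
            x = x - eta * g                 # feedback has now arrived
            n = np.linalg.norm(x)
            if n > R:                       # project back onto the ball
                x *= R / n
    return x
```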
no code implementations • 14 Feb 2024 • Yuanyu Wan, Tong Wei, Mingli Song, Lijun Zhang
Previous studies have established $O(n^{5/4}\rho^{-1/2}\sqrt{T})$ and $O(n^{3/2}\rho^{-1}\log T)$ regret bounds for convex and strongly convex functions respectively, where $n$ is the number of local learners, $\rho<1$ is the spectral gap of the communication matrix, and $T$ is the time horizon.
1 code implementation • 5 Aug 2023 • Yuwen Wang, Shunyu Liu, KaiXuan Chen, Tongtian Zhu, Ji Qiao, Mengjie Shi, Yuanyu Wan, Mingli Song
Graph Lottery Ticket (GLT), a combination of core subgraph and sparse subnetwork, has been proposed to mitigate the computational cost of deep Graph Neural Networks (GNNs) on large input graphs while preserving original performance.
no code implementations • 29 May 2023 • Yucheng Liao, Yuanyu Wan, Chang Yao, Mingli Song
We investigate the problem of online learning with monotone and continuous DR-submodular reward functions, which has received great attention recently.
no code implementations • 20 May 2023 • Yuanyu Wan, Chang Yao, Mingli Song, Lijun Zhang
Despite its simplicity, our novel analysis shows that the dynamic regret of DOGD can be automatically bounded by $O(\sqrt{\bar{d}T}(P_T+1))$ under mild assumptions, and $O(\sqrt{dT}(P_T+1))$ in the worst case, where $\bar{d}$ and $d$ denote the average and maximum delay respectively, $T$ is the time horizon, and $P_T$ is the path length of comparators.
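The DOGD idea analyzed above — apply each gradient once its delayed feedback arrives, evaluated at the point that was played when it was queried — can be sketched as follows; the per-round delays, step size, and ball domain are illustrative assumptions.

```python
import numpy as np

def dogd(grad_fn, T=100, delays=None, eta=0.1, R=1.0, dim=2):
    """Delayed online gradient descent: at round t, apply every gradient
    whose feedback (queried at round s, delayed by delays[s]) arrives now."""
    if delays is None:
        delays = [1] * T
    x = np.zeros(dim)
    history = []                            # iterate played at each round
    pending = []                            # (arrival_round, query_round)
    for t in range(T):
        history.append(x.copy())
        pending.append((t + delays[t], t))
        arrived = [s for (a, s) in pending if a <= t]
        pending = [(a, s) for (a, s) in pending if a > t]
        for s in arrived:
            x = x - eta * grad_fn(history[s], s)  # gradient at the old point
            n = np.linalg.norm(x)
            if n > R:                             # project onto the ball
                x *= R / n
    return x
```

With losses $f_t(x) = \|x - c\|^2$ and unit delays, the iterates converge to $c$ despite every gradient arriving one round late.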
no code implementations • 19 May 2023 • Yibo Wang, Wenhao Yang, Wei Jiang, Shiyin Lu, Bing Wang, Haihong Tang, Yuanyu Wan, Lijun Zhang
Specifically, we first provide a novel dynamic regret analysis for an existing projection-free method named $\text{BOGD}_\text{IP}$, and establish an $\mathcal{O}(T^{3/4}(1+P_T))$ dynamic regret bound, where $P_T$ denotes the path-length of the comparator sequence.
no code implementations • 11 Feb 2023 • Yuanyu Wan, Lijun Zhang, Mingli Song
In this way, we first show that the dynamic regret bound of OFW can be improved to $O(\sqrt{T(1+V_T)})$ for smooth functions.
no code implementations • 11 Apr 2022 • Yuanyu Wan, Yibo Wang, Chang Yao, Wei-Wei Tu, Lijun Zhang
Projection-free online learning, which eschews the projection operation via less expensive computations such as linear optimization (LO), has received much interest recently due to its efficiency in handling high-dimensional problems with complex constraints.
no code implementations • 12 Feb 2022 • Zhilin Zhao, Longbing Cao, Yuanyu Wan
CO$_2$ extracts knowledge by training an offline expert for each offline interval, and updates an online expert with an off-the-shelf online optimization method in each online interval.
no code implementations • NeurIPS 2021 • Guanghui Wang, Yuanyu Wan, Tianbao Yang, Lijun Zhang
To control the switching cost, we introduce the problem of online convex optimization with continuous switching constraint, where the goal is to achieve a small regret given a budget on the \emph{overall} switching cost.
no code implementations • 21 Mar 2021 • Yuanyu Wan, Wei-Wei Tu, Lijun Zhang
Specifically, we first extend the delayed variant of OGD for strongly convex functions, and establish a better regret bound of $O(d\log T)$, where $d$ is the maximum delay.
no code implementations • 20 Mar 2021 • Yuanyu Wan, Guanghui Wang, Wei-Wei Tu, Lijun Zhang
In this paper, we propose an improved variant of D-OCG, namely D-BOCG, which can attain the same $O(T^{3/4})$ regret bound with only $O(\sqrt{T})$ communication rounds for convex losses, and a better regret bound of $O(T^{2/3}(\log T)^{1/3})$ with only $O(T^{1/3}(\log T)^{2/3})$ communication rounds for strongly convex losses.
no code implementations • 16 Oct 2020 • Yuanyu Wan, Lijun Zhang
In this paper, we study the special case of online learning over strongly convex sets, for which we first prove that OFW enjoys a better regret bound of $O(T^{2/3})$ for general convex losses.
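For intuition on why strongly convex sets are a friendly special case for OFW (a sketch, not the paper's analysis): over the Euclidean ball, the canonical strongly convex set, the linear-optimization step that OFW solves each round has a one-line closed form.

```python
import numpy as np

def lo_l2_ball(grad, radius=1.0):
    """Linear optimization over the Euclidean ball (a strongly convex set):
    argmin_{||v||_2 <= r} <grad, v> = -r * grad / ||grad||_2, in closed form."""
    return -radius * grad / np.linalg.norm(grad)
```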
no code implementations • 8 Sep 2020 • Yuanyu Wan, Lijun Zhang
In this paper, we propose to reduce the time complexity by exploiting the sparsity of the input matrices.
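A toy illustration of the general principle of exploiting sparsity (the paper's actual algorithms and data structures may differ): if one factor is stored by its nonzero entries, the product costs time proportional to the number of nonzeros rather than the full matrix dimensions.

```python
import numpy as np

def sparse_dense_matmul(A_nonzeros, B):
    """Multiply a sparse matrix, given as {(i, j): value}, by a dense matrix.
    Cost scales with nnz(A) * B.shape[1] instead of m * n * B.shape[1]."""
    m = max(i for i, _ in A_nonzeros) + 1
    C = np.zeros((m, B.shape[1]))
    for (i, j), v in A_nonzeros.items():
        C[i] += v * B[j]            # only nonzero entries contribute
    return C
```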
no code implementations • 27 Jun 2018 • Yuanyu Wan, Jin-Feng Yi, Lijun Zhang
Then, for each partially observed column, we recover it by finding a vector which lies in the recovered column space and consists of the observed entries.
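The recovery step described above — find the vector in the recovered column space that agrees with the observed entries — amounts to a least-squares fit of basis coefficients on the observed rows. This sketch assumes the column space is given as a basis matrix `U`; the names are illustrative.

```python
import numpy as np

def recover_column(U, col_observed, observed_idx):
    """Given a basis U of the recovered column space and the observed entries
    of a column, return the vector in span(U) that matches those entries."""
    # Fit coefficients on the observed rows only, then fill in the rest.
    coeffs, *_ = np.linalg.lstsq(U[observed_idx], col_observed, rcond=None)
    return U @ coeffs
```

When the observed rows of `U` have full column rank, the fit is exact and the unobserved entries are recovered uniquely.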