no code implementations • 15 Apr 2024 • Junfan Li, Zenglin Xu, Zheshun Wu, Irwin King
We consider online model selection with decentralized data over $M$ clients, and study the necessity of collaboration among clients.
no code implementations • 21 Dec 2023 • Zheshun Wu, Zenglin Xu, Dun Zeng, Junfan Li, Jie Liu
To address these challenges, we conduct a thorough theoretical convergence analysis for DFL and derive a convergence bound.
1 code implementation • 12 Dec 2023 • Yun Liao, Junfan Li, Shizhong Liao, Qinghua Hu, Jianwu Dang
In this paper, we study the mistake bound of online kernel learning on a budget.
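To make the setting concrete, here is a minimal sketch of a budgeted kernel perceptron: the learner may store at most a fixed number of support vectors and evicts the oldest one on overflow. This is a standard illustration of "online kernel learning on a budget", not the algorithm analyzed in the paper; the RBF kernel and oldest-first eviction are assumptions for the example.

```python
import numpy as np

def budgeted_kernel_perceptron(stream, budget, gamma=1.0):
    """Illustrative budgeted kernel perceptron (assumption: RBF kernel,
    oldest-first eviction; not the paper's algorithm).
    Keeps at most `budget` support vectors and counts mistakes."""
    support, coeffs = [], []              # stored examples and their labels
    mistakes = 0
    for x, y in stream:                   # labels y in {-1, +1}
        # kernel prediction from the current support set
        score = sum(a * np.exp(-gamma * np.sum((x - s) ** 2))
                    for s, a in zip(support, coeffs))
        if y * score <= 0:                # mistake: add (x, y) as support vector
            mistakes += 1
            support.append(x)
            coeffs.append(y)
            if len(support) > budget:     # budget exceeded: evict the oldest
                support.pop(0)
                coeffs.pop(0)
    return mistakes
```

The mistake-bound question is how much this eviction costs: how many more mistakes the budgeted learner makes compared to an unbudgeted one.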
1 code implementation • 14 Jun 2023 • Junfan Li, Shizhong Liao
The trade-off between regret and computational cost is a fundamental problem in online kernel regression, and previous algorithms addressing this trade-off cannot maintain optimal regret bounds at sublinear computational complexity.
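One standard way to trade regret for computation, sketched below, is to replace exact kernel evaluations with a random Fourier feature approximation, making the per-round cost depend on the number of features rather than on the stream length. This is a generic illustration of the trade-off under assumed parameters (`n_features`, `lr`, `gamma`), not the algorithm proposed in the paper.

```python
import numpy as np

def rff_online_regression(stream, dim, n_features=100, lr=0.1, gamma=1.0):
    """Online kernel regression via random Fourier features (a common
    approximation of the RBF kernel; an illustrative sketch, not the
    paper's method). Per-round cost is O(n_features), independent of
    the number of rounds."""
    rng = np.random.default_rng(0)
    # random feature map approximating exp(-gamma * ||x - x'||^2)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(n_features, dim))
    b = rng.uniform(0.0, 2 * np.pi, size=n_features)
    w = np.zeros(n_features)
    losses = []
    for x, y in stream:
        z = np.sqrt(2.0 / n_features) * np.cos(W @ x + b)  # feature map
        pred = w @ z
        losses.append((pred - y) ** 2)
        w -= lr * 2 * (pred - y) * z      # online gradient step, squared loss
    return losses
```

Coarser approximations (fewer features) cost less per round but add approximation error to the regret, which is exactly the tension the paper studies.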
1 code implementation • 9 Mar 2023 • Junfan Li, Shizhong Liao
We apply the two algorithms to online kernel selection under a time constraint and prove new regret bounds that match or improve on the previous expected bound of $O(\sqrt{T\ln{K}} +\Vert f\Vert^2_{\mathcal{H}_i}\max\{\sqrt{T},\frac{T}{\sqrt{\mathcal{R}}}\})$, where $\mathcal{R}$ is the time budget.
1 code implementation • 26 Dec 2022 • Junfan Li, Shizhong Liao
If the eigenvalues of the kernel matrix decay exponentially, then our algorithm enjoys a regret of $O(\sqrt{\mathcal{A}_T})$ at a computational complexity of $O(\ln^2{T})$.