no code implementations • 16 Nov 2023 • Slavomír Hanzely
Machine learning plays a pivotal role in our data-driven world.
no code implementations • 22 May 2023 • Slavomír Hanzely
In this paper, we propose the first sketch-and-project Newton method with fast $\mathcal O(k^{-2})$ global convergence rate for self-concordant functions.
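The core idea of a sketch-and-project Newton step can be illustrated with a short sketch: instead of solving the full $d \times d$ Newton system, draw a random sketch matrix $S$ and solve the much smaller projected system in the sketched subspace. This is a minimal illustration of the general technique, not the paper's exact algorithm (the function name and interface below are hypothetical, and the damping/step-size rules from the paper are omitted):

```python
import numpy as np

def sketch_newton_step(x, grad, hess, sketch_dim, rng):
    """One sketch-and-project Newton step (illustrative sketch only).

    Draws a random Gaussian sketch S of shape (d, s) with s << d,
    then solves the small s x s projected Newton system instead of
    the full d x d one.
    """
    d = x.size
    S = rng.standard_normal((d, sketch_dim))
    # Solve (S^T H S) y = S^T g -- an s x s system, cheap for small s.
    y = np.linalg.solve(S.T @ hess @ S, S.T @ grad)
    # Move only along the sketched subspace: x_{k+1} = x_k - S y.
    return x - S @ y
```

On a quadratic, this step exactly minimizes the objective over the random subspace $x + \mathrm{range}(S)$, so the objective decreases at every iteration while each step costs only a small linear solve.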
no code implementations • 17 Jan 2023 • Konstantin Mishchenko, Slavomír Hanzely, Peter Richtárik
As a special case, our theory allows us to show the convergence of First-Order Model-Agnostic Meta-Learning (FO-MAML) to the vicinity of a solution of the Moreau objective.
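For readers unfamiliar with FO-MAML: it replaces MAML's second-order outer gradient with the plain task gradient evaluated at the inner-adapted parameters. A minimal sketch of one outer step is below; the function name and the `task_grads` interface are hypothetical conveniences, and this is the generic first-order update, not the specific setting analyzed in the paper:

```python
import numpy as np

def fo_maml_step(w, task_grads, inner_lr, outer_lr):
    """One FO-MAML outer step (generic first-order MAML sketch).

    task_grads: list of callables; task_grads[i](w) returns the
    gradient of task i's loss at w.
    """
    outer_grad = np.zeros_like(w)
    for g in task_grads:
        # Inner adaptation: one gradient step on the task loss.
        w_adapted = w - inner_lr * g(w)
        # First-order approximation: use the task gradient at the
        # adapted point, dropping the Hessian-vector correction
        # that full MAML would require.
        outer_grad += g(w_adapted)
    return w - outer_lr * outer_grad / len(task_grads)
```

Dropping the second-order term is exactly what makes FO-MAML cheap, and the Moreau-envelope viewpoint explains why the resulting iterates still land near a meaningful solution.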
no code implementations • 7 Jun 2022 • Rustem Islamov, Xun Qian, Slavomír Hanzely, Mher Safaryan, Peter Richtárik
Despite their high computation and communication costs, Newton-type methods remain an appealing option for distributed training due to their robustness against ill-conditioned convex problems.
no code implementations • 2 Mar 2021 • Zhize Li, Slavomír Hanzely, Peter Richtárik
Avoiding any full gradient computations (which are time-consuming) is important in many applications, as the number of data samples $n$ is usually very large.
no code implementations • NeurIPS 2021 • Filip Hanzely, Slavomír Hanzely, Samuel Horváth, Peter Richtárik
Our first contribution is establishing the first lower bounds for this formulation, for both the communication complexity and the local oracle complexity.