Search Results for author: Slavomír Hanzely

Found 6 papers, 0 papers with code

Sketch-and-Project Meets Newton Method: Global $\mathcal O(k^{-2})$ Convergence with Low-Rank Updates

no code implementations • 22 May 2023 • Slavomír Hanzely

In this paper, we propose the first sketch-and-project Newton method with a fast $\mathcal O(k^{-2})$ global convergence rate for self-concordant functions.
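
Below is a minimal illustrative sketch of one sketch-and-project Newton step in NumPy. It is not the paper's exact algorithm: the Gaussian sketch, the fixed step_size, and the helper names grad_f, hess_f and sketch_dim are assumptions, and the actual method uses a step-size rule adapted to self-concordant functions to obtain the global $\mathcal O(k^{-2})$ rate.

    import numpy as np

    def sketched_newton_step(x, grad_f, hess_f, sketch_dim, step_size=1.0):
        # Draw a random low-rank sketch S so that only a small
        # (sketch_dim x sketch_dim) Newton system is solved per iteration.
        d = x.shape[0]
        S = np.random.randn(d, sketch_dim)
        g, H = grad_f(x), hess_f(x)
        # Sketched Newton system: (S^T H S) y = S^T g, instead of the full H p = g.
        y = np.linalg.solve(S.T @ H @ S, S.T @ g)
        # Low-rank update of x along the sketched subspace.
        return x - step_size * (S @ y)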

Convergence of First-Order Algorithms for Meta-Learning with Moreau Envelopes

no code implementations • 17 Jan 2023 • Konstantin Mishchenko, Slavomír Hanzely, Peter Richtárik

As a special case, our theory allows us to show the convergence of First-Order Model-Agnostic Meta-Learning (FO-MAML) to the vicinity of a solution of the Moreau objective.
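
For reference, the Moreau objective mentioned above can be written as follows, assuming the standard definition of the Moreau envelope; the regularization parameter $\lambda$ and the notation $M_{f_i}^\lambda$ are chosen for illustration rather than taken verbatim from the paper:

$\min_{x \in \mathbb{R}^d} \ \frac{1}{n}\sum_{i=1}^n M_{f_i}^\lambda(x), \qquad M_{f_i}^\lambda(x) := \min_{z \in \mathbb{R}^d} \Big\{ f_i(z) + \frac{1}{2\lambda}\|z - x\|^2 \Big\},$

where $f_i$ is the loss of task $i$. FO-MAML-style methods approximately solve the inner problem with a few gradient steps and then update the meta-parameters $x$ at the adapted point, which is consistent with convergence to a neighborhood of a solution rather than to an exact one.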

Meta-Learning, Personalized Federated Learning

Distributed Newton-Type Methods with Communication Compression and Bernoulli Aggregation

no code implementations • 7 Jun 2022 • Rustem Islamov, Xun Qian, Slavomír Hanzely, Mher Safaryan, Peter Richtárik

Despite their high computation and communication costs, Newton-type methods remain an appealing option for distributed training due to their robustness against ill-conditioned convex problems.

Federated Learning, Vocal Bursts Type Prediction

ZeroSARAH: Efficient Nonconvex Finite-Sum Optimization with Zero Full Gradient Computation

no code implementations • 2 Mar 2021 • Zhize Li, Slavomír Hanzely, Peter Richtárik

Avoiding any full gradient computations (which are time-consuming) is important in many applications, as the number of data samples $n$ is usually very large.
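
For context, below is a minimal sketch of the classical SARAH loop, whose epoch-initial full-gradient pass over all $n$ samples is exactly the kind of computation ZeroSARAH avoids. This is plain SARAH for illustration only, not ZeroSARAH's actual estimator, and the helper grads(i, x) (gradient of $f_i$ at x) is a hypothetical name.

    import numpy as np

    def sarah(x0, grads, n, lr=0.1, epochs=3, inner_steps=None):
        # Classical SARAH: every epoch begins with a full gradient over all n
        # samples (the expensive step), then runs a cheap recursive estimator.
        x = x0.copy()
        m = inner_steps or n
        for _ in range(epochs):
            v = np.mean([grads(i, x) for i in range(n)], axis=0)  # full gradient pass
            x_prev, x = x, x - lr * v
            for _ in range(m):
                i = np.random.randint(n)
                v = grads(i, x) - grads(i, x_prev) + v  # recursive SARAH estimator
                x_prev, x = x, x - lr * v
        return x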

Federated Learning

Lower Bounds and Optimal Algorithms for Personalized Federated Learning

no code implementations • NeurIPS 2020 • Filip Hanzely, Slavomír Hanzely, Samuel Horváth, Peter Richtárik

Our first contribution is establishing the first lower bounds for this formulation, for both the communication complexity and the local oracle complexity.
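
The formulation in question is a mixing-type personalized federated learning objective; the form below is the commonly used version and is stated here as an assumption, with $f_i$ the local loss of client $i$ and $\lambda \ge 0$ the personalization parameter:

$\min_{x_1,\dots,x_n \in \mathbb{R}^d} \ \frac{1}{n}\sum_{i=1}^n f_i(x_i) + \frac{\lambda}{2n}\sum_{i=1}^n \|x_i - \bar{x}\|^2, \qquad \bar{x} := \frac{1}{n}\sum_{i=1}^n x_i,$

where a small $\lambda$ favors purely local models and a large $\lambda$ pushes all clients toward the shared average $\bar{x}$.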

Personalized Federated Learning
