2 code implementations • 4 Apr 2024 • Andrei Semenov, Vladimir Ivanov, Aleksandr Beznosikov, Alexander Gasnikov
We propose a novel architecture and method of explainable classification with Concept Bottleneck Models (CBMs).
Ranked #1 on Image Classification on CUB-200-2011
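A minimal sketch of the general concept-bottleneck idea (a generic illustration, not the authors' exact architecture; all module names and sizes are assumptions): the input is first mapped to human-interpretable concept predictions, and the class label is then predicted from those concepts alone.

```python
import torch
import torch.nn as nn

class ConceptBottleneck(nn.Module):
    """Generic CBM sketch: input -> concept probabilities -> class logits.

    The label head sees only the predicted concepts, which is what makes
    the intermediate layer an interpretable bottleneck."""

    def __init__(self, in_dim: int, n_concepts: int, n_classes: int):
        super().__init__()
        self.concept_net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, n_concepts)
        )
        self.label_net = nn.Linear(n_concepts, n_classes)

    def forward(self, x):
        concept_logits = self.concept_net(x)      # trained against concept annotations
        concepts = torch.sigmoid(concept_logits)  # each unit = probability of one concept
        class_logits = self.label_net(concepts)   # label depends on concepts only
        return concept_logits, class_logits
```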
no code implementations • 15 Jan 2024 • Daniil Medyakov, Gleb Molodtsov, Aleksandr Beznosikov, Alexander Gasnikov
Therefore, a large amount of research has recently been directed at solving this problem.
1 code implementation • 15 Jan 2024 • Mikhail Rudakov, Aleksandr Beznosikov, Yaroslav Kholodov, Alexander Gasnikov
We analyze compression methods such as quantization and TopK compression, and also experiment with error compensation techniques.
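A schematic NumPy illustration of the compressors named here and of the standard error-compensation loop (the exact operators and parameters in the paper may differ; `top_k`, `scaled_sign`, and the step structure below are illustrative):

```python
import numpy as np

def top_k(g: np.ndarray, k: int) -> np.ndarray:
    """TopK compression: keep only the k largest-magnitude coordinates."""
    out = np.zeros_like(g)
    idx = np.argpartition(np.abs(g), -k)[-k:]
    out[idx] = g[idx]
    return out

def scaled_sign(g: np.ndarray) -> np.ndarray:
    """A simple 1-bit quantizer: transmit sign(g) plus one scalar scale."""
    return (np.linalg.norm(g, 1) / g.size) * np.sign(g)

def error_compensated_step(g: np.ndarray, memory: np.ndarray, k: int):
    """One error-compensation round: compress (gradient + residual),
    transmit the compressed vector, carry what was lost into `memory`."""
    corrected = g + memory
    compressed = top_k(corrected, k)
    memory = corrected - compressed  # residual re-injected next round
    return compressed, memory
```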
no code implementations • 9 Oct 2023 • Aleksei Ustimenko, Aleksandr Beznosikov
In this work, we consider a rather general and broad class of Markov chains, Ito chains, which look like the Euler-Maruyama discretization of some Stochastic Differential Equation.
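For orientation, the Euler-Maruyama discretization of an SDE $dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t$ with step size $\eta$ reads (a standard scheme, stated here for illustration; the chains in the paper are only assumed to resemble it, so coefficients and noise may be more general):

$$x_{k+1} = x_k + \eta\, b(x_k) + \sqrt{\eta}\,\sigma(x_k)\,\xi_k, \qquad \xi_k \sim \mathcal{N}(0, I).$$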
no code implementations • NeurIPS 2023 • Aleksandr Beznosikov, Sergey Samsonov, Marina Sheshukova, Alexander Gasnikov, Alexey Naumov, Eric Moulines
We present a unified approach for the theoretical analysis of first-order gradient methods for stochastic optimization and variational inequalities.
no code implementations • 23 Apr 2023 • Aleksandr Beznosikov, David Dobre, Gauthier Gidel
Moreover, our second approach does not require either large batches or full deterministic gradients, which is a typical weakness of many techniques for finite-sum problems.
no code implementations • NeurIPS 2023 • Aleksandr Beznosikov, Martin Takáč, Alexander Gasnikov
The methods presented in this paper have the best theoretical guarantees of communication complexity and significantly outperform other methods for distributed variational inequalities.
no code implementations • 12 Oct 2022 • Aleksandr Beznosikov, Alexander Gasnikov
In this paper we consider the problem of stochastic finite-sum cocoercive variational inequalities.
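To fix notation (standard definitions, not specific to this paper): the problem concerns an operator $F = \frac{1}{n}\sum_{i=1}^n F_i$, and $F$ is called $\ell$-cocoercive if

$$\langle F(x) - F(y),\, x - y \rangle \ \ge\ \tfrac{1}{\ell}\, \|F(x) - F(y)\|^2 \qquad \text{for all } x, y,$$

a condition stronger than monotonicity that, for $F = \nabla f$, is equivalent to $\ell$-smoothness plus convexity of $f$.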
no code implementations • 29 Aug 2022 • Aleksandr Beznosikov, Boris Polyak, Eduard Gorbunov, Dmitry Kovalev, Alexander Gasnikov
This paper is a survey of methods for solving smooth (strongly) monotone stochastic variational inequalities.
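Recall the conditions in question (standard definitions): an operator $F$ is $\mu$-strongly monotone if

$$\langle F(x) - F(y),\, x - y \rangle \ \ge\ \mu \|x - y\|^2 \qquad \text{for all } x, y,$$

with $\mu = 0$ giving plain monotonicity, while smoothness means $L$-Lipschitzness, $\|F(x) - F(y)\| \le L\|x - y\|$.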
no code implementations • 19 Jun 2022 • Aleksandr Beznosikov, Alexander Gasnikov
Variational inequalities are an important tool that encompasses minimization, saddle-point, game, and fixed-point problems.
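Concretely, in the standard formulation: given an operator $F$ and a feasible set $\mathcal{X}$, the variational inequality problem is to find $x^* \in \mathcal{X}$ such that

$$\langle F(x^*),\, x - x^* \rangle \ \ge\ 0 \qquad \text{for all } x \in \mathcal{X}.$$

Minimization of $f$ is recovered by taking $F = \nabla f$, and a saddle-point problem $\min_u \max_v g(u, v)$ by taking $F(u, v) = (\nabla_u g(u, v),\, -\nabla_v g(u, v))$.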
no code implementations • 16 Jun 2022 • Aleksandr Beznosikov, Aibek Alanov, Dmitry Kovalev, Martin Takáč, Alexander Gasnikov
Methods with adaptive scaling of different features play a key role in solving saddle point problems, primarily due to Adam's popularity for solving adversarial machine learning problems, including GAN training.
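The "adaptive scaling of different features" is the per-coordinate step size of Adam-type methods; recall Adam's update for a stochastic (descent or descent-ascent) direction $g_k$, with bias corrections omitted for brevity:

$$m_k = \beta_1 m_{k-1} + (1 - \beta_1)\, g_k, \qquad v_k = \beta_2 v_{k-1} + (1 - \beta_2)\, g_k^2, \qquad x_{k+1} = x_k - \gamma\, \frac{m_k}{\sqrt{v_k} + \varepsilon},$$

where all operations are coordinate-wise, so coordinate $i$ receives its own effective step size $\gamma / (\sqrt{v_{k,i}} + \varepsilon)$.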
no code implementations • 1 Jun 2022 • Abdurakhmon Sadiev, Aleksandr Beznosikov, Abdulla Jasem Almansoori, Dmitry Kamzolov, Rachael Tappenden, Martin Takáč
There are several algorithms for such problems, but existing methods often work poorly when the problem is badly scaled and/or ill-conditioned, and a primary goal of this work is to introduce methods that alleviate this issue.
no code implementations • 30 May 2022 • Dmitry Kovalev, Aleksandr Beznosikov, Ekaterina Borodich, Alexander Gasnikov, Gesualdo Scutari
Finally, the method is extended to distributed saddle-point problems (under function similarity) by means of solving a class of variational inequalities, achieving lower communication and computation complexity bounds.
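A commonly used form of the function-similarity assumption in this literature (stated for orientation; the paper's exact condition may differ) bounds how far the local objectives $f_m$ on the $M$ nodes deviate from their average $f = \frac{1}{M}\sum_m f_m$, e.g. via Hessians:

$$\|\nabla^2 f_m(x) - \nabla^2 f(x)\| \ \le\ \delta \qquad \text{for all } x,$$

so statistically similar data across nodes makes $\delta$ small, which is what permits cheaper communication.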
1 code implementation • 15 Feb 2022 • Aleksandr Beznosikov, Eduard Gorbunov, Hugo Berard, Nicolas Loizou
Although variants of the new methods are known for solving minimization problems, they were never considered or analyzed for solving min-max problems and VIPs.
no code implementations • 6 Feb 2022 • Dmitry Kovalev, Aleksandr Beznosikov, Abdurakhmon Sadiev, Michael Persiianov, Peter Richtárik, Alexander Gasnikov
Our algorithms are the best available in the literature, not only in the decentralized stochastic case but also in the decentralized deterministic and non-distributed stochastic cases.
no code implementations • NeurIPS 2021 • Aleksandr Beznosikov, Gesualdo Scutari, Alexander Rogozin, Alexander Gasnikov
We study solution methods for (strongly-)convex-(strongly-)concave Saddle-Point Problems (SPPs) over networks of two types: master/worker (thus centralized) architectures and mesh (thus decentralized) networks.
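In both architectures the target can be written as one saddle-point problem over the average of local functions held by the $M$ nodes (standard distributed formulation, stated here for illustration):

$$\min_{x \in \mathcal{X}} \ \max_{y \in \mathcal{Y}} \ f(x, y) := \frac{1}{M} \sum_{m=1}^{M} f_m(x, y).$$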
no code implementations • 26 Nov 2021 • Aleksandr Beznosikov, Martin Takáč
The StochAstic Recursive grAdient algoritHm (SARAH) is a variance-reduced variant of the Stochastic Gradient Descent (SGD) algorithm that needs a full gradient of the objective function from time to time.
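For reference, SARAH's defining recursion: after an occasional full-gradient snapshot $v^0 = \nabla f(x^0)$, the search direction is updated from single samples $i_k$ as

$$v^k = \nabla f_{i_k}(x^k) - \nabla f_{i_k}(x^{k-1}) + v^{k-1}, \qquad x^{k+1} = x^k - \gamma\, v^k,$$

and it is the periodic full-gradient snapshot that requires an occasional pass over all the data.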
no code implementations • 7 Oct 2021 • Aleksandr Beznosikov, Peter Richtárik, Michael Diskin, Max Ryabinin, Alexander Gasnikov
Due to these considerations, it is important to equip existing methods with strategies that reduce the volume of information transmitted during training while still obtaining a model of comparable quality.
1 code implementation • 22 Jul 2021 • Aleksandr Beznosikov, Gesualdo Scutari, Alexander Rogozin, Alexander Gasnikov
We study solution methods for (strongly-)convex-(strongly-)concave Saddle-Point Problems (SPPs) over networks of two types: master/worker (thus centralized) architectures and mesh (thus decentralized) networks.
no code implementations • 15 Jun 2021 • Aleksandr Beznosikov, Pavel Dvurechensky, Anastasia Koloskova, Valentin Samokhin, Sebastian U Stich, Alexander Gasnikov
We extend the stochastic extragradient method to this very general setting and theoretically analyze its convergence rate in the strongly-monotone, monotone, and non-monotone (when a Minty solution exists) settings.
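For reference, one iteration of the stochastic extragradient method on an operator $F$ with step size $\gamma$ and independent samples $\xi_k, \xi_k'$ consists of an extrapolation step followed by the actual update:

$$x_{k+1/2} = x_k - \gamma\, F(x_k; \xi_k), \qquad x_{k+1} = x_k - \gamma\, F(x_{k+1/2}; \xi_k').$$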
no code implementations • 14 Jun 2021 • Ekaterina Borodich, Aleksandr Beznosikov, Abdurakhmon Sadiev, Vadim Sushko, Nikolay Savelyev, Martin Takáč, Alexander Gasnikov
Personalized Federated Learning (PFL) has witnessed remarkable advancements, enabling the development of innovative machine learning applications that preserve the privacy of training data.
no code implementations • 25 Oct 2020 • Aleksandr Beznosikov, Valentin Samokhin, Alexander Gasnikov
This paper focuses on the distributed optimization of stochastic saddle point problems.
no code implementations • 21 Sep 2020 • Abdurakhmon Sadiev, Aleksandr Beznosikov, Pavel Dvurechensky, Alexander Gasnikov
In particular, our analysis shows that in the case when the feasible set is a direct product of two simplices, our convergence rate for the stochastic term is only a $\log n$ factor worse than that of first-order methods.
no code implementations • 12 May 2020 • Aleksandr Beznosikov, Abdurakhmon Sadiev, Alexander Gasnikov
In the second part of the paper, we analyze the case when such an assumption cannot be made: we propose a general approach for modifying the method to solve this problem, and we apply this approach to particular cases of some classical sets.
no code implementations • 27 Feb 2020 • Aleksandr Beznosikov, Samuel Horváth, Peter Richtárik, Mher Safaryan
In the last few years, various communication compression techniques have emerged as an indispensable tool helping to alleviate the communication bottleneck in distributed learning.