Tensor methods for strongly convex strongly concave saddle point problems and strongly monotone variational inequalities

31 Dec 2020  ·  Petr Ostroukhov, Rinat Kamalov, Pavel Dvurechensky, Alexander Gasnikov ·

In this paper we propose three $p$-th order tensor methods for $\mu$-strongly-convex-strongly-concave saddle point problems (SPP). The first method is based on the assumption of $p$-th order smoothness of the objective, and it achieves a convergence rate of $O \left( \left( \frac{L_p R^{p - 1}}{\mu} \right)^\frac{2}{p + 1} \log \frac{\mu R^2}{\varepsilon_G} \right)$, where $R$ is an estimate of the initial distance to the solution and $\varepsilon_G$ is the error in terms of the duality gap. Under additional assumptions of first- and second-order smoothness of the objective, we combine the first method with a locally superlinearly convergent algorithm and develop a second method with the complexity $O \left( \left( \frac{L_p R^{p - 1}}{\mu} \right)^\frac{2}{p + 1}\log \frac{L_2 R \max \left\{ 1, \frac{L_1}{\mu} \right\}}{\mu} + \log \frac{\log \frac{L_1^3}{2 \mu^2 \varepsilon_G}}{\log \frac{L_1 L_2}{\mu^2}} \right)$. The third method is a modified version of the second method; it solves the gradient-norm minimization problem for SPP with $\tilde O \left( \left( \frac{L_p R^p}{\varepsilon_\nabla} \right)^\frac{2}{p + 1} \right)$ oracle calls, where $\varepsilon_\nabla$ is the error in terms of the norm of the gradient of the objective. Since we treat SPP as a particular case of variational inequalities, we also propose three methods for strongly monotone variational inequalities with the same complexities as those described above.
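To make the problem setting and the duality-gap error measure concrete, here is a minimal NumPy sketch on a toy $\mu$-strongly-convex-strongly-concave SPP $f(x, y) = \frac{\mu}{2}\|x\|^2 + x^\top A y - \frac{\mu}{2}\|y\|^2$ with saddle point at the origin. The solver below is a plain first-order extragradient baseline, not one of the paper's $p$-th order tensor methods; the matrix $A$, the step size, and the iteration count are illustrative choices.

```python
import numpy as np

# Toy mu-strongly-convex-strongly-concave saddle point problem (illustrative):
#   f(x, y) = (mu/2)||x||^2 + x^T A y - (mu/2)||y||^2,  saddle point at (0, 0).
rng = np.random.default_rng(0)
n, mu = 5, 1.0
A = rng.standard_normal((n, n))

def operator(x, y):
    # Monotone operator F(x, y) = (grad_x f, -grad_y f) associated with the SPP.
    return mu * x + A @ y, mu * y - A.T @ x

def duality_gap(x, y):
    # G(x, y) = max_{y'} f(x, y') - min_{x'} f(x', y); closed form for this f,
    # obtained from the inner maximizer y' = A^T x / mu and minimizer x' = -A y / mu.
    return ((mu / 2) * (x @ x + y @ y)
            + (A.T @ x) @ (A.T @ x) / (2 * mu)
            + (A @ y) @ (A @ y) / (2 * mu))

# First-order extragradient baseline (the paper's methods use p-th order oracles).
x, y = np.ones(n), np.ones(n)
gamma = 0.5 / (mu + np.linalg.norm(A, 2))  # step ~ 1/(2L), L = Lipschitz const. of F
for _ in range(200):
    gx, gy = operator(x, y)                 # extrapolation step
    xh, yh = x - gamma * gx, y - gamma * gy
    gx, gy = operator(xh, yh)               # update step at the extrapolated point
    x, y = x - gamma * gx, y - gamma * gy

print(duality_gap(x, y))  # converges linearly toward 0 in the strongly monotone case
```

The same loop, read through the variational-inequality lens, is simply extragradient applied to the strongly monotone operator $F$; higher-order ($p \geq 2$) methods replace the gradient step with a step using a $p$-th order model of $F$ to obtain the faster rates stated above.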


Categories

Optimization and Control