no code implementations • 3 Aug 2023 • Dan Garber, Atara Kaplan
For a smooth objective function, when initialized within a certain proximity of an optimal solution that satisfies strict complementarity (SC), standard projected gradient methods require only SVD computations (for projecting onto the tensor nuclear norm ball) of rank matching the tubal rank of the optimal solution.
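For a rough illustration of the projection step this abstract refers to, here is a minimal sketch of projected gradient descent onto a nuclear-norm ball in the matrix case (the paper's tensor/tubal-rank setting is analogous). The function names, step size `eta`, and the use of a full SVD are my own assumptions; the paper's point is that near such an optimal solution a rank-r truncated SVD suffices in place of the full one.

```python
import numpy as np

def project_l1_ball(v, tau):
    # Euclidean projection of a nonnegative vector v onto the l1-ball of radius tau
    # (Duchi et al.-style sort-and-threshold).
    if v.sum() <= tau:
        return v
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > (css - tau))[0][-1]
    theta = (css[rho] - tau) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def project_nuclear_ball(X, tau):
    # Projection onto {X : ||X||_* <= tau}: project the singular values onto the
    # l1-ball. A full SVD is used here for simplicity; in the regime the paper
    # studies, a rank-r truncated SVD would suffice.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(project_l1_ball(s, tau)) @ Vt

def projected_gradient(grad, X0, tau, eta, iters=100):
    # Standard projected gradient descent: gradient step, then projection.
    X = X0
    for _ in range(iters):
        X = project_nuclear_ball(X - eta * grad(X), tau)
    return X
```

For instance, with `grad = lambda X: X - M` (the gradient of 0.5*||X - M||_F^2) and `eta = 1`, a single iteration returns the projection of M onto the ball.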
no code implementations • 23 Jun 2022 • Dan Garber, Atara Kaplan
Low-rank and nonsmooth matrix optimization problems capture many fundamental tasks in statistics and machine learning.
no code implementations • NeurIPS 2021 • Dan Garber, Atara Kaplan
Low-rank and nonsmooth matrix optimization problems capture many fundamental tasks in statistics and machine learning.
no code implementations • 18 Dec 2020 • Dan Garber, Atara Kaplan
In this work we propose efficient implementations of the Matrix Exponentiated Gradient (MEG) method, with both deterministic and stochastic gradients, which are tailored for optimization with low-rank matrices and use only a single low-rank SVD computation on each iteration.
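For context, below is a minimal textbook sketch of one MEG update over the unit-trace PSD set (the spectrahedron); the full eigendecomposition it uses is exactly the expensive step that the paper's low-rank implementation replaces with a single low-rank SVD. The function name and the stabilization detail are assumptions for illustration, not the paper's code.

```python
import numpy as np

def meg_step(A, G, eta):
    # One Matrix Exponentiated Gradient step over {X >= 0 (PSD), tr(X) = 1}.
    # A accumulates the scaled negative gradients; the iterate is
    # X = exp(A) / tr(exp(A)).
    A = A - eta * G
    w, V = np.linalg.eigh(A)        # full eigendecomposition: the costly step
    w = w - w.max()                 # shift for numerical stability
    ew = np.exp(w)                  # the shift cancels in the normalized ratio
    X = (V * ew) @ V.T / ew.sum()   # matrix exponential via eigendecomposition
    return A, X
```

Starting from `A = np.zeros((n, n))`, repeatedly feeding the (symmetric) gradient at the current iterate into `meg_step` runs the method; each call costs a full eigendecomposition, which motivates the low-rank computations the paper develops.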
no code implementations • 27 Sep 2018 • Dan Garber, Atara Kaplan
However, such problems are highly challenging to solve at large scale: the low-rank-promoting term prohibits efficient implementations of proximal methods for composite optimization, and even of simple subgradient methods.
no code implementations • 15 Feb 2018 • Dan Garber, Shoham Sabach, Atara Kaplan
Motivated by robust matrix recovery problems such as Robust Principal Component Analysis, we consider a general optimization problem of minimizing a smooth and strongly convex loss function applied to the sum of two blocks of variables, where each block of variables is constrained or regularized individually.
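As a minimal sketch of this two-block structure, assume a Robust-PCA-style instance: the smooth loss 0.5*||X + Y - M||_F^2 applied to the sum of the blocks, with a nuclear-norm constraint on X and an l1 regularizer on Y, solved here by plain simultaneous projected/proximal gradient steps. This is not the authors' method; the names, step size, and the reuse of `project_nuclear_ball` from the first sketch above are illustrative assumptions.

```python
import numpy as np

def soft_threshold(Y, lam):
    # prox of lam * ||Y||_1: elementwise soft-thresholding
    return np.sign(Y) * np.maximum(np.abs(Y) - lam, 0.0)

def two_block_pgd(M, tau, lam, eta=0.5, iters=200):
    # min over (X, Y) of 0.5 * ||X + Y - M||_F^2
    # subject to ||X||_* <= tau   (constrained block X),
    # plus lam * ||Y||_1          (regularized block Y).
    X = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(iters):
        G = X + Y - M                                # gradient of the smooth loss
        X = project_nuclear_ball(X - eta * G, tau)   # projection step on X
        Y = soft_threshold(Y - eta * G, eta * lam)   # prox step on Y
    return X, Y
```

Here both blocks share the same gradient of the smooth loss, and each is handled by its own projection or prox operator, which is the structural feature the abstract describes.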