no code implementations • 27 Feb 2024 • Nikos Zarifis, Puqian Wang, Ilias Diakonikolas, Jelena Diakonikolas
We give an efficient learning algorithm that achieves a constant-factor approximation to the optimal loss and succeeds under a range of distributions (including log-concave distributions) and a broad class of monotone, Lipschitz link functions.
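The paper's algorithm is not reproduced here; as a rough, illustrative sketch of the single-index setting, the snippet below fits a model $y \approx \sigma(w \cdot x)$ with a monotone, Lipschitz link by plain gradient descent on the squared loss. The sigmoid link, Gaussian (log-concave) inputs, and step size are all assumptions chosen for illustration.

```python
import numpy as np

# Illustrative sketch only: gradient descent on the squared loss for a
# single-index model y ~ sigma(w . x). The link, step size, and data model
# are assumptions for illustration, not the algorithm from the paper.
rng = np.random.default_rng(0)

def sigma(t):
    return 1.0 / (1.0 + np.exp(-t))  # monotone, Lipschitz link (sigmoid)

d, n = 10, 5000
w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)

X = rng.normal(size=(n, d))  # Gaussian marginals are log-concave
y = sigma(X @ w_star)        # clean labels; noise omitted in this toy

w = np.zeros(d)
eta = 0.5
for _ in range(500):
    pred = sigma(X @ w)
    # gradient of (1/n) * sum_i (sigma(w . x_i) - y_i)^2, using sigma' = sigma(1-sigma)
    grad = (2.0 / n) * X.T @ ((pred - y) * pred * (1 - pred))
    w -= eta * grad

print("squared loss:", np.mean((sigma(X @ w) - y) ** 2))
```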
no code implementations • 28 Jun 2023 • Ilias Diakonikolas, Jelena Diakonikolas, Daniel M. Kane, Puqian Wang, Nikos Zarifis
Our main result is a lower bound for Statistical Query (SQ) algorithms and low-degree polynomial tests, suggesting that the quadratic dependence on $1/\epsilon$ in the sample complexity is inherent for computationally efficient algorithms.
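For context, an SQ algorithm accesses the input distribution only through an oracle that answers expectation queries $\mathbf{E}[\phi(x, y)]$ up to a tolerance $\tau$; the lower bound constrains any algorithm of this form. Below is a minimal simulation of such an oracle; the query, tolerance, and toy data model are assumptions made purely for illustration.

```python
import numpy as np

# Minimal sketch of the Statistical Query (SQ) access model: the learner
# sees the distribution only through expectations E[phi(x, y)] answered
# within tolerance tau, simulated here by a sample average plus a
# tolerance-sized perturbation. All specifics below are illustrative.
rng = np.random.default_rng(1)

def stat_oracle(phi, samples, tau):
    """Return E[phi] over the sample, perturbed within tolerance tau."""
    value = np.mean([phi(x, y) for x, y in samples])
    return value + rng.uniform(-tau, tau)

# toy data: x ~ N(0, 1), y = sign(x) with a 10% label flip
xs = rng.normal(size=1000)
ys = np.sign(xs) * np.where(rng.random(1000) < 0.1, -1, 1)
samples = list(zip(xs, ys))

# one SQ: the correlation E[x * y], known only up to tolerance tau
print(stat_oracle(lambda x, y: x * y, samples, tau=0.05))
```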
no code implementations • 13 Jun 2023 • Puqian Wang, Nikos Zarifis, Ilias Diakonikolas, Jelena Diakonikolas
We study the problem of learning a single neuron with respect to the $L_2^2$-loss in the presence of adversarial label noise.
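The paper's robust learner is not shown here; as a toy sketch of the setting only, the snippet below fits a ReLU neuron by gradient descent on the squared loss while an $\eta$-fraction of labels is adversarially corrupted. The ReLU activation, corruption rule, dimensions, and step size are illustrative assumptions.

```python
import numpy as np

# Toy sketch of the setting (not the paper's algorithm): fit a ReLU neuron
# under the squared loss when an eta-fraction of labels is corrupted.
rng = np.random.default_rng(2)
relu = lambda t: np.maximum(t, 0.0)

d, n, eta_frac = 5, 4000, 0.05
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = relu(X @ w_star)

# adversarial label noise: an opponent may alter up to eta_frac of labels;
# this particular corruption is one crude illustrative choice
corrupt = rng.random(n) < eta_frac
y[corrupt] = -y[corrupt] - 10.0

w = 0.1 * rng.normal(size=d)  # nonzero init so ReLU gradients are not all zero
step = 0.1
for _ in range(300):
    pred = relu(X @ w)
    # gradient of (1/n) * sum_i (relu(w . x_i) - y_i)^2
    grad = (2.0 / n) * X.T @ ((pred - y) * (X @ w > 0))
    w -= step * grad

print("L2^2 loss vs clean labels:",
      np.mean((relu(X @ w) - relu(X @ w_star)) ** 2))
```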
no code implementations • 28 Jan 2021 • Jelena Diakonikolas, Puqian Wang
We introduce a novel potential function-based framework to study the convergence of standard methods for making the gradients small in smooth convex optimization and convex-concave min-max optimization.
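As a minimal numerical illustration of the "making the gradients small" criterion, the sketch below runs gradient descent on a smooth convex quadratic with step size $1/L$ and tracks the gradient norm, which is the quantity driven to zero, rather than just the function-value gap. The quadratic, step size, and iteration counts are illustrative assumptions; the paper's potential-function analysis is not reproduced.

```python
import numpy as np

# Numerical sketch: on a smooth convex quadratic f(x) = 0.5 x^T H x,
# gradient descent with step size 1/L makes the gradient norm small.
rng = np.random.default_rng(3)
A = rng.normal(size=(20, 10))
H = A.T @ A                      # positive semidefinite, so f is convex
L = np.linalg.eigvalsh(H).max()  # smoothness constant of f

x0 = rng.normal(size=10)
for k in [10, 100, 1000]:
    x = x0.copy()
    for _ in range(k):
        x -= (1.0 / L) * (H @ x)     # GD step; gradient of f is H x
    print(k, np.linalg.norm(H @ x))  # gradient norm after k steps
```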