Search Results for author: Alkis Kalavasis

Found 15 papers, 0 papers with code

On the Computational Landscape of Replicable Learning

no code implementations • 24 May 2024 • Alkis Kalavasis, Amin Karbasi, Grigoris Velegkas, Felix Zhou

To obtain this result, we design a replicable lifting framework inspired by Blanc, Lange, Malik, and Tan [2023] that transforms in a black-box manner efficient replicable PAC learners under the uniform marginal distribution over the Boolean hypercube to replicable PAC learners under any marginal distribution, with sample and time complexity that depends on a certain measure of the complexity of the distribution.
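
Replicability here follows the Impagliazzo, Lei, Pitassi, and Sorrell notion: an algorithm run twice on independent samples, but with the same internal randomness, should return the identical output with high probability. Below is a minimal sketch of the standard randomized-rounding device that achieves this for a toy statistic; the grid width, data model, and estimator are illustrative assumptions, not the paper's lifting framework.

```python
import numpy as np

def replicable_mean(sample, rho, grid=0.1):
    """Toy replicable estimator: round the empirical mean onto a grid
    with a random shift. The shift rho is the shared internal
    randomness; with enough samples, two independent runs land in the
    same grid cell with high probability."""
    shift = rho * grid
    return np.floor((sample.mean() - shift) / grid) * grid + shift

rng = np.random.default_rng(0)
rho = rng.random()                   # internal randomness, shared across both runs
s1 = rng.normal(0.5, 1.0, 50000)     # run 1: fresh sample
s2 = rng.normal(0.5, 1.0, 50000)     # run 2: fresh sample
print(replicable_mean(s1, rho) == replicable_mean(s2, rho))  # True w.h.p.
```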

Transfer Learning Beyond Bounded Density Ratios

no code implementations • 18 Mar 2024 • Alkis Kalavasis, Ilias Zadik, Manolis Zampetakis

We also provide a discrete analogue of our transfer inequality on the Boolean Hypercube $\{-1, 1\}^n$, and study its connections with the recent problem of Generalization on the Unseen of Abbe, Bengio, Lotfi and Rizk (ICML, 2023).

In-Context Learning • Out-of-Distribution Generalization • +1
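
Classical transfer bounds assume the density ratio between the target $Q$ and the source $P$ is uniformly bounded; even a unit mean shift between two Gaussians violates this. A small numerical illustration (the specific $P$ and $Q$ are assumptions chosen for the demo, not examples from the paper):

```python
import numpy as np

def gauss_pdf(x, mu):
    return np.exp(-(x - mu) ** 2 / 2) / np.sqrt(2 * np.pi)

# Source P = N(0,1), target Q = N(1,1): the ratio Q/P = exp(x - 1/2)
# is unbounded, so transfer bounds based on sup Q/P are vacuous.
xs = np.linspace(0, 10, 6)
print(gauss_pdf(xs, 1.0) / gauss_pdf(xs, 0.0))   # blows up as x grows
```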

Replicable Learning of Large-Margin Halfspaces

no code implementations • 21 Feb 2024 • Alkis Kalavasis, Amin Karbasi, Kasper Green Larsen, Grigoris Velegkas, Felix Zhou

Departing from the requirement of polynomial time algorithms, using the DP-to-Replicability reduction of Bun, Gaboardi, Hopkins, Impagliazzo, Lei, Pitassi, Sorrell, and Sivakumar [STOC, 2023], we show how to obtain a replicable algorithm for large-margin halfspaces with improved sample complexity with respect to the margin parameter $\tau$, but running time doubly exponential in $1/\tau^2$ and worse sample complexity dependence on $\epsilon$ than one of our previous algorithms.

Learning Hard-Constrained Models with One Sample

no code implementations • 6 Nov 2023 • Andreas Galanis, Alkis Kalavasis, Anthimos Vardis Kandiros

For general $H$-colorings, we show that standard conditions that guarantee sampling, such as Dobrushin's condition, are insufficient for one-sample learning; on the positive side, we provide a general condition that is sufficient to guarantee linear-time learning and obtain applications for proper colorings and permissive models.

Statistical Indistinguishability of Learning Algorithms

no code implementations • 23 May 2023 • Alkis Kalavasis, Amin Karbasi, Shay Moran, Grigoris Velegkas

When two different parties use the same learning rule on their own data, how can we test whether the distributions of the two outcomes are similar?
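
A brute-force way to probe this question is Monte Carlo: run the (randomized) rule many times on each party's data and compare the empirical output distributions, for instance in total variation. The sketch below is a naive illustration of the question, not the paper's indistinguishability tests; the learning rule and data are illustrative assumptions.

```python
import numpy as np

def learner(data, rng):
    """Toy randomized learning rule: a noisy threshold on the sample mean."""
    return int(data.mean() + 0.1 * rng.normal() > 0)

def output_dist(data, runs=5000, seed=0):
    rng = np.random.default_rng(seed)
    outs = [learner(data, rng) for _ in range(runs)]
    return np.bincount(outs, minlength=2) / runs

rng = np.random.default_rng(1)
d1 = rng.normal(0.2, 1.0, 100)     # party 1's sample
d2 = rng.normal(0.2, 1.0, 100)     # party 2's sample, same distribution
p, q = output_dist(d1), output_dist(d2)
print("empirical TV distance:", 0.5 * np.abs(p - q).sum())
```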

Perfect Sampling from Pairwise Comparisons

no code implementations • 23 Nov 2022 • Dimitris Fotakis, Alkis Kalavasis, Christos Tzamos

We design a Markov chain whose stationary distribution coincides with $\mathcal{D}$ and give an algorithm to obtain exact samples using the technique of Coupling from the Past.
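
Coupling from the Past is the general technique named here; the sketch below is a minimal monotone CFTP for a toy lazy random walk on $\{0,\dots,n\}$, not the paper's comparison-based chain. The key invariants are reusing the randomness of past steps and restarting from ever-earlier times until the top and bottom trajectories coalesce.

```python
import numpy as np

def cftp_lazy_walk(n=10, seed=0):
    """Exact sampling via Coupling from the Past for a lazy random walk
    on {0, ..., n} (stationary distribution: uniform). Monotone
    coupling: all states are driven by the same uniform at each step, so
    tracking the bottom (0) and top (n) trajectories suffices."""
    rng = np.random.default_rng(seed)
    us = []                              # us[t] drives the step at time -(t+1); reused forever
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.random())
        lo, hi = 0, n                    # trajectories started at time -T
        for t in range(T - 1, -1, -1):   # steps at times -T, ..., -1
            step = -1 if us[t] < 0.25 else (1 if us[t] < 0.5 else 0)
            lo = min(max(lo + step, 0), n)
            hi = min(max(hi + step, 0), n)
        if lo == hi:
            return lo                    # all states have coalesced: exact sample
        T *= 2                           # not coalesced: restart further in the past

print([cftp_lazy_walk(seed=s) for s in range(10)])  # i.i.d. uniform on {0,...,10}
```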

Learning and Covering Sums of Independent Random Variables with Unbounded Support

no code implementations • 24 Oct 2022 • Alkis Kalavasis, Konstantinos Stavropoulos, Manolis Zampetakis

In this work, we address two questions: (i) Are there general families of SIIRVs with unbounded support that can be learned with sample complexity independent of both $n$ and the maximal element of the support?

Multiclass Learnability Beyond the PAC Framework: Universal Rates and Partial Concept Classes

no code implementations • 5 Oct 2022 • Alkis Kalavasis, Grigoris Velegkas, Amin Karbasi

Second, we consider the problem of multiclass classification with structured data (such as data lying on a low dimensional manifold or satisfying margin conditions), a setting which is captured by partial concept classes (Alon, Hanneke, Holzman and Moran, FOCS '21).

Replicable Bandits

no code implementations • 4 Oct 2022 • Hossein Esfandiari, Alkis Kalavasis, Amin Karbasi, Andreas Krause, Vahab Mirrokni, Grigoris Velegkas

Similarly, for stochastic linear bandits (with finitely and infinitely many arms) we develop replicable policies that achieve the best-known problem-independent regret bounds with an optimal dependency on the replicability parameter.

Multi-Armed Bandits
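
A common device behind replicable bandit policies of this flavor is to make decisions depend on coarsely discretized statistics with a shared random offset, so two runs on independent rewards take identical actions with high probability. Below is a toy explore-then-commit sketch; the grid width and Gaussian rewards are assumptions, not the paper's policies.

```python
import numpy as np

def replicable_etc(means, pulls, rho, grid=0.05, data_seed=None):
    """Toy replicable explore-then-commit: pull every arm `pulls` times,
    round the empirical means onto a grid with shared random shift rho,
    and commit to the argmax. Two runs on independent rewards but the
    same rho commit to the same arm with high probability."""
    rng = np.random.default_rng(data_seed)           # data randomness (differs per run)
    emp = np.array([rng.normal(m, 1.0, pulls).mean() for m in means])
    shift = rho * grid
    rounded = np.floor((emp - shift) / grid) * grid + shift
    return int(rounded.argmax())

rho = np.random.default_rng(0).random()              # internal randomness, shared
arms = [0.40, 0.50, 0.65]
print(replicable_etc(arms, 5000, rho, data_seed=1),
      replicable_etc(arms, 5000, rho, data_seed=2))  # same arm w.h.p.
```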

Differentially Private Regression with Unbounded Covariates

no code implementations • 19 Feb 2022 • Jason Milionis, Alkis Kalavasis, Dimitris Fotakis, Stratis Ioannidis

We provide computationally efficient, differentially private algorithms for the classical regression settings of Least Squares Fitting, Binary Regression and Linear Regression with unbounded covariates.

regression
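
For contrast with the unbounded-covariate setting, here is a generic sufficient-statistics-perturbation sketch for DP least squares that assumes clipped, bounded data, which is precisely the assumption the paper removes. The noise calibration and even budget split below are rough and illustrative, not the paper's algorithm.

```python
import numpy as np

def dp_least_squares(X, y, eps, delta, bound=1.0, seed=0):
    """Generic sufficient-statistics perturbation via the Gaussian
    mechanism. ASSUMES every row of X and every |y_i| has norm at most
    `bound`; shown only as the bounded baseline the paper goes beyond."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    sens = bound ** 2                    # one-row change moves X^T X and X^T y by <= bound^2
    sigma = sens * np.sqrt(2 * np.log(1.25 / delta)) / (eps / 2)  # rough eps/2 per statistic
    A = X.T @ X + rng.normal(0.0, sigma, (d, d))
    A = (A + A.T) / 2 + np.eye(d) * sigma            # symmetrize, regularize
    b = X.T @ y + rng.normal(0.0, sigma, d)
    return np.linalg.solve(A, b)

rng = np.random.default_rng(1)
X = np.clip(rng.normal(size=(5000, 3)) / 3, -0.57, 0.57)  # crude clipping for the demo
theta = np.array([0.5, -0.2, 0.3])
y = np.clip(X @ theta + 0.1 * rng.normal(size=5000), -1, 1)
print(dp_least_squares(X, y, eps=1.0, delta=1e-5).round(2))  # close to theta
```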

Label Ranking through Nonparametric Regression

no code implementations • 4 Nov 2021 • Dimitris Fotakis, Alkis Kalavasis, Eleni Psaroudaki

We introduce a generative model for Label Ranking, in noiseless and noisy nonparametric regression settings, and provide sample complexity bounds for learning algorithms in both cases.

regression
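
One way to read the reduction: regress a score vector for the labels nonparametrically, then sort the predicted scores to obtain a ranking. A minimal sketch with k-NN standing in for the nonparametric regressor; the synthetic score model and the choice of k-NN are assumptions, not the paper's generative model.

```python
import numpy as np

def knn_label_ranking(X_train, S_train, x, k=5):
    """Label ranking via nonparametric (k-NN) regression: estimate a
    score per label by averaging the scores of the k nearest training
    points, then rank labels by the estimated scores."""
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]
    scores = S_train[nn].mean(axis=0)    # regression estimate of label scores
    return np.argsort(-scores)           # predicted ranking, best label first

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
S = np.stack([X[:, 0], -X[:, 0], X[:, 1]], axis=1)  # 3 labels, scores tied to features
print(knn_label_ranking(X, S, np.array([1.0, 0.0])))  # e.g. ranking [0, 2, 1]
```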

Efficient Algorithms for Learning from Coarse Labels

no code implementations • 22 Aug 2021 • Dimitris Fotakis, Alkis Kalavasis, Vasilis Kontonis, Christos Tzamos

Our main algorithmic result is that essentially any problem learnable from fine grained labels can also be learned efficiently when the coarse data are sufficiently informative.
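
A toy instance of "sufficiently informative" coarse data: if the coarsening channel from fine labels to label sets is known and well-conditioned, the fine label distribution can be recovered by inverting a linear system. The partitions and channel below are hypothetical choices for the demo, far simpler than the settings the paper handles.

```python
import numpy as np

# Fine labels {0,1,2,3}. The annotator reports a set from one of two
# known partitions (each used with prob. 1/2). M[j, i] is the
# probability that fine label i is reported as coarse set j.
sets = [{0}, {1}, {2, 3}, {0, 1}, {2}, {3}]
M = np.array([[0.5 * (i in s) for i in range(4)] for s in sets])

p_true = np.array([0.1, 0.2, 0.3, 0.4])          # fine label distribution
rng = np.random.default_rng(0)
fine = rng.choice(4, size=100000, p=p_true)
pick = rng.integers(2, size=fine.size)           # which partition is used
part = [[0, 1, 2, 2], [3, 3, 4, 5]]              # fine label -> reported set id
coarse = np.array([part[k][f] for k, f in zip(pick, fine)])
q = np.bincount(coarse, minlength=6) / coarse.size
p_hat, *_ = np.linalg.lstsq(M, q, rcond=None)    # invert the known coarsening
print(p_hat.round(3))                            # close to p_true
```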

Aggregating Incomplete and Noisy Rankings

no code implementations • 2 Nov 2020 • Dimitris Fotakis, Alkis Kalavasis, Konstantinos Stavropoulos

We consider the problem of learning the true ordering of a set of alternatives from largely incomplete and noisy rankings.
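
As a naive baseline for intuition (not the paper's estimator, which comes with formal guarantees), one can score each alternative by its empirical win rate over the pairs in which it appears and sort:

```python
import numpy as np

def aggregate(rankings, n):
    """Naive pairwise win-rate aggregation of incomplete rankings:
    score item i by how often it beats the items it is compared with."""
    wins = np.zeros((n, n))
    seen = np.zeros((n, n))
    for r in rankings:                   # r lists a subset of items, best first
        for a in range(len(r)):
            for b in range(a + 1, len(r)):
                i, j = r[a], r[b]
                wins[i, j] += 1
                seen[i, j] += 1
                seen[j, i] += 1
    rate = wins.sum(1) / np.maximum(seen.sum(1), 1)
    return np.argsort(-rate)

rs = [[0, 1, 2], [0, 2], [1, 3], [0, 1, 3], [2, 3]]  # noisy partial rankings
print(aggregate(rs, 4))                  # estimated true ordering
```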

Efficient Parameter Estimation of Truncated Boolean Product Distributions

no code implementations • 5 Jul 2020 • Dimitris Fotakis, Alkis Kalavasis, Christos Tzamos

A stunning consequence is that virtually any statistical task (e.g., learning in total variation distance, parameter estimation, uniformity or identity testing) that can be performed efficiently for Boolean product distributions, can also be performed from truncated samples, with a small increase in sample complexity.
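
To see why truncation is a genuine obstacle, note that even the naive per-coordinate estimate of the marginals is biased under truncation. A small demo (the truncation set and marginals are arbitrary choices; rejection sampling here only simulates access to the truncated source):

```python
import numpy as np

# Boolean product distribution with marginals p, observed only when the
# sample falls in the truncation set S = {x : sum(x) >= 3}. The naive
# coordinate-wise mean over surviving samples is biased upward.
rng = np.random.default_rng(0)
p = np.array([0.3, 0.4, 0.5, 0.6, 0.7])
x = (rng.random((200000, 5)) < p).astype(int)
trunc = x[x.sum(axis=1) >= 3]            # samples surviving truncation
print("true marginals :", p)
print("naive estimate :", trunc.mean(axis=0).round(3))  # biased upward
```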
