no code implementations • 11 Mar 2024 • Nikita Tsoy, Anna Mihalkova, Teodora Todorova, Nikola Konstantinov
In this paper, we study the question of when and how a server could design an FL protocol that is provably beneficial for all participants.
1 code implementation • 20 Dec 2022 • Florian E. Dorner, Momchil Peychev, Nikola Konstantinov, Naman Goel, Elliott Ash, Martin Vechev
While existing research has started to address this gap, current methods are based on hardcoded word replacements, resulting in specifications with limited expressivity or ones that fail to fully align with human intuition (e.g., in cases of asymmetric counterfactuals).
1 code implementation • 24 Jun 2022 • Dimitar I. Dimitrov, Mislav Balunović, Nikola Konstantinov, Martin Vechev
On the popular FEMNIST dataset, we demonstrate that we successfully recover on average >45% of a client's images from realistic FedAvg updates computed over 10 local epochs of 10 batches of 5 images each, compared to <10% for the baseline.
1 code implementation • 22 Jun 2021 • Eugenia Iofinova, Nikola Konstantinov, Christoph H. Lampert
In this work, we address the problem of fair learning from unreliable training data in the robust multisource setting, where the available training data comes from multiple sources, a fraction of which might not be representative of the true data distribution.
no code implementations • 11 Feb 2021 • Nikola Konstantinov, Christoph H. Lampert
Addressing fairness concerns about machine learning models is a crucial step towards their long-term adoption in real-world automated systems.
no code implementations • 11 Feb 2021 • Nikola Konstantinov, Christoph H. Lampert
Given the abundance of ranking applications in recent years, addressing fairness concerns around automated ranking systems is necessary to increase trust among end users.
no code implementations • ICML 2020 • Nikola Konstantinov, Elias Frantar, Dan Alistarh, Christoph H. Lampert
We study the problem of learning from multiple untrusted data sources, a scenario of increasing practical relevance given the recent emergence of crowdsourcing and collaborative learning paradigms.
2 code implementations • 29 Jan 2019 • Nikola Konstantinov, Christoph Lampert
Modern machine learning methods often require more data for training than a single expert can provide.
no code implementations • NeurIPS 2018 • Dan Alistarh, Torsten Hoefler, Mikael Johansson, Sarit Khirirat, Nikola Konstantinov, Cédric Renggli
Distributed training of massive machine learning models, in particular deep neural networks, via Stochastic Gradient Descent (SGD) is becoming commonplace.
no code implementations • 23 Mar 2018 • Dan Alistarh, Christopher De Sa, Nikola Konstantinov
Stochastic Gradient Descent (SGD) is a fundamental algorithm in machine learning, representing the optimization backbone for training several classic models, from regression to neural networks.
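For reference, the plain SGD update the paper builds on can be sketched in a few lines (a minimal illustration on a hypothetical least-squares problem, not the paper's setup; the data and learning rate are made up):

```python
import random

# Minimal SGD sketch: fit y = w * x by least squares on synthetic data.
# Per-sample gradient of (w*x - y)^2 with respect to w is 2*x*(w*x - y).
random.seed(0)
data = [(x, 3.0 * x) for x in [random.uniform(-1, 1) for _ in range(200)]]

w, lr = 0.0, 0.1
for epoch in range(20):
    random.shuffle(data)          # fresh sample order each pass
    for x, y in data:
        grad = 2 * x * (w * x - y)  # stochastic gradient from one sample
        w -= lr * grad              # SGD update step

# w converges to the true slope 3.0
print(round(w, 3))
```

Each step uses a single sample's gradient rather than the full-batch gradient, which is what makes the method "stochastic" and cheap per iteration.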