no code implementations • 7 Feb 2024 • Marlon Tobaben, Gauri Pradhan, Yuan He, Joonas Jälkö, Antti Honkela
We apply a state-of-the-art membership inference attack (MIA) to systematically test the practical privacy vulnerability of fine-tuning large image classification models. We focus on understanding the properties of data sets and samples that make them vulnerable to membership inference.
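The excerpt does not spell out the attack's mechanics, but the simplest membership inference attacks threshold a per-sample statistic such as the training loss (members are typically fit better than non-members). Below is a minimal loss-threshold MIA on synthetic loss values; the distributions and names are illustrative, not the state-of-the-art attack the paper applies:

```python
import numpy as np

def loss_threshold_mia(member_losses, nonmember_losses, threshold):
    """Minimal loss-threshold membership inference: predict 'member'
    whenever a sample's loss falls below the threshold."""
    tpr = (member_losses < threshold).mean()     # true positive rate
    fpr = (nonmember_losses < threshold).mean()  # false positive rate
    return tpr, fpr

# Hypothetical losses: members have slightly lower loss on average.
rng = np.random.default_rng(0)
member_losses = rng.gamma(shape=2.0, scale=0.5, size=1000)
nonmember_losses = rng.gamma(shape=2.0, scale=0.7, size=1000)
for thr in (0.5, 1.0, 1.5):
    tpr, fpr = loss_threshold_mia(member_losses, nonmember_losses, thr)
    print(f"threshold={thr:.1f}  TPR={tpr:.2f}  FPR={fpr:.2f}")
```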
no code implementations • 6 Feb 2024 • Ossi Räisä, Joonas Jälkö, Antti Honkela
The remaining subsampling-induced variance decreases with larger batch sizes, so large batches reduce the effective total gradient variance.
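A quick simulation makes the claim concrete. The sketch below holds the noise multiplier fixed (in practice it would be recalibrated to the privacy budget) and estimates the total variance of a DP-SGD-style gradient for several batch sizes; all constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10_000, 10
per_example_grads = rng.normal(0.0, 1.0, size=(n, d))  # stand-in clipped gradients
sigma, clip = 1.0, 1.0  # illustrative noise multiplier and clipping norm

def noisy_batch_grad(batch_size):
    """One DP-SGD-style estimate: batch mean plus Gaussian noise whose
    std sigma*clip is scaled down by 1/batch_size after averaging."""
    idx = rng.choice(n, size=batch_size, replace=False)
    mean_grad = per_example_grads[idx].mean(axis=0)
    noise = rng.normal(0.0, sigma * clip / batch_size, size=d)
    return mean_grad + noise

full_grad = per_example_grads.mean(axis=0)
for B in (32, 128, 512, 2048):
    estimates = np.stack([noisy_batch_grad(B) for _ in range(500)])
    total_var = ((estimates - full_grad) ** 2).sum(axis=1).mean()
    print(f"batch={B:5d}  total gradient variance ≈ {total_var:.4f}")
```

Both components shrink with the batch size $B$: the subsampling variance of the mean scales as $1/B$ and the injected noise variance as $1/B^2$.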
1 code implementation • 9 Aug 2023 • Lukas Prediger, Joonas Jälkö, Antti Honkela, Samuel Kaski
Consider a setting where multiple parties holding sensitive data aim to collaboratively learn population-level statistics, but pooling the sensitive data sets is not possible.

no code implementations • 28 Oct 2022 • Joonas Jälkö, Lukas Prediger, Antti Honkela, Samuel Kaski
Using this as prior knowledge, we establish a link between the gradients of the variational parameters and propose a simple yet efficient fix that yields a less noisy gradient estimator, which we call $\textit{aligned}$ gradients.
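As I read it (the excerpt does not give the construction), the link comes from the reparameterization trick: with $w = \mu + \sigma \odot \epsilon$, the gradient with respect to $\sigma$ equals the gradient with respect to $\mu$ rescaled elementwise by $\epsilon$, so the two per-example gradients are deterministically aligned and need not be clipped and perturbed separately. A hypothetical numpy sketch of this identity for logistic regression (not necessarily the paper's exact construction):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
mu, log_sigma = np.zeros(d), np.zeros(d)

def per_example_grads(x, y, eps):
    """Reparameterized per-example gradients for a Gaussian variational
    posterior q(w) = N(mu, diag(sigma^2)) on logistic regression.
    Key identity: d/d sigma = (d/d w) * eps elementwise, so the
    sigma-gradient is a rescaled copy of the mu-gradient ('aligned')."""
    sigma = np.exp(log_sigma)
    w = mu + sigma * eps                  # reparameterization trick
    p = 1.0 / (1.0 + np.exp(-x @ w))
    grad_w = (p - y) * x                  # d(-log lik)/dw for one example
    grad_mu = grad_w                      # dw/dmu = 1
    grad_sigma = grad_w * eps             # dw/dsigma = eps (alignment)
    return grad_mu, grad_sigma

x, y = rng.normal(size=d), 1.0
eps = rng.normal(size=d)
g_mu, g_sigma = per_example_grads(x, y, eps)
# Since g_sigma = g_mu * eps, clipping and noising g_mu once suffices;
# the sigma-gradient can then be reconstructed from the noisy g_mu.
print(np.allclose(g_sigma, g_mu * eps))  # True
```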
2 code implementations • 28 May 2022 • Ossi Räisä, Joonas Jälkö, Samuel Kaski, Antti Honkela
For example, confidence intervals become too narrow, which we demonstrate with a simple experiment.
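A hedged reconstruction of such an experiment: generate synthetic data from a model fitted with extra (DP-style) noise, then compute a naive confidence interval as if the synthetic sample were the real one. The noise constants below are illustrative, not the paper's setup; coverage falls well below the nominal level because the naive interval ignores both the estimation noise and the DP noise baked into the generator:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 0.0, 200, 2000
dp_noise_std = 0.2   # stand-in for noise a DP mechanism adds to the fitted mean

covered = 0
for _ in range(reps):
    real = rng.normal(theta, 1.0, size=n)
    # Generative model fitted on real data, released with DP-style noise:
    theta_hat = real.mean() + rng.normal(0.0, dp_noise_std)
    synthetic = rng.normal(theta_hat, 1.0, size=n)
    # Naive 95% CI computed as if the synthetic data were the real sample:
    m, se = synthetic.mean(), synthetic.std(ddof=1) / np.sqrt(n)
    covered += (m - 1.96 * se <= theta <= m + 1.96 * se)

print(f"naive CI coverage: {covered / reps:.2%}  (nominal 95%)")
```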
no code implementations • 27 Oct 2021 • Tejas Kulkarni, Joonas Jälkö, Samuel Kaski, Antti Honkela
In recent years, local differential privacy (LDP) has emerged as the technique of choice for privacy-preserving data collection in scenarios where the aggregator is not trusted.
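The canonical LDP primitive is randomized response: each user reports their true bit only with probability $e^{\varepsilon}/(1+e^{\varepsilon})$, which satisfies $\varepsilon$-LDP, and the aggregator debiases the noisy reports. A minimal sketch for a binary attribute (parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 1.0
p = np.exp(eps) / (1.0 + np.exp(eps))  # probability of reporting the true bit

true_bits = rng.binomial(1, 0.3, size=100_000)  # sensitive bits, mean 0.3
keep = rng.random(true_bits.shape) < p
reports = np.where(keep, true_bits, 1 - true_bits)  # randomized response

# Debias the aggregate: E[report] = mu*(2p - 1) + (1 - p)
mu_hat = (reports.mean() - (1 - p)) / (2 * p - 1)
print(f"estimated proportion: {mu_hat:.3f} (true 0.300)")
```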
no code implementations • 1 Nov 2020 • Tejas Kulkarni, Joonas Jälkö, Antti Koskela, Samuel Kaski, Antti Honkela
Generalized linear models (GLMs) such as logistic regression are among the most widely used tools in a data analyst's repertoire and are often applied to sensitive datasets.
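As one hedged illustration of DP for GLMs (not necessarily the mechanism this paper proposes), output perturbation in the style of Chaudhuri et al. trains a regularized logistic regression and adds noise calibrated to the sensitivity of the minimizer. The Gaussian-noise variant below targets $(\varepsilon,\delta)$-DP and assumes rows of `X` have L2 norm at most 1:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def dp_logreg_output_perturbation(X, y, eps, delta, lam=1.0, rng=None):
    """(eps, delta)-DP logistic regression via output perturbation:
    fit L2-regularized logistic regression, then add Gaussian noise
    calibrated to the minimizer's L2 sensitivity 2 / (n * lam).
    Assumes each row of X has L2 norm <= 1 and eps <= 1."""
    rng = rng or np.random.default_rng()
    n = X.shape[0]
    # sklearn minimizes 0.5*||w||^2 + C * sum(loss); C = 1/(lam*n) gives
    # average loss plus (lam/2)*||w||^2.
    clf = LogisticRegression(C=1.0 / (lam * n), fit_intercept=False).fit(X, y)
    w = clf.coef_.ravel()
    sensitivity = 2.0 / (n * lam)
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return w + rng.normal(0.0, sigma, size=w.shape)

# Hypothetical usage with normalized features:
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # enforce ||x||_2 <= 1
y = (X @ np.ones(5) + 0.1 * rng.normal(size=1000) > 0).astype(int)
w_priv = dp_logreg_output_perturbation(X, y, eps=1.0, delta=1e-5, rng=rng)
```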
no code implementations • 19 Oct 2020 • Razane Tajeddine, Joonas Jälkö, Samuel Kaski, Antti Honkela
We modify a secure multiparty computation (MPC) framework to combine MPC with differential privacy (DP), so that a probabilistic generative model can be learned under DP on such vertically partitioned data.
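The usual building block is additive secret sharing: each value is split into shares that are individually uniform, sums can be computed share-wise, and DP noise is added before reconstruction. The toy sketch below illustrates the idea for two parties' inputs; a real protocol (and the paper's framework) is far richer and would generate the noise jointly inside MPC rather than at one server:

```python
import numpy as np

rng = np.random.default_rng(0)
Q = 2**61 - 1   # modulus for additive secret sharing
SCALE = 10**6   # fixed-point encoding of reals as integers

def encode(x): return int(round(x * SCALE)) % Q
def decode(v): return (v - Q if v > Q // 2 else v) / SCALE

def share(v, rng):
    """Split integer v into two additive shares mod Q; each share
    alone is uniformly random and reveals nothing about v."""
    r = int(rng.integers(0, Q))
    return r, (v - r) % Q

# Two parties' private inputs (e.g. partial statistics of their columns):
a, b = 3.5, -1.25
a1, a2 = share(encode(a), rng)   # party A shares its input
b1, b2 = share(encode(b), rng)   # party B shares its input

# Each server sums its shares locally; one also adds encoded DP noise.
noise = rng.laplace(0.0, 1.0)    # Laplace noise for DP on the released sum
s1 = (a1 + b1 + encode(noise)) % Q
s2 = (a2 + b2) % Q

print(decode((s1 + s2) % Q))     # DP sum; individual inputs stay hidden
```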
1 code implementation • 12 Jun 2020 • Antti Koskela, Joonas Jälkö, Lukas Prediger, Antti Honkela
We carry out an error analysis of the method in terms of moment bounds of the privacy loss distribution which leads to rigorous lower and upper bounds for the true $(\varepsilon,\delta)$-values.
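For the Gaussian mechanism the privacy loss distribution is itself Gaussian, so $\delta(\varepsilon)$ has a closed form (Balle & Wang, 2018) that is handy for sanity-checking numerical accountants of this kind; this is a reference computation, not the paper's accountant:

```python
import numpy as np
from scipy.stats import norm

def gaussian_mech_delta(eps, sigma, sensitivity=1.0):
    """Exact delta(eps) for the Gaussian mechanism (Balle & Wang, 2018):
    comparing N(0, sigma^2) to N(sensitivity, sigma^2) outputs."""
    a = sensitivity / (2 * sigma)
    b = eps * sigma / sensitivity
    return norm.cdf(a - b) - np.exp(eps) * norm.cdf(-a - b)

for sigma in (0.5, 1.0, 2.0):
    print(f"sigma={sigma}: delta(eps=1) = {gaussian_mech_delta(1.0, sigma):.3e}")
```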
2 code implementations • 10 Dec 2019 • Joonas Jälkö, Eemil Lagerspetz, Jari Haukka, Sasu Tarkoma, Antti Honkela, Samuel Kaski
Differential privacy allows quantifying privacy loss resulting from accessing sensitive personal data.
1 code implementation • 7 Jun 2019 • Antti Koskela, Joonas Jälkö, Antti Honkela
The privacy loss of DP algorithms is commonly reported using $(\varepsilon,\delta)$-DP.
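For reference, a randomized mechanism $\mathcal{M}$ is $(\varepsilon,\delta)$-DP if, for all neighbouring data sets $D, D'$ and all measurable output sets $S$,

$$\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta.$$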
1 code implementation • NeurIPS 2019 • Mikko A. Heikkilä, Joonas Jälkö, Onur Dikmen, Antti Honkela
Recent developments in differentially private (DP) machine learning and DP Bayesian learning have enabled learning under strong privacy guarantees for the training data subjects.
2 code implementations • 27 Oct 2016 • Joonas Jälkö, Onur Dikmen, Antti Honkela
It is built on top of doubly stochastic variational inference, a recent advance which provides a variational solution to a large class of models.
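"Doubly stochastic" refers to two sources of randomness in the ELBO gradient: minibatch subsampling of the data and Monte Carlo sampling from the variational posterior via the reparameterization trick. A minimal numpy sketch for Bayesian logistic regression with a mean-field Gaussian posterior (all hyperparameters illustrative, and without the DP machinery this line of work adds on top):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, batch = 1000, 3, 50
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = (rng.random(n) < 1 / (1 + np.exp(-X @ w_true))).astype(float)

mu, log_s = np.zeros(d), np.zeros(d)  # mean-field Gaussian q(w)
lr = 0.05

for step in range(5000):
    idx = rng.choice(n, batch, replace=False)    # stochasticity 1: minibatch
    eps = rng.normal(size=d)                     # stochasticity 2: MC sample
    s = np.exp(log_s)
    w = mu + s * eps                             # reparameterization trick
    p = 1 / (1 + np.exp(-X[idx] @ w))
    g_w = (n / batch) * X[idx].T @ (p - y[idx])  # grad of -log lik (rescaled)
    g_w += w                                     # grad of -log N(w; 0, I) prior
    g_mu = g_w                                   # chain rule: dw/dmu = 1
    g_log_s = g_w * eps * s - 1.0                # dw/dlog_s = s*eps; entropy -1
    mu -= lr * g_mu / n
    log_s -= lr * g_log_s / n

print("posterior mean:", mu.round(2), " true:", w_true)  # roughly recovers w_true
```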