1 code implementation • 7 Dec 2023 • Karima Makhlouf, Heber H. Arcolezi, Sami Zhioua, Ghassen Ben Brahim, Catuscia Palamidessi
Automated decision systems are increasingly used to make consequential decisions in people's lives.
no code implementations • 7 Nov 2023 • Rūta Binkytė, Carlos Pinzón, Szilvia Lestyán, Kangsoo Jung, Héber H. Arcolezi, Catuscia Palamidessi
It is based on the application of controlled noise at the interface between the server that stores and processes the data, and the data consumers.
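The architecture described, where calibrated noise is applied at the server's interface before answers reach data consumers, follows the standard central model of differential privacy. A minimal sketch of that idea using the Laplace mechanism on a counting query (the function name and query setting are illustrative, not the paper's actual construction):

```python
import numpy as np

def laplace_count(data, predicate, epsilon, rng=None):
    """Answer a counting query with epsilon-DP Laplace noise.

    A count has sensitivity 1: adding or removing one record changes
    it by at most 1, so Laplace noise of scale 1/epsilon suffices.
    """
    if rng is None:
        rng = np.random.default_rng()
    true_count = sum(1 for record in data if predicate(record))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)
```

The server keeps the raw data; only the noised answer crosses the interface to the consumer.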
no code implementations • 2 Oct 2023 • Filippo Galli, Catuscia Palamidessi, Tommaso Cucinotta
Training differentially private machine learning models requires constraining an individual's contribution to the optimization process.
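A common way to constrain each individual's contribution to the optimization, in the style of DP-SGD, is per-example gradient clipping followed by Gaussian noise. The sketch below is a generic illustration of that technique under assumed array shapes, not the paper's specific algorithm:

```python
import numpy as np

def clip_and_noise(per_example_grads, clip_norm, noise_multiplier, rng):
    """Bound each example's contribution, then add Gaussian noise.

    per_example_grads: array of shape (batch, dim), one gradient per
    example. Each gradient is rescaled to L2 norm <= clip_norm, the
    clipped gradients are summed, Gaussian noise with standard
    deviation noise_multiplier * clip_norm is added, and the result
    is averaged over the batch.
    """
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale
    noise = rng.normal(0.0, noise_multiplier * clip_norm,
                       size=per_example_grads.shape[1])
    return (clipped.sum(axis=0) + noise) / len(per_example_grads)
```

Clipping caps the sensitivity of the batch gradient to any one example, which is what makes the added noise yield a differential privacy guarantee.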
no code implementations • 1 Sep 2023 • Filippo Galli, Kangsoo Jung, Sayan Biswas, Catuscia Palamidessi, Tommaso Cucinotta
FL was proposed as a stepping-stone towards privacy-preserving machine learning, but it has been shown to be vulnerable to issues such as leakage of private information, lack of personalization of the model, and the possibility that the trained model is fairer to some groups than to others.
1 code implementation • 15 Jul 2023 • Héber H. Arcolezi, Selene Cerna, Catuscia Palamidessi
This paper investigates the utility gain of using Iterative Bayesian Update (IBU) for private discrete distribution estimation using data obfuscated with Locally Differentially Private (LDP) mechanisms.
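IBU itself is a standard expectation-maximization estimator: starting from a uniform guess, it repeatedly computes the posterior over true values given each obfuscated report and re-estimates the distribution. A minimal sketch (the interface is illustrative):

```python
import numpy as np

def ibu(obs_freq, channel, iters=100):
    """Iterative Bayesian Update for discrete distribution estimation.

    obs_freq: empirical distribution of obfuscated reports, shape (m,).
    channel:  channel[x, y] = P(report y | true value x), shape (k, m),
              i.e. the LDP mechanism's transition matrix.
    Returns an estimate of the true distribution over the k values.
    """
    theta = np.full(channel.shape[0], 1.0 / channel.shape[0])
    for _ in range(iters):
        joint = theta[:, None] * channel       # P(x) * P(y|x)
        posterior = joint / joint.sum(axis=0)  # P(x|y), one column per y
        theta = posterior @ obs_freq           # re-estimate P(x)
    return theta
```

With an invertible channel (e.g. k-ary randomized response) and exact report frequencies, the iteration converges to the true distribution.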
no code implementations • 6 Jul 2023 • Ruta Binkyte, Daniele Gorla, Catuscia Palamidessi
We consider the problem of unfair discrimination between two groups and propose a pre-processing method to achieve fairness.
1 code implementation • 25 Apr 2023 • Héber H. Arcolezi, Karima Makhlouf, Catuscia Palamidessi
However, as the collection of multiple sensitive information becomes more prevalent across various industries, collecting a single sensitive attribute under LDP may not be sufficient.
no code implementations • 16 Sep 2022 • Guilherme Alves, Fabien Bernier, Miguel Couceiro, Karima Makhlouf, Catuscia Palamidessi, Sami Zhioua
Fairness requirements imposed while learning models create several types of tension between the different notions of fairness and other desirable properties, such as privacy and classification accuracy.
no code implementations • 14 Jun 2022 • Rūta Binkytė-Sadauskienė, Karima Makhlouf, Carlos Pinzón, Sami Zhioua, Catuscia Palamidessi
Existing causal approaches to fairness in the literature do not address this problem and assume that the causal model is available.
no code implementations • 7 Jun 2022 • Filippo Galli, Sayan Biswas, Kangsoo Jung, Tommaso Cucinotta, Catuscia Palamidessi
To protect the privacy of the clients while allowing personalized model training that enhances the fairness and utility of the system, we propose a method that provides group privacy guarantees by exploiting key properties of $d$-privacy, enabling personalized models under the framework of FL.
1 code implementation • CVPR 2022 • Ganesh Del Grosso, Hamid Jalalzai, Georg Pichler, Catuscia Palamidessi, Pablo Piantanida
The use of personal data for training machine learning systems comes with a privacy threat, and measuring the level of privacy of a model is one of the major challenges in machine learning today.
no code implementations • 11 Mar 2022 • Karima Makhlouf, Sami Zhioua, Catuscia Palamidessi
This paper is a compilation of the major identifiability results which are of particular relevance for machine learning fairness.
no code implementations • 14 Jul 2021 • Carlos Pinzón, Catuscia Palamidessi, Pablo Piantanida, Frank Valencia
One of the main concerns about fairness in machine learning (ML) is that, in order to achieve it, one may have to trade off some accuracy.
1 code implementation • NeurIPS 2021 • Federica Granese, Marco Romanelli, Daniele Gorla, Catuscia Palamidessi, Pablo Piantanida
Deep neural networks (DNNs) have been shown to perform very well on large-scale object recognition problems, leading to widespread use in real-world applications, including situations where DNNs are deployed as "black boxes".
no code implementations • 9 May 2021 • Ganesh Del Grosso, Georg Pichler, Catuscia Palamidessi, Pablo Piantanida
We present a novel formalism, generalizing membership and attribute inference attack setups previously studied in the literature and connecting them to memorization and generalization.
no code implementations • 22 Dec 2020 • Mário S. Alvim, Konstantinos Chatzikokolakis, Yusuke Kawamoto, Catuscia Palamidessi
A common goal in the areas of secure information flow and privacy is to build effective defenses against unwanted leakage of information.
no code implementations • 19 Oct 2020 • Karima Makhlouf, Sami Zhioua, Catuscia Palamidessi
Addressing the problem of fairness is crucial to safely use machine learning algorithms to support decisions with a critical impact on people's lives, such as job hiring, child maltreatment assessment, disease diagnosis, and loan granting.
no code implementations • 30 Jun 2020 • Karima Makhlouf, Sami Zhioua, Catuscia Palamidessi
Fairness emerged as an important requirement to guarantee that Machine Learning (ML) predictive systems do not discriminate against specific individuals or entire sub-populations, in particular, minorities.
1 code implementation • 9 May 2020 • Marco Romanelli, Konstantinos Chatzikokolakis, Catuscia Palamidessi, Pablo Piantanida
A feature of our approach is that it does not require estimating the conditional probabilities, and it is suitable for a large class of ML algorithms.
no code implementations • 27 Jan 2020 • Catuscia Palamidessi, Marco Romanelli
Many algorithms for feature selection in the literature have adopted the Shannon-entropy-based mutual information.
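The quantity these feature-selection algorithms adopt, Shannon-entropy-based mutual information, can be estimated from paired samples of a feature and a label by plug-in counting. A minimal sketch of that baseline (illustrative, not this paper's proposal, which argues for a different entropy notion):

```python
import numpy as np
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of Shannon mutual information I(X;Y), in bits,
    from paired samples: sum over (x, y) of p(x,y) log2(p(x,y)/(p(x)p(y))).
    """
    n = len(xs)
    pxy = Counter(zip(xs, ys))  # joint counts
    px = Counter(xs)            # marginal counts of X
    py = Counter(ys)            # marginal counts of Y
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log2( p(x,y) / (p(x) * p(y)) ), with counts over n
        mi += (c / n) * np.log2(c * n / (px[x] * py[y]))
    return mi
```

A feature-selection loop would rank candidate features by this score against the target variable.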
1 code implementation • 1 Apr 2019 • Marco Romanelli, Konstantinos Chatzikokolakis, Catuscia Palamidessi
The idea is to set up two nets: the generator, which tries to produce an optimal obfuscation mechanism to protect the data, and the classifier, which tries to de-obfuscate the data.
1 code implementation • 4 Feb 2019 • Giovanni Cherubin, Konstantinos Chatzikokolakis, Catuscia Palamidessi
The state-of-the-art method for estimating these leakage measures is the frequentist paradigm, which approximates the system's internals by looking at the frequencies of its inputs and outputs.
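The frequentist paradigm mentioned here can be illustrated on one standard leakage measure, posterior Bayes vulnerability: from observed (secret, output) pairs, the adversary guesses, for each output, the secret most frequently seen with it. A minimal sketch (the function name is illustrative):

```python
from collections import Counter

def bayes_vulnerability(samples):
    """Frequentist estimate of posterior Bayes vulnerability.

    samples: list of (secret, output) pairs observed from the system.
    The estimate is sum over outputs y of max_x count(x, y) / n, i.e.
    the success rate of guessing the most frequent secret for each y.
    """
    n = len(samples)
    counts = Counter(samples)  # (secret, output) -> count
    best = {}                  # output -> count of its most frequent secret
    for (x, y), c in counts.items():
        best[y] = max(best.get(y, 0), c)
    return sum(best.values()) / n
```

This approximates the system's channel matrix purely by input/output frequencies, which is exactly the approach the abstract describes as state of the art.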
Cryptography and Security
2 code implementations • 10 Dec 2012 • Miguel E. Andrés, Nicolás E. Bordenabe, Konstantinos Chatzikokolakis, Catuscia Palamidessi
The growing popularity of location-based systems, allowing unknown/untrusted servers to easily collect huge amounts of information regarding users' location, has recently started raising serious privacy concerns.
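Geo-indistinguishability, the notion this line of work introduces, is typically achieved by perturbing a location with planar Laplace noise, whose density is proportional to exp(-epsilon * distance). A convenient fact is that the radius marginal of this distribution is Gamma(shape=2, scale=1/epsilon), which gives a simple sampler (a sketch with illustrative naming, equivalent in distribution to the paper's inverse-CDF construction):

```python
import numpy as np

def planar_laplace(location, epsilon, rng=None):
    """Perturb a 2D location with planar Laplace noise.

    The 2D density proportional to exp(-epsilon * r) factors into a
    uniform angle and a Gamma(2, 1/epsilon)-distributed radius, so we
    sample those two independently and offset the true location.
    """
    if rng is None:
        rng = np.random.default_rng()
    theta = rng.uniform(0.0, 2.0 * np.pi)
    r = rng.gamma(shape=2.0, scale=1.0 / epsilon)
    return (location[0] + r * np.cos(theta),
            location[1] + r * np.sin(theta))
```

The expected displacement is 2/epsilon, so epsilon directly trades location accuracy against privacy.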
Cryptography and Security C.2.0; K.4.1