no code implementations • ACL (NL4XAI, INLG) 2020 • Eirini Ntoutsi
In this talk, we will focus on the fairness aspect.
no code implementations • 23 Apr 2024 • Alaa Elobaid, Nathan Ramoly, Lara Younes, Symeon Papadopoulos, Eirini Ntoutsi, Ioannis Kompatsiaris
Biometric Verification (BV) systems often exhibit accuracy disparities across different demographic groups, leading to biases in BV applications.
1 code implementation • 3 Apr 2024 • Vasilis Gkolemis, Christos Diou, Eirini Ntoutsi, Theodore Dalamagas, Bernd Bischl, Julia Herbinger, Giuseppe Casalicchio
Effector implements well-established global effect methods, assesses the heterogeneity of each method and, based on that, provides regional effects.
1 code implementation • 16 Feb 2024 • Siamak Ghodsi, Seyed Amjad Seyedi, Eirini Ntoutsi
Conventional fair graph clustering methods face two primary challenges: i) they prioritize balanced clusters at the expense of cluster cohesion by imposing rigid constraints; ii) existing methods for both individual- and group-level fairness in graph partitioning mostly rely on eigen decompositions and thus generally lack interpretability.
1 code implementation • 20 Oct 2023 • Arjun Roy, Christos Koutlis, Symeon Papadopoulos, Eirini Ntoutsi
The generalization capacity of Multi-Task Learning (MTL) becomes limited when unrelated tasks negatively impact each other by updating shared parameters with conflicting gradients, resulting in negative transfer and a reduction in MTL accuracy compared to single-task learning (STL).
no code implementations • 21 Sep 2023 • Vasilis Gkolemis, Anargiros Tzerefos, Theodore Dalamagas, Eirini Ntoutsi, Christos Diou
Consequently, RAMs fit one component per subregion of each feature instead of one component per feature.
1 code implementation • 20 Sep 2023 • Vasilis Gkolemis, Theodore Dalamagas, Eirini Ntoutsi, Christos Diou
RHALE quantifies the heterogeneity by considering the standard deviation of the local effects and automatically determines an optimal variable-size bin-splitting.
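The core idea of binning local effects and measuring their spread can be sketched as follows. This is a simplified, fixed-bin illustration (RHALE itself determines an optimal variable-size bin-splitting automatically); all function and variable names here are our own, not the paper's.

```python
import numpy as np

def ale_with_heterogeneity(model_fn, X, feature, bins=10):
    """Sketch of a fixed-bin ALE-style curve whose per-bin standard
    deviation serves as a heterogeneity estimate, in the spirit of RHALE.
    `model_fn` maps an (n, d) array to (n,) predictions."""
    x = X[:, feature]
    edges = np.linspace(x.min(), x.max(), bins + 1)
    effects, heterogeneity = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (x >= lo) & (x <= hi)
        if not mask.any():
            effects.append(0.0)
            heterogeneity.append(0.0)
            continue
        X_lo, X_hi = X[mask].copy(), X[mask].copy()
        X_lo[:, feature], X_hi[:, feature] = lo, hi
        # local effect: finite-difference slope of the model within the bin
        local = (model_fn(X_hi) - model_fn(X_lo)) / (hi - lo)
        effects.append(local.mean())       # bin-level (average) effect
        heterogeneity.append(local.std())  # spread of the local effects
    return np.array(effects), np.array(heterogeneity)
```

For a purely additive model the local effects within each bin coincide, so the heterogeneity is zero; larger standard deviations signal interactions that a single global curve would hide.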
1 code implementation • 2 Jun 2023 • Siamak Ghodsi, Eirini Ntoutsi
This paper presents MASC, a data augmentation approach that leverages affinity clustering to balance the representation of protected and non-protected groups in a target dataset by borrowing instances with the same protected attribute from similar datasets, i.e., datasets assigned to the same cluster as the target.
no code implementations • 12 Feb 2023 • Arjun Roy, Jan Horstmann, Eirini Ntoutsi
AI-driven decision-making can lead to discrimination against certain individuals or social groups based on protected characteristics/attributes such as race, gender, or age.
no code implementations • 11 Feb 2023 • Yi Cai, Arthur Zimek, Eirini Ntoutsi, Gerhard Wunder
The importance of neighborhood construction in local explanation methods has already been highlighted in the literature.
no code implementations • 9 Jan 2023 • Tai Le Quy, Gunnar Friege, Eirini Ntoutsi
These models are believed to be practical tools for analyzing students' data and ensuring fairness in EDS.
1 code implementation • 17 Sep 2022 • Vasileios Iosifidis, Symeon Papadopoulos, Bodo Rosenhahn, Eirini Ntoutsi
Class imbalance poses a major challenge for machine learning, as most supervised learning models exhibit bias towards the majority class and under-perform on the minority class.
1 code implementation • 7 Sep 2022 • Yi Cai, Arthur Zimek, Gerhard Wunder, Eirini Ntoutsi
Hate speech detection is a common downstream application of natural language processing (NLP) in the real world.
no code implementations • 22 Aug 2022 • Tai Le Quy, Thi Huyen Nguyen, Gunnar Friege, Eirini Ntoutsi
Predicting students' academic performance is one of the key tasks of educational data mining (EDM).
no code implementations • 23 Jun 2022 • Siamak Ghodsi, Harith Alani, Eirini Ntoutsi
With the ever-growing involvement of data-driven, AI-based decision-making technologies in our daily social lives, the fairness of these systems is becoming a crucial concern.
no code implementations • 20 Jun 2022 • Tai Le Quy, Gunnar Friege, Eirini Ntoutsi
Group work is a prevalent activity in educational settings, where students are often divided into topic-specific groups based on their preferences.
no code implementations • 16 Jun 2022 • Arjun Roy, Eirini Ntoutsi
We introduce L2T-FMT, a teacher-student network trained collaboratively: the student learns to solve the fair MTL problem, while the teacher instructs the student to learn from either accuracy or fairness, depending on which is harder to learn for each task.
no code implementations • 17 Apr 2022 • Xuejiao Tang, Tai Le Quy, Eirini Ntoutsi, Kea Turner, Vasile Palade, Israat Haque, Peng Xu, Chris Brown, Wenbin Zhang
Given a question-image input, the Visual Commonsense Reasoning (VCR) model can predict an answer with the corresponding rationale, which requires real-world inference ability.
no code implementations • 4 Jan 2022 • Vasileios Iosifidis, Arjun Roy, Eirini Ntoutsi
Data-driven AI systems can lead to discrimination on the basis of protected attributes like gender or race.
1 code implementation • 1 Oct 2021 • Tai Le Quy, Arjun Roy, Vasileios Iosifidis, Wenbin Zhang, Eirini Ntoutsi
For a deeper understanding of bias in the datasets, we investigate the relationships between attributes using exploratory analysis.
1 code implementation • 30 Sep 2021 • Yi Cai, Arthur Zimek, Eirini Ntoutsi
The importance of the neighborhood for training a local surrogate model to approximate the local decision boundary of a black-box classifier has already been highlighted in the literature.
no code implementations • 13 Aug 2021 • Vasileios Iosifidis, Wenbin Zhang, Eirini Ntoutsi
Data-driven learning algorithms are employed in many online applications where data become available over time, such as network monitoring, stock price prediction, and job applications.
1 code implementation • 6 Aug 2021 • Xuejiao Tang, Wenbin Zhang, Yi Yu, Kea Turner, Tyler Derr, Mengyu Wang, Eirini Ntoutsi
While recognition-level image understanding has achieved remarkable advances, reliable visual scene understanding requires not only recognition-level but also cognition-level image understanding, which calls for exploiting multi-source information as well as learning different levels of understanding and extensive commonsense knowledge.
no code implementations • 16 Jul 2021 • Simone Fabbrizzi, Symeon Papadopoulos, Eirini Ntoutsi, Ioannis Kompatsiaris
Hence, this work aims to: i) describe the biases that might manifest in visual datasets; ii) review the literature on methods for bias discovery and quantification in visual datasets; iii) discuss existing attempts to collect bias-aware visual datasets.
no code implementations • 27 Apr 2021 • Arjun Roy, Vasileios Iosifidis, Eirini Ntoutsi
Recent studies showed that datasets used in fairness-aware machine learning for multiple protected attributes (referred to as multi-discrimination hereafter) are often imbalanced.
no code implementations • 25 Apr 2021 • Tai Le Quy, Arjun Roy, Gunnar Friege, Eirini Ntoutsi
To this end, we introduce the fair-capacitated clustering problem that partitions the data into clusters of similar instances while ensuring cluster fairness and balancing cluster cardinalities.
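The cardinality-balancing side of this problem can be illustrated with a greedy, capacity-constrained assignment step of the kind that could follow a k-means/k-median centroid update. This is our own simplified sketch: it handles only the capacity constraint, not the cluster-fairness constraint over protected groups, and it is not the paper's algorithm.

```python
import numpy as np

def capacitated_assign(X, centroids, capacity):
    """Greedy capacity-constrained assignment: points are processed in
    order of distance to their nearest centroid; once a cluster is full,
    overflow points fall back to the next-closest cluster with room."""
    n, k = len(X), len(centroids)
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)  # (n, k)
    order = np.argsort(d.min(axis=1))   # most confident assignments first
    counts = np.zeros(k, dtype=int)
    labels = np.full(n, -1)
    for i in order:
        for c in np.argsort(d[i]):      # closest centroid that still has room
            if counts[c] < capacity:
                labels[i] = c
                counts[c] += 1
                break
    return labels
```

With capacity set to roughly n/k, the resulting clusters have balanced cardinalities at the cost of some points being assigned to a non-nearest centroid.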
1 code implementation • 12 Apr 2021 • Philip Naumann, Eirini Ntoutsi
Recently, methods have been proposed that also consider the order in which actions are applied, leading to the so-called sequential counterfactual generation problem.
no code implementations • 30 Mar 2021 • Tai Le Quy, Sergej Zerr, Eirini Ntoutsi, Wolfgang Nejdl
An important step towards improving the performance of these energy disaggregation methods is to improve the quality of the data sets.
no code implementations • 29 Dec 2020 • Amir Abolfazli, Eirini Ntoutsi
Online class imbalance learning deals with data streams that are affected by both concept drift and class imbalance.
1 code implementation • 5 Apr 2020 • Tongxin Hu, Vasileios Iosifidis, Wentong Liao, Hang Zhang, Michael Ying Yang, Eirini Ntoutsi, Bodo Rosenhahn
In this paper, we propose FairNN, a neural network that performs joint feature representation and classification for fairness-aware learning.
no code implementations • 3 Feb 2020 • Vasileios Iosifidis, Besnik Fetahu, Eirini Ntoutsi
In the post-processing step, we tackle the problem of class overlapping by shifting the decision boundary in the direction of fairness.
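A common realization of decision-boundary shifting in post-processing is per-group threshold adjustment. The sketch below picks one score threshold per group so that positive-prediction rates are equalized (demographic parity); it is our own illustration of the general idea, not the paper's exact procedure, and all names are hypothetical.

```python
import numpy as np

def shift_thresholds(scores, groups, target_rate):
    """For each group, choose the score threshold whose exceedance
    fraction equals `target_rate`, equalizing positive rates across
    groups (demographic parity) without retraining the model."""
    thresholds = {}
    for g in np.unique(groups):
        s = scores[groups == g]
        # the (1 - target_rate)-quantile leaves a `target_rate` fraction above it
        thresholds[g] = np.quantile(s, 1.0 - target_rate)
    return thresholds

def predict(scores, groups, thresholds):
    """Apply the group-specific thresholds to produce binary decisions."""
    return np.array([s > thresholds[g] for s, g in zip(scores, groups)])
```

Moving each group's threshold is exactly a shift of the decision boundary in score space; here the shift direction is chosen to equalize rates, trading a little accuracy for fairness.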
1 code implementation • 17 Sep 2019 • Vasileios Iosifidis, Eirini Ntoutsi
The widespread use of ML-based decision making in domains with high societal impact, such as recidivism prediction, job hiring, and loan credit, has raised many concerns regarding potential discrimination.
no code implementations • 16 Jul 2019 • Vasileios Iosifidis, Thi Ngoc Han Tran, Eirini Ntoutsi
The widespread usage of automated data-driven decision support systems has raised many concerns regarding the accountability and fairness of the employed models in the absence of human supervision.
1 code implementation • 16 Jul 2019 • Wenbin Zhang, Eirini Ntoutsi
However, there is a growing concern about the accountability and fairness of the employed models, since the available historic data are often intrinsically discriminatory, i.e., among those receiving a positive classification, the proportion of members sharing one or more sensitive attributes is higher than their proportion in the population as a whole, which leads to a lack of fairness in decision support systems.
no code implementations • 3 Sep 2015 • Max Zimmermann, Eirini Ntoutsi, Myra Spiliopoulou
In the experiments, we evaluate the classifier performance over time by varying: (a) the class distribution of the opinionated stream, while assuming that the set of the words in the vocabulary is fixed but their polarities may change with the class distribution; and (b) the number of unknown words arriving at each moment, while the class polarity may also change.