Search Results for author: Ewoenam Tokpo

Found 2 papers, 1 paper with code

Measuring Fairness with Biased Rulers: A Comparative Study on Bias Metrics for Pre-trained Language Models

no code implementations • NAACL 2022 • Pieter Delobelle, Ewoenam Tokpo, Toon Calders, Bettina Berendt

We survey the literature on fairness metrics for pre-trained language models and experimentally evaluate their compatibility, covering both biases in language models and biases in their downstream tasks.

Attribute • Fairness

How Far Can It Go?: On Intrinsic Gender Bias Mitigation for Text Classification

1 code implementation • 30 Jan 2023 • Ewoenam Tokpo, Pieter Delobelle, Bettina Berendt, Toon Calders

Considering that the end use of these language models is downstream tasks such as text classification, it is important to understand how, and to what extent, these intrinsic bias mitigation strategies actually translate to fairness in downstream tasks.

Fairness • text-classification • +1
