1 code implementation • 17 Jan 2022 • Alex Bäuerle, Aybuke Gul Turker, Ken Burke, Osman Aka, Timo Ropinski, Christina Greer, Mani Varadarajan
With our approach, different models and datasets with large label spaces can be systematically and visually analyzed and compared, enabling informed fairness assessments that address problematic bias.
no code implementations • 5 Mar 2021 • Osman Aka, Ken Burke, Alex Bäuerle, Christina Greer, Margaret Mitchell
By treating a classification model's predictions for a given image as a set of labels analogous to a bag of words, we rank the biases that a model has learned with respect to different identity labels.
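The bag-of-labels idea above can be illustrated with a small sketch. The snippet below is not the paper's implementation; it assumes a normalized pointwise-mutual-information style association score over per-image predicted label sets, and the function name `rank_label_biases` and its parameters are hypothetical.

```python
"""Minimal sketch (assumption, not the authors' code): rank labels by how
strongly they co-occur with a given identity label when each image's
predicted labels are treated as a bag of words."""
from collections import Counter
from itertools import combinations
import math


def rank_label_biases(predicted_label_sets, identity_label, top_k=10):
    """Return the top_k labels most associated with `identity_label`.

    predicted_label_sets: iterable of label sets, one per image.
    Association is scored with normalized PMI, an assumed stand-in metric.
    """
    n_images = 0
    label_counts = Counter()   # images containing each label
    pair_counts = Counter()    # images containing each unordered label pair
    for labels in predicted_label_sets:
        n_images += 1
        labels = set(labels)
        label_counts.update(labels)
        for a, b in combinations(sorted(labels), 2):
            pair_counts[(a, b)] += 1

    scores = {}
    for label in label_counts:
        if label == identity_label:
            continue
        pair = tuple(sorted((identity_label, label)))
        joint = pair_counts.get(pair, 0) / n_images
        if joint == 0:
            continue  # never co-occurs; skip
        p_a = label_counts[identity_label] / n_images
        p_b = label_counts[label] / n_images
        pmi = math.log(joint / (p_a * p_b))
        # Normalize to [-1, 1]; guard the degenerate case joint == 1.
        npmi = 1.0 if joint == 1.0 else pmi / -math.log(joint)
        scores[label] = npmi
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]


# Example usage with toy predictions (purely illustrative data):
predictions = [{"person", "guitar"}, {"person", "kitchen"}, {"dog", "kitchen"}]
print(rank_label_biases(predictions, "person"))
```

Ranking labels this way surfaces which concepts a model ties most strongly to an identity label, without requiring ground-truth annotations for those concepts.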