1 code implementation • 7 Jul 2023 • Shiva Omrani Sabbaghi, Robert Wolfe, Aylin Caliskan
Adapting the projection-based approach to embedding association tests that quantify bias, we find that language models exhibit the strongest biased attitudes against signals of gender identity, social class, and sexual orientation in language.
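The projection-based association test mentioned above can be illustrated with a minimal sketch: project a target word's embedding onto an axis spanned by two attribute poles, so the sign and magnitude of the projection indicate relative association. The toy random vectors and the helper names below are assumptions for illustration only; the paper's actual measurements use pretrained language-model embeddings and validated attribute sets.

```python
import numpy as np

# Toy random vectors stand in for pretrained embeddings (an assumption here;
# real bias measurements use embeddings from actual language models).
rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=50) for w in
         ["he", "she", "pleasant", "unpleasant", "engineer", "nurse"]}

def unit(v):
    """Normalize a vector to unit length."""
    return v / np.linalg.norm(v)

def projection_bias(target, pole_a, pole_b, vecs):
    """Project a target word onto the axis between two attribute poles.

    Returns a scalar in [-1, 1]: positive values indicate association
    with pole_a, negative values with pole_b.
    """
    axis = unit(vecs[pole_a] - vecs[pole_b])
    return float(np.dot(unit(vecs[target]), axis))

score = projection_bias("engineer", "he", "she", vocab)
print(score)
```

Swapping the two poles flips the sign of the score, which makes the measure symmetric by construction.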
1 code implementation • 3 Jun 2022 • Shiva Omrani Sabbaghi, Aylin Caliskan
We demonstrate that word embeddings learn the association between a noun and its grammatical gender in grammatically gendered languages, which can skew social gender bias measurements.