no code implementations • 4 Apr 2024 • Tianwei Chen, Yusuke Hirota, Mayu Otani, Noa Garcia, Yuta Nakashima
We investigate the impact of deep generative models on potential social biases in upcoming computer vision models.
1 code implementation • CVPR 2023 • Yusuke Hirota, Yuta Nakashima, Noa Garcia
From this observation, we hypothesize that two types of gender bias affect image captioning models: 1) bias that exploits context to predict gender, and 2) bias toward generating certain (often stereotypical) words based on gender.
1 code implementation • CVPR 2023 • Noa Garcia, Yusuke Hirota, Yankun Wu, Yuta Nakashima
The increasing tendency to collect large and uncurated datasets to train vision-and-language models has raised concerns about fair representations.
no code implementations • 17 May 2022 • Yusuke Hirota, Yuta Nakashima, Noa Garcia
Our findings suggest that there are dangers associated with using VQA datasets without considering and addressing their potentially harmful stereotypes.
1 code implementation • CVPR 2022 • Yusuke Hirota, Yuta Nakashima, Noa Garcia
We study societal bias amplification in image captioning.
no code implementations • 25 Jun 2021 • Yusuke Hirota, Noa Garcia, Mayu Otani, Chenhui Chu, Yuta Nakashima, Ittetsu Taniguchi, Takao Onoye
This paper examines the effectiveness of textual representations for image understanding in the specific context of VQA.