Search Results for author: Yanhong Bai

Found 3 papers, 0 papers with code

FairMonitor: A Dual-framework for Detecting Stereotypes and Biases in Large Language Models

no code implementations • 6 May 2024 • Yanhong Bai, Jiabao Zhao, Jinxin Shi, Zhentao Xie, Xingjiao Wu, Liang He

Detecting stereotypes and biases in Large Language Models (LLMs) is crucial for enhancing fairness and reducing adverse impacts on individuals or groups when these models are applied.

Fairness

FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models

no code implementations • 21 Aug 2023 • Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, Liang He

Detecting stereotypes and biases in Large Language Models (LLMs) can enhance fairness and reduce adverse impacts on individuals or groups when these models are applied.

Fairness
