no code implementations • 5 Feb 2024 • Kien Do, Dung Nguyen, Hung Le, Thao Le, Dang Nguyen, Haripriya Harikumar, Truyen Tran, Santu Rana, Svetha Venkatesh
To overcome this challenge, we propose to approximate 1/p(u|b) using a biased classifier trained with "bias amplification" losses.
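The idea of using a classifier's predicted probability to approximate an inverse density such as 1/p(u|b) can be sketched as follows. This is a minimal, hypothetical illustration of inverse-propensity-style weighting, not the paper's actual method; the function name, the clamping constant, and the loss details are assumptions.

```python
# Hypothetical sketch: a classifier trained to predict u from b
# outputs an estimate of p(u|b); its reciprocal can serve as an
# importance weight. Names and constants are illustrative only.

def inverse_propensity_weight(prob_u_given_b: float, eps: float = 1e-6) -> float:
    # Clamp the estimated probability to avoid dividing by values
    # near zero, which would produce exploding weights.
    p = max(prob_u_given_b, eps)
    return 1.0 / p

# Example: an estimated p(u|b) of 0.25 yields a weight of 4.0,
# upweighting examples where u is unlikely given b.
print(inverse_propensity_weight(0.25))
```

The paper's "bias amplification" losses would additionally shape how the classifier is trained so that its probabilities reflect the bias being corrected; that training procedure is not reproduced here.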
no code implementations • 2 Feb 2024 • Thao Le, Tim Miller, Liz Sonenberg, Ronal Singh
Prior research on AI-assisted human decision-making has explored several different explainable AI (XAI) approaches.
no code implementations • 10 Mar 2023 • Thao Le, Tim Miller, Ronal Singh, Liz Sonenberg
In this paper, we show that counterfactual explanations of confidence scores help study participants to better understand and better trust a machine learning model's prediction.
no code implementations • 6 Jun 2022 • Thao Le, Tim Miller, Ronal Singh, Liz Sonenberg
In this paper, we show through human-subject studies that counterfactual explanations of confidence scores help users better understand and better trust an AI model's prediction.