no code implementations • 27 Feb 2024 • Jamie Duell, Monika Seisenberger, Hsuan Fu, Xiuyi Fan
In this context, we introduce Quantified Uncertainty Counterfactual Explanations (QUCE), a method designed to mitigate out-of-distribution traversal by minimizing path uncertainty.
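The abstract does not give QUCE's formulation, but the general idea of gradient-based counterfactual search it builds on can be sketched. The snippet below is a minimal generic example (not the QUCE algorithm): starting from an input `x`, it takes gradient steps that push a linear classifier's prediction toward the opposite class, while a distance penalty `lam` keeps the counterfactual close to the original point. All names and parameter values here are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def counterfactual(x, w, b, target=1.0, lam=0.1, lr=0.2, steps=500):
    """Generic counterfactual search (illustrative, not QUCE):
    minimize (p - target)^2 + lam * ||xc - x||^2 by gradient descent,
    where p = sigmoid(w . xc + b)."""
    xc = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ xc + b)
        # gradient of (p - target)^2 + lam * ||xc - x||^2 w.r.t. xc
        grad = 2 * (p - target) * p * (1 - p) * w + 2 * lam * (xc - x)
        xc -= lr * grad
    return xc

w = np.array([1.0, -2.0])          # toy linear classifier weights
b = 0.0
x = np.array([-1.0, 1.0])          # originally classified as class 0
xc = counterfactual(x, w, b)
print(sigmoid(w @ x + b), sigmoid(w @ xc + b))
```

QUCE additionally penalizes uncertainty along the traversal path so the counterfactual stays in-distribution; the plain distance penalty above does not capture that.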
no code implementations • 23 May 2023 • Gaoxia Zhu, Xiuyi Fan, Chenyu Hou, Tianlong Zhong, Peter Seow, Annabel Chen Shen-Hsing, Preman Rajalingam, Low Kin Yew, Tan Lay Poh
Weekly surveys were conducted on collaborative interdisciplinary problem-solving, physical and cognitive engagement, and individual reflections on ChatGPT use.
no code implementations • 6 Apr 2023 • Li Rong Wang, Hsuan Fu, Xiuyi Fan
We study the impacts of business cycles on machine learning (ML) predictions.
no code implementations • 1 Sep 2022 • Xiuyi Fan
Like rules in classical structured argumentation frameworks, p-rules form deduction systems.
no code implementations • 23 Mar 2022 • Veera Raghava Reddy Kovvuri, Siyuan Liu, Monika Seisenberger, Berndt Müller, Xiuyi Fan
Feature attribution XAI algorithms enable their users to gain insight into the underlying patterns of large datasets through their feature importance calculation.
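As a concrete illustration of the feature importance calculation mentioned above, here is a minimal sketch of one common feature attribution scheme, permutation importance (an assumption for illustration; the paper does not specify which attribution algorithms it studies): a feature's importance is measured as the drop in model accuracy when that feature's column is randomly shuffled.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data whose label depends only on feature 0.
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)

def model(X):
    # Stand-in "trained" model: thresholds feature 0, ignores the rest.
    return (X[:, 0] > 0).astype(int)

def permutation_importance(model, X, y, rng):
    """Importance of feature j = accuracy drop after shuffling column j."""
    base = (model(X) == y).mean()
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        scores.append(base - (model(Xp) == y).mean())
    return np.array(scores)

imp = permutation_importance(model, X, y, rng)
print(imp)  # feature 0 should dominate; the others are unused
```

Because the model ignores features 1 and 2, shuffling them leaves predictions unchanged and their importance is exactly zero, while shuffling feature 0 roughly halves accuracy.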
Explainable Artificial Intelligence (XAI) • Feature Importance
no code implementations • 18 Jan 2022 • Xiuyi Fan, Francesca Toni
It is widely acknowledged that transparency of automated decision making is crucial for the deployability of intelligent systems, and explaining why some decisions are "good" and some are not is one way to achieve this transparency.
no code implementations • 29 Dec 2021 • Jamie Duell, Monika Seisenberger, Gert Aarts, ShangMing Zhou, Xiuyi Fan
In other words, although feature attribution methods highlight each feature's contribution to a given prediction, they do not study the relation between features and the consequences of intervening on them.
Explainable Artificial Intelligence (XAI) +1
no code implementations • 2 Nov 2021 • Siyuan Liu, Mehmet Orcun Yalcin, Hsuan Fu, Xiuyi Fan
Since the onset of the COVID-19 pandemic, many countries across the world have implemented various non-pharmaceutical interventions (NPIs) to contain the spread of the virus, as well as economic support policies (ESPs) to save their economies.
no code implementations • 20 May 2021 • Orcun Yalcin, Xiuyi Fan, Siyuan Liu
In this work, we develop a method to quantitatively evaluate the correctness of XAI algorithms by creating datasets with known explanation ground truth.
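The paper's specific dataset construction is not given in this excerpt, but the ground-truth idea can be sketched: generate data whose true feature importances are fixed by design, then check whether an attribution method recovers the known ranking. The example below is an illustrative assumption, using absolute least-squares coefficients as a stand-in attribution method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Data with known ground-truth importances: y depends strongly on
# feature 0, not at all on feature 1, and weakly on feature 2.
X = rng.normal(size=(1000, 3))
true_coefs = np.array([3.0, 0.0, 1.0])
y = X @ true_coefs + 0.1 * rng.normal(size=1000)

# "Explain" a least-squares model by its absolute coefficients
# (a stand-in for any feature attribution algorithm under test).
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
attribution = np.abs(coefs)

# Correctness check: the recovered ranking should match the ground truth.
print(attribution)
```

A correct XAI algorithm should rank feature 0 above feature 2, and feature 2 above feature 1, matching `true_coefs`.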
no code implementations • 5 May 2020 • Xiuyi Fan, Siyuan Liu, Jiarong Chen, Thomas C. Henderson
We identify the top one and top two measures that were most effective for the countries and regions studied during this period.
2 code implementations • 5 May 2020 • Xiuyi Fan, Siyuan Liu, Thomas C. Henderson
The overarching goal of Explainable AI is to develop systems that not only exhibit intelligent behaviours, but are also able to explain their rationale and reveal insights.