no code implementations • 20 May 2023 • Jindi Zhang, Luning Wang, Dan Su, Yongxiang Huang, Caleb Chen Cao, Lei Chen
Machine learning systems can produce results biased against certain demographic groups, a phenomenon known as the fairness problem.
no code implementations • 5 May 2023 • Guoyang Liu, Jindi Zhang, Antoni B. Chan, Janet H. Hsiao
We examined whether embedding human attention knowledge into saliency-based explainable AI (XAI) methods for computer vision models could enhance their plausibility and faithfulness.
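As a rough illustration of what a saliency-based XAI method computes (this is a generic vanilla-gradient sketch on a toy logistic model, not the paper's method; the weights and features are hypothetical):

```python
import math

# Hypothetical toy "model": a logistic unit over 3 input features,
# standing in for a vision model's forward pass.
W = [0.8, -0.5, 0.1]

def predict(x):
    z = sum(w * xi for w, xi in zip(W, x))
    return 1.0 / (1.0 + math.exp(-z))

def saliency(x):
    # Vanilla-gradient saliency: the absolute gradient of the output
    # with respect to each input feature. For sigmoid(w.x) the
    # gradient is p * (1 - p) * w_i.
    p = predict(x)
    return [abs(p * (1.0 - p) * w) for w in W]

x = [1.0, 2.0, 0.5]
s = saliency(x)
# The feature with the largest-magnitude weight gets the highest score.
print(s.index(max(s)))  # -> 0
```

In real saliency methods the gradient is taken through the full network (e.g. via automatic differentiation) and the per-pixel magnitudes are rendered as a heatmap over the input image.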
no code implementations • 30 Mar 2022 • Jindi Zhang
In this thesis, we study methods for detecting attacks on onboard sensors and the linkage between attacked deep learning models and driving safety in autonomous vehicles.
no code implementations • Findings (ACL) 2022 • Dan Su, Xiaoguang Li, Jindi Zhang, Lifeng Shang, Xin Jiang, Qun Liu, Pascale Fung
Long-form question answering (LFQA) aims to generate a paragraph-length answer for a given question.
Ranked #1 on Question Answering on KILT: ELI5
no code implementations • 20 Oct 2021 • Jindi Zhang, Yifan Zhang, Kejie Lu, JianPing Wang, Kui Wu, Xiaohua Jia, Bin Liu
In our study, we use real data sets and a state-of-the-art machine learning model to evaluate our attack detection scheme, and the results confirm the effectiveness of our detection method.
1 code implementation • 6 Aug 2021 • Jindi Zhang, Yang Lou, JianPing Wang, Kui Wu, Kejie Lu, Xiaohua Jia
In this paper, we investigate the impact of two primary types of adversarial attacks, perturbation attacks and patch attacks, on the driving safety of vision-based autonomous vehicles rather than the detection precision of deep learning models.
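To illustrate the flavor of a perturbation attack (this is a generic FGSM-style sketch on a toy logistic classifier, not the attack pipeline studied in the paper; the weights, inputs, and epsilon are hypothetical):

```python
import math

# Hypothetical toy classifier standing in for a perception model.
W = [0.6, -0.4, 0.2]

def predict(x):
    z = sum(w * xi for w, xi in zip(W, x))
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, y, eps):
    # FGSM: step each feature by eps in the direction of the sign of
    # the loss gradient. For a logistic unit with cross-entropy loss,
    # dL/dx_i = (p - y) * w_i.
    p = predict(x)
    grad = [(p - y) * w for w in W]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

x = [1.0, 0.5, -0.2]
y = 1  # assumed true label
x_adv = fgsm_perturb(x, y, eps=0.3)
# A small, bounded perturbation lowers confidence in the true class.
print(predict(x_adv) < predict(x))  # -> True
```

Patch attacks differ in that they confine the perturbation to a small contiguous region of the image rather than spreading a bounded change over every pixel.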