Search Results for author: Stephen Obadinma

Found 6 papers, 2 papers with code

The FAIIR Tool: A Conversational AI Agent Assistant for Youth Mental Health Service Provision

no code implementations28 May 2024 Stephen Obadinma, Alia Lachana, Maia Norman, Jocelyn Rankin, Joanna Yu, Xiaodan Zhu, Darren Mastropaolo, Deval Pandya, Roxana Sultan, Elham Dolatabadi

Healthcare systems and mental health agencies worldwide face a growing demand for youth mental health services alongside the simultaneous challenge of limited resources.

Calibration Attacks: A Comprehensive Study of Adversarial Attacks on Model Confidence

no code implementations5 Jan 2024 Stephen Obadinma, Xiaodan Zhu, Hongyu Guo

In this work, we highlight and perform a comprehensive study of calibration attacks, a form of adversarial attack that aims to leave victim models heavily miscalibrated without altering their predicted labels, thereby endangering the trustworthiness of the models and any follow-up decision making based on their confidence.
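The abstract does not give the attack algorithm, but the idea of lowering a model's confidence while keeping its predicted label fixed can be illustrated with a toy sketch. The function name and the logit-space perturbation are illustrative assumptions; real calibration attacks perturb the model's *input* with gradient-based steps rather than the logits directly.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def underconfidence_attack(logits, target_conf=0.5, step=0.1, iters=100):
    """Toy underconfidence attack (illustrative, not the paper's method):
    shrink all logits toward their mean so the predicted probability drops
    toward target_conf, while the argmax label is provably unchanged
    (the shrink is an order-preserving affine map)."""
    label = max(range(len(logits)), key=lambda i: logits[i])
    z = list(logits)
    for _ in range(iters):
        if softmax(z)[label] <= target_conf:
            break
        mean = sum(z) / len(z)
        z = [v - step * (v - mean) for v in z]
    # the predicted label must survive the attack
    assert max(range(len(z)), key=lambda i: z[i]) == label
    return z, softmax(z)[label]
```

A model whose confidence has been driven down this way still predicts the same class, but its probability no longer reflects its true accuracy, which is exactly the miscalibration the paper studies.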


Effectiveness of Data Augmentation for Parameter Efficient Tuning with Limited Data

no code implementations5 Mar 2023 Stephen Obadinma, Hongyu Guo, Xiaodan Zhu

In this paper, we examine the effectiveness of several popular task-agnostic data augmentation techniques, i.e., EDA, Back Translation, and Mixup, when used with two general parameter-efficient tuning methods, P-tuning v2 and LoRA, under data scarcity.
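Of the three augmentation techniques named, Mixup is the most mechanical: it forms convex combinations of pairs of examples and their labels, with the mixing weight drawn from a Beta distribution. A minimal sketch (function name and list-based vectors are illustrative; the paper applies these techniques in a fine-tuning pipeline, not shown here):

```python
import random

def mixup(x1, y1, x2, y2, alpha=0.4):
    """Mixup: blend two training examples and their one-hot labels with a
    single weight lam ~ Beta(alpha, alpha), yielding a soft-labeled example."""
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y
```

Because the mixed label is a convex combination of one-hot vectors, it remains a valid probability distribution, which is what lets the augmented example be trained on with the usual cross-entropy loss.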


How Curriculum Learning Impacts Model Calibration

no code implementations29 Sep 2021 Stephen Obadinma, Xiaodan Zhu, Hongyu Guo

Our studies suggest the following: most of the time, curriculum learning has a negligible effect on calibration, but in certain cases, in the context of limited training time and noisy data, curriculum learning can substantially reduce calibration error in a manner that cannot be explained by dynamically sampling the dataset.
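The calibration error referred to here is conventionally measured with Expected Calibration Error (ECE): predictions are binned by confidence, and the gap between each bin's average confidence and its accuracy is averaged, weighted by bin size. A minimal sketch (the function name and equal-width 10-bin scheme are standard choices, assumed rather than taken from the paper):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence into n_bins equal-width bins and
    return the bin-size-weighted average of |accuracy - mean confidence|."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0 into last bin
        bins[idx].append((conf, ok))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(1 for _, ok in b if ok) / len(b)
        ece += (len(b) / n) * abs(accuracy - avg_conf)
    return ece
```

A perfectly calibrated model, whose 90%-confident predictions are right 90% of the time, scores an ECE of zero; the paper's claim is that curriculum learning can shrink this quantity in low-resource, noisy-data regimes.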
