no code implementations • 1 Apr 2024 • Parker Seegmiller, Joseph Gatto, Omar Sharif, Madhusudan Basak, Sarah Masud Preum
Large language models (LLMs) have been shown to be proficient in correctly answering questions in the context of online discourse.
no code implementations • 5 Mar 2024 • Joseph Gatto, Parker Seegmiller, Omar Sharif, Sarah M. Preum
Our approach leverages the intuition that Mad Libs, which are categorically masked documents used as part of a popular game, can be generated and solved by LLMs to produce data for DocEAE.
no code implementations • 5 Mar 2024 • Joseph Gatto, Madhusudan Basak, Yash Srivastava, Philip Bohlman, Sarah M. Preum
We detail (i) a method of claim identification -- the task of identifying if a post title contains a claim and (ii) an opinion mining-driven evaluation framework for stance detection using LLMs.
1 code implementation • 30 Oct 2023 • Joseph Gatto, Omar Sharif, Sarah Masud Preum
Chain-of-Thought (CoT) prompting has recently been shown to improve performance on stance detection tasks -- alleviating some of these issues.
no code implementations • 18 Sep 2023 • Joseph Gatto, Sarah M. Preum
Our error analysis shows that AMR-infused language models perform better on complex texts and generally show less predictive variance in the presence of changing complexity.
no code implementations • 12 Sep 2023 • Joseph Gatto, Omar Sharif, Parker Seegmiller, Philip Bohlman, Sarah Masud Preum
Additionally, we show generative LLMs significantly outperform existing encoder-based STS models when characterizing the semantic similarity between two texts with complex semantic relationships dependent on world knowledge.
no code implementations • 16 Mar 2023 • Parker Seegmiller, Joseph Gatto, Madhusudan Basak, Diane Cook, Hassan Ghasemzadeh, John Stankovic, Sarah Preum
Medications often impose temporal constraints on everyday patient activity.
no code implementations • 27 Jan 2023 • William Romano, Omar Sharif, Madhusudan Basak, Joseph Gatto, Sarah Preum
Lastly, we find that a large language model (ChatGPT) outperforms unsupervised keyphrase extraction models and evaluate its efficacy on this task.
no code implementations • 17 Jan 2023 • Parker Seegmiller, Joseph Gatto, Abdullah Mamun, Hassan Ghasemzadeh, Diane Cook, John Stankovic, Sarah Masud Preum
It also addresses the challenges of accurately predicting RHBs central to MTCs (e.g., medication intake).
no code implementations • 6 Oct 2022 • Joseph Gatto, Parker Seegmiller, Garrett Johnston, Sarah M. Preum
The processing of entities in natural language is essential to many medical NLP systems.
no code implementations • 22 Sep 2022 • Joseph Gatto, Madhusudan Basak, Sarah M. Preum
This is the task of health conflict detection (HCD).
no code implementations • 27 Nov 2019 • Joseph Gatto, Ravi Lanka, Yumi Iwashita, Adrian Stoica
Have you ever wondered how your feature space is impacting the prediction of a specific sample in your dataset?