no code implementations • 1 Apr 2024 • Parker Seegmiller, Joseph Gatto, Omar Sharif, Madhusudan Basak, Sarah Masud Preum
Large language models (LLMs) have been shown to be proficient in correctly answering questions in the context of online discourse.
1 code implementation • 30 Oct 2023 • Joseph Gatto, Omar Sharif, Sarah Masud Preum
Chain-of-Thought (CoT) prompting has recently been shown to improve performance on stance detection tasks -- alleviating some of these issues.
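A minimal sketch of what a CoT-style stance detection prompt can look like. The template, label set, and wording below are illustrative assumptions, not the authors' exact prompt:

```python
# Hypothetical CoT prompt builder for stance detection.
# Template and label set (FAVOR/AGAINST/NONE) are assumptions for
# illustration, not the prompt used in the paper.
def build_cot_stance_prompt(text: str, target: str) -> str:
    """Assemble a zero-shot CoT prompt that asks the model to reason
    step by step before committing to a stance label."""
    return (
        f"Text: {text}\n"
        f"Target: {target}\n"
        "Question: What is the stance of the text toward the target?\n"
        "Options: FAVOR, AGAINST, NONE.\n"
        "Let's think step by step before answering."
    )

prompt = build_cot_stance_prompt(
    "Wind farms are ruining our coastline.", "renewable energy"
)
```

The trailing "Let's think step by step" cue is the standard zero-shot CoT trigger; the prompt would then be sent to an LLM and the final label parsed from its reasoning.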
1 code implementation • 23 Oct 2023 • Parker Seegmiller, Sarah Masud Preum
We adopt a statistical depth for measuring distributions of transformer-based text embeddings, termed transformer-based text embedding (TTE) depth, and introduce its practical use for both modeling and distributional inference in NLP pipelines.
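To illustrate the idea of a statistical depth over embeddings, here is a toy depth that scores each embedding by its mean cosine similarity to the rest of the sample (higher = more central). This simple centrality score is an assumption for illustration only, not the TTE depth defined in the paper:

```python
import numpy as np

# Toy statistical depth over a sample of text embeddings.
# NOTE: this mean-cosine-similarity score is an illustrative stand-in,
# not the TTE depth proposed in the paper.
def toy_depth(embeddings: np.ndarray) -> np.ndarray:
    """Return one depth score per row; higher means more central."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T            # pairwise cosine similarities
    np.fill_diagonal(sims, 0.0)         # exclude self-similarity
    return sims.sum(axis=1) / (len(embeddings) - 1)

rng = np.random.default_rng(0)
embs = rng.normal(size=(50, 8))         # stand-in for sentence embeddings
depths = toy_depth(embs)
most_central = int(np.argmax(depths))   # index of the "deepest" point
```

Depth scores like this give an ordering of texts from most to least representative of a corpus, which is the kind of distributional inference the abstract refers to.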
no code implementations • 12 Sep 2023 • Joseph Gatto, Omar Sharif, Parker Seegmiller, Philip Bohlman, Sarah Masud Preum
Additionally, we show generative LLMs significantly outperform existing encoder-based STS models when characterizing the semantic similarity between two texts with complex semantic relationships dependent on world knowledge.
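For context, the encoder-based STS setup being compared against embeds each text and scores the pair with cosine similarity. In this minimal sketch a bag-of-words vector stands in for a real encoder; it is an illustrative assumption, not one of the models evaluated in the paper:

```python
from collections import Counter
import math

# Toy encoder-style STS baseline: vectorize each text, then score the
# pair with cosine similarity. A bag-of-words Counter stands in for a
# real encoder model (illustrative assumption only).
def bow_vector(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine_sts(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts."""
    va, vb = bow_vector(a), bow_vector(b)
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

score = cosine_sts("the cat sat on the mat", "a cat is on the mat")
```

A surface-level scorer like this cannot use world knowledge, which is one reason pairs with complex semantic relationships favor generative LLMs in the comparison above.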
no code implementations • 17 Jan 2023 • Parker Seegmiller, Joseph Gatto, Abdullah Mamun, Hassan Ghasemzadeh, Diane Cook, John Stankovic, Sarah Masud Preum
It also addresses the challenges of accurately predicting RHBs central to MTCs (e.g., medication intake).