no code implementations • 29 Apr 2024 • Haeun Yu, Pepa Atanasova, Isabelle Augenstein
Our findings further suggest the potential of a synergistic approach combining the diverse findings of instance attribution (IA) and neuron attribution (NA) for a more holistic understanding of an LM's parametric knowledge.
1 code implementation • 20 Oct 2023 • Sagnik Ray Choudhury, Pepa Atanasova, Isabelle Augenstein
Reasoning over spans of tokens from different parts of the input is essential for natural language understanding (NLU) tasks such as fact-checking (FC), machine reading comprehension (MRC) or natural language inference (NLI).
2 code implementations • 4 Jun 2023 • Momchil Hardalov, Pepa Atanasova, Todor Mihaylov, Galia Angelova, Kiril Simov, Petya Osenova, Ves Stoyanov, Ivan Koychev, Preslav Nakov, Dragomir Radev
We run the first systematic evaluation of pre-trained language models for Bulgarian, comparing and contrasting results across the nine tasks in the benchmark.
1 code implementation • 29 May 2023 • Pepa Atanasova, Oana-Maria Camburu, Christina Lioma, Thomas Lukasiewicz, Jakob Grue Simonsen, Isabelle Augenstein
Explanations of neural models aim to reveal a model's decision-making process for its predictions.
no code implementations • 9 Nov 2022 • Pepa Atanasova
A major concern of Machine Learning (ML) models is their opacity.
no code implementations • 5 Apr 2022 • Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, Isabelle Augenstein
To this end, we are the first to study what information FC models consider sufficient by introducing a novel task and advancing it with three main contributions.
no code implementations • 13 Dec 2021 • Shailza Jolly, Pepa Atanasova, Isabelle Augenstein
In addition, we show the applicability of our approach in a completely unsupervised setting.
no code implementations • 25 Sep 2021 • Tamer Elsayed, Preslav Nakov, Alberto Barrón-Cedeño, Maram Hasanain, Reem Suwaileh, Giovanni Da San Martino, Pepa Atanasova
We present an overview of the second edition of the CheckThat! lab at CLEF 2019.
no code implementations • 8 Sep 2021 • Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, Isabelle Augenstein
When such annotations are not available, explanations are often selected as those portions of the input that maximise a downstream task's performance, which corresponds to optimising an explanation's Faithfulness to a given model.
1 code implementation • EMNLP 2020 • Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, Isabelle Augenstein
Recent developments in machine learning have introduced models that approach human performance at the cost of increased architectural complexity.
1 code implementation • EMNLP 2020 • Pepa Atanasova, Dustin Wright, Isabelle Augenstein
However, for inference tasks such as fact checking, these triggers often inadvertently invert the meaning of instances they are inserted in.
1 code implementation • 10 Sep 2020 • Wojciech Ostrowski, Arnav Arora, Pepa Atanasova, Isabelle Augenstein
We: 1) construct a small annotated dataset, PolitiHop, of evidence sentences for claim verification; 2) compare it to existing multi-hop datasets; and 3) study how to transfer knowledge from more extensive in- and out-of-domain resources to PolitiHop.
no code implementations • SEMEVAL 2020 • Marcos Zampieri, Preslav Nakov, Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Hamdy Mubarak, Leon Derczynski, Zeses Pitenis, Çağrı Çöltekin
We present the results and main findings of SemEval-2020 Task 12 on Multilingual Offensive Language Identification in Social Media (OffensEval 2020).
no code implementations • Findings (ACL) 2021 • Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Marcos Zampieri, Preslav Nakov
The widespread use of offensive content in social media has led to an abundance of research in detecting language such as hate speech, cyberbullying, and cyber-aggression.
no code implementations • ACL 2020 • Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, Isabelle Augenstein
Most existing work on automated fact checking is concerned with predicting the veracity of claims based on metadata, social network spread, language used in claims, and, more recently, evidence supporting or denying claims.
no code implementations • 20 Nov 2019 • Luna De Bruyne, Pepa Atanasova, Isabelle Augenstein
Emotion lexica are commonly used resources to combat data poverty in automatic emotion detection.
no code implementations • RANLP 2019 • Slavena Vasileva, Pepa Atanasova, Lluís Màrquez, Alberto Barrón-Cedeño, Preslav Nakov
We propose a multi-task deep-learning approach for estimating the check-worthiness of claims in political debates.
1 code implementation • 4 Aug 2019 • Pepa Atanasova, Preslav Nakov, Lluís Màrquez, Alberto Barrón-Cedeño, Georgi Karadzhov, Tsvetomila Mihaylova, Mitra Mohtarami, James Glass
We study the problem of automatic fact-checking, paying special attention to the impact of contextual and discourse information.
no code implementations • SEMEVAL 2019 • Tsvetomila Mihaylova, Georgi Karadjov, Pepa Atanasova, Ramy Baly, Mitra Mohtarami, Preslav Nakov
For subtask A, all systems improved over the majority-class baseline.
no code implementations • 25 May 2019 • Pepa Atanasova, Georgi Karadzhov, Yasen Kiprov, Preslav Nakov, Fabrizio Sebastiani
While typically a user would expect a single response at any utterance, a system could also return multiple options for the user to select from, based on different system understandings of the user's intent.