1 code implementation • IWCS (ACL) 2021 • Aikaterini-Lida Kalouli, Rebecca Kehlbeck, Rita Sevastjanova, Oliver Deussen, Daniel Keim, Miriam Butt
Research in NLP has mainly focused on factoid questions, with the goal of finding quick and reliable ways of matching a query to an answer.
no code implementations • 14 May 2024 • Kenza Amara, Rita Sevastjanova, Mennatallah El-Assady
The NLP community has begun to take a keen interest in gaining a deeper understanding of text generation, leading to the development of model-agnostic explainable artificial intelligence (xAI) methods tailored to this task.
no code implementations • 12 Mar 2024 • Thilo Spinner, Rebecca Kehlbeck, Rita Sevastjanova, Tobias Stähle, Daniel A. Keim, Oliver Deussen, Mennatallah El-Assady
Large language models (LLMs) are widely deployed in various downstream tasks, e.g., auto-completion, aided writing, or chat-based text generation.
no code implementations • 14 Feb 2024 • Kenza Amara, Rita Sevastjanova, Mennatallah El-Assady
We adopt a model-based evaluation to compare SyntaxShap and its weighted form to state-of-the-art explainability methods adapted to text generation tasks, using diverse metrics including faithfulness, complexity, coherency, and semantic alignment of the explanations to the model.
no code implementations • 17 Oct 2023 • Thilo Spinner, Rebecca Kehlbeck, Rita Sevastjanova, Tobias Stähle, Daniel A. Keim, Oliver Deussen, Andreas Spitz, Mennatallah El-Assady
We quantitatively show the value of exposing the beam search tree and present five detailed analysis scenarios addressing the identified challenges.
no code implementations • COLING 2022 • Aikaterini-Lida Kalouli, Rita Sevastjanova, Christin Beck, Maribel Romero
With the success of contextualized language models, much research explores what these models really learn and in which cases they still fail.
no code implementations • 17 Aug 2022 • Rita Sevastjanova, Eren Cakmak, Shauli Ravfogel, Ryan Cotterell, Mennatallah El-Assady
The simplicity of adapter training and composition comes with new challenges, such as maintaining an overview of adapter properties and effectively comparing the embedding spaces they produce.
no code implementations • 14 Jul 2022 • Rita Sevastjanova, Mennatallah El-Assady
Language models learn and represent language differently from humans: they learn the form and not the meaning.
no code implementations • ACL 2021 • Rita Sevastjanova, Aikaterini-Lida Kalouli, Christin Beck, Hanna Schäfer, Mennatallah El-Assady
Despite the success of contextualized language models on various NLP tasks, it is still unclear what these models really learn.
no code implementations • COLING 2020 • Aikaterini-Lida Kalouli, Rita Sevastjanova, Valeria de Paiva, Richard Crouch, Mennatallah El-Assady
Advances in Natural Language Inference (NLI) have helped us understand what state-of-the-art models really learn and what their generalization power is.
no code implementations • WS 2019 • Aikaterini-Lida Kalouli, Rebecca Kehlbeck, Rita Sevastjanova, Katharina Kaiser, Georg A. Kaiser, Miriam Butt
The study of language change through parallel corpora can be advantageous for the analysis of complex interactions between time, text domain and language.
no code implementations • 29 Jul 2019 • Fabian Sperrle, Rita Sevastjanova, Rebecca Kehlbeck, Mennatallah El-Assady
The results show that experts prefer our system over existing solutions due to the speedup provided by the automatic suggestions and the tight integration between text and graph views.
no code implementations • ACL 2019 • Mennatallah El-Assady, Wolfgang Jentner, Fabian Sperrle, Rita Sevastjanova, Annette Hautli-Janisz, Miriam Butt, Daniel Keim
We present a modular framework for the rapid-prototyping of linguistic, web-based, visual analytics applications.