no code implementations • 15 Nov 2023 • Alexandra Chronopoulou, Jonas Pfeiffer, Joshua Maynez, Xinyi Wang, Sebastian Ruder, Priyanka Agrawal
Parameter-efficient fine-tuning (PEFT) using labeled task data can significantly improve the performance of large language models (LLMs) on the downstream task.
1 code implementation • 26 May 2023 • Yihong Liu, Alexandra Chronopoulou, Hinrich Schütze, Alexander Fraser
Through extensive experiments on different language pairs, covering both similar and distant as well as high- and low-resource languages, we find that our method alleviates the copying problem and thus improves translation performance on low-resource languages.
no code implementations • 22 May 2023 • Proyag Pal, Brian Thompson, Yogesh Virkar, Prashant Mathur, Alexandra Chronopoulou, Marcello Federico
To translate speech for automatic dubbing, machine translation needs to be isochronous, i.e., the translated speech needs to be aligned with the source in terms of speech durations.
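As a toy illustration of the isochrony constraint (not the paper's method), the sketch below checks whether each translated segment's estimated speaking time stays within a tolerance of the source segment's duration; the characters-per-second rate and tolerance are assumed values.

```python
# Toy illustration of the isochrony constraint: each translated segment's
# estimated speech duration should stay close to the source segment's duration.
# The chars-per-second rate and tolerance are assumptions for illustration only.

def is_isochronous(src_durations, tgt_texts, chars_per_second=15.0, tolerance=0.2):
    """Return True if every target segment fits its source duration within a tolerance."""
    for src_dur, tgt in zip(src_durations, tgt_texts):
        est_tgt_dur = len(tgt) / chars_per_second  # crude duration estimate
        if abs(est_tgt_dur - src_dur) > tolerance * src_dur:
            return False
    return True

# Example: a 2.0 s source segment tolerates translations of roughly 1.6-2.4 s.
```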
1 code implementation • 22 May 2023 • Wen Lai, Alexandra Chronopoulou, Alexander Fraser
Despite advances in multilingual neural machine translation (MNMT), we argue that there are still two major challenges in this area: data imbalance and representation degeneration.
1 code implementation • 25 Feb 2023 • Alexandra Chronopoulou, Brian Thompson, Prashant Mathur, Yogesh Virkar, Surafel M. Lakew, Marcello Federico
Automatic dubbing (AD) is the task of translating the original speech in a video into target language speech.
no code implementations • 14 Feb 2023 • Alexandra Chronopoulou, Matthew E. Peters, Alexander Fraser, Jesse Dodge
We also explore weight averaging of adapters trained on the same domain with different hyper-parameters, and show that it preserves the performance of a PLM on new domains while obtaining strong in-domain results.
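A minimal sketch of such uniform weight averaging, assuming each adapter run is saved as a PyTorch state dict with identical keys (the helper name and file paths are hypothetical, not the paper's code):

```python
# Minimal sketch of weight averaging over adapters trained with different
# hyper-parameters (illustrative; names and file paths are assumptions).
import torch

def average_adapters(adapter_state_dicts):
    """Uniformly average a list of adapter state dicts with identical keys."""
    avg = {}
    for key in adapter_state_dicts[0]:
        avg[key] = torch.stack([sd[key].float() for sd in adapter_state_dicts]).mean(dim=0)
    return avg

# Usage: load adapters trained on the same domain, average them, then plug the
# averaged weights back into the frozen pretrained model.
# averaged = average_adapters([torch.load(p) for p in ["run1.pt", "run2.pt", "run3.pt"]])
```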
1 code implementation • 21 Oct 2022 • Wen Lai, Alexandra Chronopoulou, Alexander Fraser
We consider a very challenging scenario: adapting the MNMT model both to a new domain and to a new language pair at the same time.
no code implementations • 30 Sep 2022 • Alexandra Chronopoulou, Dario Stojanovski, Alexander Fraser
Training a new adapter on each language pair or training a single adapter on all language pairs without updating the pretrained model has been proposed as a parameter-efficient alternative.
1 code implementation • NAACL 2022 • Alexandra Chronopoulou, Matthew E. Peters, Jesse Dodge
The remarkable success of large language models has been driven by dense models trained on massive unlabeled, unstructured corpora.
1 code implementation • NAACL 2021 • Alexandra Chronopoulou, Dario Stojanovski, Alexander Fraser
Successful methods for unsupervised neural machine translation (UNMT) employ crosslingual pretraining via self-supervision, often in the form of a masked language modeling or a sequence generation task, which requires the model to align the lexical- and high-level representations of the two languages.
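As a generic illustration of the masked language modeling objective mentioned above (not the paper's implementation), the sketch below randomly replaces tokens with a mask id and keeps the originals as prediction targets:

```python
# Minimal sketch of the masked language modeling objective used for
# cross-lingual pretraining: randomly mask tokens and predict the originals.
import random

def mask_tokens(token_ids, mask_id, mask_prob=0.15, seed=None):
    """Return (masked input, labels); labels are -100 where no prediction is needed."""
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in token_ids:
        if rng.random() < mask_prob:
            masked.append(mask_id)
            labels.append(tok)       # predict the original token here
        else:
            masked.append(tok)
            labels.append(-100)      # ignored by the loss
    return masked, labels
```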
1 code implementation • WMT (EMNLP) 2020 • Alexandra Chronopoulou, Dario Stojanovski, Viktor Hangya, Alexander Fraser
Our core unsupervised neural machine translation (UNMT) system follows the strategy of Chronopoulou et al. (2020): a language generation model is pretrained on monolingual German data and fine-tuned on both German and Upper Sorbian before initializing a UNMT model, which is then trained with online backtranslation.
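The sketch below illustrates one generic online back-translation step, where `translate` and `train_step` are hypothetical callables standing in for the actual model operations; it conveys the idea rather than the shared-task system's code.

```python
# Schematic sketch of one online back-translation step for UNMT.
# `translate` and `train_step` are placeholder callables, not the system's code.

def online_backtranslation_step(translate, train_step, mono_src_batch, mono_tgt_batch):
    """Translate monolingual batches, then train on the synthetic pairs in the
    reverse direction, returning the summed losses."""
    # Source-side monolingual data: make a synthetic target, train tgt -> src.
    synth_tgt = translate(mono_src_batch, direction="src2tgt")
    loss_tgt2src = train_step(inputs=synth_tgt, targets=mono_src_batch, direction="tgt2src")

    # Target-side monolingual data: make a synthetic source, train src -> tgt.
    synth_src = translate(mono_tgt_batch, direction="tgt2src")
    loss_src2tgt = train_step(inputs=synth_src, targets=mono_tgt_batch, direction="src2tgt")

    return loss_tgt2src + loss_src2tgt
```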
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Giorgos Vernikos, Katerina Margatina, Alexandra Chronopoulou, Ion Androutsopoulos
To address this issue, we introduce a new regularization technique, AFTER: domain Adversarial Fine-Tuning as an Effective Regularizer.
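Domain-adversarial fine-tuning is commonly realized with a gradient reversal layer feeding a small domain classifier; the sketch below shows that generic pattern in PyTorch as an assumption-laden illustration, not the exact AFTER implementation.

```python
# Generic sketch of domain-adversarial fine-tuning via gradient reversal;
# illustrates the idea, not the exact AFTER implementation.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the encoder.
        return -ctx.lamb * grad_output, None

class DomainDiscriminator(nn.Module):
    def __init__(self, hidden_size, num_domains, lamb=1.0):
        super().__init__()
        self.lamb = lamb
        self.classifier = nn.Linear(hidden_size, num_domains)

    def forward(self, sentence_repr):
        reversed_repr = GradReverse.apply(sentence_repr, self.lamb)
        return self.classifier(reversed_repr)

# During fine-tuning: total_loss = task_loss + domain_ce_loss; gradient reversal
# pushes the encoder toward domain-invariant representations.
```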
1 code implementation • EMNLP 2020 • Alexandra Chronopoulou, Dario Stojanovski, Alexander Fraser
Using a language model (LM) pretrained on two languages with large monolingual data in order to initialize an unsupervised neural machine translation (UNMT) system yields state-of-the-art results.
1 code implementation • NAACL 2019 • Alexandra Chronopoulou, Christos Baziotis, Alexandros Potamianos
A growing number of state-of-the-art transfer learning methods employ language models pretrained on large generic corpora.
1 code implementation • WS 2018 • Alexandra Chronopoulou, Aikaterini Margatina, Christos Baziotis, Alexandros Potamianos
In this paper we present our approach to tackle the Implicit Emotion Shared Task (IEST) organized as part of WASSA 2018 at EMNLP 2018.
3 code implementations • SEMEVAL 2018 • Christos Baziotis, Nikos Athanasiou, Alexandra Chronopoulou, Athanasia Kolovou, Georgios Paraskevopoulos, Nikolaos Ellinas, Shrikanth Narayanan, Alexandros Potamianos
In this paper we present the deep-learning models we submitted to the SemEval-2018 Task 1 competition: "Affect in Tweets".