no code implementations • 23 Aug 2021 • Leonard Dahlmann, Tomer Lancewicki
We successfully optimize a Query-Title Relevance (QTR) classifier for deployment via a compact model, which we name BERT Bidirectional Long Short-Term Memory (BertBiLSTM).
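The paper does not release code; below is a minimal sketch of what such a compact student model might look like in PyTorch, assuming a single-layer BiLSTM classifier trained with standard knowledge distillation against BERT teacher logits. All module names, dimensions, and the loss weighting here are hypothetical, not the paper's exact recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiLSTMStudent(nn.Module):
    """Compact query-title relevance classifier (hypothetical sketch).

    Encodes the concatenated query/title token ids with one
    bidirectional LSTM and classifies from the final hidden states.
    """
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        x = self.embed(token_ids)            # (batch, seq, embed_dim)
        _, (h, _) = self.bilstm(x)           # h: (2, batch, hidden_dim)
        h = torch.cat([h[0], h[1]], dim=-1)  # concat fwd/bwd final states
        return self.classifier(h)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft-target KL term against the teacher (e.g. a BERT QTR model)
    # plus a hard-label cross-entropy term.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```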
no code implementations • WMT (EMNLP) 2020 • Jingjing Huo, Christian Herold, Yingbo Gao, Leonard Dahlmann, Shahram Khadivi, Hermann Ney
Context-aware neural machine translation (NMT) is a promising direction for improving translation quality by making use of additional context, e.g., in document-level translation or via meta-information.
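As an illustration of document-level context, one simple and widely used technique is to concatenate preceding source sentences to the current input behind a separator token, so the encoder can attend beyond the sentence boundary. The sketch below shows this idea; the `<ctx>` token and function are hypothetical, not necessarily the paper's exact method.

```python
SEP = "<ctx>"  # hypothetical separator token marking the context boundary

def build_context_input(prev_src_sents, cur_src_sent, max_ctx=1):
    """Concatenate up to `max_ctx` preceding source sentences as context.

    The encoder then sees `context <ctx> current`, letting attention
    reach document-level context beyond the current sentence.
    """
    ctx = prev_src_sents[-max_ctx:] if max_ctx > 0 else []
    return " ".join(ctx + [SEP, cur_src_sent]) if ctx else cur_src_sent

doc = ["the bank raised rates .", "it cited inflation ."]
print(build_context_input(doc[:1], doc[1]))
# the bank raised rates . <ctx> it cited inflation .
```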
no code implementations • IWSLT (EMNLP) 2018 • Shen Yan, Leonard Dahlmann, Pavel Petrushkov, Sanjika Hewavitharana, Shahram Khadivi
Pre-training a model with word weights improves fine-tuning by up to 1.24% BLEU absolute and 1.64% TER.
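The abstract does not spell out how the word weights enter training; a common formulation is a per-token weighted cross-entropy over the decoder output, sketched below. Function and argument names are hypothetical.

```python
import torch
import torch.nn.functional as F

def weighted_nmt_loss(logits, targets, word_weights, pad_id=0):
    """Per-token weighted cross-entropy (hypothetical sketch).

    logits:       (batch, seq, vocab) decoder outputs
    targets:      (batch, seq) reference token ids
    word_weights: (batch, seq) per-token weights, e.g. domain relevance
    """
    # cross_entropy expects (batch, vocab, seq) for sequence targets
    ce = F.cross_entropy(logits.transpose(1, 2), targets,
                         ignore_index=pad_id, reduction="none")  # (batch, seq)
    mask = (targets != pad_id).float()
    return (ce * word_weights * mask).sum() / mask.sum().clamp(min=1.0)
```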
no code implementations • MTSummit 2017 • Shahram Khadivi, Patrick Wilken, Leonard Dahlmann, Evgeny Matusov
In this paper, we discuss different methods which use meta information and richer context that may accompany source language input to improve machine translation quality.
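One standard way to expose such meta information to an NMT system is to inject it as pseudo-tokens on the source side, which the model learns embeddings for like ordinary words. The sketch below illustrates the idea; the tag format and names are illustrative, not necessarily the methods the paper compares.

```python
def tag_source(src_sent, meta):
    """Prepend pseudo-tokens encoding meta information (hypothetical).

    Meta information (domain, topic, register, ...) is serialized as
    special source tokens placed before the actual sentence.
    """
    tags = [f"<{k}:{v}>" for k, v in sorted(meta.items())]
    return " ".join(tags + [src_sent])

print(tag_source("how do i reset my password ?",
                 {"domain": "it_support", "register": "informal"}))
# <domain:it_support> <register:informal> how do i reset my password ?
```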
no code implementations • EMNLP 2017 • Leonard Dahlmann, Evgeny Matusov, Pavel Petrushkov, Shahram Khadivi
In this paper, we introduce a hybrid search for attention-based neural machine translation (NMT).
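A hybrid search of this kind scores hypotheses with both neural and phrase-based models during decoding. As an illustration, the sketch below shows a log-linear combination of an NMT log-probability with weighted SMT feature scores, which is one way such a combination can be evaluated per partial hypothesis; feature names and weights are hypothetical.

```python
def combined_score(nmt_logprob, smt_features, weights):
    """Log-linear combination of NMT and SMT scores (hypothetical sketch).

    Each partial hypothesis in beam search is scored by the NMT
    log-probability plus weighted phrase-based feature scores
    (translation model, language model, distortion, ...).
    """
    return nmt_logprob + sum(weights[name] * value
                             for name, value in smt_features.items())

score = combined_score(
    nmt_logprob=-2.3,
    smt_features={"tm": -1.1, "lm": -0.8, "distortion": -0.2},
    weights={"tm": 0.4, "lm": 0.3, "distortion": 0.1},
)
print(f"{score:.2f}")  # -3.00
```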