no code implementations • 2 Mar 2023 • Tamali Banerjee, Rudra Murthy V, Pushpak Bhattacharyya
We aim to investigate whether UNMT approaches with self-supervised pre-training are robust to word-order divergence between language pairs.
no code implementations • MTSummit 2021 • Tamali Banerjee, Rudra Murthy V, Pushpak Bhattacharyya
In this paper, we show that initializing the embedding layer of UNMT models with cross-lingual embeddings yields significant BLEU-score improvements over existing approaches that initialize embeddings randomly.
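The initialization strategy can be sketched as follows. This is a minimal NumPy sketch under assumptions: the vocabulary, the toy cross-lingual vectors, and the random fallback for out-of-vocabulary tokens are illustrative, not the paper's actual setup.

```python
import numpy as np

def init_embedding_layer(vocab, crosslingual_vecs, dim, seed=0):
    """Build an embedding matrix: copy pretrained cross-lingual vectors
    where available, fall back to small random values elsewhere
    (a common fallback; illustrative only)."""
    rng = np.random.default_rng(seed)
    emb = rng.normal(0.0, 0.01, size=(len(vocab), dim))
    for i, tok in enumerate(vocab):
        if tok in crosslingual_vecs:
            emb[i] = crosslingual_vecs[tok]
    return emb

# Toy shared cross-lingual space: translation pairs get similar vectors.
vecs = {"house": np.ones(4), "haus": np.ones(4)}
vocab = ["<unk>", "house", "haus", "tree"]
E = init_embedding_layer(vocab, vecs, dim=4)
```

The matrix `E` would then be loaded as the (shared) embedding layer of the encoder and decoder before UNMT training begins.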
no code implementations • MTSummit 2021 • Tamali Banerjee, Rudra Murthy V, Pushpak Bhattacharyya
We hypothesise that the cause of the 'scrambled translation problem' is the 'shuffling noise' introduced into every input sentence as a denoising strategy.
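The shuffling noise referred to here is typically a local word shuffle in which each token moves at most a few positions. A minimal sketch, assuming the common formulation where each token's new sort key is its index plus uniform noise bounded by a window `k` (the function name and parameters are illustrative):

```python
import random

def shuffle_noise(tokens, k=3, seed=None):
    """Locally shuffle a sentence: each token is displaced by at most k
    positions, mimicking the 'shuffling noise' used as a denoising
    objective in UNMT pre-training (illustrative sketch)."""
    rng = random.Random(seed)
    # Sort positions by (index + uniform noise in [0, k)); the bounded
    # noise keeps the permutation local.
    keys = [i + rng.uniform(0, k) for i in range(len(tokens))]
    order = sorted(range(len(tokens)), key=lambda i: keys[i])
    return [tokens[i] for i in order]

sent = "the cat sat on the mat".split()
noisy = shuffle_noise(sent, k=3, seed=1)
```

The denoising autoencoder is then trained to reconstruct the original sentence from the shuffled one, which is where word-order information enters the model.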
no code implementations • WS 2018 • Tamali Banerjee, Pushpak Bhattacharyya
We explore the use of two independent subsystems, Byte Pair Encoding (BPE) and Morfessor, as sources of basic units for subword-level neural machine translation (NMT).
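BPE segments words into subword units by greedily applying a list of symbol-pair merges learned from corpus statistics. A minimal sketch of the application step, with a hand-picked toy merge list standing in for learned merges:

```python
def bpe_segment(word, merges):
    """Greedily apply learned BPE merges to split a word into subword
    units. `merges` is an ordered list of symbol pairs; real merges come
    from BPE training on a corpus (toy list here)."""
    symbols = list(word) + ["</w>"]  # end-of-word marker, as in BPE
    for a, b in merges:
        i = 0
        while i < len(symbols) - 1:
            if symbols[i] == a and symbols[i + 1] == b:
                symbols[i:i + 2] = [a + b]  # merge the adjacent pair
            else:
                i += 1
    return symbols

# Toy merge list (illustrative, not learned from data).
merges = [("l", "o"), ("lo", "w"), ("e", "r")]
units = bpe_segment("lower", merges)  # → ["low", "er", "</w>"]
```

Morfessor, by contrast, induces morpheme-like units from an unsupervised morphological model rather than from frequency-based merges, which is why the paper treats the two as independent subsystems.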