no code implementations • MTSummit 2021 • Raj Dabre, Aizhan Imankulova, Masahiro Kaneko
To this end, in this paper, we propose wait-k simultaneous document-level NMT, where we keep the context encoder as it is and replace the source sentence encoder and target language decoder with their wait-k equivalents.
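As a hedged illustration of the wait-k policy this entry builds on, here is a minimal Python sketch: read the first k source tokens, then alternate emitting one target token per newly read source token. The `encode_prefix` and `decode_step` callables are hypothetical stand-ins, not the paper's code.

```python
# Minimal sketch of wait-k simultaneous decoding (illustrative only;
# encode_prefix and decode_step are hypothetical stand-ins, not the paper's code).
def wait_k_translate(source_tokens, k, encode_prefix, decode_step, eos="</s>"):
    """After an initial wait of k tokens, emit one target token per source token read."""
    target = []
    for t in range(1, len(source_tokens) + 1):
        if t < k:
            continue  # still waiting: read more source before writing anything
        states = encode_prefix(source_tokens[:t])  # encode only the prefix read so far
        token = decode_step(states, target)
        if token == eos:
            return target
        target.append(token)
    # Source exhausted: finish writing the rest of the translation.
    states = encode_prefix(source_tokens)
    for _ in range(2 * len(source_tokens)):  # crude length cap instead of `while True`
        token = decode_step(states, target)
        if token == eos:
            break
        target.append(token)
    return target
```

Smaller k lowers latency at the cost of translating with less source context; the paper's document-level variant additionally conditions decoding on a context encoder over previous sentences.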
1 code implementation • NAACL 2022 • Masahiro Kaneko, Aizhan Imankulova, Danushka Bollegala, Naoaki Okazaki
Unfortunately, it was reported that MLMs also learn discriminative biases regarding attributes such as gender and race.
1 code implementation • Findings (ACL) 2021 • Zhousi Chen, Longtu Zhang, Aizhan Imankulova, Mamoru Komachi
We propose two fast neural combinatory models for constituency parsing: binary and multi-branching.
2 code implementations • NAACL 2021 • Rob van der Goot, Ibrahim Sharaf, Aizhan Imankulova, Ahmet Üstün, Marija Stepanović, Alan Ramponi, Siti Oryza Khairunnisa, Mamoru Komachi, Barbara Plank
To tackle this challenge, we propose a joint learning approach that combines English SLU training data with non-English auxiliary tasks from raw text, syntax, and translation for transfer.
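A minimal sketch of such joint learning under stated assumptions: a shared encoder with one output head per task, trained on alternating batches so that auxiliary supervision shapes the representations used by the main SLU task. All module names, task names, and sizes below are illustrative, not the released system.

```python
import torch
import torch.nn as nn

# Hedged sketch of joint learning with a shared encoder and per-task heads
# (e.g. SLU slot tagging plus an auxiliary tagging task in another language).
class SharedEncoderModel(nn.Module):
    def __init__(self, vocab_size, hidden, task_output_sizes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)  # shared across tasks
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden, n) for task, n in task_output_sizes.items()}
        )

    def forward(self, token_ids, task):
        states, _ = self.encoder(self.embed(token_ids))
        return self.heads[task](states)  # per-token logits for the requested task

model = SharedEncoderModel(vocab_size=10_000, hidden=128,
                           task_output_sizes={"slu_slots": 20, "aux_pos_tags": 17})
optim = torch.optim.Adam(model.parameters())
loss_fn = nn.CrossEntropyLoss()

# Alternating batches from the main task and an auxiliary task: both update
# the shared encoder, so auxiliary supervision transfers to the SLU task.
batches = [("slu_slots", torch.randint(0, 10_000, (2, 8)), torch.randint(0, 20, (2, 8))),
           ("aux_pos_tags", torch.randint(0, 10_000, (2, 8)), torch.randint(0, 17, (2, 8)))]
for task, tokens, labels in batches:
    logits = model(tokens, task)
    loss = loss_fn(logits.reshape(-1, logits.size(-1)), labels.reshape(-1))
    optim.zero_grad()
    loss.backward()
    optim.step()
```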
no code implementations • 15 Apr 2021 • Raj Dabre, Aizhan Imankulova, Masahiro Kaneko, Abhisek Chakrabarty
Parallel corpora are indispensable for training neural machine translation (NMT) models, yet for most language pairs they do not exist or are scarce.
no code implementations • COLING 2020 • Ikumi Yamashita, Satoru Katsumata, Masahiro Kaneko, Aizhan Imankulova, Mamoru Komachi
Cross-lingual transfer learning from high-resource languages (the source models) is effective for training models of low-resource languages (the target models) for various tasks.
1 code implementation • AACL 2020 • Siti Oryza Khairunnisa, Aizhan Imankulova, Mamoru Komachi
In recent years, named entity recognition (NER) tasks in the Indonesian language have undergone extensive development.
no code implementations • WS 2020 • Masahiro Kaneko, Aizhan Imankulova, Tosho Hirasawa, Mamoru Komachi
We introduce our TMU system submitted to the English-to-Japanese (En→Ja) track of the Simultaneous Translation And Paraphrase for Language Education (STAPLE) shared task at the 4th Workshop on Neural Generation and Translation (WNGT 2020).
1 code implementation • WMT (EMNLP) 2020 • Aizhan Imankulova, Masahiro Kaneko, Tosho Hirasawa, Mamoru Komachi
Simultaneous translation involves translating a sentence before the speaker's utterance is completed in order to realize real-time understanding in multiple languages.
no code implementations • WS 2019 • Aizhan Imankulova, Masahiro Kaneko, Mamoru Komachi
We introduce our system submitted to the News Commentary task (Japanese↔Russian) of the 6th Workshop on Asian Translation.
1 code implementation • WS 2019 • Aizhan Imankulova, Raj Dabre, Atsushi Fujita, Kenji Imamura
This paper proposes a novel multilingual multistage fine-tuning approach for low-resource neural machine translation (NMT), taking a challenging Japanese–Russian pair for benchmarking.
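A hedged sketch of the multistage idea: the same model is fine-tuned through progressively narrower corpora, ending with the scarce target-pair data. The corpus names, stage schedule, and `train_on` helper are hypothetical, not the paper's exact recipe.

```python
# Illustrative multistage fine-tuning schedule (corpus names and the
# train_on helper are hypothetical, not the paper's exact recipe).
stages = [
    ("multilingual_mix", 10),   # stage 1: broad multilingual parallel data
    ("related_pairs", 5),       # stage 2: pairs involving Japanese or Russian
    ("ja_ru_parallel", 3),      # stage 3: the scarce target-pair data itself
]

def train_on(model, corpus_name, epochs):
    # Stand-in for a full NMT training loop; continues from the current weights.
    print(f"fine-tuning on {corpus_name} for {epochs} epochs")
    return model

model = None  # placeholder for an initialized NMT model
for corpus_name, epochs in stages:
    model = train_on(model, corpus_name, epochs)
```

Each stage starts from the previous stage's weights, so broad multilingual knowledge is gradually specialized toward the low-resource target pair.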
1 code implementation • WS 2017 • Aizhan Imankulova, Takayuki Sato, Mamoru Komachi
Large-scale parallel corpora are indispensable for training highly accurate machine translation systems.