no code implementations • 10 Jan 2020 • Poorya Zaremoodi, Gholamreza Haffari
We learn the training-schedule policy effectively and efficiently within an imitation learning framework, using an oracle policy algorithm that dynamically sets the importance weights of auxiliary tasks based on their contributions to the generalisability of the main NMT task.
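For intuition, one simple oracle of this kind scores each auxiliary task by how well its training gradient aligns with the main task's validation gradient, then renormalises those scores into importance weights. The sketch below assumes that gradient-similarity proxy; the function and variable names are hypothetical, not the paper's implementation.

```python
import numpy as np

def oracle_task_weights(main_val_grad, aux_grads, temperature=1.0):
    """Set importance weights of auxiliary tasks from a proxy for their
    contribution to the main task's generalisation: here, cosine
    similarity between each auxiliary task's training gradient and the
    main task's validation gradient (an assumption of this sketch)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    scores = np.array([cos(g, main_val_grad) for g in aux_grads])
    # Softmax turns contribution scores into a training-schedule
    # distribution over the auxiliary tasks.
    z = np.exp((scores - scores.max()) / temperature)
    return z / z.sum()

# Hypothetical usage: three auxiliary tasks in a 4-dim parameter space.
rng = np.random.default_rng(0)
main_val_grad = rng.normal(size=4)
aux_grads = [rng.normal(size=4) for _ in range(3)]
print(oracle_task_weights(main_val_grad, aux_grads))
```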
no code implementations • WS 2019 • Poorya Zaremoodi, Gholamreza Haffari
The role of the training schedule becomes even more crucial in biased-MTL, where the goal is to improve one task (or a subset of tasks) the most, e.g. translation quality.
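As a minimal illustration, a biased multi-task objective keeps the main task at a fixed weight while the training schedule adjusts the auxiliary weights from step to step. The sketch below uses a plain weighted sum and hypothetical names.

```python
def biased_mtl_loss(main_loss, aux_losses, aux_weights, main_weight=1.0):
    """Biased-MTL objective: the main task (e.g. translation) always
    contributes with a fixed weight, while auxiliary tasks enter with
    schedule-controlled weights that may change every training step.
    The weighting convention is an assumption of this sketch."""
    assert len(aux_losses) == len(aux_weights)
    return main_weight * main_loss + sum(
        w * l for w, l in zip(aux_weights, aux_losses)
    )

# Hypothetical usage: main translation loss plus two auxiliary losses,
# i.e. 1.0 * 2.3 + 0.3 * 1.1 + 0.1 * 0.7.
print(biased_mtl_loss(2.3, [1.1, 0.7], [0.3, 0.1]))
```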
no code implementations • COLING 2018 • Poorya Zaremoodi, Gholamreza Haffari
Incorporating syntactic information into Neural Machine Translation (NMT) can lead to better reorderings, which is particularly useful when the language pair is syntactically highly divergent or when the training bitext is small.
no code implementations • ACL 2018 • Poorya Zaremoodi, Wray Buntine, Gholamreza Haffari
The routing network enables adaptive collaboration by dynamically sharing blocks, conditioned on the task at hand, the input, and the model state.
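A rough sketch of such a routing layer, assuming soft (softmax) routing over a small set of shared feed-forward blocks and concatenation-based conditioning on a task embedding, the input, and a model-state summary; all names and sizes are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class RoutingLayer(nn.Module):
    """A small router produces a soft distribution over shared blocks,
    conditioned on a task embedding, the current input, and a summary
    of the model state; the output is the routing-weighted mixture of
    the block outputs."""

    def __init__(self, dim, n_blocks, n_tasks):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.Tanh()) for _ in range(n_blocks)]
        )
        self.task_emb = nn.Embedding(n_tasks, dim)
        # The router sees [input ; task embedding ; model-state summary].
        self.router = nn.Linear(3 * dim, n_blocks)

    def forward(self, x, task_id, state):
        t = self.task_emb(task_id)                          # (batch, dim)
        logits = self.router(torch.cat([x, t, state], dim=-1))
        probs = torch.softmax(logits, dim=-1)               # (batch, n_blocks)
        outs = torch.stack([b(x) for b in self.blocks], dim=-1)  # (batch, dim, n_blocks)
        return (outs * probs.unsqueeze(1)).sum(dim=-1)      # (batch, dim)

# Hypothetical usage.
layer = RoutingLayer(dim=8, n_blocks=4, n_tasks=3)
x = torch.randn(2, 8)
task_id = torch.tensor([0, 2])
state = torch.randn(2, 8)
print(layer(x, task_id, state).shape)  # torch.Size([2, 8])
```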
no code implementations • NAACL 2018 • Poorya Zaremoodi, Gholamreza Haffari
Neural machine translation requires large amounts of parallel training text to learn a reasonable-quality translation model.
no code implementations • 19 Nov 2017 • Poorya Zaremoodi, Gholamreza Haffari
In this paper, we propose a forest-to-sequence attentional Neural Machine Translation model that makes use of the exponentially many parse trees of the source sentence to compensate for parser errors.
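To illustrate the forest-to-sequence idea, the sketch below lets the decoder attend over the states of all packed-forest nodes alongside the word states, so attention is spread across many alternative parses rather than a single 1-best tree. The vectors and the concatenation scheme are assumptions of this sketch, not the paper's model.

```python
import numpy as np

def attend_over_forest(query, word_states, forest_node_states):
    """Dot-product attention over the union of word states and packed
    parse-forest node states. Exposing every forest node lets the
    decoder distribute attention across many alternative parses,
    softening the impact of 1-best parser errors."""
    states = np.vstack([word_states, forest_node_states])  # (n, d)
    scores = states @ query                                # (n,)
    scores -= scores.max()                                 # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return probs @ states                                  # context vector (d,)

# Hypothetical usage: 5 words, 7 forest nodes, 8-dim states.
rng = np.random.default_rng(1)
ctx = attend_over_forest(rng.normal(size=8),
                         rng.normal(size=(5, 8)),
                         rng.normal(size=(7, 8)))
print(ctx.shape)  # (8,)
```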