
A baseline revisited: Pushing the limits of multi-segment models for context-aware translation

This paper addresses context-aware translation using multi-segment models. Specifically, we show that increasing model capacity further pushes the limits of this approach, and that deeper models are better suited to capturing context dependencies. Furthermore, the improvements observed with larger models can be transferred to smaller models using knowledge distillation. Our experiments show that this approach achieves competitive performance across several languages and benchmarks, without additional language-specific tuning or task-specific architectures.
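The abstract names two concrete mechanisms: building multi-segment inputs by concatenating context with the current segment, and distilling a large teacher model into a smaller student. Below is a minimal PyTorch sketch of both ideas, not the authors' implementation; the break-token string (`<brk>`), the interpolation weight `alpha`, and the temperature `T` are illustrative assumptions rather than the paper's reported settings.

```python
# Sketch of (1) multi-segment input construction and (2) word-level
# knowledge distillation, under assumed names/hyperparameters.
import torch
import torch.nn.functional as F

BREAK = "<brk>"  # assumed separator token between segments


def build_multi_segment_input(context_sents, current_sent):
    """Concatenate previous source segments with the current one,
    separated by a break token, as in concatenation-based
    context-aware NMT."""
    return f" {BREAK} ".join(list(context_sents) + [current_sent])


def distillation_loss(student_logits, teacher_logits, target_ids,
                      pad_id, alpha=0.5, T=2.0):
    """Interpolate cross-entropy against the reference tokens with a
    KL term pulling the student toward the teacher's
    temperature-softened output distribution."""
    vocab = student_logits.size(-1)
    student_flat = student_logits.view(-1, vocab)
    teacher_flat = teacher_logits.view(-1, vocab)
    # Standard per-token cross-entropy on the references.
    ce = F.cross_entropy(student_flat, target_ids.view(-1),
                         ignore_index=pad_id)
    # KL divergence to the teacher; scaled by T^2 to keep gradient
    # magnitudes comparable across temperatures.
    kd = F.kl_div(F.log_softmax(student_flat / T, dim=-1),
                  F.softmax(teacher_flat / T, dim=-1),
                  reduction="batchmean") * (T * T)
    return alpha * ce + (1.0 - alpha) * kd
```

For example, `build_multi_segment_input(["She saw a bat."], "It flew away.")` yields `"She saw a bat. <brk> It flew away."`, so the model can resolve the pronoun "It" from the preceding segment.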
