Domain Adaptation of Neural Machine Translation by Lexicon Induction

Junjie Hu, Mengzhou Xia, Graham Neubig, Jaime Carbonell


Abstract
It has been previously noted that neural machine translation (NMT) is very sensitive to domain shift. In this paper, we argue that this is a dual effect of the highly lexicalized nature of NMT, resulting in failure for sentences with large numbers of unknown words, and lack of supervision for domain-specific words. To remedy this problem, we propose an unsupervised adaptation method which fine-tunes a pre-trained out-of-domain NMT model using a pseudo-in-domain corpus. Specifically, we perform lexicon induction to extract an in-domain lexicon, and construct a pseudo-parallel in-domain corpus by performing word-for-word back-translation of monolingual in-domain target sentences. In five domains over twenty pairwise adaptation settings and two model architectures, our method achieves consistent improvements without using any in-domain parallel sentences, improving up to 14 BLEU over unadapted models, and up to 2 BLEU over strong back-translation baselines.
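As a rough illustration of the pseudo-parallel corpus construction the abstract describes, the Python sketch below performs word-for-word back-translation of target-side monolingual text through an induced lexicon. The lexicon file format, the function names, and the copy-through fallback for out-of-lexicon words are assumptions made for illustration; the paper's actual lexicon induction and fine-tuning pipeline is not reproduced here.

```python
def load_lexicon(path):
    """Load a target-to-source lexicon with one 'tgt src' pair per line.
    If a target word has several candidate translations, keep only the
    first (assumed top-ranked) entry."""
    lexicon = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2:
                lexicon.setdefault(parts[0], parts[1])
    return lexicon


def word_for_word_backtranslate(tgt_sentence, lexicon):
    """Map each target token to a source token via the lexicon; tokens
    absent from the lexicon are copied through unchanged (an assumption
    of this sketch, not necessarily the authors' choice)."""
    return " ".join(lexicon.get(tok, tok) for tok in tgt_sentence.split())


def build_pseudo_corpus(tgt_sentences, lexicon):
    """Pair each monolingual in-domain target sentence with its
    word-for-word pseudo-source, yielding (pseudo_source, target) pairs."""
    return [(word_for_word_backtranslate(s, lexicon), s) for s in tgt_sentences]
```

The resulting (pseudo-source, target) pairs would then serve as the pseudo-in-domain corpus on which the pre-trained out-of-domain NMT model is fine-tuned.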
Anthology ID:
P19-1286
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
2989–3001
URL:
https://aclanthology.org/P19-1286
DOI:
10.18653/v1/P19-1286
Cite (ACL):
Junjie Hu, Mengzhou Xia, Graham Neubig, and Jaime Carbonell. 2019. Domain Adaptation of Neural Machine Translation by Lexicon Induction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2989–3001, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Domain Adaptation of Neural Machine Translation by Lexicon Induction (Hu et al., ACL 2019)
PDF:
https://aclanthology.org/P19-1286.pdf
Supplementary:
P19-1286.Supplementary.pdf