Latent Variable Model for Multi-modal Translation

Iacer Calixto, Miguel Rios, Wilker Aziz


Abstract
In this work, we propose to model the interaction between visual and textual features for multi-modal neural machine translation (MMT) through a latent variable model. This latent variable can be seen as a multi-modal stochastic embedding of an image and its description in a foreign language. It is used in a target-language decoder and also to predict image features. Importantly, our model formulation utilises visual and textual inputs during training but does not require that images be available at test time. We show that our latent variable MMT formulation improves considerably over strong baselines, including a multi-task learning approach (Elliott and Kádár, 2017) and a conditional variational auto-encoder approach (Toyama et al., 2016). Finally, we show improvements due to (i) predicting image features in addition to conditioning on them, (ii) imposing a constraint on the KL term to promote models with non-negligible mutual information between the inputs and the latent variable, and (iii) training on additional target-language image descriptions (i.e. synthetic data).
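The objective described in the abstract (condition on the latent variable to translate, predict image features from it, and constrain the KL term from below) can be illustrated schematically. The sketch below is not the authors' implementation (see the linked iacercalixto/variational_mmt repository for that); it assumes a diagonal Gaussian prior and posterior and uses a simple clamp for the KL constraint, with all names chosen for illustration only.

import torch

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    # KL(q || p) between diagonal Gaussians, summed over latent dimensions.
    var_q, var_p = logvar_q.exp(), logvar_p.exp()
    kl = 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)
    return kl.sum(dim=-1)

def negative_elbo(logp_y_given_xz, logp_v_given_z,
                  mu_q, logvar_q, mu_p, logvar_p, kl_floor=1.0):
    # Schematic objective: target-sentence likelihood plus image-feature
    # prediction, minus a KL term clamped from below so the latent variable
    # retains non-negligible mutual information with the inputs.
    kl = gaussian_kl(mu_q, logvar_q, mu_p, logvar_p)
    constrained_kl = torch.clamp(kl, min=kl_floor)  # KL floor ("free bits"-style constraint)
    elbo = logp_y_given_xz + logp_v_given_z - constrained_kl
    return -elbo.mean()  # minimise the negative ELBO

At test time a latent variable drawn from the text-only prior would be used, which is why no image is required once the model is trained.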
Anthology ID: P19-1642
Volume: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month: July
Year: 2019
Address: Florence, Italy
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 6392–6405
URL: https://aclanthology.org/P19-1642
DOI: 10.18653/v1/P19-1642
Cite (ACL): Iacer Calixto, Miguel Rios, and Wilker Aziz. 2019. Latent Variable Model for Multi-modal Translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6392–6405, Florence, Italy. Association for Computational Linguistics.
Cite (Informal): Latent Variable Model for Multi-modal Translation (Calixto et al., ACL 2019)
PDF: https://aclanthology.org/P19-1642.pdf
Code: iacercalixto/variational_mmt
Data: Flickr30k