International Conference on Spoken Language Translation (2018)


Proceedings of the 15th International Conference on Spoken Language Translation

pdf bib
Proceedings of the 15th International Conference on Spoken Language Translation
Marco Turchi | Jan Niehues | Marcello Federico

pdf bib
Unsupervised Parallel Sentence Extraction from Comparable Corpora
Viktor Hangya | Fabienne Braune | Yuliya Kalasouskaya | Alexander Fraser

Mining parallel sentences from comparable corpora is of great interest for many downstream tasks. In the BUCC 2017 shared task, systems performed well by training on gold-standard parallel sentences. However, we often want to mine parallel sentences without bilingual supervision. We present a simple approach relying on bilingual word embeddings trained in an unsupervised fashion. We incorporate orthographic similarity in order to handle words with similar surface forms. In addition, we propose a dynamic threshold method to decide whether a candidate sentence pair is parallel, which eliminates the need to fine-tune a static value for different datasets. Since we do not employ any language-specific engineering, our approach is highly generic. We show that our approach is effective on three language pairs without the use of any bilingual signal, which is important because parallel sentence mining is most useful in low-resource scenarios.
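
The scoring idea lends itself to a compact illustration. Below is a minimal Python sketch, assuming an emb dictionary that maps words of both languages into one shared vector space; the interpolation weight and the mean-plus-deviation form of the dynamic threshold are illustrative stand-ins, not the paper's actual formulation.

import numpy as np

def sentence_vector(tokens, emb, dim):
    # Average bilingual word embeddings over a sentence; unknown words are skipped.
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(u, v):
    d = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v) / d if d else 0.0

def ortho_sim(a, b, n=3):
    # Orthographic similarity: Jaccard overlap of character n-grams,
    # which helps match words with similar surface forms.
    grams = lambda s: {s[i:i + n] for i in range(max(len(s) - n + 1, 1))}
    ga, gb = grams(a), grams(b)
    return len(ga & gb) / len(ga | gb)

def score(src, tgt, emb, dim, alpha=0.8):
    # Interpolate embedding-space and surface similarity (alpha is a guess).
    e = cosine(sentence_vector(src.split(), emb, dim),
               sentence_vector(tgt.split(), emb, dim))
    return alpha * e + (1 - alpha) * ortho_sim(src, tgt)

def dynamic_threshold(scores, k=1.0):
    # Dataset-dependent cutoff (mean + k standard deviations) instead of a
    # hand-tuned static value; pairs scoring above it are kept as parallel.
    return float(np.mean(scores) + k * np.std(scores))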

pdf bib
Analyzing Knowledge Distillation in Neural Machine Translation
Dakun Zhang | Josep Crego | Jean Senellart

Knowledge distillation has recently been successfully applied to neural machine translation. It allows for building shrunk networks while the resulting systems retain most of the quality of the original model. Although many authors report on the benefits of knowledge distillation, few have discussed the actual reasons why it works, especially in the context of neural MT. In this paper, we conduct several experiments aimed at understanding why and how distillation impacts accuracy on an English-German translation task. We show that translation complexity is actually reduced when building a distilled/synthesised bi-text compared to the reference bi-text. We further remove noisy data from the synthesised translations and merge the filtered synthesised data with the original references, thus achieving additional gains in accuracy.
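
For orientation, here is a hedged Python sketch of the sequence-level distillation setting the abstract analyzes. The teacher object and its translate method are placeholders for whatever NMT toolkit is in use, and the length-ratio heuristic merely stands in for the paper's noise-removal step.

def distill_corpus(teacher, source_sentences):
    # Re-label the source side with the teacher's own translations.
    return [(src, teacher.translate(src)) for src in source_sentences]

def length_ratio_ok(src, hyp, lo=0.5, hi=2.0):
    # Crude noise filter: drop pairs with implausible length ratios.
    ratio = len(hyp.split()) / max(len(src.split()), 1)
    return lo <= ratio <= hi

def build_student_data(teacher, bitext):
    # Merge filtered synthetic pairs with the original references,
    # mirroring the combination that gave additional gains above.
    synthetic = distill_corpus(teacher, [src for src, _ in bitext])
    return [(s, h) for s, h in synthetic if length_ratio_ok(s, h)] + list(bitext)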

pdf bib
Multi-Source Neural Machine Translation with Data Augmentation
Yuta Nishimura | Katsuhito Sudoh | Graham Neubig | Satoshi Nakamura

Multi-source translation systems translate from multiple languages to a single target language. By using information from these multiple sources, these systems achieve large gains in accuracy. To train these systems, it is necessary to have corpora with parallel text in multiple sources and the target language. However, these corpora are rarely complete in practice due to the difficulty of providing human translations in all of the relevant languages. In this paper, we propose a data augmentation approach to fill such incomplete parts using multi-source neural machine translation (NMT). In our experiments, results varied over different language combinations but significant gains were observed when using a source language similar to the target language.
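
A toy Python sketch of the gap-filling idea follows; the corpus layout (one dict per sentence tuple, None for missing languages) and the translators mapping are assumptions for illustration, not the authors' implementation.

def fill_gaps(corpus, translators):
    # corpus: list of dicts, e.g. {"fr": "...", "de": None, "en": "..."}.
    # translators: maps (src_lang, tgt_lang) to a translate function.
    completed = []
    for row in corpus:
        filled = dict(row)
        present = [lang for lang, sent in row.items() if sent is not None]
        for lang, sent in row.items():
            if sent is None and present:
                # Pseudo-translate from the first available language to
                # complete the multi-parallel tuple.
                filled[lang] = translators[(present[0], lang)](row[present[0]])
        completed.append(filled)
    return completed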

pdf bib
The USTC-NEL Speech Translation System at IWSLT 2018
Dan Liu | Junhua Liu | Wu Guo | Shifu Xiong | Zhiqiang Ma | Rui Song | Chongliang Wu | Quan Liu

This paper describes the USTC-NEL (National Engineering Laboratory for Speech and Language Information Processing, University of Science and Technology of China) system submitted to the speech translation task of the IWSLT 2018 Evaluation. The system is a conventional pipeline with three modules: speech recognition, post-processing, and machine translation. We train a group of hybrid-HMM models for speech recognition, and for machine translation we train Transformer-based neural machine translation models with speech-recognition-style text as input. Experiments conducted on the IWSLT 2018 task indicate that, compared to the baseline system from KIT, our system achieved a 14.9 BLEU improvement.
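
A minimal skeleton of the three-module pipeline, with asr, postprocess, and mt as placeholder callables rather than the authors' actual models:

def speech_translation_pipeline(audio, asr, postprocess, mt):
    transcript = asr(audio)            # module 1: hybrid-HMM speech recognition
    cleaned = postprocess(transcript)  # module 2: e.g. punctuation/casing repair
    return mt(cleaned)                 # module 3: Transformer NMT trained on ASR-style input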

pdf bib
The ADAPT System Description for the IWSLT 2018 Basque to English Translation Task
Alberto Poncelas | Andy Way | Kepa Sarasola

In this paper we present the ADAPT system built for the Basque to English Low Resource MT Evaluation Campaign. Basque is a low-resource, morphologically rich language. This poses a challenge for neural machine translation models, which usually achieve better performance when trained with large amounts of data. Accordingly, we used synthetic data to improve the translation quality produced by a model built using only authentic data. Our proposal uses back-translated data to: (a) create new sentences, so the system can be trained with more data; and (b) translate sentences that are close to the test set, so the model can be fine-tuned to the document to be translated.
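
A hedged Python sketch of the two uses of back-translation described above; the reverse_model (English-to-Basque here), the similarity measure, and the cutoff are placeholders for the paper's actual components.

def back_translate(reverse_model, target_monolingual):
    # (a) Create synthetic source sentences so the system sees more data.
    return [(reverse_model.translate(t), t) for t in target_monolingual]

def select_near_test(pairs, test_sentences, similarity, top_k=10000):
    # (b) Keep the synthetic pairs closest to the test set, to be used
    # for fine-tuning toward the document to be translated.
    def closeness(pair):
        src, _ = pair
        return max(similarity(src, t) for t in test_sentences)
    return sorted(pairs, key=closeness, reverse=True)[:top_k]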

pdf bib
The MeMAD Submission to the IWSLT 2018 Speech Translation Task
Umut Sulubacak | Jörg Tiedemann | Aku Rouhe | Stig-Arne Grönroos | Mikko Kurimo

This paper describes the MeMAD project entry to the IWSLT Speech Translation Shared Task, addressing the translation of English audio into German text. Between the pipeline and end-to-end model tracks, we participated only in the former, with three contrastive systems. We also tried the latter, but were not able to finish our end-to-end model in time. All of our systems start by transcribing the audio into text through an automatic speech recognition (ASR) model trained on the TED-LIUM English Speech Recognition Corpus (TED-LIUM). Afterwards, we feed the transcripts into English-German text-based neural machine translation (NMT) models. Our systems employ three different translation models trained on separate training sets compiled from the English-German part of the TED Speech Translation Corpus (TED-TRANS) and the OPENSUBTITLES2018 section of the OPUS collection. In this paper, we also describe the experiments leading up to our final systems. Our experiments indicate that using OPENSUBTITLES2018 in training significantly improves translation performance. We also experimented with various pre- and post-processing routines for the NMT module, but did not have much success with these. Our best-scoring system attains a BLEU score of 16.45 on the test set for this year’s task.

pdf bib
Samsung and University of Edinburgh’s System for the IWSLT 2018 Low Resource MT Task
Philip Williams | Marcin Chochowski | Pawel Przybysz | Rico Sennrich | Barry Haddow | Alexandra Birch

This paper describes the joint submission to the IWSLT 2018 Low Resource MT task by Samsung R&D Institute, Poland, and the University of Edinburgh. We focused on supplementing the very limited in-domain Basque-English training data with out-of-domain data, with synthetic data, and with data for other language pairs. We also experimented with a variety of model architectures and features, which included the development of extensions to the Nematus toolkit. Our submission was ultimately produced by a system combination in which we reranked translations from our strongest individual system using multiple weaker systems.
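
An illustrative Python sketch of n-best reranking in the spirit of this submission: candidates from the strongest system are rescored with multiple weaker systems. The scorer interface and the weights are assumptions, not the paper's tuned values.

def rerank(nbest, scorers, weights):
    # nbest: candidate translations from the strongest individual system;
    # scorers: weaker models exposing a .score(hypothesis) method.
    def combined(hyp):
        return sum(w * m.score(hyp) for m, w in zip(scorers, weights))
    return max(nbest, key=combined)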

pdf bib
The AFRL IWSLT 2018 Systems: What Worked, What Didn’t
Brian Ore | Eric Hansen | Katherine Young | Grant Erdmann | Jeremy Gwinnup

This report summarizes the Air Force Research Laboratory (AFRL) machine translation (MT) and automatic speech recognition (ASR) systems submitted to the spoken language translation (SLT) and low-resource MT tasks as part of the IWSLT18 evaluation campaign.

pdf bib
CUNI Basque-to-English Submission in IWSLT18
Tom Kocmi | Dušan Variš | Ondřej Bojar

We present our submission to the IWSLT18 Low Resource task, focused on translation from Basque to English. Our submission is based on the current state-of-the-art self-attentive neural network architecture, the Transformer. We further improve this strong baseline by exploiting available monolingual data using the back-translation technique. We also present further improvements gained by transfer learning, a technique that trains a model on a high-resource language pair (Czech-English) and then fine-tunes the model on the target low-resource language pair (Basque-English).
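
Schematically, the transfer-learning recipe looks like the Python sketch below, where Model-style objects and their train method are placeholders; shared vocabulary handling and other toolkit details are omitted.

def transfer_train(model, parent_bitext, child_bitext,
                   parent_epochs=10, child_epochs=20):
    # Stage 1: train on the high-resource parent pair (e.g. Czech-English).
    model.train(parent_bitext, epochs=parent_epochs)
    # Stage 2: continue training (fine-tune) on the low-resource child pair
    # (Basque-English), reusing the parent's learned parameters.
    model.train(child_bitext, epochs=child_epochs)
    return model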

pdf bib
Data Selection with Feature Decay Algorithms Using an Approximated Target Side
Alberto Poncelas | Gideon Maillette de Buy Wenniger | Andy Way

Data selection techniques applied to neural machine translation (NMT) aim to increase the performance of a model by retrieving a subset of sentences for use as training data. One class of data selection techniques comprises transductive learning methods, which select the data based on the test set, i.e. the document to be translated. A limitation of these methods to date is that using the source-side test set does not by itself guarantee that sentences are selected with correct translations, or translations that are suitable given the test-set domain. Some corpora, such as subtitle corpora, may contain parallel sentences with inaccurate translations caused by localization or length restrictions. To address this problem, in this paper we propose to use an approximated target side in addition to the source side when selecting suitable sentence pairs for training a model. This approximated target side is built by pre-translating the source side. In this work, we explore the performance of this general idea for one specific data selection approach called Feature Decay Algorithms (FDA). We train German-English NMT models on data selected using the test set (source), the approximated target side, and a mixture of both. Our findings reveal that models built using a combination of outputs of FDA (using the test set and an approximated target side) perform better than those using the test set alone.
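
A simplified FDA loop in Python, where seed_text would be the concatenation of the test source and its pre-translation (the approximated target side); the n-gram order, decay factor, and lack of length normalization are illustrative simplifications of the actual algorithm.

from collections import Counter

def ngrams(sentence, n_max=3):
    toks = sentence.split()
    return [" ".join(toks[i:i + n]) for n in range(1, n_max + 1)
            for i in range(len(toks) - n + 1)]

def fda_select(pool, seed_text, budget, decay=0.5):
    # pool: candidate (src, tgt) training pairs; seed n-grams start at
    # weight 1 and decay each time a selected pair covers them.
    weight = Counter({g: 1.0 for g in ngrams(seed_text)})
    selected = []
    for _ in range(min(budget, len(pool))):
        best = max(pool, key=lambda p: sum(weight[g]
                                           for g in ngrams(p[0] + " " + p[1])))
        pool.remove(best)
        selected.append(best)
        for g in ngrams(best[0] + " " + best[1]):
            weight[g] *= decay  # down-weight features already covered
    return selected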

pdf bib
Multi-paraphrase Augmentation to Leverage Neural Caption Translation
Johanes Effendi | Sakriani Sakti | Katsuhito Sudoh | Satoshi Nakamura

Paraphrasing has been proven to improve translation quality in machine translation (MT) and has been widely studied alongside the development of statistical MT (SMT). In this paper, we investigate and utilize neural paraphrasing to improve translation quality in neural MT (NMT), where it has not yet been much explored. Our first contribution is to propose a new way of creating a multi-paraphrase corpus through visual description. We then propose constructing neural paraphrase models, which initiate expert models, and utilizing them to leverage NMT. Here, we diffuse the image information by using image-based paraphrasing without using the image itself. Our proposed image-based multi-paraphrase augmentation strategies showed improvements over a vanilla NMT baseline.
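
A minimal Python sketch of the augmentation step: each training pair is expanded with paraphrases of its source side. The paraphraser object and its generate method stand in for the paper's image-grounded paraphrase models.

def augment_with_paraphrases(bitext, paraphraser, k=3):
    augmented = []
    for src, tgt in bitext:
        augmented.append((src, tgt))
        for para in paraphraser.generate(src, num=k):
            # Same reference translation, paraphrased source sentence.
            augmented.append((para, tgt))
    return augmented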