Marc Dymetman


2019

Global Autoregressive Models for Data-Efficient Sequence Learning
Tetiana Parshakova | Jean-Marc Andreoli | Marc Dymetman
Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)

Standard autoregressive seq2seq models are easily trained by max-likelihood, but tend to show poor results under small-data conditions. We introduce a class of seq2seq models, GAMs (Global Autoregressive Models), which combine an autoregressive component with a log-linear component, allowing the use of global a priori features to compensate for lack of data. We train these models in two steps. In the first step, we obtain an unnormalized GAM that maximizes the likelihood of the data, but is improper for fast inference or evaluation. In the second step, we use this GAM to train (by distillation) a second autoregressive model that approximates the normalized distribution associated with the GAM, and can be used for fast inference and evaluation. Our experiments focus on language modelling under synthetic conditions and show a strong perplexity reduction when using the second autoregressive model rather than the standard one.
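
The two-component scoring described in this abstract can be pictured with a minimal sketch (not the authors' code; the function, component names and feature choices are hypothetical): an unnormalized GAM scores a sequence as the autoregressive log-probability plus a log-linear term over global features, and it is this potential, whose normalizing constant is intractable, that the second step distills into a plain autoregressive model for fast inference and evaluation.

```python
# Illustrative sketch (not the authors' code): log of the unnormalized GAM
# potential, combining an autoregressive component r(x) with a log-linear
# component over hand-chosen global a priori features phi(x).
from typing import Callable, Dict, List

def gam_log_score(
    sequence: List[str],
    ar_log_prob: Callable[[List[str]], float],          # log r(x), the autoregressive component
    features: Callable[[List[str]], Dict[str, float]],  # phi(x), global a priori features
    weights: Dict[str, float],                           # lambda, log-linear weights fit in step 1
) -> float:
    """log P(x) = log r(x) + <lambda, phi(x)>, up to an intractable normalizing constant."""
    phi = features(sequence)
    log_linear = sum(weights.get(name, 0.0) * value for name, value in phi.items())
    return ar_log_prob(sequence) + log_linear

# Toy usage with stand-ins for the two components (purely for illustration).
toy_ar_log_prob = lambda x: -2.0 * len(x)            # pretend log-probability from an AR model
toy_features = lambda x: {"length": float(len(x))}   # a single global feature: sequence length
print(gam_log_score(list("abba"), toy_ar_log_prob, toy_features, {"length": 0.5}))
```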

2017

A surprisingly effective out-of-the-box char2char model on the E2E NLG Challenge dataset
Shubham Agarwal | Marc Dymetman
Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue

We train a char2char model on the E2E NLG Challenge data by exploiting, out of the box, the recently released tf-seq2seq framework, using some of the standard options offered by this tool. With minimal effort, and in particular without delexicalization, tokenization or lowercasing, the obtained raw predictions, according to a small-scale human evaluation, are excellent on the linguistic side and quite reasonable on the adequacy side, the primary downside being possible omissions of semantic material. However, in a significant number of cases (more than 70%), a perfect solution can be found in the top-20 predictions, indicating promising directions for solving the remaining issues.
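
The "no preprocessing" setup described above can be illustrated with a small sketch (not the authors' code, and independent of the tf-seq2seq framework; the record used below is only an example of the E2E meaning-representation style): both the meaning representation and the reference text are fed to the seq2seq model as raw character sequences, with no delexicalization, tokenization or lowercasing.

```python
# Illustrative sketch: turning one E2E-style record into character-level
# source/target sequences, with no delexicalization, tokenization or lowercasing.
def to_char_sequences(meaning_representation: str, reference: str):
    # Each string becomes a list of single characters; spaces and punctuation are kept as tokens.
    source = list(meaning_representation)
    target = list(reference)
    return source, target

src, tgt = to_char_sequences(
    "name[The Vaults], eatType[pub], priceRange[more than £30]",  # hypothetical input record
    "The Vaults is a pub with prices above £30.",
)
print(len(src), len(tgt))  # character-level lengths fed to the encoder/decoder
```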