Zekang Li
2019
Answer-Supervised Question Reformulation for Enhancing Conversational Machine Comprehension
Qian Li | Hui Su | Cheng Niu | Daling Wang | Zekang Li | Shi Feng | Yifei Zhang
Proceedings of the 2nd Workshop on Machine Reading for Question Answering
In conversational machine comprehension, integrating conversational history through question reformulation has become a research hotspot for obtaining better answers. However, existing question reformulation models are trained only on human-annotated question labels, without any feedback from the answers. In this paper, we propose a novel Answer-Supervised Question Reformulation (ASQR) model that enhances conversational machine comprehension with reinforcement learning. ASQR uses a pointer-copy-based question reformulation model as an agent, which takes an action to predict the next word and observes a reward for the whole sentence state after generating the end-of-sequence token. Experimental results on the QuAC dataset show that our ASQR model is more effective for conversational machine comprehension. Moreover, since pretraining is essential for reinforcement learning models, we provide a high-quality annotated dataset for question reformulation by sampling a portion of the QuAC dataset.
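As a rough illustration of the training loop the abstract describes (sample words until the end-of-sequence token, then observe one reward for the whole sentence), here is a minimal REINFORCE-style step in PyTorch. This is a sketch under stated assumptions, not the paper's implementation: the tiny GRU policy, vocabulary size, and especially the `sentence_reward` stand-in are hypothetical, since ASQR's actual reward is derived from downstream answer quality.

```python
# Minimal sketch of sentence-level REINFORCE for question reformulation.
# The policy, vocabulary, and reward below are illustrative stand-ins.
import torch
import torch.nn as nn

VOCAB, EOS, MAX_LEN, HIDDEN = 1000, 2, 20, 64

class Policy(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, HIDDEN)
        self.rnn = nn.GRUCell(HIDDEN, HIDDEN)
        self.out = nn.Linear(HIDDEN, VOCAB)

    def forward(self, tok, h):
        h = self.rnn(self.emb(tok), h)
        return self.out(h), h

def sentence_reward(tokens):
    # Hypothetical stand-in: the paper's reward comes from answer quality;
    # here we simply penalize length for illustration.
    return 1.0 / (1 + len(tokens))

policy = Policy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# One REINFORCE step: sample actions until EOS, then score the sentence.
tok = torch.tensor([0])            # begin-of-sequence token
h = torch.zeros(1, HIDDEN)
log_probs, tokens = [], []
for _ in range(MAX_LEN):
    logits, h = policy(tok, h)
    dist = torch.distributions.Categorical(logits=logits)
    tok = dist.sample()            # action: predict the next word
    log_probs.append(dist.log_prob(tok))
    tokens.append(tok.item())
    if tok.item() == EOS:
        break

reward = sentence_reward(tokens)   # observed only after end-of-sequence
loss = -reward * torch.stack(log_probs).sum()
opt.zero_grad(); loss.backward(); opt.step()
```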
Incremental Transformer with Deliberation Decoder for Document Grounded Conversations
Zekang Li | Cheng Niu | Fandong Meng | Yang Feng | Qian Li | Jie Zhou
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Document Grounded Conversations is the task of generating dialogue responses when chatting about the content of a given document. Document knowledge clearly plays a critical role in this task, yet existing dialogue models do not exploit it effectively. In this paper, we propose a novel Transformer-based architecture for multi-turn document grounded conversations. In particular, we devise an Incremental Transformer to encode multi-turn utterances along with knowledge from related documents. Motivated by the human cognitive process, we design a two-pass decoder (Deliberation Decoder) to improve context coherence and knowledge correctness. Our empirical study on a real-world document grounded dataset shows that responses generated by our model significantly outperform competitive baselines in both context coherence and knowledge relevance.
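As a rough illustration of the two-pass decoding idea, here is a minimal PyTorch sketch in which a first decoder pass drafts a response attending to the dialogue context, and a second pass refines it against document knowledge. The layer sizes, the generic `nn.TransformerDecoder` modules, and the way the second pass consumes the draft are all assumptions for illustration, not the paper's exact Incremental Transformer or Deliberation Decoder design.

```python
# Minimal sketch of a two-pass (deliberation-style) decoder; all shapes and
# wiring choices here are illustrative assumptions.
import torch
import torch.nn as nn

d_model, nhead = 32, 4
make_layer = lambda: nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
first_pass = nn.TransformerDecoder(make_layer(), num_layers=2)   # context coherence
second_pass = nn.TransformerDecoder(make_layer(), num_layers=2)  # knowledge correctness

context_mem = torch.randn(1, 10, d_model)    # encoded multi-turn utterances
knowledge_mem = torch.randn(1, 15, d_model)  # encoded document passages
tgt = torch.randn(1, 8, d_model)             # shifted response embeddings

# Pass 1: draft a response attending to the dialogue context.
draft = first_pass(tgt, context_mem)

# Pass 2: refine, attending to document knowledge plus the first-pass draft
# (simple concatenation as a stand-in for the paper's second-pass attention).
refined = second_pass(tgt, torch.cat([knowledge_mem, draft], dim=1))
print(refined.shape)  # torch.Size([1, 8, 32])
```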