Multi-step Reasoning via Recurrent Dual Attention for Visual Dialog
Zhe Gan | Yu Cheng | Ahmed Kholy | Linjie Li | Jingjing Liu | Jianfeng Gao
2019
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
This paper presents a new model for visual dialog, the Recurrent Dual Attention Network (ReDAN), which uses multi-step reasoning to answer a series of questions about an image. In each question-answering turn of a dialog, ReDAN infers the answer progressively through multiple reasoning steps. In each step, the semantic representation of the question is updated based on the image and the previous dialog history, and the recurrently refined representation is used for further reasoning in the subsequent step. On the VisDial v1.0 dataset, the proposed ReDAN model achieves a new state-of-the-art NDCG score of 64.47%. Visualization of the reasoning process further demonstrates that ReDAN can locate context-relevant visual and textual clues via iterative refinement, leading to the correct answer step by step.
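The iterative refinement described above can be sketched in a few lines. The following is a minimal illustration, not the authors' ReDAN implementation: it assumes dot-product attention, a simple additive query update, and arbitrary feature shapes, whereas the paper uses learned attention and fusion modules.

```python
# Hedged sketch of multi-step dual-attention reasoning.
# All shapes, the attention form, and the update rule are assumptions
# for illustration; they are not taken from the ReDAN paper.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, features):
    # Dot-product attention: weight each feature vector by its
    # similarity to the query, then return the weighted sum.
    weights = softmax(features @ query)
    return weights @ features

def multi_step_reasoning(question, image_feats, history_feats, steps=3):
    # Recurrently refine the question representation using visual
    # (image) and textual (dialog-history) attention at each step.
    q = question
    for _ in range(steps):
        visual_clue = attend(q, image_feats)
        textual_clue = attend(q, history_feats)
        # Simple additive update; the paper uses learned fusion instead.
        q = q + 0.5 * (visual_clue + textual_clue)
    return q

rng = np.random.default_rng(0)
d = 8
q0 = rng.normal(size=d)
img = rng.normal(size=(36, d))    # stand-in for image region features
hist = rng.normal(size=(10, d))   # stand-in for encoded dialog turns
refined = multi_step_reasoning(q0, img, hist)
print(refined.shape)
```

After the final step, the refined question representation would be matched against candidate answers; this sketch only shows how the query is recurrently updated from both modalities.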