Hisako Asano
2019
Multi-style Generative Reading Comprehension
Kyosuke Nishida | Itsumi Saito | Kosuke Nishida | Kazutoshi Shinoda | Atsushi Otsuka | Hisako Asano | Junji Tomita
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
This study tackles generative reading comprehension (RC), which consists of answering questions based on textual evidence and natural language generation (NLG). We propose a multi-style abstractive summarization model for question answering, called Masque. The proposed model has two key characteristics. First, unlike most studies on RC that have focused on extracting an answer span from the provided passages, our model instead focuses on generating a summary from the question and multiple passages. This serves to cover various answer styles required for real-world applications. Second, whereas previous studies built a specific model for each answer style because of the difficulty of acquiring one general model, our approach learns multi-style answers within a model to improve the NLG capability for all styles involved. This also enables our model to give an answer in the target style. Experiments show that our model achieves state-of-the-art performance on the Q&A task and the Q&A + NLG task of MS MARCO 2.1 and the summary task of NarrativeQA. We observe that the transfer of the style-independent NLG capability to the target style is the key to its success.
2018
Natural Language Inference with Definition Embedding Considering Context On the Fly
Kosuke Nishida | Kyosuke Nishida | Hisako Asano | Junji Tomita
Proceedings of The Third Workshop on Representation Learning for NLP
Natural language inference (NLI) is one of the most important tasks in NLP. In this study, we propose a novel method that uses word dictionaries, which are pairs of a word and its definition, as external knowledge. Our neural definition embedding mechanism encodes input sentences together with the definitions of each word of the sentences on the fly. It can encode the definition of a word in a way that reflects the context of the input sentence by using an attention mechanism. We evaluated our method using WordNet as the dictionary and confirmed that our method performed better than baseline models when using either the full set or a subset of 100d GloVe vectors as word embeddings.
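The abstract's core idea — weighting the words of a dictionary definition by their relevance to the sentence context via attention — can be sketched in a toy form. This is an illustrative sketch only, not the paper's model: the embeddings below are made-up 3-d vectors (the paper uses GloVe), and the single-vector "context" and unparameterized dot-product attention are simplifying assumptions.

```python
import math

# Hypothetical toy word embeddings (the paper uses GloVe vectors).
EMB = {
    "river":       [0.8, 0.2, 0.1],
    "money":       [0.1, 0.9, 0.2],
    "land":        [0.7, 0.3, 0.0],
    "slope":       [0.6, 0.4, 0.0],
    "institution": [0.1, 0.7, 0.3],
    "deposit":     [0.2, 0.8, 0.1],
}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def definition_embedding(definition_words, context_vector):
    """Attend over the words of a dictionary definition, weighting each
    word by its similarity to the sentence context, and return the
    weighted sum: a context-aware embedding of the definition."""
    vecs = [EMB[w] for w in definition_words]
    weights = softmax([dot(v, context_vector) for v in vecs])
    dim = len(context_vector)
    return [sum(w * v[i] for w, v in zip(weights, vecs)) for i in range(dim)]

# Disambiguating "bank": its (toy) definition words are attended
# differently depending on whether the sentence is about rivers or money.
defn = ["land", "slope", "institution", "deposit"]
v_river = definition_embedding(defn, EMB["river"])
v_money = definition_embedding(defn, EMB["money"])
# The river context up-weights "land"/"slope"; the money context
# up-weights "institution"/"deposit".
```

In this toy setup the same definition yields two different vectors, which is the "considering context on the fly" behavior the abstract describes; the real model learns the attention parameters jointly with the NLI objective.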
Commonsense Knowledge Base Completion and Generation
Itsumi Saito | Kyosuke Nishida | Hisako Asano | Junji Tomita
Proceedings of the 22nd Conference on Computational Natural Language Learning
This study focuses on the acquisition of commonsense knowledge. A previous study proposed a commonsense knowledge base completion (CKB completion) method that predicts a confidence score for triplet-style knowledge in order to improve the coverage of CKBs. To improve the accuracy of CKB completion and expand the size of CKBs, we formulate a new commonsense knowledge base generation task (CKB generation) and propose a joint learning method that incorporates both CKB completion and CKB generation. Experimental results show that the joint learning method improved completion accuracy and that the generation model created reasonable knowledge. Our generation model could also be used to augment data and improve the accuracy of completion.
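"Predicting a confidence score for triplet-style knowledge" means scoring a (head, relation, tail) triple such as (cake, UsedFor, eat). A minimal sketch of such a scorer follows; everything here is a hypothetical illustration, not the paper's method: the tiny embeddings are invented, and the TransE-style translation distance is a stand-in for whatever scoring function a trained completion model actually learns.

```python
import math

# Hypothetical toy embeddings for entities and relations; a real CKB
# completion model learns these from the knowledge base.
ENT = {"cake": [0.9, 0.1], "eat": [0.8, 0.3], "drive": [0.1, 0.9]}
REL = {"UsedFor": [1.0, 0.2]}

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def score(head, rel, tail):
    """Confidence that the triple (head, rel, tail) holds, as the sigmoid
    of a TransE-like translation score: -||h + r - t||. Purely
    illustrative of the scoring idea, not the paper's architecture."""
    h, r, t = ENT[head], REL[rel], ENT[tail]
    dist = math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))
    return sigmoid(-dist)
```

With these toy vectors, `score("cake", "UsedFor", "eat")` comes out higher than `score("cake", "UsedFor", "drive")`; the paper's generation task then goes further, producing new triples rather than only scoring candidate ones.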
Co-authors
- Kyosuke Nishida 3
- Junji Tomita 3
- Kosuke Nishida 2
- Itsumi Saito 2
- Kazutoshi Shinoda 1