Proceedings of the 6th BioASQ Workshop: A challenge on large-scale biomedical semantic indexing and question answering

Ioannis A. Kakadiaris, George Paliouras, Anastasia Krithara (Editors)


Anthology ID:
W18-53
Month:
November
Year:
2018
Address:
Brussels, Belgium
Venues:
BioASQ | EMNLP | WS
Publisher:
Association for Computational Linguistics
URL:
https://aclanthology.org/W18-53
PDF:
https://aclanthology.org/W18-53.pdf

Proceedings of the 6th BioASQ Workshop: A challenge on large-scale biomedical semantic indexing and question answering
Ioannis A. Kakadiaris | George Paliouras | Anastasia Krithara

Results of the sixth edition of the BioASQ Challenge
Anastasios Nentidis | Anastasia Krithara | Konstantinos Bougiatiotis | Georgios Paliouras | Ioannis Kakadiaris

This paper presents the results of the sixth edition of the BioASQ challenge. The BioASQ challenge aims to promote systems and methodologies through the organization of a challenge on two tasks: semantic indexing and question answering. In total, 26 teams with more than 90 systems participated in this year’s challenge. As in previous years, the best systems were able to outperform the strong baselines. This suggests that state-of-the-art systems are continuously improving, pushing the frontier of research.

AttentionMeSH: Simple, Effective and Interpretable Automatic MeSH Indexer
Qiao Jin | Bhuwan Dhingra | William Cohen | Xinghua Lu

There are millions of articles in the PubMed database. To facilitate information retrieval, curators at the National Library of Medicine (NLM) assign a set of Medical Subject Headings (MeSH) to each article. MeSH is a hierarchically organized vocabulary containing about 28K concepts, covering fields from clinical medicine to information sciences. Several automatic MeSH indexing models have been developed to improve on the time-consuming and financially expensive manual annotation, including the NLM’s official tool, Medical Text Indexer, and the winner of the BioASQ Task 5a challenge, DeepMeSH. However, these models are complex and not interpretable. We propose a novel end-to-end model, AttentionMeSH, which uses deep learning and an attention mechanism to index MeSH terms for biomedical text. The attention mechanism enables the model to associate textual evidence with annotations, thus providing interpretability at the word level. The model also uses a novel masking mechanism to enhance accuracy and speed. In the final week of the BioASQ Challenge Task 6a, we ranked 2nd by average MiF using a model that was still under construction. After the contest, we achieved close to state-of-the-art MiF performance of 0.684 using our final model. Human evaluations show that AttentionMeSH also provides a high level of interpretability, retrieving about 90% of all expert-labeled relevant words for a given MeSH-article pair at 20 outputs.
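A minimal sketch of the label-wise attention idea described in the abstract, assuming a bidirectional GRU encoder and one learned query vector per candidate MeSH term; all class names, dimensions, and the label-mask interface are illustrative assumptions, not the authors' implementation:

```python
# Sketch of a label-wise attention indexer: each candidate MeSH term attends
# over the token representations of an article and receives its own score,
# so the attention weights point at the words supporting each annotation.
import torch
import torch.nn as nn


class LabelwiseAttentionIndexer(nn.Module):
    def __init__(self, vocab_size, num_labels, emb_dim=200, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # One query vector per MeSH label; attention over tokens gives
        # word-level interpretability for that label.
        self.label_queries = nn.Parameter(torch.randn(num_labels, 2 * hidden_dim))
        self.output_bias = nn.Parameter(torch.zeros(num_labels))

    def forward(self, token_ids, label_mask=None):
        # token_ids: (batch, seq_len); label_mask: (batch, num_labels), 1 for
        # labels kept by a candidate-retrieval step (the "masking" idea).
        states, _ = self.encoder(self.embed(token_ids))            # (B, T, 2H)
        scores = torch.einsum("bth,lh->blt", states, self.label_queries)
        attn = torch.softmax(scores, dim=-1)                        # (B, L, T)
        context = torch.einsum("blt,bth->blh", attn, states)        # (B, L, 2H)
        logits = (context * self.label_queries).sum(-1) + self.output_bias
        if label_mask is not None:
            logits = logits.masked_fill(label_mask == 0, float("-inf"))
        return torch.sigmoid(logits), attn


if __name__ == "__main__":
    model = LabelwiseAttentionIndexer(vocab_size=5000, num_labels=100)
    probs, attn = model(torch.randint(1, 5000, (2, 50)))
    print(probs.shape, attn.shape)  # (2, 100) and (2, 100, 50)
```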

UNCC QA: Biomedical Question Answering system
Abhishek Bhandwaldar | Wlodek Zadrozny

In this paper, we detail our submission to the BioASQ competition’s Biomedical Semantic Question Answering task. Our system uses extractive summarization techniques to generate answers and scored the highest ROUGE-2 and ROUGE-SU4 for ideal-type answers in all test batch sets. Our contributions are a named-entity-based method for answering factoid and list questions, and an extractive summarization technique, based on lexical chains, for building paragraph-sized summaries. We also discuss the limitations of the described system, such as the lack of evaluation on other criteria (e.g. manual evaluation). Moreover, for factoid- and list-type questions our system obtained low accuracy, which suggests that our entity-ranking algorithm needs improvement.
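A minimal sketch of the extractive, lexical-chain-flavored selection described above, assuming simple token overlap between snippet sentences, the question, and the chain of already-selected sentences; the scoring weights, stopword list, and tokenization are illustrative assumptions, not the authors' system:

```python
# Sketch: build a paragraph-sized "ideal" answer by greedily selecting snippet
# sentences that overlap with the question and with the growing lexical chain.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "in", "is", "are", "to", "and", "for", "what", "how", "does"}

def tokens(text):
    return [w for w in re.findall(r"[a-z0-9]+", text.lower()) if w not in STOPWORDS]

def summarize(question, snippet_sentences, max_sentences=3):
    q_counts = Counter(tokens(question))
    chain = Counter()                      # words accumulated from selected sentences
    selected, candidates = [], list(snippet_sentences)
    while candidates and len(selected) < max_sentences:
        def score(sent):
            s_counts = Counter(tokens(sent))
            q_overlap = sum((s_counts & q_counts).values())
            chain_overlap = sum((s_counts & chain).values())
            return 2.0 * q_overlap + chain_overlap   # weight question overlap higher
        best = max(candidates, key=score)
        if score(best) == 0:
            break
        selected.append(best)
        chain.update(tokens(best))
        candidates.remove(best)
    return " ".join(selected)

if __name__ == "__main__":
    snippets = [
        "Aspirin irreversibly inhibits cyclooxygenase-1 and cyclooxygenase-2.",
        "The weather was sunny during the trial.",
        "Inhibition of cyclooxygenase reduces prostaglandin synthesis.",
    ]
    print(summarize("How does aspirin inhibit cyclooxygenase?", snippets))
```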

An Adaption of BIOASQ Question Answering dataset for Machine Reading systems by Manual Annotations of Answer Spans.
Sanjay Kamath | Brigitte Grau | Yue Ma

The BIOASQ Task B Phase B challenge focuses on extracting answers from snippets for a given question. The dataset provided by the organizers contains answers, but not all their variants. Hence, a manual annotation was performed to extract all forms of correct answers. This article shows the impact of using all occurrences of correct answers for training on the evaluation scores, which improve significantly.
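A minimal sketch of one way such span annotations can be turned into training data, assuming SQuAD-style (answer_start, text) pairs and exact matching of each manually collected answer variant against a snippet; the data format and helper names are illustrative assumptions, not the paper's annotation tool:

```python
# Sketch: convert BioASQ answers plus their annotated variants into extractive
# reading-comprehension examples, keeping every occurrence of every variant.
import re

def spans_for_snippet(snippet, answer_variants):
    """Return all exact-match spans of any answer variant in the snippet."""
    spans = []
    for variant in answer_variants:
        for match in re.finditer(re.escape(variant), snippet, flags=re.IGNORECASE):
            spans.append({"answer_start": match.start(),
                          "text": snippet[match.start():match.end()]})
    return spans

def to_squad_example(question_id, question, snippet, answer_variants):
    return {
        "id": question_id,
        "question": question,
        "context": snippet,
        "answers": spans_for_snippet(snippet, answer_variants),
    }

if __name__ == "__main__":
    ex = to_squad_example(
        "q1",
        "Which gene is mutated in cystic fibrosis?",
        "Cystic fibrosis is caused by mutations in the CFTR gene "
        "(cystic fibrosis transmembrane conductance regulator).",
        ["CFTR", "cystic fibrosis transmembrane conductance regulator"],
    )
    print(ex["answers"])
```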