Proceedings of the First Workshop on Interactive and Executable Semantic Parsing

Ben Bogin, Srinivasan Iyer, Victoria Lin, Dragomir Radev, Alane Suhr, Panupong Pasupat, Caiming Xiong, Pengcheng Yin, Tao Yu, Rui Zhang, Victor Zhong (Editors)


Anthology ID:
2020.intexsempar-1
Month:
November
Year:
2020
Address:
Online
Venues:
EMNLP | intexsempar
Publisher:
Association for Computational Linguistics
URL:
https://aclanthology.org/2020.intexsempar-1

Proceedings of the First Workshop on Interactive and Executable Semantic Parsing
Ben Bogin | Srinivasan Iyer | Victoria Lin | Dragomir Radev | Alane Suhr | Panupong Pasupat | Caiming Xiong | Pengcheng Yin | Tao Yu | Rui Zhang | Victor Zhong

Improving Sequence-to-Sequence Semantic Parser for Task Oriented Dialog
Chaoting Xuan

Task Oriented Parsing (TOP) attempts to map utterances to compositional requests, including multiple intents and their slots. Previous work focuses on a tree-based hierarchical meaning representation and applies constituency parsing techniques to address TOP. In this paper, we propose a new format of meaning representation that is more compact and amenable to sequence-to-sequence (seq-to-seq) models. A simple copy-augmented seq-to-seq parser is built and evaluated over a public TOP dataset, yielding a 3.44% improvement in exact match accuracy over the prior best seq-to-seq parser, which is also comparable to the performance of constituency parsers.
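The abstract does not detail the copy mechanism; one common way to realize a copy-augmented seq-to-seq decoder is a pointer-generator-style mixture, sketched below for illustration only. The function name, toy utterance, vocabulary ids, and p_gen value are assumptions, not the paper's actual model.

# Illustrative copy-augmented output distribution (pointer-generator style).
# Not the paper's model: the function, toy utterance, and p_gen are assumptions.
import numpy as np

def copy_augmented_distribution(vocab_probs, attention, source_ids, p_gen):
    """Interpolate a generation distribution with a copy distribution.

    vocab_probs : (V,) softmax over the output vocabulary
    attention   : (S,) attention weights over source positions
    source_ids  : (S,) vocabulary ids of the source utterance tokens
    p_gen       : probability of generating vs. copying, in [0, 1]
    """
    mixed = p_gen * vocab_probs
    # Scatter the copy mass onto the ids of the source tokens, so slot
    # values can be copied verbatim from the utterance.
    np.add.at(mixed, source_ids, (1.0 - p_gen) * attention)
    return mixed

# Toy utterance "call mom": the decoder attends to "mom" and copies it.
vocab_probs = np.full(10, 0.1)            # uniform generation distribution
attention = np.array([0.1, 0.9])          # attention over ["call", "mom"]
source_ids = np.array([3, 7])             # vocabulary ids of those tokens
dist = copy_augmented_distribution(vocab_probs, attention, source_ids, p_gen=0.4)
print(dist.argmax())                      # 7, i.e. "mom" is copied

Scattering the copy mass onto source-token ids is what lets such a parser reproduce slot values (names, dates, etc.) that never appear in the output vocabulary.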

ColloQL: Robust Text-to-SQL Over Search Queries
Karthik Radhakrishnan | Arvind Srikantan | Xi Victoria Lin

Translating natural language utterances to executable queries is a helpful technique in making the vast amount of data stored in relational databases accessible to a wider range of non-tech-savvy end users. Prior work in this area has largely focused on textual input that is linguistically correct and semantically unambiguous. However, real-world user queries are often succinct, colloquial, and noisy, resembling the input of a search engine. In this work, we introduce data augmentation techniques and a sampling-based content-aware BERT model (ColloQL) to achieve robust text-to-SQL modeling over natural language search (NLS) questions. Due to the lack of evaluation data, we curate a new dataset of NLS questions and demonstrate the efficacy of our approach. ColloQL’s superior performance extends to well-formed text, achieving 84.9% (logical) and 90.7% (execution) accuracy on the WikiSQL dataset, making it, to the best of our knowledge, the highest-performing model that does not use execution-guided decoding.
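The logical and execution accuracies quoted above measure different things: exact query match versus matching result sets. The short sketch below contrasts the two on a made-up SQLite table; the schema, gold query, and prediction are illustrative assumptions, not WikiSQL data or ColloQL outputs.

# Contrast of logical-form accuracy vs. execution accuracy on a toy example.
# The table, gold query, and prediction are made up for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE players (name TEXT, team TEXT, points INTEGER)")
conn.executemany("INSERT INTO players VALUES (?, ?, ?)",
                 [("Ann", "Red", 30), ("Bo", "Blue", 12), ("Cy", "Red", 7)])

gold = "SELECT name FROM players WHERE points > 10"
pred = "SELECT name FROM players WHERE points >= 11"   # different form, same rows

logical_match = pred.strip().lower() == gold.strip().lower()                 # exact query match
execution_match = sorted(conn.execute(pred)) == sorted(conn.execute(gold))   # same result set

print(logical_match, execution_match)   # False True: execution accuracy is more lenient

Because semantically equivalent but textually different queries only count under execution accuracy, it is unsurprising that the execution figure (90.7%) is higher than the logical-form figure (84.9%).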