Tim O’Gorman


2021

Improved Latent Tree Induction with Distant Supervision via Span Constraints
Zhiyang Xu | Andrew Drozdov | Jay Yoon Lee | Tim O’Gorman | Subendhu Rongali | Dylan Finkbeiner | Shilpa Suresh | Mohit Iyyer | Andrew McCallum
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

For over thirty years, researchers have developed and analyzed methods for latent tree induction as an approach to unsupervised syntactic parsing. Nonetheless, modern systems still do not perform well enough compared to their supervised counterparts to have any practical use as structural annotation of text. In this work, we present a technique that uses distant supervision in the form of span constraints (i.e., phrase bracketing) to improve performance in unsupervised constituency parsing. Using a relatively small number of span constraints, we can substantially improve the output from DIORA, an already competitive unsupervised parsing system. Compared with full parse tree annotation, span constraints can be acquired with minimal effort, such as with a lexicon derived from Wikipedia, to find exact text matches. Our experiments show that span constraints based on entities improve constituency parsing on the English WSJ Penn Treebank by more than 5 F1. Furthermore, our method extends to any domain where span constraints are easily attainable, and as a case study we demonstrate its effectiveness by parsing biomedical text from the CRAFT dataset.
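As a concrete illustration of the distant supervision step, below is a minimal Python sketch of harvesting span constraints by exact lexicon matching; the function and the tiny lexicon are hypothetical stand-ins, not the authors' released pipeline.

    # Sketch: derive span constraints from exact lexicon matches.
    # A constraint (i, j) asserts that tokens i..j-1 should form a constituent.
    def span_constraints(tokens, lexicon, max_len=8):
        constraints = []
        for i in range(len(tokens)):
            # consider multi-word spans only; single tokens are trivially constituents
            for j in range(i + 2, min(i + max_len, len(tokens)) + 1):
                if " ".join(tokens[i:j]).lower() in lexicon:
                    constraints.append((i, j))
        return constraints

    lexicon = {"penn treebank"}  # e.g., entity strings derived from Wikipedia
    print(span_constraints("they parsed the Penn Treebank corpus".split(), lexicon))
    # -> [(3, 5)]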

2020

Unsupervised Parsing with S-DIORA: Single Tree Encoding for Deep Inside-Outside Recursive Autoencoders
Andrew Drozdov | Subendhu Rongali | Yi-Pei Chen | Tim O’Gorman | Mohit Iyyer | Andrew McCallum
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

The deep inside-outside recursive autoencoder (DIORA; Drozdov et al., 2019) is a self-supervised neural model that learns to induce syntactic tree structures for input sentences without access to labeled training data. In this paper, we discover that while DIORA exhaustively encodes all possible binary trees of a sentence with a soft dynamic program, its vector averaging approach is locally greedy and cannot recover from errors when computing the highest scoring parse tree in bottom-up chart parsing. To fix this issue, we introduce S-DIORA, an improved variant of DIORA that encodes a single tree rather than a softly-weighted mixture of trees by employing a hard argmax operation and a beam at each cell in the chart. Our experiments show that through fine-tuning a pre-trained DIORA with our new algorithm, we improve the state of the art in unsupervised constituency parsing on the English WSJ Penn Treebank by 2.2-6% F1, depending on the data used for fine-tuning.
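The per-cell beam can be sketched as a CKY-style chart where each cell keeps its top-k scored single-tree encodings instead of a soft average over all split points; compose and score below are placeholders for DIORA's learned composition and scoring functions (assumptions, not the released implementation).

    import heapq

    # Each cell (i, j) stores up to k (score, vector) pairs, one per surviving tree.
    def parse_chart(leaves, compose, score, k=2):
        n = len(leaves)
        chart = {(i, i + 1): [(0.0, leaves[i])] for i in range(n)}
        for length in range(2, n + 1):
            for i in range(n - length + 1):
                j = i + length
                candidates = []
                for split in range(i + 1, j):  # hard choice over split points
                    for ls, lv in chart[(i, split)]:
                        for rs, rv in chart[(split, j)]:
                            v = compose(lv, rv)
                            candidates.append((ls + rs + score(v), v))
                chart[(i, j)] = heapq.nlargest(k, candidates, key=lambda c: c[0])
        return chart[(0, n)][0]  # highest-scoring single-tree encoding of the sentence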

Cross-lingual annotation: a road map for low- and no-resource languages
Meagan Vigus | Jens E. L. Van Gysel | Tim O’Gorman | Andrew Cowell | Rosa Vallejos | William Croft
Proceedings of the Second International Workshop on Designing Meaning Representations

This paper presents a road map for the annotation of semantic categories in typologically diverse languages, with potentially few linguistic resources, and often no existing computational resources. Past semantic annotation efforts have focused largely on high-resource languages, or relatively low-resource languages with a large number of native speakers. However, there are certain typological traits, namely the synthesis of multiple concepts into a single word, that are more common in languages with a smaller speech community. For example, what is expressed as a sentence in a more analytic language like English may be expressed as a single word in a more synthetic language like Arapaho. This paper proposes solutions for annotating analytic and synthetic languages in a comparable way based on existing typological research, and introduces a road map for the annotation of languages with a dearth of resources.

An Instance Level Approach for Shallow Semantic Parsing in Scientific Procedural Text
Daivik Swarup | Ahsaas Bajaj | Sheshera Mysore | Tim O’Gorman | Rajarshi Das | Andrew McCallum
Findings of the Association for Computational Linguistics: EMNLP 2020

In specific domains, such as procedural scientific text, human-labeled data for shallow semantic parsing is especially limited and expensive to create. Fortunately, such specific domains often use rather formulaic writing, such that the different ways of expressing relations in a small number of grammatically similar labeled sentences may provide high coverage of the semantic structures in the corpus, through an appropriately rich similarity metric. In light of this opportunity, this paper explores an instance-based approach to the relation prediction sub-task within shallow semantic parsing, in which semantic labels from structurally similar sentences in the training set are copied to test sentences. Candidate similar sentences are retrieved using SciBERT embeddings. For labels where it is possible to copy from a similar sentence, we employ an instance-level copy network; when this is not possible, a globally shared parametric model is employed. Experiments show our approach outperforms both baseline and prior methods by 0.75 to 3 F1 absolute on the Wet Lab Protocol Corpus and by 1 F1 absolute on the Materials Science Procedural Text Corpus.
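The retrieval step could look like the following sketch, which scores training sentences by cosine similarity of their embeddings (SciBERT in the paper; any sentence encoder here) and returns the nearest neighbor whose labels may be copied; the copy network and the parametric fallback are omitted.

    import numpy as np

    def nearest_training_sentence(query_vec, train_vecs):
        # cosine similarity between the query and each training-sentence embedding
        sims = train_vecs @ query_vec / (
            np.linalg.norm(train_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9)
        return int(np.argmax(sims))  # index of the most similar training sentence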

2019

Proceedings of the Shared Task on Cross-Framework Meaning Representation Parsing at the 2019 Conference on Natural Language Learning
Stephan Oepen | Omri Abend | Jan Hajic | Daniel Hershcovich | Marco Kuhlmann | Tim O’Gorman | Nianwen Xue
Proceedings of the Shared Task on Cross-Framework Meaning Representation Parsing at the 2019 Conference on Natural Language Learning

MRP 2019: Cross-Framework Meaning Representation Parsing
Stephan Oepen | Omri Abend | Jan Hajic | Daniel Hershcovich | Marco Kuhlmann | Tim O’Gorman | Nianwen Xue | Jayeol Chun | Milan Straka | Zdenka Uresova
Proceedings of the Shared Task on Cross-Framework Meaning Representation Parsing at the 2019 Conference on Natural Language Learning

The 2019 Shared Task at the Conference on Computational Natural Language Learning (CoNLL) was devoted to Meaning Representation Parsing (MRP) across frameworks. Five distinct approaches to the representation of sentence meaning in the form of directed graphs were represented in the training and evaluation data for the task, packaged in a uniform abstract graph representation and serialization. The task received submissions from eighteen teams, of which five did not participate in the official ranking because they arrived after the closing deadline, made use of additional training data, or involved one of the task co-organizers. All technical information regarding the task, including system submissions, official results, and links to supporting resources and software, is available from the task web site at http://mrp.nlpl.eu
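For a sense of the uniform serialization, here is a schematic graph in the spirit of the task's JSON interchange format (field names reproduced from memory and simplified; the authoritative specification is on the task web site):

    {"id": "ex0", "framework": "dm", "input": "The dog barked.",
     "tops": [1],
     "nodes": [{"id": 0, "label": "dog", "anchors": [{"from": 4, "to": 7}]},
               {"id": 1, "label": "bark", "anchors": [{"from": 8, "to": 14}]}],
     "edges": [{"source": 1, "target": 0, "label": "ARG1"}]}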

2018

AMR Beyond the Sentence: the Multi-sentence AMR corpus
Tim O’Gorman | Michael Regan | Kira Griffitt | Ulf Hermjakob | Kevin Knight | Martha Palmer
Proceedings of the 27th International Conference on Computational Linguistics

There are few corpora that endeavor to represent the semantic content of entire documents. We present a corpus that accomplishes one way of capturing document-level semantics: annotating coreference and similar phenomena (bridging and implicit roles) on top of gold Abstract Meaning Representations of sentence-level semantics. We present this new corpus with an analysis of its quality, alongside a plausible baseline for comparison. It is hoped that this Multi-Sentence AMR corpus (MS-AMR) may prove a feasible approach for developing rich representations of document meaning, useful for tasks such as information extraction and question answering.
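Schematically, the annotation links variables across gold sentence-level AMRs (shown here in PENMAN notation); the chain format below is illustrative only, not the corpus's actual release format:

    Sentence 1: "The girl wanted to leave."    Sentence 2: "She left."
    (w / want-01                                (l2 / leave-01
       :ARG0 (g / girl)                            :ARG0 (s / she))
       :ARG1 (l / leave-01
          :ARG0 g))

    Illustrative coreference chain over (sentence, variable) pairs:
      chain-1: (1, g), (2, s)    # "the girl" ... "she"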

2017

Double Trouble: The Problem of Construal in Semantic Annotation of Adpositions
Jena D. Hwang | Archna Bhatia | Na-Rae Han | Tim O’Gorman | Vivek Srikumar | Nathan Schneider
Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017)

We consider the semantics of prepositions, revisiting a broad-coverage annotation scheme used for annotating all 4,250 preposition tokens in a 55,000-word corpus of English. Attempts to apply the scheme to adpositions and case markers in other languages, as well as some problematic cases in English, have led us to reconsider the assumption that an adposition’s lexical contribution is equivalent to the role/relation that it mediates. Our proposal is to embrace the potential for construal in adposition use, expressing such phenomena directly at the token level to manage complexity and avoid sense proliferation. We suggest a framework to represent both the scene role and the adposition’s lexical function so they can be annotated at scale, supporting automatic, statistical processing of domain-general language, and discuss how this representation would allow for a simpler inventory of labels.
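An illustrative token-level annotation under this proposal might pair the two labels as follows (example sentence and labels chosen for illustration, not drawn from the paper's data):

    # The scene role and the adposition's lexical function are annotated jointly,
    # e.g., "to" marking a Recipient while itself contributing Goal semantics.
    annotation = {
        "sentence": "She talked to the manager.",
        "adposition": "to",
        "scene_role": "Recipient",  # role of the phrase in the described scene
        "function": "Goal",         # what "to" lexically contributes (construal)
    }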