Proceedings of the 1st and 2nd Workshops on Natural Logic Meets Machine Learning (NALOMA)

Aikaterini-Lida Kalouli, Lawrence S. Moss (Editors)


Anthology ID:
2021.naloma-1
Month:
June
Year:
2021
Address:
Groningen, the Netherlands (online)
Venues:
IWCS | NALOMA
SIG:
SIGSEM
Publisher:
Association for Computational Linguistics
URL:
https://aclanthology.org/2021.naloma-1
PDF:
https://aclanthology.org/2021.naloma-1.pdf


Attentive Tree-structured Network for Monotonicity Reasoning
Zeming Chen

Many state-of-the-art neural models designed for monotonicity reasoning perform poorly on downward inference. To address this shortcoming, we developed an attentive tree-structured neural network. It consists of a tree-structured long short-term memory network (Tree-LSTM) with soft attention, designed to model the syntactic parse-tree information of the sentence pair in a reasoning task. A self-attentive aggregator aligns the representations of the premise and the hypothesis. We present our model, evaluate it on the Monotonicity Entailment Dataset (MED), and show that it outperforms existing models on MED, with an attempt to explain this result.
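
(Not from the paper: as a point of reference for the architecture this abstract describes, here is a minimal sketch of a child-sum Tree-LSTM cell in the style of Tai et al. (2015), one standard tree-structured composition unit. The choice of variant, all names, and all dimensions are illustrative assumptions, not details taken from the paper.)

```python
# Illustrative child-sum Tree-LSTM cell; composes a parent node's state
# from its word embedding x and the (h, c) states of its children.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ChildSumTreeLSTMCell:
    def __init__(self, input_dim, hidden_dim, rng=np.random.default_rng(0)):
        d, h = input_dim, hidden_dim
        # One weight set per gate: input (i), output (o), update (u), forget (f).
        self.W = {g: rng.normal(0, 0.1, (h, d)) for g in "iouf"}
        self.U = {g: rng.normal(0, 0.1, (h, h)) for g in "iouf"}
        self.b = {g: np.zeros(h) for g in "iouf"}

    def __call__(self, x, child_states):
        # child_states: list of (h_child, c_child) pairs; empty at leaves.
        h_sum = sum((h for h, _ in child_states), np.zeros_like(self.b["i"]))
        i = sigmoid(self.W["i"] @ x + self.U["i"] @ h_sum + self.b["i"])
        o = sigmoid(self.W["o"] @ x + self.U["o"] @ h_sum + self.b["o"])
        u = np.tanh(self.W["u"] @ x + self.U["u"] @ h_sum + self.b["u"])
        c = i * u
        # A separate forget gate per child lets the cell keep or drop
        # information from each subtree independently.
        for h_k, c_k in child_states:
            f_k = sigmoid(self.W["f"] @ x + self.U["f"] @ h_k + self.b["f"])
            c = c + f_k * c_k
        return o * np.tanh(c), c  # parent (h, c)
```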

Transferring Representations of Logical Connectives
Aaron Traylor | Ellie Pavlick | Roman Feiman

In modern natural language processing pipelines, it is common practice to pretrain a generative language model on a large corpus of text and then to finetune the resulting representations by continuing to train them on a discriminative textual inference task. However, it is not immediately clear whether language models in this paradigm capture the logical meaning necessary to model entailment. We examine this pretrain-finetune recipe with language models trained on an entailment task over a synthetic propositional language, and present results on test sets probing the models’ knowledge of axioms of first-order logic.
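
(Not from the paper: gold labels for a synthetic propositional entailment task can in principle be computed by brute-force truth-table evaluation. The sketch below, with invented names, shows one such labeling procedure; the paper's actual data construction may differ.)

```python
# Illustrative entailment labeling for propositional formulas by
# exhaustively checking every truth assignment to the atoms.
from itertools import product

ATOMS = ["p", "q", "r"]

def entails(premise, hypothesis):
    """True iff every assignment satisfying `premise` satisfies `hypothesis`.

    Formulas are Python boolean expressions over ATOMS, e.g. "p and (q or r)".
    (eval on trusted, generated strings only; this is a toy sketch.)
    """
    for values in product([False, True], repeat=len(ATOMS)):
        env = dict(zip(ATOMS, values))
        if eval(premise, {}, env) and not eval(hypothesis, {}, env):
            return False  # counterexample assignment found
    return True

# Example labels for a synthetic corpus:
assert entails("p and q", "p or q")      # conjunction entails disjunction
assert not entails("p or q", "p and q")  # but not vice versa
```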

Monotonic Inference for Underspecified Episodic Logic
Gene Kim | Mandar Juvekar | Lenhart Schubert

We present a method for making natural logic inferences from the Unscoped Logical Form of Episodic Logic. We establish a correspondence between the inference rules of scope-resolved Episodic Logic and the natural logic treatment of Sánchez Valencia (1991a), and thereby demonstrate the ability to handle foundational natural logic inferences from the prior literature as well as more general nested monotonicity inferences.
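
(Not from the paper: a toy illustration of the kind of monotonicity inference at issue, using the standard monotonicity profiles of quantifiers from the natural logic literature. The lexicon and function names are invented for the example.)

```python
# Illustrative monotonicity licensing: a quantifier's profile says whether a
# term in each argument position may be replaced by a more general (upward)
# or more specific (downward) term while preserving truth.
MONOTONICITY = {
    # (restrictor, body) polarity for each determiner
    "every": ("down", "up"),
    "some":  ("up", "up"),
    "no":    ("down", "down"),
}

# A tiny lexical ordering: HYPERNYM[x] is a strictly more general term than x.
HYPERNYM = {"dog": "animal", "run": "move"}

def licensed(det, position, old, new):
    """Is replacing `old` with `new` in argument `position` (0=restrictor,
    1=body) of determiner `det` a sound monotonicity inference?"""
    polarity = MONOTONICITY[det][position]
    if polarity == "up":   # may generalize, e.g. run -> move
        return HYPERNYM.get(old) == new
    else:                  # may specialize, e.g. animal -> dog
        return HYPERNYM.get(new) == old

# "Every dog runs" entails "Every dog moves" (body is upward monotone) ...
assert licensed("every", 1, "run", "move")
# ... but not "Every animal runs" (restrictor is downward monotone),
assert not licensed("every", 0, "dog", "animal")
# while specializing the restrictor is sound: "Every animal runs" |= "Every dog runs".
assert licensed("every", 0, "animal", "dog")
```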

Bayesian Classification and Inference in a Probabilistic Type Theory with Records
Staffan Larsson | Robin Cooper

We propose a probabilistic account of semantic inference and classification formulated in terms of probabilistic type theory with records, building on Cooper et al. (2014) and Cooper et al. (2015). We suggest probabilistic type-theoretic formulations of Naive Bayes classifiers and Bayesian networks. A central element of these constructions is a type-theoretic version of a random variable. We illustrate this account with a simple language game combining probabilistic classification of perceptual input with probabilistic (semantic) inference.
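
(Not from the paper: the type-theoretic construction reconstructs the ordinary Naive Bayes computation. As background, here is a plain-Python sketch of that computation, with an invented toy "perceptual" classification game; nothing here is TTR itself.)

```python
# Illustrative Naive Bayes: estimate P(label) and P(feature=value | label)
# from counts, then pick the label maximizing the product of factors.
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (feature_dict, label) pairs."""
    priors = Counter(label for _, label in examples)
    likelihoods = defaultdict(Counter)
    for features, label in examples:
        for name, value in features.items():
            likelihoods[label][(name, value)] += 1
    return priors, likelihoods

def classify(priors, likelihoods, features):
    total = sum(priors.values())
    def score(label):
        p = priors[label] / total
        for name, value in features.items():
            # Crude add-one smoothing so unseen feature values keep p > 0.
            p *= (likelihoods[label][(name, value)] + 1) / (priors[label] + 2)
        return p
    return max(priors, key=score)

# A toy perceptual game: classify an object from crude shape/colour features.
data = [({"shape": "round", "colour": "red"}, "apple"),
        ({"shape": "round", "colour": "green"}, "apple"),
        ({"shape": "oblong", "colour": "green"}, "pear")]
priors, likelihoods = train(data)
print(classify(priors, likelihoods, {"shape": "round", "colour": "red"}))  # apple
```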