Proceedings of the First Workshop on Trustworthy Natural Language Processing

Yada Pruksachatkun, Anil Ramakrishna, Kai-Wei Chang, Satyapriya Krishna, Jwala Dhamala, Tanaya Guha, Xiang Ren (Editors)


Anthology ID:
2021.trustnlp-1
Month:
June
Year:
2021
Address:
Online
Venues:
NAACL | TrustNLP
Publisher:
Association for Computational Linguistics
URL:
https://aclanthology.org/2021.trustnlp-1
PDF:
https://aclanthology.org/2021.trustnlp-1.pdf

Proceedings of the First Workshop on Trustworthy Natural Language Processing
Yada Pruksachatkun | Anil Ramakrishna | Kai-Wei Chang | Satyapriya Krishna | Jwala Dhamala | Tanaya Guha | Xiang Ren

Interpretability Rules: Jointly Bootstrapping a Neural Relation Extractor with an Explanation Decoder
Zheng Tang | Mihai Surdeanu

We introduce a method that transforms a rule-based relation extraction (RE) classifier into a neural one such that both interpretability and performance are achieved. Our approach jointly trains an RE classifier with a decoder that generates explanations for these extractions, using as sole supervision a set of rules that match these relations. Our evaluation on the TACRED dataset shows that our neural RE classifier outperforms the rule-based one we started from by 9 F1 points; our decoder generates explanations with a high BLEU score of over 90%; and the joint learning improves the performance of both the classifier and the decoder.
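
As a rough sketch of the joint training the abstract describes, the snippet below pairs a relation-classification head with an explanation-decoder head and sums their cross-entropy losses, with both terms supervised by the matched rules. The module choices (LSTM encoder, mean pooling, linear decoder head) and the loss weight lam are illustrative assumptions, not the paper's actual architecture.

import torch
import torch.nn as nn

class JointREModel(nn.Module):
    # Hypothetical joint model: one shared encoder feeding two heads.
    def __init__(self, emb_dim, hidden_dim, num_relations, vocab_size):
        super().__init__()
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.rel_head = nn.Linear(hidden_dim, num_relations)   # RE classifier
        self.expl_head = nn.Linear(hidden_dim, vocab_size)     # explanation decoder (stub)

    def forward(self, token_embeddings):
        states, _ = self.encoder(token_embeddings)   # (batch, seq, hidden)
        sent_repr = states.mean(dim=1)               # pooled sentence vector
        return self.rel_head(sent_repr), self.expl_head(states)

ce = nn.CrossEntropyLoss()

def joint_loss(rel_logits, rel_gold, expl_logits, expl_gold, lam=1.0):
    # Classification loss plus token-level explanation loss; the rules
    # that matched a relation supply the labels for both terms.
    rel_term = ce(rel_logits, rel_gold)
    expl_term = ce(expl_logits.reshape(-1, expl_logits.size(-1)),
                   expl_gold.reshape(-1))
    return rel_term + lam * expl_term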

Measuring Biases of Word Embeddings: What Similarity Measures and Descriptive Statistics to Use?
Hossein Azarpanah | Mohsen Farhadloo

Word embeddings are widely used in Natural Language Processing (NLP) for a vast range of applications. However, it has been consistently shown that these embeddings reflect the same human biases that exist in the data used to train them. Most bias indicators introduced to reveal bias in word embeddings are average-based indicators built on the cosine similarity measure. In this study, we examine the impact of different similarity measures, as well as descriptive statistics other than averaging, on measuring the biases of contextual and non-contextual word embeddings. We show that the extent of bias revealed in word embeddings depends on the descriptive statistics and similarity measures used to measure it. We find that, across the ten categories of word embedding association tests, Mahalanobis distance reveals the smallest bias and Euclidean distance reveals the largest bias in word embeddings. In addition, the contextual models reveal less severe biases than the non-contextual word embedding models.
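
To make the comparison concrete, the sketch below computes an association-style bias score for one target word under the three measures the abstract names. The toy random vectors, the word sets, and the ridge-regularized covariance used for the Mahalanobis distance are illustrative assumptions, not the paper's test setup.

import numpy as np
from scipy.spatial.distance import cosine, euclidean, mahalanobis

rng = np.random.default_rng(0)
dim = 50
emb = {w: rng.normal(size=dim)                     # stand-in embeddings
       for w in ["doctor", "he", "him", "she", "her"]}

attr_a = [emb[w] for w in ["he", "him"]]           # attribute set A
attr_b = [emb[w] for w in ["she", "her"]]          # attribute set B

# Inverse covariance for Mahalanobis; the ridge keeps it invertible
# with so few observations.
all_vecs = np.stack(list(emb.values()))
VI = np.linalg.inv(np.cov(all_vecs.T) + 1e-3 * np.eye(dim))

def association(word, measure):
    # Mean association with set A minus mean association with set B.
    a = np.mean([measure(emb[word], v) for v in attr_a])
    b = np.mean([measure(emb[word], v) for v in attr_b])
    return a - b

for name, m in [("cosine similarity", lambda u, v: 1 - cosine(u, v)),
                ("euclidean distance", euclidean),
                ("mahalanobis distance", lambda u, v: mahalanobis(u, v, VI))]:
    print(name, association("doctor", m))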

Private Release of Text Embedding Vectors
Oluwaseyi Feyisetan | Shiva Kasiviswanathan

Ensuring strong theoretical privacy guarantees on text data is a challenging problem which is usually attained at the expense of utility. However, to improve the practicality of privacy-preserving text analyses, it is essential to design algorithms that better optimize this tradeoff. To address this challenge, we propose a release mechanism that takes any (text) embedding vector as input and releases a corresponding private vector. The mechanism satisfies an extension of differential privacy to metric spaces. Our idea, based on first randomly projecting the vectors to a lower-dimensional space and then adding noise in this projected space, generates private vectors that achieve strong theoretical guarantees on their utility. We support our theoretical proofs with empirical experiments on multiple word embedding models and NLP datasets, achieving in some cases more than 10% gains over existing state-of-the-art privatization techniques.
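
A minimal sketch of the two-step mechanism described above: a Gaussian random projection to a lower-dimensional space, then noise added in that space. The dimensions and the per-coordinate Laplace noise scale are illustrative assumptions; they are not the calibrated parameters the paper derives for its metric-space privacy guarantee.

import numpy as np

rng = np.random.default_rng(0)
d, k = 300, 20                     # original and projected dimensions (assumed)

# Johnson-Lindenstrauss-style random projection matrix.
P = rng.normal(scale=1.0 / np.sqrt(k), size=(k, d))

def release(x, noise_scale=0.5):
    z = P @ x                                    # project to the lower-dimensional space
    z += rng.laplace(scale=noise_scale, size=k)  # add noise in the projected space
    return z

private_vec = release(rng.normal(size=d))        # toy embedding in, private vector out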

xER: An Explainable Model for Entity Resolution using an Efficient Solution for the Clique Partitioning Problem
Samhita Vadrevu | Rakesh Nagi | JinJun Xiong | Wen-mei Hwu

In this paper, we propose a global, self-explainable solution to a prominent NLP problem: Entity Resolution (ER). We formulate ER as a graph partitioning problem. Every mention of a real-world entity is represented by a node in the graph, and the pairwise similarity scores between the mentions are used to associate these nodes with exactly one clique, which represents a real-world entity in the ER domain. We use the Clique Partitioning Problem (CPP), an Integer Program (IP), to formulate ER as a graph partitioning problem and then highlight the explainable nature of this method. Since CPP is NP-Hard, we introduce an efficient solution procedure, the xER algorithm, which solves CPP as a combination of finding maximal cliques in the graph and then performing generalized set packing using a novel formulation. We discuss the advantages of using xER over traditional methods and present computational experiments and results from applying this method to ER data sets.
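
The two-stage procedure the abstract outlines can be sketched as below: enumerate maximal cliques in the mention graph, then choose a disjoint subset of them. The similarity threshold is an assumption, and the greedy largest-first selection merely stands in for the paper's novel generalized set-packing formulation.

import networkx as nx

def xer_sketch(similarity, threshold=0.8):
    # Build the mention graph: an edge wherever pairwise similarity
    # clears the threshold.
    G = nx.Graph()
    for (i, j), s in similarity.items():
        G.add_nodes_from((i, j))
        if s >= threshold:
            G.add_edge(i, j, weight=s)

    # Stage 1: candidate entities are the maximal cliques.
    cliques = [frozenset(c) for c in nx.find_cliques(G)]

    # Stage 2: pack disjoint cliques so each mention lands in one entity
    # (greedy stand-in for the set-packing integer program).
    chosen, covered = [], set()
    for c in sorted(cliques, key=len, reverse=True):
        if c.isdisjoint(covered):
            chosen.append(c)
            covered |= c
    return chosen

sims = {("m1", "m2"): 0.9, ("m2", "m3"): 0.85, ("m1", "m3"): 0.95,
        ("m3", "m4"): 0.2, ("m4", "m5"): 0.9}
print(xer_sketch(sims))   # two entities: {m1, m2, m3} and {m4, m5}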

Towards Benchmarking the Utility of Explanations for Model Debugging
Maximilian Idahl | Lijun Lyu | Ujwal Gadiraju | Avishek Anand

Post-hoc explanation methods are an important class of approaches that help understand the rationale underlying a trained model's decisions. But how useful are they to an end user in accomplishing a given task? In this vision paper, we argue the need for a benchmark to facilitate evaluations of the utility of post-hoc explanation methods. As a first step to this end, we enumerate desirable properties that such a benchmark should possess for the task of debugging text classifiers. Additionally, we highlight that such a benchmark facilitates not only assessing the effectiveness of explanations but also their efficiency.