Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts

Jing Jiang, Ivan Vulić (Editors)


Anthology ID:
2021.emnlp-tutorials
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic & Online
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
URL:
https://aclanthology.org/2021.emnlp-tutorials


Financial Opinion Mining
Chung-Chi Chen | Hen-Hsen Huang | Hsin-Hsi Chen

In this tutorial, we show researchers interested in financial opinion mining where the field stands and where it is heading. We divide the tutorial into three parts: coarse-grained financial opinion mining, fine-grained financial opinion mining, and possible research directions. The tutorial begins by introducing the components of a financial opinion proposed in our research agenda and summarizing the related studies. We also highlight the task of mining customers’ opinions toward financial services in the FinTech industry, and compare such opinions with general-domain opinions. Several potential research questions will be addressed. We hope the audience of this tutorial will gain an overview of financial opinion mining and identify their own research directions.

Robustness and Adversarial Examples in Natural Language Processing
Kai-Wei Chang | He He | Robin Jia | Sameer Singh

Recent studies show that many NLP systems are sensitive and vulnerable to small input perturbations and do not generalize well across different datasets. This lack of robustness hinders the use of NLP systems in real-world applications. This tutorial aims to raise awareness of practical concerns about NLP robustness. It targets NLP researchers and practitioners who are interested in building reliable NLP systems. In particular, we will review recent studies on analyzing the weaknesses of NLP systems when facing adversarial inputs and data with a distribution shift. We will provide the audience with a holistic view of 1) how to use adversarial examples to examine the weaknesses of NLP models and facilitate debugging; 2) how to enhance the robustness of existing NLP models and defend against adversarial inputs; and 3) how considerations of robustness affect the real-world NLP applications used in our daily lives. We will conclude the tutorial by outlining future research directions in this area.