Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Jasmijn Bastings, Yonatan Belinkov, Emmanuel Dupoux, Mario Giulianelli, Dieuwke Hupkes, Yuval Pinter, Hassan Sajjad (Editors)
- Anthology ID: 2021.blackboxnlp-1
- Month: November
- Year: 2021
- Address: Punta Cana, Dominican Republic
- Venues: BlackboxNLP | EMNLP
- SIG:
- Publisher: Association for Computational Linguistics
- URL: https://aclanthology.org/2021.blackboxnlp-1
- DOI:
Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Jasmijn Bastings | Yonatan Belinkov | Emmanuel Dupoux | Mario Giulianelli | Dieuwke Hupkes | Yuval Pinter | Hassan Sajjad
Does External Knowledge Help Explainable Natural Language Inference? Automatic Evaluation vs. Human Ratings
Hendrik Schuff | Hsiu-Yu Yang | Heike Adel | Ngoc Thang Vu
On the Limits of Minimal Pairs in Contrastive Evaluation
Jannis Vamvas | Rico Sennrich
What Models Know About Their Attackers: Deriving Attacker Information From Latent Representations
Zhouhang Xie | Jonathan Brophy | Adam Noack | Wencong You | Kalyani Asthana | Carter Perkins | Sabrina Reis | Zayd Hammoudeh | Daniel Lowd | Sameer Singh
ProSPer: Probing Human and Neural Network Language Model Understanding of Spatial Perspective
Tessa Masis | Carolyn Anderson
Transferring Knowledge from Vision to Language: How to Achieve it and how to Measure it?
Tobias Norlund | Lovisa Hagström | Richard Johansson
A howling success or a working sea? Testing what BERT knows about metaphors
Paolo Pedinotti | Eliana Di Palma | Ludovica Cerini | Alessandro Lenci
How Length Prediction Influence the Performance of Non-Autoregressive Translation?
Minghan Wang | Guo Jiaxin | Yuxia Wang | Yimeng Chen | Su Chang | Hengchao Shang | Min Zhang | Shimin Tao | Hao Yang
On the Language-specificity of Multilingual BERT and the Impact of Fine-tuning
Marc Tanti | Lonneke van der Plas | Claudia Borg | Albert Gatt
Variation and generality in encoding of syntactic anomaly information in sentence embeddings
Qinxuan Wu | Allyson Ettinger
Enhancing Interpretable Clauses Semantically using Pretrained Word Representation
Rohan Kumar Yadav | Lei Jiao | Ole-Christoffer Granmo | Morten Goodwin
An in-depth look at Euclidean disk embeddings for structure preserving parsing
Federico Fancellu | Lan Xiao | Allan Jepson | Afsaneh Fazly
Assessing the Generalization Capacity of Pre-trained Language Models through Japanese Adversarial Natural Language Inference
Hitomi Yanaka | Koji Mineshima
Investigating Negation in Pre-trained Vision-and-language Models
Radina Dobreva | Frank Keller
Learning Mathematical Properties of Integers
Maria Ryskina | Kevin Knight
An Investigation of Language Model Interpretability via Sentence Editing
Samuel Stevens | Yu Su
Controlled tasks for model analysis: Retrieving discrete information from sequences
Ionut-Teodor Sorodoc | Gemma Boleda | Marco Baroni
Do Language Models Know the Way to Rome?
Bastien Liétard | Mostafa Abdou | Anders Søgaard