Irina Illina
2022
Dynamically Refined Regularization for Improving Cross-corpora Hate Speech Detection
Tulika Bose | Nikolaos Aletras | Irina Illina | Dominique Fohr
Findings of the Association for Computational Linguistics: ACL 2022
Hate speech classifiers exhibit substantial performance degradation when evaluated on datasets different from the source. This is largely caused by classifiers learning spurious correlations between hate speech labels and words from the training corpus that are not necessarily relevant to hateful language. Previous work has attempted to mitigate this problem by regularizing specific terms from pre-defined static dictionaries. While this has been demonstrated to improve the generalizability of classifiers, the coverage of such methods is limited and the dictionaries require regular manual updates from human experts. In this paper, we propose to automatically identify and reduce spurious correlations using attribution methods, with dynamic refinement of the list of terms that need to be regularized during training. Our approach is flexible and improves cross-corpora performance over previous work, both on its own and in combination with pre-defined dictionaries.
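The dynamic-refinement idea can be illustrated with a minimal sketch for a linear classifier: treat the mean of |w_j · x_j| as a crude input-times-gradient attribution proxy, re-select the most highly attributed features at every epoch, and apply a penalty only to those weights. All function names, the attribution proxy, and the toy training loop below are illustrative assumptions, not the paper's actual models or attribution methods.

```python
import numpy as np

def attribution_scores(w, X):
    # Crude input-x-gradient style attribution for a linear model:
    # mean |w_j * x_j| per feature over the batch (an assumed proxy).
    return np.mean(np.abs(X * w), axis=0)

def refine_regularization_list(w, X, k):
    # Dynamically re-select the k most attributed features;
    # this list is refreshed every epoch rather than fixed in advance.
    return np.argsort(attribution_scores(w, X))[-k:]

def train(X, y, epochs=50, lr=0.1, lam=1.0, k=2):
    # Logistic regression where only the dynamically listed
    # (potentially spurious) features receive an L2-style penalty.
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))        # sigmoid predictions
        grad = X.T @ (p - y) / n                   # logistic-loss gradient
        reg_idx = refine_regularization_list(w, X, k)
        grad[reg_idx] += lam * w[reg_idx]          # penalize listed terms only
        w -= lr * grad
    return w
```

In contrast to a static dictionary, the regularized set here adapts as the model's attributions change during training, which is the flexibility the abstract refers to.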
2020
Label Propagation-Based Semi-Supervised Learning for Hate Speech Classification
Ashwin Geet D’Sa | Irina Illina | Dominique Fohr | Dietrich Klakow | Dana Ruiter
Proceedings of the First Workshop on Insights from Negative Results in NLP
Research on hate speech classification has received increased attention. In real-life scenarios, often only a small amount of labeled hate speech data is available, making it difficult to train a reliable classifier. Semi-supervised learning takes advantage of a small amount of labeled data together with a large amount of unlabeled data. In this paper, label propagation-based semi-supervised learning is explored for the task of hate speech classification. The quality of the labels assigned to the unlabeled set depends on the input representations. In this work, we show that pre-trained representations are label agnostic and, when used with label propagation, yield poor results. Neural network-based fine-tuning can be adopted to learn task-specific representations using a small amount of labeled data. We show that fully fine-tuned representations may not always be the best representations for label propagation, and that intermediate representations may perform better in a semi-supervised setup.
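A minimal sketch of graph-based label propagation over input representations, in the Zhou et al.-style spreading form: build an RBF affinity graph over the embeddings, symmetrically normalize it, and iteratively diffuse the known labels. The function name, hyperparameters, and toy 2-D "representations" are assumptions for illustration; the paper applies this to learned text representations.

```python
import numpy as np

def label_propagation(X, y, n_iter=100, alpha=0.99, sigma=1.0):
    # X: (n, d) input representations; y: class in {0, 1} for labeled
    # points, -1 for unlabeled points.
    n = X.shape[0]
    # RBF affinity matrix over the representations.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalization: S = D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # One-hot seed labels; unlabeled rows stay all-zero.
    Y0 = np.zeros((n, 2))
    labeled = y >= 0
    Y0[labeled, y[labeled]] = 1.0
    # Iterative spreading: keep a (1 - alpha) pull toward the seeds.
    F = Y0.copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1 - alpha) * Y0
    return F.argmax(axis=1)
```

Since the affinity matrix is computed directly from the representations, the point the abstract makes follows naturally: label-agnostic pre-trained embeddings can place differently labeled examples close together, and the propagated labels are only as good as that geometry.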