Bill Noble


2021

Large-scale text pre-training helps with dialogue act recognition, but not without fine-tuning
Bill Noble | Vladislav Maraev
Proceedings of the 14th International Conference on Computational Semantics (IWCS)

We use dialogue act recognition (DAR) to investigate how well BERT represents utterances in dialogue, and how fine-tuning and large-scale pre-training contribute to its performance. We find that while both the standard BERT pre-training and pre-training on dialogue-like data are useful, task-specific fine-tuning is essential for good performance.