NoiER: An Approach for Training More Reliable Fine-Tuned Downstream Task Models

29 Aug 2021 · Myeongjun Jang, Thomas Lukasiewicz

Recent developments in pretrained language models trained in a self-supervised fashion, such as BERT, are driving rapid progress in NLP. However, their strong performance largely relies on exploiting syntactic artifacts of the training data rather than on a genuine understanding of the meaning of language. This excessive exploitation of spurious artifacts leads to a problematic issue: the distribution collapse problem, in which a model fine-tuned on a downstream task fails to distinguish out-of-distribution (OOD) sentences and instead assigns them high confidence scores. In this paper, we argue that distribution collapse is a prevalent issue in pretrained language models and propose noise entropy regularisation (NoiER) as an efficient learning paradigm that addresses the problem without auxiliary models or additional data. The proposed approach improved traditional OOD detection evaluation metrics by 55% on average compared to the original fine-tuned models.
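To make the idea of noise entropy regularisation concrete, the sketch below shows one common way such a training objective can be instantiated: the usual cross-entropy loss on in-distribution task data, plus a term that pushes the model's predictions on synthesized noise inputs towards the uniform distribution (i.e. maximum entropy). This is an illustrative assumption, not the authors' released code; the function name, the weighting factor `alpha`, and the use of randomly generated logits are all hypothetical.

```python
# Minimal sketch (assumed formulation, not the authors' implementation):
# fine-tuning loss with a noise entropy-regularisation term, so the
# classifier learns to be maximally uncertain on noise/OOD-like inputs.
import torch
import torch.nn.functional as F

def noise_entropy_regularised_loss(logits_in, labels, logits_noise, alpha=1.0):
    """Cross-entropy on in-distribution data plus a term pushing
    predictions on noise inputs towards the uniform distribution."""
    num_classes = logits_in.size(-1)
    # Standard fine-tuning objective on the downstream task.
    task_loss = F.cross_entropy(logits_in, labels)
    # KL(uniform || p_noise): minimised when the noise predictions are
    # uniform, i.e. the model assigns maximum entropy to noise samples.
    log_p_noise = F.log_softmax(logits_noise, dim=-1)
    uniform = torch.full_like(log_p_noise, 1.0 / num_classes)
    reg_loss = F.kl_div(log_p_noise, uniform, reduction="batchmean")
    return task_loss + alpha * reg_loss

# Example usage with random tensors standing in for model outputs.
logits_in = torch.randn(8, 3)       # model outputs on task sentences
labels = torch.randint(0, 3, (8,))  # gold task labels
logits_noise = torch.randn(8, 3)    # model outputs on noise sentences
loss = noise_entropy_regularised_loss(logits_in, labels, logits_noise)
```

Under this formulation, no auxiliary model or external OOD corpus is required: the regularisation signal comes entirely from noise samples generated during fine-tuning, which is consistent with the paper's claim of needing no additional data.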
