GLM: General Language Model Pretraining with Autoregressive Blank Infilling

There have been various types of pretraining architectures, including autoencoding models (e.g., BERT), autoregressive models (e.g., GPT), and encoder-decoder models (e.g., T5). However, none of these pretraining frameworks performs best across all tasks in the three main categories: natural language understanding (NLU), unconditional generation, and conditional generation. We propose a General Language Model (GLM) based on autoregressive blank infilling to address this challenge. GLM improves blank-infilling pretraining by adding 2D positional encodings and allowing spans to be predicted in an arbitrary order, which yields performance gains over BERT and T5 on NLU tasks. Meanwhile, GLM can be pretrained for different types of tasks by varying the number and lengths of blanks. On a wide range of tasks across NLU, conditional generation, and unconditional generation, GLM outperforms BERT, T5, and GPT given the same model sizes and data, and achieves the best performance from a single pretrained model with 1.25× the parameters of BERT-Large, demonstrating its generalizability to different downstream tasks.
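The pretraining objective can be sketched concretely. The Python snippet below is a minimal illustration of autoregressive blank infilling with 2D positions, not the authors' implementation: plain token strings stand in for vocabulary ids, the spans are given rather than sampled, and the attention mask and loss computation are omitted. The names glm_blank_infilling, MASK, START, and END are hypothetical.

```python
import random

MASK, START, END = "[MASK]", "[START]", "[END]"

def glm_blank_infilling(tokens, spans):
    """Build a GLM-style training example from `tokens` and a list of
    (start, length) spans to blank out.  Returns the model input, its
    2D position ids, and the autoregressive targets for Part B.
    Illustrative sketch only, not the paper's implementation."""
    spans = sorted(spans)

    # Part A: the corrupted text, with each span replaced by one [MASK].
    part_a, mask_pos, cursor = [], [], 0
    for start, length in spans:
        part_a.extend(tokens[cursor:start])
        mask_pos.append(len(part_a))   # where this span's [MASK] sits
        part_a.append(MASK)
        cursor = start + length
    part_a.extend(tokens[cursor:])

    # Part B: the masked-out spans, permuted so the model learns to
    # predict spans in an arbitrary order.
    order = list(range(len(spans)))
    random.shuffle(order)

    inputs = list(part_a)
    pos1 = list(range(len(part_a)))    # 1st dimension: position in Part A
    pos2 = [0] * len(part_a)           # 2nd dimension: 0 outside any span
    targets = []

    for i in order:
        start, length = spans[i]
        span_tokens = tokens[start:start + length]
        # Each span is generated from a [START] token; the target is the
        # span itself followed by [END], shifted by one position as usual.
        inputs += [START] + span_tokens
        targets += span_tokens + [END]
        # 1st dimension: every token of the span shares its [MASK] position.
        pos1 += [mask_pos[i]] * (length + 1)
        # 2nd dimension: 1-based index within the span.
        pos2 += list(range(1, length + 2))

    return inputs, (pos1, pos2), targets


if __name__ == "__main__":
    toks = ["x1", "x2", "x3", "x4", "x5", "x6"]
    inp, (p1, p2), tgt = glm_blank_infilling(toks, [(2, 2), (5, 1)])
    print(inp)   # Part A: x1 x2 [MASK] x5 [MASK], followed by the permuted spans
    print(tgt)
```

In the full model, Part A tokens attend to each other bidirectionally while Part B tokens attend only to Part A and to preceding Part B tokens, which is how a single pretrained model covers both NLU and generation tasks.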

ACL 2022

Results from the Paper


Ranked #4 on Language Modelling on WikiText-103 (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Document Summarization | CNN / Daily Mail | GLM-XXLarge | ROUGE-1 | 44.7 | #4 |
| Document Summarization | CNN / Daily Mail | GLM-XXLarge | ROUGE-2 | 21.4 | #4 |
| Document Summarization | CNN / Daily Mail | GLM-XXLarge | ROUGE-L | 41.4 | #4 |
| Abstractive Text Summarization | CNN / Daily Mail | GLM-XXLarge | ROUGE-1 | 44.7 | #9 |
| Abstractive Text Summarization | CNN / Daily Mail | GLM-XXLarge | ROUGE-2 | 21.4 | #13 |
| Abstractive Text Summarization | CNN / Daily Mail | GLM-XXLarge | ROUGE-L | 41.4 | #10 |
| Language Modelling | LAMBADA | GLM-XXLarge (unidirectional) | Accuracy | 67.18 | #27 |
| Language Modelling | LAMBADA | GLM-XXLarge (bidirectional) | Accuracy | 72.35 | #21 |
| Language Modelling | WikiText-103 | GLM-XXLarge (bidirectional) | Test perplexity | 11.33 | #4 |
| Language Modelling | WikiText-103 | GLM-XXLarge (bidirectional) | Number of params | 10000M | #1 |
| Language Modelling | WikiText-103 | GLM-XXLarge (unidirectional) | Test perplexity | 12.22 | #5 |
| Language Modelling | WikiText-103 | GLM-XXLarge (unidirectional) | Number of params | 10000M | #1 |
