A Simple yet Effective Framework for Few-Shot Aspect-Based Sentiment Analysis

The pre-training and fine-tuning paradigm has become the mainstream framework in Aspect-Based Sentiment Analysis (ABSA). Although it achieves strong performance in domains with sufficient fine-grained aspect-sentiment annotations, few-shot ABSA remains challenging in domains where manual annotations are scarce. In this work, we argue that two kinds of gaps, i.e., the domain gap and the objective gap, hinder the transfer of knowledge from pre-trained language models (PLMs) to ABSA tasks. To address this issue, we introduce a simple yet effective framework called FS-ABSA, which consists of domain-adaptive pre-training and text-infilling fine-tuning. We approach the End-to-End ABSA task as a text-infilling problem and perform domain-adaptive pre-training with the text-infilling objective, narrowing the two gaps and consequently facilitating knowledge transfer. Experiments show that the resulting model outperforms baselines under the few-shot setting while pushing the state of the art to a new level across datasets under the fully supervised setting. Moreover, we apply our framework to two non-English low-resource languages to demonstrate its generality and effectiveness.
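
To make the text-infilling formulation concrete, the sketch below shows how a single End-to-End ABSA example can be cast as span infilling for a T5-style encoder-decoder. This is a minimal illustration, not the authors' released code: the backbone ("t5-small"), the prompt template, and the label words ("positive") are all assumptions for the sake of the example.

    # Minimal sketch (not the authors' code) of casting End-to-End ABSA as
    # span infilling with a T5-style encoder-decoder. The backbone
    # ("t5-small"), prompt template, and label words are assumptions.
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    model_name = "t5-small"  # assumption: any PLM pre-trained with span infilling
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

    review = "The pizza was absolutely delicious."
    # T5 sentinel tokens mark the blanks the model must fill with an aspect
    # term and its sentiment polarity, so fine-tuning reuses the infilling
    # objective instead of adding a new task-specific head.
    source = review + " The aspect term is <extra_id_0> and its sentiment is <extra_id_1>."
    target = "<extra_id_0> pizza <extra_id_1> positive"

    inputs = tokenizer(source, return_tensors="pt")
    labels = tokenizer(target, return_tensors="pt").input_ids
    outputs = model(input_ids=inputs.input_ids,
                    attention_mask=inputs.attention_mask,
                    labels=labels)
    outputs.loss.backward()  # one standard seq2seq fine-tuning step
    print(f"infilling loss: {outputs.loss.item():.3f}")

Under this framing, domain-adaptive pre-training amounts to running the same span-infilling objective over unlabeled in-domain text before fine-tuning, so both stages optimize a single objective and the domain and objective gaps shrink together.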


Datasets

SemEval 2014 Task 4 (Laptop and Restaurant reviews)

Results from the Paper


Task                                                 Dataset                          Model    Metric Name  Metric Value  Global Rank
Aspect Term Extraction and Sentiment Classification  SemEval                          FS-ABSA  Avg F1       76.73         # 1
                                                     - Restaurant 2014                         F1           82.29         # 1
                                                     - Laptop 2014                             F1           71.16         # 1
Aspect-Based Sentiment Analysis (ABSA)               SemEval 2014 Task 4 Laptop       FS-ABSA  F1           71.16         # 2
Aspect-Based Sentiment Analysis (ABSA)               SemEval 2014 Task 4 Subtask 1+2  FS-ABSA  F1           71.16         # 3

Methods


No methods listed for this paper.