Target-oriented Opinion Words Extraction

7 papers with code • 0 benchmarks • 0 datasets

The objective of TOWE (Target-oriented Opinion Words Extraction) is to extract, from a review sentence, the opinion words that describe or evaluate a given opinion target.
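TOWE is typically framed as target-specific BIO sequence labeling: the same sentence yields a different tag sequence for each target. The sketch below is illustrative only (not taken from any of the listed papers' implementations); the helper `bio_tags` and the example sentence are assumptions for demonstration.

```python
def bio_tags(num_tokens, opinion_spans):
    """BIO tags marking the opinion spans for one target.

    opinion_spans is a list of (start, end) token indices, end exclusive.
    Different targets in the same sentence have different opinion spans,
    so the gold tag sequence is target-specific.
    """
    tags = ["O"] * num_tokens
    for start, end in opinion_spans:
        tags[start] = "B"                          # first opinion token
        tags[start + 1:end] = ["I"] * (end - start - 1)  # continuation tokens
    return tags

# "The screen is bright but the battery drains fast ."
tokens = "The screen is bright but the battery drains fast .".split()
# Target "screen" (index 1) -> opinion word "bright" (index 3)
print(bio_tags(len(tokens), [(3, 4)]))
# Target "battery" (index 6) -> opinion phrase "drains fast" (indices 7-9)
print(bio_tags(len(tokens), [(7, 9)]))
```

A TOWE model receives both the sentence and the target (e.g. via target-fused encoding or position embeddings, as in the papers below) and predicts such a tag sequence per target.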

Most implemented papers

Target-oriented Opinion Words Extraction with Target-fused Neural Sequence Labeling

NJUNLP/TOWE NAACL 2019

In this paper, we propose a novel sequence labeling subtask for ABSA named TOWE (Target-oriented Opinion Words Extraction), which aims at extracting the corresponding opinion words for a given opinion target.

Latent Opinions Transfer Network for Target-Oriented Opinion Words Extraction

1429904852/LOTN 7 Jan 2020

In this paper, we propose a novel model to transfer this opinion knowledge from resource-rich review sentiment classification datasets to the low-resource TOWE task.

Attention-based Relational Graph Convolutional Network for Target-Oriented Opinion Words Extraction

wcwowwwww/towe-eacl EACL 2021

It aims to extract the corresponding opinion words for a given opinion target in a review sentence.

Target-specified Sequence Labeling with Multi-head Self-attention for Target-oriented Opinion Words Extraction

fengyh3/TSMSA NAACL 2021

Many recent works on ABSA focus on Target-oriented Opinion Words (or Terms) Extraction (TOWE), which aims at extracting the corresponding opinion words for a given opinion target.

An Empirical Study on Leveraging Position Embeddings for Target-oriented Opinion Words Extraction

samensah/encoders_towe_emnlp2021 EMNLP 2021

Target-oriented opinion words extraction (TOWE) (Fan et al., 2019b) is a new subtask of target-oriented sentiment analysis that aims to extract opinion words for a given aspect in text.

Training Entire-Space Models for Target-oriented Opinion Words Extraction

l294265421/SIGIR22-TOWE 15 Apr 2022

Moreover, the performance of these models on the first type of instance cannot reflect their performance on the entire space.

Exploiting Unlabeled Data for Target-Oriented Opinion Words Extraction

towessl/towessl COLING 2022

Limited labeled data increases the risk of distribution shift between test data and training data.