1 code implementation • 17 May 2024 • Tingyu Xia, Bowen Yu, Yuan Wu, Yi Chang, Chang Zhou
In this paper, we begin by demonstrating that when Large Language Models (LLMs) are tasked with responding to queries, more capable models exhibit a more even probability distribution over their answers than their less capable counterparts.
1 code implementation • 11 Dec 2022 • Tingyu Xia, Yue Wang, Yuan Tian, Yi Chang
Weakly-supervised text classification aims to train a classifier using only class descriptions and unlabeled data.
1 code implementation • 22 Feb 2021 • Tingyu Xia, Yue Wang, Yuan Tian, Yi Chang
We study the problem of incorporating prior knowledge into a deep Transformer-based model, i.e., Bidirectional Encoder Representations from Transformers (BERT), to enhance its performance on semantic textual matching tasks.