no code implementations • 23 May 2024 • Yiming Chen, Chen Zhang, Danqing Luo, Luis Fernando D'Haro, Robby T. Tan, Haizhou Li
Specifically, inspired by the recent success of large language models (LLMs) in text generation and evaluation, we adopt strong LLMs as both the data generator and gold evaluator.
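As a rough illustration of this generator/evaluator split (a minimal sketch, not the paper's actual pipeline), the snippet below queries one chat LLM to synthesize a dialogue and a second call to score it as a "gold" judge. The OpenAI Python client and the model name are placeholder assumptions; any chat-completion API would do.

```python
from openai import OpenAI  # assumed client; any chat-completion API works similarly

client = OpenAI()
MODEL = "gpt-4o-mini"  # assumed model name, used only as a placeholder

def generate_dialogue(topic: str) -> str:
    """LLM as data generator: synthesize a short training dialogue."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user",
                   "content": f"Write a two-turn dialogue about {topic}."}],
    )
    return resp.choices[0].message.content

def gold_evaluate(dialogue: str) -> str:
    """LLM as gold evaluator: rate the generated dialogue's quality."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user",
                   "content": "Rate the coherence of this dialogue from 1 to 5 "
                              f"and briefly justify the score:\n{dialogue}"}],
    )
    return resp.choices[0].message.content

sample = generate_dialogue("booking a restaurant")
print(gold_evaluate(sample))
```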
no code implementations • 19 Mar 2024 • Danqing Luo, Chen Zhang, Yan Zhang, Haizhou Li
Training or fine-tuning large-scale language models (LLMs) requires substantial computational resources, motivating recent efforts to explore parameter-efficient adaptation to downstream tasks.
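To make "parameter-efficient adaptation" concrete, here is a minimal PyTorch sketch of one common technique, a LoRA-style low-rank update on a frozen linear layer; this is a generic illustration, not the method proposed in the paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen dense layer augmented with a small trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        self.lora_a = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(rank, base.out_features))
        self.scaling = alpha / rank

    def forward(self, x):
        # full forward pass plus the cheap low-rank correction
        return self.base(x) + (x @ self.lora_a @ self.lora_b) * self.scaling

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} / {total}")  # only a small fraction is updated
```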
no code implementations • 23 May 2023 • Danqing Luo, Chen Zhang, Jiahui Xu, Bin Wang, Yiming Chen, Yan Zhang, Haizhou Li
To achieve this, we treat the black-box model as a feature extractor and train a classifier with the augmented text data.
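A minimal sketch of this feature-extractor setup, under the assumption of a Hugging Face encoder queried only in inference mode and a scikit-learn classifier on top; the encoder name and the tiny augmented dataset are illustrative placeholders, not the paper's data or model.

```python
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed encoder
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()  # the black-box model is only queried, never updated

@torch.no_grad()
def extract_features(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state
    return hidden.mean(dim=1).numpy()  # mean-pooled sentence embeddings

# hypothetical augmented training data: originals plus paraphrased variants
texts = ["great movie", "a great film", "terrible plot", "the plot was terrible"]
labels = [1, 1, 0, 0]

clf = LogisticRegression(max_iter=1000).fit(extract_features(texts), labels)
print(clf.predict(extract_features(["an awful movie"])))
```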