no code implementations • 16 May 2024 • Yizhe Yang, Heyan Huang, Palakorn Achananuparp, Jing Jiang, Ee-Peng Lim
The recent success of large language models (LLMs) has attracted widespread interest in developing role-playing conversational agents personalized to the characteristics and styles of different speakers, enhancing their ability to perform both general- and special-purpose dialogue tasks.
no code implementations • 19 Feb 2024 • Jiahao Ying, Yixin Cao, Bo Wang, Wei Tang, Yizhe Yang, Shuicheng Yan
The basic idea is to generate unseen and high-quality testing samples based on existing ones to mitigate leakage issues.
no code implementations • 13 Dec 2023 • Yizhe Yang, Heyan Huang, Yihang Liu, Yang Gao
Knowledge-grounded dialogue is a task of generating an informative response based on both the dialogue history and external knowledge source.
no code implementations • 14 Nov 2023 • Huashan Sun, Yixiao Wu, Yinghao Li, Jiawei Li, Yizhe Yang, Yang Gao
In summary, we present the TSST task, a new benchmark for style transfer that emphasizes human-oriented evaluation, exploring and advancing the performance of current LLMs.
no code implementations • 24 Oct 2023 • Yizhe Yang, Huashan Sun, Jiawei Li, Runheng Liu, Yinghao Li, Yuhang Liu, Heyan Huang, Yang Gao
Large Language Models (LLMs) have demonstrated remarkable performance across various natural language tasks, marking significant strides towards general artificial intelligence.
no code implementations • 27 Apr 2022 • Yizhe Yang, Heyan Huang, Yang Gao, Jiawei Li
However, it is a challenge for current sequence-based models to acquire knowledge from complex documents and integrate it to generate correct responses without the aid of an explicit semantic structure.
no code implementations • 17 Mar 2022 • Jiawei Li, Mucheng Ren, Yang Gao, Yizhe Yang
Specifically, we carefully design an end-to-end QG module on the basis of a classical QA module, which helps the model understand the context by asking inherently logical sub-questions, thus inheriting interpretability from the QD-based method while showing superior performance.