1 code implementation • 3 Apr 2024 • Zhiyuan Wen, Jiannong Cao, Yu Yang, Ruosong Yang, Shuaiqi Liu
To utilize the affectivity within dialog content for accurate personality recognition, we fine-tune a pre-trained language model for emotion recognition in conversations, enabling real-time affective annotation of utterances.
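One way such real-time affective annotations could feed a personality recognizer is to pool the per-utterance emotion labels into a speaker-level distribution. A minimal sketch, assuming a hypothetical label set and simple frequency pooling (not necessarily the paper's exact design):

```python
from collections import Counter

# Illustrative emotion label set for an ERC (emotion recognition in
# conversations) model; the label set used in the paper may differ.
EMOTIONS = ["neutral", "joy", "sadness", "anger", "surprise", "fear"]

def affective_features(utterance_emotions):
    """Pool per-utterance emotion annotations into a fixed-length
    dialog-level feature vector (a normalized emotion distribution)."""
    counts = Counter(utterance_emotions)
    total = len(utterance_emotions) or 1  # guard against empty dialogs
    return [counts[e] / total for e in EMOTIONS]

# Example: one speaker's utterances, as annotated by the ERC model.
feats = affective_features(["joy", "joy", "neutral", "anger"])
```

The resulting fixed-length vector can then be concatenated with textual features as input to a downstream personality classifier.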
no code implementations • 3 Apr 2024 • Zhiyuan Wen, Jiannong Cao, Jiaxing Shen, Ruosong Yang, Shuaiqi Liu, Maosong Sun
Therefore, we propose a new task, Personality-affected Emotion Generation, to generate emotion based on the personality given to the dialog system and further investigate a solution through the personality-affected mood transition.
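The idea of a personality-affected mood transition can be illustrated with a toy update rule: the current mood (e.g., a valence-arousal-dominance vector) is pulled toward an emotional stimulus, and a personality-derived weight scales how readily the mood moves. The linear update and all parameter names below are illustrative assumptions, not the paper's model:

```python
def mood_transition(mood, stimulus, personality_weight):
    """Shift the current mood (a VAD-style tuple) toward an emotional
    stimulus; personality_weight in [0, 1] scales the transition.
    This linear interpolation is a simplified illustration only."""
    return tuple(m + personality_weight * (s - m) for m, s in zip(mood, stimulus))

# A neutral mood nudged toward a strongly positive stimulus; a more
# "reactive" personality (larger weight) transitions further.
calm = mood_transition((0.0, 0.0, 0.0), (0.8, 0.6, 0.2), personality_weight=0.25)
reactive = mood_transition((0.0, 0.0, 0.0), (0.8, 0.6, 0.2), personality_weight=0.75)
```

The updated mood vector would then condition the emotion expressed in the generated response.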
no code implementations • 7 Mar 2024 • Shuaiqi Liu, Jiannong Cao, Yicong Li, Ruosong Yang, Zhiyuan Wen
Current summarization datasets are insufficient to satisfy the demands of summarizing precedents across multiple jurisdictions, especially when labeled data are scarce for many of them.
1 code implementation • 9 Feb 2023 • Shuaiqi Liu, Jiannong Cao, Ruosong Yang, Zhiyuan Wen
Existing MDS datasets usually focus on producing a structureless summary that covers only a few input documents.
1 code implementation • 8 Feb 2023 • Shuaiqi Liu, Jiannong Cao, Ruosong Yang, Zhiyuan Wen
Within a report document, the salient information can be scattered in the textual and non-textual content.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Ruosong Yang, Jiannong Cao, Zhiyuan Wen, Youzheng Wu, Xiaodong He
However, to solve the AES task, previous works use shallow neural networks to learn essay representations and constrain the predicted scores with either a regression loss or a ranking loss.
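The two objectives mentioned above can be sketched side by side: a regression loss penalizes the distance between predicted and gold scores, while a pairwise ranking loss penalizes misordered essay pairs. The combination weight and margin below are illustrative, not values from any specific paper:

```python
def mse_loss(preds, golds):
    """Regression objective: mean squared error against gold scores."""
    return sum((p - g) ** 2 for p, g in zip(preds, golds)) / len(preds)

def pairwise_ranking_loss(preds, golds, margin=0.0):
    """Ranking objective: hinge penalty whenever an essay with a lower
    gold score is not predicted at least `margin` below a higher one."""
    pairs = [(i, j) for i in range(len(golds)) for j in range(len(golds))
             if golds[i] > golds[j]]
    if not pairs:
        return 0.0
    return sum(max(0.0, margin - (preds[i] - preds[j])) for i, j in pairs) / len(pairs)

# Toy scores for three essays; the 0.5 mixing weight is an assumption.
preds, golds = [0.7, 0.4, 0.6], [1.0, 0.0, 0.5]
loss = mse_loss(preds, golds) + 0.5 * pairwise_ranking_loss(preds, golds)
```

Here the predictions are already correctly ordered, so the ranking term is zero and only the regression term contributes.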
no code implementations • LREC 2020 • Zhiyuan Wen, Jiannong Cao, Ruosong Yang, Senzhang Wang
The two major challenges in existing works lie in (1) effectively disentangling the original sentiment from input sentences; and (2) preserving the semantic content while transferring the sentiment.
no code implementations • LREC 2020 • Ruosong Yang, Jiannong Cao, Zhiyuan Wen
To enhance corpus-based word embedding models, researchers utilize domain knowledge to learn more discriminative representations, either through joint optimization or through post-processing.
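The post-processing line of work can be illustrated with a retrofitting-style update: each word vector is iteratively replaced by a weighted average of its original vector and the vectors of its neighbors in a semantic lexicon. The hyperparameters and toy lexicon below are illustrative:

```python
def retrofit(vectors, lexicon, alpha=1.0, beta=1.0, iters=10):
    """Post-process embeddings so that words linked in a semantic lexicon
    move closer together (a simplified retrofitting-style sketch).
    Each update is a weighted average of a word's original vector
    (weight alpha) and its lexicon neighbors (weight beta each)."""
    new = {w: list(v) for w, v in vectors.items()}
    for _ in range(iters):
        for w, nbrs in lexicon.items():
            nbrs = [n for n in nbrs if n in new]
            if w not in vectors or not nbrs:
                continue
            denom = alpha + beta * len(nbrs)
            new[w] = [(alpha * vo + beta * sum(new[n][d] for n in nbrs)) / denom
                      for d, vo in enumerate(vectors[w])]
    return new

# Toy 2-d embeddings and a tiny synonym lexicon (both hypothetical).
vecs = {"good": [1.0, 0.0], "great": [0.0, 1.0], "table": [5.0, 5.0]}
fixed = retrofit(vecs, {"good": ["great"], "great": ["good"]})
```

After retrofitting, "good" and "great" converge toward each other while "table", which has no lexicon neighbors, is left untouched.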