no code implementations • SEMEVAL 2021 • Junfeng Tian, Min Gui, Chenliang Li, Ming Yan, Wenming Xiao
We describe our systems for Subtask 1 and Subtask 3 of SemEval-2021 Task 6 on Detection of Persuasion Techniques in Texts and Images.
no code implementations • ACL 2021 • Haiyang Xu, Ming Yan, Chenliang Li, Bin Bi, Songfang Huang, Wenming Xiao, Fei Huang
Vision-language pre-training (VLP) on large-scale image-text pairs has achieved great success on cross-modal downstream tasks.
1 code implementation • ACL 2021 • Wei Liu, Xiyan Fu, Yue Zhang, Wenming Xiao
Lexicon information and pre-trained models such as BERT have been combined for Chinese sequence labelling tasks to exploit their respective strengths.
1 code implementation • COLING 2020 • Jie Zhou, Junfeng Tian, Rui Wang, Yuanbin Wu, Wenming Xiao, Liang He
However, because users' emotional expressions vary across domains, fine-tuning the pre-trained models on the source domain tends to overfit, leading to inferior results on the target domain.