no code implementations • 23 Mar 2024 • Hongzheng Li, Ruojin Wang, Ge Shi, Xing Lv, Lei Lei, Chong Feng, Fang Liu, JinKun Lin, Yangguang Mei, Lingnan Xu
In this paper, we introduce RAAMove, a comprehensive multi-domain corpus dedicated to the annotation of move structures in research article (RA) abstracts.
no code implementations • 14 Feb 2024 • Ge Shi, Zhili Yang
Then we feed the output of the optical flow network into a fully convolutional SegNet model.
no code implementations • 25 Jan 2024 • Zeyu Xi, Ge Shi, Xuefen Li, Junchi Yan, Zun Li, Lifang Wu, Zilin Liu, Liang Wang
We develop a knowledge-guided entity-aware video captioning network (KEANet), built in encoder-decoder form on a candidate player list, for basketball live text broadcast.
no code implementations • 7 Oct 2023 • Shuyang Liu, Zixuan Chen, Ge Shi, Ji Wang, Changjie Fan, Yu Xiong, Runze Wu, Yujing Hu, Ze Ji, Yang Gao
The selection of appropriate baselines in IG is crucial for crafting meaningful and unbiased explanations of model predictions in diverse settings.
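The IG here is Integrated Gradients, which attributes a model's prediction to its input features by integrating gradients along a path from a chosen baseline to the input. A minimal sketch, using a toy differentiable function rather than any model from the paper (the function, its gradient, and the all-zero baseline are illustrative assumptions):

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=50):
    """Riemann-sum (midpoint) approximation of Integrated Gradients:
    attribution_i = (x_i - baseline_i) * integral_0^1 dF/dx_i along the path."""
    alphas = (np.arange(steps) + 0.5) / steps  # midpoints of [0, 1]
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

# Toy model: f(x) = sum(x^2), so grad f(x) = 2x (purely illustrative).
f = lambda x: np.sum(x ** 2)
grad_f = lambda x: 2.0 * x

x = np.array([1.0, 2.0])
baseline = np.zeros_like(x)  # the common all-zero baseline choice
attr = integrated_gradients(grad_f, x, baseline)
# Completeness axiom: attributions sum to f(x) - f(baseline).
```

The paper's point is exactly that the `baseline` argument is a free choice (zeros, blurred input, dataset mean, ...), and that choice shapes the resulting explanation.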
no code implementations • 16 May 2023 • Bo Wang, Heyan Huang, Xiaochi Wei, Ge Shi, Xiao Liu, Chong Feng, Tong Zhou, Shuaiqiang Wang, Dawei Yin
Event extraction aims to recognize pre-defined event triggers and arguments in text, a task that suffers from a lack of high-quality annotations.
no code implementations • 28 Feb 2023 • Xianglong Lang, Zhuming Wang, Zun Li, Meng Tian, Ge Shi, Lifang Wu, Liang Wang
Specifically, the framework consists of a Visual Representation Module that extracts individual appearance features, a Knowledge Augmented Semantic Relation Module that explores semantic representations of individual actions, and a Knowledge-Semantic-Visual Interaction Module that integrates the visual and semantic information via the knowledge.
no code implementations • ACL 2022 • Xiao Liu, Heyan Huang, Ge Shi, Bo Wang
We consider event extraction in a generative manner with template-based conditional generation.
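In template-based conditional generation, a seq2seq model fills an event template with text spans, and the filled template is then parsed back into trigger/argument roles. A minimal sketch of that parsing step, with a hypothetical template and a hard-coded string standing in for the model's generated output:

```python
import re

# Hypothetical attack-event template; role placeholders in angle brackets.
template = "<trg> attacked <arg1> using <arg2> at <arg3>"

# In a real system this string would be generated by a conditioned seq2seq
# model; here it is hard-coded for illustration.
generated = "soldiers attacked the convoy using rockets at Baghdad"

# Turn each <role> placeholder into a named, non-greedy capture group,
# then align the generated text against the template to recover spans.
pattern = re.sub(r"<(\w+)>", r"(?P<\1>.+?)", template) + "$"
roles = re.match(pattern, generated).groupdict()
# roles maps each role name to its extracted span.
```

The design choice is that generation and extraction share one formulation: the template fixes the event schema, so decoding a fluent sentence simultaneously commits the model to an argument assignment.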
no code implementations • 26 Jan 2022 • Sinuo Deng, Lifang Wu, Ge Shi, Lehao Xing, Meng Jian, Ye Xiang
We first introduce a prompt tuning method that mimics the pretraining objective of CLIP and thus can leverage the rich image and text semantics encoded in CLIP.
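CLIP's pretraining objective scores an image against natural-language prompts by embedding similarity, so classification reduces to picking the best-matching prompt. A minimal sketch of that matching step with random stand-in embeddings instead of the real CLIP encoders (the emotion labels, prompt template, and embeddings are all illustrative assumptions; the paper tunes learned prompt tokens rather than a fixed template):

```python
import numpy as np

# Hypothetical emotion labels and a fixed CLIP-style prompt template.
EMOTIONS = ["amusement", "anger", "awe", "sadness"]
prompts = [f"a photo that evokes {e}" for e in EMOTIONS]

def cosine(a, b):
    """Cosine similarity, CLIP's matching score between embeddings."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Stand-in embeddings; in practice these come from CLIP's text and
# image encoders applied to the prompts and the input image.
rng = np.random.default_rng(0)
text_emb = rng.normal(size=(len(prompts), 8))
image_emb = text_emb[2] + 0.05 * rng.normal(size=8)  # near the "awe" prompt

scores = np.array([cosine(image_emb, t) for t in text_emb])
pred = EMOTIONS[int(scores.argmax())]
```

Prompt tuning keeps this inference procedure but makes the prompt tokens learnable, so the training signal stays aligned with CLIP's contrastive pretraining objective.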
no code implementations • EMNLP 2018 • Ge Shi, Chong Feng, Lifu Huang, Boliang Zhang, Heng Ji, Lejian Liao, He-Yan Huang
Relation Extraction suffers from a dramatic performance decrease when a model trained on one genre is applied directly to a new genre, due to the distinct feature distributions.