no code implementations • WMT (EMNLP) 2020 • Haijiang Wu, Zixuan Wang, Qingsong Ma, Xinjie Wen, Ruichen Wang, Xiaoli Wang, Yulin Zhang, Zhipeng Yao, Siyao Peng
This paper presents Tencent’s submission to the WMT20 Quality Estimation (QE) Shared Task: Sentence-Level Post-editing Effort for English-Chinese in Task 2.
1 code implementation • 16 Mar 2024 • Guanzhou Ke, Bo Wang, Xiaoli Wang, Shengfeng He
To this end, we propose an innovative framework for multi-view representation learning, which incorporates a technique we term 'distilled disentangling'.
no code implementations • 23 Feb 2024 • Jiawei Zheng, Hanghai Hong, Xiaoli Wang, Jinsong Su, Yonggui Liang, Shikai Wu
Second, fine-tuning LLMs on domain-specific data often incurs high training costs for domain adaptation, and may weaken the zero-shot MT capabilities of LLMs due to over-specialization.
no code implementations • 13 Dec 2023 • Xiaojie Hong, Zixin Song, Liangzhi Li, Xiaoli Wang, Feiyan Liu
Medical Visual Question Answering (Med-VQA) is an important task in the healthcare industry that answers a natural language question given a medical image.
1 code implementation • 9 Sep 2023 • Yifan Dong, Suhang Wu, Fandong Meng, Jie Zhou, Xiaoli Wang, Jianxin Lin, Jinsong Su
2) the input text and image are often not perfectly matched, and thus the image may introduce noise into the model.
1 code implementation • 3 Aug 2023 • Guanzhou Ke, Yang Yu, Guoqing Chao, Xiaoli Wang, Chenyang Xu, Shengfeng He
In this paper, we propose a novel multi-view representation disentangling method that aims to go beyond inductive biases, ensuring both interpretability and generalizability of the resulting representations.
no code implementations • 27 Jun 2023 • Yakun Yu, Mingjun Zhao, Shi-ang Qi, Feiran Sun, Baoxun Wang, Weidong Guo, Xiaoli Wang, Lei Yang, Di Niu
Multimodal Sentiment Analysis leverages multimodal signals to detect the sentiment of a speaker.
1 code implementation • 31 May 2023 • Tong Li, Zhihao Wang, Liangying Shao, Xuling Zheng, Xiaoli Wang, Jinsong Su
Specifically, in addition to a text encoder encoding the input text, our model is equipped with a table header generator to first output a table header, i.e., the first row of the table, in the manner of sequence generation.
no code implementations • CVPR 2023 • Mingjun Zhao, Yakun Yu, Xiaoli Wang, Lei Yang, Di Niu
To overcome the limitations of existing methods, we propose a Search-Map-Search learning paradigm which combines the advantages of heuristic search and supervised learning to select the best combination of frames from a video as one entity.
1 code implementation • 20 Apr 2023 • Mingjun Zhao, Shan Lu, Zixuan Wang, Xiaoli Wang, Di Niu
Automated augmentation is an emerging and effective technique for searching for data augmentation policies that improve the generalizability of deep neural network training.
1 code implementation • 28 Dec 2022 • Guanzhou Ke, Guoqing Chao, Xiaoli Wang, Chenyang Xu, Yongqi Zhu, Yang Yu
To this end, we utilize a deep fusion network to fuse view-specific representations into a view-common representation, extracting high-level semantics to obtain robust representations.
1 code implementation • 13 Nov 2022 • Binbin Xie, Xiangpeng Wei, Baosong Yang, Huan Lin, Jun Xie, Xiaoli Wang, Min Zhang, Jinsong Su
Keyphrase generation aims to automatically generate short phrases summarizing an input document.
no code implementations • SIGIR '22: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval 2022 • Yefan Huang, Xiaoli Wang, Feiyan Liu, Guofeng Huang
Medical visual question answering (Med-VQA) is a challenging problem that aims to take a medical image and a clinical question about the image as input and output a correct answer in natural language.
1 code implementation • 31 May 2021 • Mingjun Zhao, Haijiang Wu, Di Niu, Zixuan Wang, Xiaoli Wang
Verdi adopts two word predictors to enable diverse features to be extracted from a pair of sentences for subsequent quality estimation, including a transformer-based neural machine translation (NMT) model and a pre-trained cross-lingual language model (XLM).
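The Verdi architecture described above can be illustrated with a minimal sketch. This is a hypothetical toy version, not the paper's implementation: the `nmt_features` and `xlm_features` functions stand in for the transformer-based NMT predictor and the pre-trained XLM predictor, and a simple linear head replaces the actual quality-estimation module.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two word predictors: in Verdi these
# are an NMT model and a cross-lingual language model; here they just
# return fixed-size sentence-level feature vectors.
def nmt_features(src, mt):
    return rng.normal(size=16)

def xlm_features(src, mt):
    return rng.normal(size=16)

def estimate_quality(src, mt, w):
    # Concatenate the features extracted by both predictors and map
    # them to a single sentence-level quality score (linear head as a
    # placeholder for the learned estimator).
    feats = np.concatenate([nmt_features(src, mt), xlm_features(src, mt)])
    return float(feats @ w)

w = rng.normal(size=32)
score = estimate_quality("source sentence", "machine translation", w)
```

The key design point is that the two predictors contribute complementary views of the same sentence pair, so their features are fused before the final estimate rather than scored separately.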
no code implementations • 13 Apr 2020 • Mingjun Zhao, Haijiang Wu, Di Niu, Xiaoli Wang
Specifically, we propose a data selection framework based on Deterministic Actor-Critic, in which a critic network predicts the expected change of model performance due to a certain sample, while an actor network learns to select the best sample out of a random batch of samples presented to it.
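The actor-critic data selection loop described above can be sketched as follows. This is a minimal illustration under stated assumptions: the paper trains neural actor and critic networks with Deterministic Actor-Critic updates, whereas here both are hypothetical linear scorers, and only the selection step is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear scorers standing in for the actor and critic
# networks; the real framework learns these with Deterministic
# Actor-Critic training.
def critic(features, w_critic):
    # Predicts the expected change in model performance that training
    # on this sample would cause.
    return float(features @ w_critic)

def select_sample(batch_features, w_actor):
    # The actor scores every sample in a random batch and selects the
    # highest-scoring one for training.
    scores = batch_features @ w_actor
    return int(np.argmax(scores)), scores

# A random batch of 8 candidate samples with 4 features each.
batch = rng.normal(size=(8, 4))
w_actor = rng.normal(size=4)
w_critic = rng.normal(size=4)

best, scores = select_sample(batch, w_actor)
predicted_gain = critic(batch[best], w_critic)
```

In the full framework, the critic's prediction supplies the training signal for the actor, so the actor gradually learns to pick the samples the critic expects to help most.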