no code implementations • Findings (EMNLP) 2021 • Yaochen Liu, Yazhou Zhang, Qiuchi Li, Benyou Wang, Dawei Song
The QPM framework involves a complex-valued multi-modal representation encoder, a quantum-like fusion subnetwork and a quantum measurement mechanism.
no code implementations • 30 May 2024 • Ling-Hao Chen, Shunlin Lu, Ailing Zeng, Hao Zhang, Benyou Wang, Ruimao Zhang, Lei Zhang
This study delves into the realm of multi-modality (i.e., video and motion modalities) human behavior understanding by leveraging the powerful capabilities of Large Language Models (LLMs).
1 code implementation • 28 May 2024 • Yaoyao Xu, Xinjian Zhao, Xiaozhuang Song, Benyou Wang, Tianshu Yu
We introduce a pioneering methodology for boosting large language models in the domain of protein representation learning.
1 code implementation • 28 May 2024 • Zhengyang Tang, Chenyu Huang, Xin Zheng, Shixi Hu, Zizhuo Wang, Dongdong Ge, Benyou Wang
We apply the data from OR-Instruct to various open-source LLMs of 7B size (termed ORLMs), resulting in a significantly improved capability for optimization modeling.
no code implementations • 26 May 2024 • Zhan Su, Fengran Mo, Prayag Tiwari, Benyou Wang, Jian-Yun Nie, Jakob Grue Simonsen
For the routing function, we tailor two innovative routing functions according to the granularity: TensorPoly-I, which directs to each rank within the entangled tensor, while TensorPoly-II offers a finer-grained routing approach targeting each order of the entangled tensor.
1 code implementation • 21 May 2024 • Xuhan Huang, Qingning Shen, Yan Hu, Anningzhe Gao, Benyou Wang
By focusing on the processes LLMs undertake rather than the correctness of their final solutions, Mamo pioneers a novel evaluation paradigm.
1 code implementation • 14 May 2024 • Chenghao Zhu, Nuo Chen, Yufei Gao, Benyou Wang
The rapid advancement of Large Language Models (LLMs) highlights the urgent need for evolving evaluation methodologies that keep pace with improvements in language comprehension and information processing.
1 code implementation • 9 May 2024 • Junzhi Chen, Juhao Liang, Benyou Wang
The emergence of large language models (LLMs) has opened up unprecedented possibilities for automating complex tasks that are often comparable to human performance.
no code implementations • 29 Apr 2024 • Dingjie Song, Shunian Chen, Guiming Hardy Chen, Fei Yu, Xiang Wan, Benyou Wang
Despite the advancements and impressive performance of Multimodal Large Language Models (MLLMs) on benchmarks, their effectiveness in real-world, long-context, and multi-image tasks is unclear due to the benchmarks' limited scope.
2 code implementations • 10 Mar 2024 • Gang Hu, Ke Qin, Chenhan Yuan, Min Peng, Alejandro Lopez-Lira, Benyou Wang, Sophia Ananiadou, Wanlong Yu, Jimin Huang, Qianqian Xie
While the progression of Large Language Models (LLMs) has notably propelled financial analysis, their application has largely been confined to singular language realms, leaving untapped the potential of bilingual Chinese-English capacity.
1 code implementation • 6 Mar 2024 • Xidong Wang, Nuo Chen, Junyin Chen, Yan Hu, Yidong Wang, Xiangbo Wu, Anningzhe Gao, Xiang Wan, Haizhou Li, Benyou Wang
Despite the vast repository of global medical knowledge predominantly being in English, local languages are crucial for delivering tailored healthcare services, particularly in areas with limited medical resources.
no code implementations • 4 Mar 2024 • Juhao Liang, Ziwei Wang, Zhuoheng Ma, Jianquan Li, Zhiyi Zhang, Xiangbo Wu, Benyou Wang
Large Language Models (LLMs) have dramatically revolutionized the field of Natural Language Processing (NLP), offering remarkable capabilities that have garnered widespread usage.
2 code implementations • 1 Mar 2024 • Xianghong Fang, Jian Li, Qiang Sun, Benyou Wang
Uniformity plays an important role in evaluating learned representations, providing insights into self-supervised learning.
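For context on the uniformity property this paper studies, the measure commonly used in this line of work (following Wang & Isola) is the log of the average pairwise Gaussian potential over L2-normalized embeddings; the sketch below is an illustrative implementation of that standard metric, not the paper's own formulation.

```python
import numpy as np

def uniformity(z, t=2.0):
    """Log of the mean pairwise Gaussian potential over distinct rows.

    Lower values indicate embeddings spread more uniformly over the
    unit hypersphere; `t` is the temperature from Wang & Isola.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # project to unit sphere
    sq_dists = np.sum((z[:, None, :] - z[None, :, :]) ** 2, axis=-1)
    n = z.shape[0]
    off_diag = sq_dists[~np.eye(n, dtype=bool)]       # exclude self-pairs
    return np.log(np.mean(np.exp(-t * off_diag)))

rng = np.random.default_rng(0)
spread = rng.normal(size=(256, 64))   # roughly uniform directions
collapsed = spread + 50.0             # all vectors point nearly the same way
print(uniformity(spread) < uniformity(collapsed))  # spread scores lower (better)
```

Collapsed representations score near 0 (all pairwise distances small), while well-spread ones score strongly negative, which is why the metric is useful for diagnosing representation collapse in self-supervised learning.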
2 code implementations • 20 Feb 2024 • Qianqian Xie, Weiguang Han, Zhengyu Chen, Ruoyu Xiang, Xiao Zhang, Yueru He, Mengxi Xiao, Dong Li, Yongfu Dai, Duanyu Feng, Yijing Xu, Haoqiang Kang, Ziyan Kuang, Chenhan Yuan, Kailai Yang, Zheheng Luo, Tianlin Zhang, Zhiwei Liu, Guojun Xiong, Zhiyang Deng, Yuechen Jiang, Zhiyuan Yao, Haohang Li, Yangyang Yu, Gang Hu, Jiajia Huang, Xiao-Yang Liu, Alejandro Lopez-Lira, Benyou Wang, Yanzhao Lai, Hao Wang, Min Peng, Sophia Ananiadou, Jimin Huang
This along with the rapid development of LLMs, highlights the urgent need for a systematic financial evaluation benchmark for LLMs.
no code implementations • 20 Feb 2024 • Haoran Li, Qingxiu Dong, Zhengyang Tang, Chaojun Wang, Xingxing Zhang, Haoyang Huang, Shaohan Huang, Xiaolong Huang, Zeqiang Huang, Dongdong Zhang, Yuxian Gu, Xin Cheng, Xun Wang, Si-Qing Chen, Li Dong, Wei Lu, Zhifang Sui, Benyou Wang, Wai Lam, Furu Wei
We introduce Generalized Instruction Tuning (called GLAN), a general and scalable method for instruction tuning of Large Language Models (LLMs).
1 code implementation • 18 Feb 2024 • Guiming Hardy Chen, Shunian Chen, Ruifei Zhang, Junying Chen, Xiangbo Wu, Zhiyi Zhang, Zhihong Chen, Jianquan Li, Xiang Wan, Benyou Wang
Recent advancements in Large Vision-Language Models (LVLMs) have enabled processing of multimodal inputs in language models but require significant computational resources for deployment, especially in edge devices.
no code implementations • 16 Feb 2024 • Guiming Hardy Chen, Shunian Chen, Ziche Liu, Feng Jiang, Benyou Wang
Adopting humans and large language models (LLMs) as judges (a.k.a. human- and LLM-as-a-judge) for evaluating the performance of LLMs has recently gained attention.
no code implementations • 12 Feb 2024 • Yazhou Zhang, Mengyao Wang, Chenyu Ren, Qiuchi Li, Prayag Tiwari, Benyou Wang, Jing Qin
The future research value of text classification has been met with challenges and uncertainty, owing to the extraordinary efficacy demonstrated by large language models (LLMs) across numerous downstream NLP tasks.
no code implementations • 17 Dec 2023 • Lei LI, Zhihui Xie, Mukai Li, Shunian Chen, Peiyi Wang, Liang Chen, Yazheng Yang, Benyou Wang, Lingpeng Kong
This paper explores preference distillation for large vision-language models (LVLMs), improving their ability to generate helpful and faithful responses anchored in the visual context.
Ranked #19 on Visual Question Answering on MM-Vet
1 code implementation • 23 Nov 2023 • Wentao Ge, Shunian Chen, Guiming Hardy Chen, Zhihong Chen, Junying Chen, Shuo Yan, Chenghao Zhu, Ziyue Lin, Wenya Xie, Xinyi Zhang, Yichen Chai, Xiaoyu Liu, Dingjie Song, Xidong Wang, Anningzhe Gao, Zhiyi Zhang, Jianquan Li, Xiang Wan, Benyou Wang
Multimodal large language models (MLLMs) (e.g., GPT-4V, LLaVA, and Claude-3) have broadened the scope of AI applications.
1 code implementation • 16 Nov 2023 • Fei Yu, Anningzhe Gao, Benyou Wang
These findings offer a novel perspective on the role of outcome supervision in training value models for multi-step reasoning tasks and provide theoretical justification for its advantage in value estimation for guided decoding.
Ranked #43 on Arithmetic Reasoning on GSM8K
1 code implementation • 16 Nov 2023 • Junying Chen, Xidong Wang, Anningzhe Gao, Feng Jiang, Shunian Chen, Hongbo Zhang, Dingjie Song, Wenya Xie, Chuyi Kong, Jianquan Li, Xiang Wan, Haizhou Li, Benyou Wang
We validate the new protocol in the domains where proprietary LLMs like ChatGPT perform relatively poorly, such as Traditional Chinese Medicine.
no code implementations • 13 Nov 2023 • Chen Zhang, Benyou Wang, Dawei Song
To this end, we propose an elastic language model (ElasticLM) that elastically adjusts the tradeoff according to the request stream.
1 code implementation • 18 Oct 2023 • Yaxin Fan, Feng Jiang, Benyou Wang, Peifeng Li, Haizhou Li
Recent studies have primarily focused on the quality of FMs as evaluated by GPT-4, or on their ability to pass medical exams; no studies have quantified the extent of self-diagnostic atomic knowledge stored in FMs' memory, which is the basis for foundation models to provide factual and reliable suggestions.
1 code implementation • 17 Oct 2023 • Yazhou Zhang, Mengyao Wang, Youxi Wu, Prayag Tiwari, Qiuchi Li, Benyou Wang, Jing Qin
Large language models (LLMs) and their variants have shown extraordinary efficacy across numerous downstream natural language processing (NLP) tasks, which has presented a new vision for the development of NLP.
1 code implementation • 21 Sep 2023 • Huang Huang, Fei Yu, Jianqing Zhu, Xuening Sun, Hao Cheng, Dingjie Song, Zhihong Chen, Abdulmohsen Alharthi, Bang An, Juncai He, Ziche Liu, Zhiyi Zhang, Junying Chen, Jianquan Li, Benyou Wang, Lian Zhang, Ruoyu Sun, Xiang Wan, Haizhou Li, Jinchao Xu
This paper is devoted to the development of a localized Large Language Model (LLM) specifically for Arabic, a language imbued with unique cultural characteristics inadequately addressed by current mainstream models.
no code implementations • 21 Aug 2023 • Chuyi Kong, Yaxin Fan, Xiang Wan, Feng Jiang, Benyou Wang
The unparalleled performance of closed-source ChatGPT has sparked efforts towards its democratization, with notable strides made by leveraging real user and ChatGPT dialogues, as evidenced by Vicuna.
1 code implementation • 17 Aug 2023 • Xidong Wang, Guiming Hardy Chen, Dingjie Song, Zhiyi Zhang, Zhihong Chen, Qingying Xiao, Feng Jiang, Jianquan Li, Xiang Wan, Benyou Wang, Haizhou Li
We hope this benchmark provides first-hand experience with existing LLMs for medicine and also facilitates the widespread adoption and enhancement of medical LLMs within China.
1 code implementation • 6 Jun 2023 • Zhihong Chen, Guiming Hardy Chen, Shizhe Diao, Xiang Wan, Benyou Wang
Masked language modeling (MLM) has been one of the most popular pretraining recipes in natural language processing, e.g., in BERT, one of the representative models.
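For readers unfamiliar with the recipe, the standard BERT-style MLM corruption can be sketched as follows (a minimal illustration of the well-known 15% / 80-10-10 scheme; the toy vocabulary and helper names are ours):

```python
import random

MASK = "[MASK]"
VOCAB = ["cat", "dog", "sat", "mat", "the", "on"]  # toy vocabulary

def mlm_mask(tokens, mask_prob=0.15, seed=0):
    """Apply BERT-style MLM corruption.

    Each position is selected with probability `mask_prob`; of the
    selected positions, 80% become [MASK], 10% a random token, and
    10% are kept unchanged. Returns the corrupted sequence and the
    positions the model must predict.
    """
    rng = random.Random(seed)
    corrupted, targets = list(tokens), {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            targets[i] = tok                      # model must recover this
            r = rng.random()
            if r < 0.8:
                corrupted[i] = MASK
            elif r < 0.9:
                corrupted[i] = rng.choice(VOCAB)  # random replacement
            # else: keep the original token
    return corrupted, targets

sent = "the cat sat on the mat".split()
corrupted, targets = mlm_mask(sent)
print(corrupted, targets)
```

The pretraining loss is then a cross-entropy over the `targets` positions only, which is what makes MLM a denoising objective rather than a left-to-right one.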
no code implementations • 6 Jun 2023 • Yaochen Liu, Qiuchi Li, Benyou Wang, Yazhou Zhang, Dawei Song
Quantum theory, originally proposed as a physical theory to describe the motions of microscopic particles, has been applied to various non-physics domains involving human cognition and decision-making that are inherently uncertain and exhibit certain non-classical, quantum-like characteristics.
1 code implementation • NeurIPS 2023 • Zhongwei Wan, Che Liu, Mi Zhang, Jie Fu, Benyou Wang, Sibo Cheng, Lei Ma, César Quilodrán-Casas, Rossella Arcucci
Med-UniC reaches superior performance across 5 medical image tasks and 10 datasets encompassing over 30 diseases, offering a versatile framework for unifying multi-modal medical data within diverse linguistic communities.
1 code implementation • 24 May 2023 • Hongbo Zhang, Junying Chen, Feng Jiang, Fei Yu, Zhihong Chen, Jianquan Li, Guiming Chen, Xiangbo Wu, Zhiyi Zhang, Qingying Xiao, Xiang Wan, Benyou Wang, Haizhou Li
Experimental results demonstrate that HuatuoGPT achieves state-of-the-art results in performing medical consultation among open-source LLMs in GPT-4 evaluation, human evaluation, and medical benchmark datasets.
1 code implementation • 24 May 2023 • Hongbo Zhang, Xiang Wan, Benyou Wang
This gives us a hint that relational knowledge might not be redundant to the stored knowledge of PLMs, but rather be complementary.
1 code implementation • 23 May 2023 • Ridong Han, Tao Peng, Chaohao Yang, Benyou Wang, Lu Liu, Xiang Wan
ChatGPT has stimulated the research boom in the field of large language models.
1 code implementation • 20 May 2023 • Chen Zhang, Yang Yang, Jiahao Liu, Jingang Wang, Yunsen Xian, Benyou Wang, Dawei Song
However, when the capacity gap between the teacher and the student is large, a curse of capacity gap appears, invoking a deficiency in distilling LMs.
1 code implementation • 10 May 2023 • Zhibin Lu, Qianqian Xie, Benyou Wang, Jian-Yun Nie
An inductive Word-grounded Graph Convolutional Network (WGCN) is proposed to learn word and document representations based on WGraph in a supervised manner.
1 code implementation • 2 May 2023 • Jianquan Li, Xidong Wang, Xiangbo Wu, Zhiyi Zhang, Xiaolong Xu, Jie Fu, Prayag Tiwari, Xiang Wan, Benyou Wang
Moreover, we also experimentally show the benefit of the proposed dataset in many aspects: (i) for training models for other QA datasets in a zero-shot fashion; (ii) as external knowledge for retrieval-augmented generation (RAG); and (iii) for improving existing pre-trained language models by using the QA pairs as a pre-training corpus in a continued-training manner.
1 code implementation • 20 Apr 2023 • Xiaokang Liu, Jianquan Li, Jingjing Mu, Min Yang, Ruifeng Xu, Benyou Wang
In this paper, we introduce novel K-center contrastive learning and adjustable decision boundary learning (CLAB) to improve the effectiveness of open intent classification.
1 code implementation • 20 Apr 2023 • Zhihong Chen, Feng Jiang, Junying Chen, Tiannan Wang, Fei Yu, Guiming Chen, Hongbo Zhang, Juhao Liang, Chen Zhang, Zhiyi Zhang, Jianquan Li, Xiang Wan, Benyou Wang, Haizhou Li
This paper presents our efforts to democratize ChatGPT across languages.
no code implementations • 18 Apr 2023 • Qianqian Xie, Zheheng Luo, Benyou Wang, Sophia Ananiadou
In this paper, we present a systematic review of recent advancements in BTS, leveraging cutting-edge NLP techniques from PLMs to LLMs, to help understand the latest progress, challenges, and future directions.
1 code implementation • 26 Mar 2023 • Fei Yu, Hongbo Zhang, Prayag Tiwari, Benyou Wang
This survey paper proposes a clearer view of natural language reasoning in the field of Natural Language Processing (NLP), both conceptually and practically.
1 code implementation • 23 Mar 2023 • Juhao Liang, Chen Zhang, Zhengyang Tang, Jie Fu, Dawei Song, Benyou Wang
Built upon the paradigm, we propose a retrieval model with modular prompt tuning named REMOP.
no code implementations • 24 Feb 2023 • Qiuchi Li, Benyou Wang, Yudong Zhu, Christina Lioma, Qun Liu
The emerging classical-quantum transfer learning paradigm has brought a decent performance to quantum computational models in many tasks, such as computer vision, by enabling a combination of quantum models and classical pre-trained neural networks.
1 code implementation • ICCV 2023 • Zhihong Chen, Shizhe Diao, Benyou Wang, Guanbin Li, Xiang Wan
Medical vision-and-language pre-training (Med-VLP) has shown promising improvements on many downstream medical tasks owing to its applicability to extracting generic representations from medical images and texts.
1 code implementation • 20 Dec 2022 • Ridong Han, Tao Peng, Benyou Wang, Lu Liu, Xiang Wan
Document-level relation extraction faces two overlooked challenges: the long-tail problem and the multi-label problem.
2 code implementations • 27 Oct 2022 • Guobing Gan, Peng Zhang, Sunzhu Li, Xiuqing Lu, Benyou Wang
In the era of deep learning, word embeddings are essential when dealing with text tasks.
1 code implementation • 23 Sep 2022 • Zhongwei Wan, Xin Liu, Benyou Wang, Jiezhong Qiu, Boyu Li, Ting Guo, Guangyong Chen, Yang Wang
The idea is to supplement the GNN-based main supervised recommendation task with the temporal representation via an auxiliary cross-view contrastive learning mechanism.
1 code implementation • COLING 2022 • Zhengyang Tang, Benyou Wang, Ting Yao
We believe this work facilitates the industry, as it saves enormous efforts and costs of deployment and increases the utility of computing resources.
1 code implementation • 20 Jul 2022 • Yi Yang, Chen Zhang, Benyou Wang, Dawei Song
To uncover the domain-general LM, we propose to identify domain-general parameters by playing lottery tickets (dubbed doge tickets).
1 code implementation • 2 Jul 2022 • Benyou Wang, Xiangbo Wu, Xiaokang Liu, Jianquan Li, Prayag Tiwari, Qianqian Xie
However, the humor aspect of natural language is relatively under-investigated, especially in the age of pre-trained language models.
1 code implementation • ICLR 2022 • Yuxin Ren, Benyou Wang, Lifeng Shang, Xin Jiang, Qun Liu
A tiny version achieves $96.7\%$ of the performance of BERT-base with ${1}/{48}$ of the encoder parameters (i.e., less than 2M parameters excluding the embedding layer) and is $2.7\times$ faster at inference.
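As a rough sanity check of those figures (a back-of-the-envelope sketch: the 12-layer / 768-hidden / 3072-FFN dimensions are standard BERT-base, but the counting convention here is ours, not the paper's):

```python
# Rough encoder parameter count for BERT-base (12 layers, d=768, ffn=3072),
# excluding embeddings, to check that 1/48 of the encoder is under 2M.
d, ffn, layers = 768, 3072, 12

attn = 4 * (d * d + d)                 # Q, K, V, output projections (+ biases)
mlp = (d * ffn + ffn) + (ffn * d + d)  # two feed-forward projections (+ biases)
norms = 2 * (2 * d)                    # two LayerNorms (scale + shift)

encoder = layers * (attn + mlp + norms)
tiny = encoder / 48
print(f"encoder ≈ {encoder / 1e6:.1f}M, 1/48 ≈ {tiny / 1e6:.2f}M")  # under 2M
```

The count lands around 85M encoder parameters, so the 1/48 model is indeed below the 2M stated in the abstract.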
1 code implementation • NeurIPS 2021 • Benyou Wang, Emanuele Di Buccio, Massimo Melucci
Word meaning may change over time as a reflection of changes in human society.
1 code implementation • 11 Oct 2021 • Benyou Wang, Qianqian Xie, Jiahuan Pei, Zhihong Chen, Prayag Tiwari, Zhao Li, Jie Fu
In this paper, we summarize the recent progress of pre-trained language models in the biomedical domain and their applications in biomedical downstream tasks.
no code implementations • ICLR 2021 • Benyou Wang, Lifeng Shang, Christina Lioma, Xin Jiang, Hao Yang, Qun Liu, Jakob Grue Simonsen
Various Position Embeddings (PEs) have been proposed in Transformer-based architectures (e.g., BERT) to model word order.
no code implementations • 26 Oct 2020 • Zhenzhen Li, Jian-Yun Nie, Benyou Wang, Pan Du, Yuhan Zhang, Lixin Zou, Dongsheng Li
Distant supervision provides a means to create a large number of weakly labeled data at low cost for relation classification.
3 code implementations • Findings of the Association for Computational Linguistics 2020 • Chen Zhang, Qiuchi Li, Dawei Song, Benyou Wang
The state-of-the-art Aspect-based Sentiment Analysis (ABSA) approaches are mainly based on either detecting aspect terms and their corresponding sentiment polarities, or co-extracting aspect and opinion terms.
no code implementations • IEEE Transactions on Cybernetics 2020 • Wei Zhao, Benyou Wang, Min Yang, Jianbo Ye, Zhou Zhao, Xiaojun Chen, Ying Shen
Movie recommendation systems provide users with ranked lists of movies based on individuals' preferences and constraints.
1 code implementation • ICLR 2020 • Benyou Wang, Donghao Zhao, Christina Lioma, Qiuchi Li, Peng Zhang, Jakob Grue Simonsen
The benefit of continuous functions over variable positions is that word representations shift smoothly with increasing positions.
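That smooth-shift property can be illustrated with a complex-valued embedding of the general form r·exp(i(ω·pos + θ)) used in this line of work: the amplitude carries word content while position only rotates the phase. A minimal sketch (the dimensions and random parameters are purely illustrative):

```python
import numpy as np

def complex_pos_embedding(pos, r, omega, theta):
    """Word-position embedding r * exp(i * (omega * pos + theta)).

    Amplitude r encodes word content; position only rotates the phase,
    so the representation shifts smoothly, and norm-preservingly, with pos.
    """
    return r * np.exp(1j * (omega * pos + theta))

rng = np.random.default_rng(0)
dim = 8
r, omega, theta = rng.random(dim), rng.random(dim), rng.random(dim)

e0 = complex_pos_embedding(0.0, r, omega, theta)
e1 = complex_pos_embedding(0.1, r, omega, theta)
e5 = complex_pos_embedding(5.0, r, omega, theta)

print(np.allclose(np.abs(e0), np.abs(e5)))  # True: norm is position-invariant
print(np.linalg.norm(e1 - e0), np.linalg.norm(e5 - e0))  # nearby pos stay closer
```

Because the phase varies continuously in pos, representations at fractional or unseen positions are well defined, which is exactly the benefit over lookup-table position embeddings.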
no code implementations • 25 Sep 2019 • Peng Zhang, Xiaoliu Mao, Xindian Ma, Benyou Wang, Jing Zhang, Jun Wang, Dawei Song
We prove that by a mapping (via the trace operator) on the high-dimensional matching matrix, a low-dimensional attention matrix can be derived.
1 code implementation • NAACL 2019 • Qiuchi Li, Benyou Wang, Massimo Melucci
This paper seeks to model human language by the mathematical framework of quantum physics.
1 code implementation • 26 Feb 2019 • Benyou Wang, Qiuchi Li, Massimo Melucci, Dawei Song
To address this issue, we propose a new framework that models different levels of semantic units (e.g., sememe, word, sentence, and semantic abstraction) on a single Semantic Hilbert Space, which naturally admits a non-linear semantic composition by means of a complex-valued vector word representation.
1 code implementation • 28 Aug 2018 • Peng Zhang, Zhan Su, Lipeng Zhang, Benyou Wang, Dawei Song
The recently proposed quantum language model (QLM) aims at a principled approach to modeling term dependency by applying quantum probability theory.
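At the core of QLM-style models, text is represented as a density matrix, i.e., a positive semidefinite, unit-trace mixture of rank-1 projectors over term vectors, and term probabilities follow the Born rule. A minimal sketch, assuming unit term vectors and uniform mixing weights (both our simplifications):

```python
import numpy as np

def density_matrix(term_vectors, weights=None):
    """Mixture of rank-1 projectors: rho = sum_i p_i |v_i><v_i|.

    Each term contributes the projector onto its (normalized) vector;
    the result is positive semidefinite with unit trace, as quantum
    probability requires.
    """
    vs = np.asarray(term_vectors, dtype=float)
    vs = vs / np.linalg.norm(vs, axis=1, keepdims=True)
    if weights is None:
        weights = np.full(len(vs), 1.0 / len(vs))  # uniform mixture
    return sum(p * np.outer(v, v) for p, v in zip(weights, vs))

rng = np.random.default_rng(0)
rho = density_matrix(rng.normal(size=(5, 4)))  # 5 terms in a 4-dim space

print(np.isclose(np.trace(rho), 1.0))             # unit trace
print(np.all(np.linalg.eigvalsh(rho) >= -1e-12))  # positive semidefinite
e0 = np.eye(4)[0]
print(float(e0 @ rho @ e0))  # Born-rule probability of direction e0, in [0, 1]
```

The appeal of this representation is that off-diagonal terms of rho capture term interference, which is how QLM models dependencies that a bag-of-words multinomial cannot.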
no code implementations • WS 2018 • Qiuchi Li, Sagar Uprety, Benyou Wang, Dawei Song
A challenging task for word embeddings is to capture the emergent meaning or polarity of a combination of individual words.
no code implementations • 10 Feb 2018 • Benyou Wang, Li Wang, Qikang Wei, Lichun Liu
Text representation is a fundamental concern in Natural Language Processing, especially in text classification.
3 code implementations • 30 May 2017 • Jun Wang, Lantao Yu, Wei-Nan Zhang, Yu Gong, Yinghui Xu, Benyou Wang, Peng Zhang, Dell Zhang
This paper provides a unified account of two schools of thinking in information retrieval modelling: the generative retrieval focusing on predicting relevant documents given a query, and the discriminative retrieval focusing on predicting relevancy given a query-document pair.