1 code implementation • COLING 2022 • Weijie Yu, Liang Pang, Jun Xu, Bing Su, Zhenhua Dong, Ji-Rong Wen
Enjoying the partial transport properties of OPT, the selected key sentences not only effectively enhance the matching accuracy but can also serve as rationales that explain the matching results.
1 code implementation • 27 Apr 2024 • Chen Xu, Xiaopeng Ye, Wenjie Wang, Liang Pang, Jun Xu, Tat-Seng Chua
From a taxation perspective, we theoretically demonstrate that most previous fair re-ranking methods can be reformulated as an item-level tax policy.
no code implementations • 25 Apr 2024 • Yongqi Li, Xinyu Lin, Wenjie Wang, Fuli Feng, Liang Pang, Wenjie Li, Liqiang Nie, Xiangnan He, Tat-Seng Chua
With the information explosion on the Web, search and recommendation are foundational infrastructures for satisfying users' information needs.
no code implementations • 17 Apr 2024 • Minghe Gao, Shuang Chen, Liang Pang, Yuan YAO, Jisheng Dang, Wenqiao Zhang, Juncheng Li, Siliang Tang, Yueting Zhuang, Tat-Seng Chua
Their ability to execute intricate compositional reasoning tasks is also constrained, culminating in a stagnation of learning progression for these models.
1 code implementation • 17 Apr 2024 • Sunhao Dai, Chen Xu, Shicheng Xu, Liang Pang, Zhenhua Dong, Jun Xu
With the rapid advancement of large language models (LLMs), information retrieval (IR) systems, such as search engines and recommender systems, have undergone a significant paradigm shift.
no code implementations • 13 Apr 2024 • Jia Gu, Liang Pang, HuaWei Shen, Xueqi Cheng
In the first case, the agent is required to infer the type and parameters of the probability distribution from the problem description and then produce the sampling sequence.
2 code implementations • 7 Apr 2024 • Zihao Wei, Jingcheng Deng, Liang Pang, Hanxing Ding, HuaWei Shen, Xueqi Cheng
We evaluate the multilingual knowledge editing generalization capabilities of existing methods on MLaKE.
1 code implementation • 28 Mar 2024 • Junkai Zhou, Liang Pang, Ya Jing, Jia Gu, HuaWei Shen, Xueqi Cheng
For dynamic persona information, we use current action information to internally retrieve the persona information of the agent, thereby reducing the interference of diverse persona information on the current action.
no code implementations • 15 Mar 2024 • Tianxiang Ye, Qi Wu, Junyuan Deng, Guoqing Liu, Liu Liu, Songpengcheng Xia, Liang Pang, Wenxian Yu, Ling Pei
In recent years, Neural Radiance Fields (NeRFs) have demonstrated significant potential in encoding highly-detailed 3D geometry and environmental appearance, positioning themselves as a promising alternative to traditional explicit representation for 3D scene reconstruction.
1 code implementation • 12 Mar 2024 • Tongyao Zhu, Qian Liu, Liang Pang, Zhengbao Jiang, Min-Yen Kan, Min Lin
Through carefully-designed synthetic tasks, covering the scenarios of full recitation, selective recitation and grounded question answering, we reveal that LMs manage to sequentially access their memory while encountering challenges in randomly accessing memorized content.
no code implementations • 7 Mar 2024 • Yuling Wang, Changxin Tian, Binbin Hu, Yanhua Yu, Ziqi Liu, Zhiqiang Zhang, Jun Zhou, Liang Pang, Xiao Wang
We encode the generated rationales from the student model into a dense vector, which empowers recommendation in both ID-based and ID-agnostic scenarios.
1 code implementation • 28 Feb 2024 • Shicheng Xu, Liang Pang, Mo Yu, Fandong Meng, HuaWei Shen, Xueqi Cheng, Jie zhou
Retrieval-augmented generation (RAG) enhances large language models (LLMs) by incorporating additional information from retrieval.
1 code implementation • 23 Feb 2024 • Zirui Guo, Lianghao Xia, Yanhua Yu, Yuling Wang, Zixuan Yang, Wei Wei, Liang Pang, Tat-Seng Chua, Chao Huang
Graph Structure Learning (GSL) focuses on capturing intrinsic dependencies and interactions among nodes in graph-structured data by generating novel graph structures.
1 code implementation • 22 Feb 2024 • Yan Lei, Liang Pang, Yuanzhuo Wang, HuaWei Shen, Xueqi Cheng
Questionnaires entail a series of questions that must conform to intricate constraints involving the questions, options, and overall structure.
no code implementations • 21 Feb 2024 • Danyang Hou, Liang Pang, HuaWei Shen, Xueqi Cheng
Video Corpus Moment Retrieval (VCMR) is a practical video retrieval task focused on identifying a specific moment within a vast corpus of untrimmed videos using a natural language query.
1 code implementation • 21 Feb 2024 • Danyang Hou, Liang Pang, HuaWei Shen, Xueqi Cheng
We argue that effectively capturing the partial relevance between the query and video is essential for the VCMR task.
1 code implementation • 20 Feb 2024 • Zihao Wei, Liang Pang, Hanxing Ding, Jingcheng Deng, HuaWei Shen, Xueqi Cheng
The premise of localization results in an incomplete knowledge editing, whereas an isolated assumption may impair both other knowledge and general abilities.
no code implementations • 16 Feb 2024 • Hanxing Ding, Liang Pang, Zihao Wei, HuaWei Shen, Xueqi Cheng
A careful and balanced integration of the parametric knowledge within LLMs with external information is crucial to alleviate hallucinations.
1 code implementation • 5 Feb 2024 • Shicheng Xu, Liang Pang, Jun Xu, HuaWei Shen, Xueqi Cheng
First, it is hard to share the contextual information of the ranking list between the two tasks.
1 code implementation • 2 Dec 2023 • Yunshan Ma, Chenchen Ye, Zijian Wu, Xiang Wang, Yixin Cao, Liang Pang, Tat-Seng Chua
Temporal complex event forecasting aims to predict future events given the events observed in history.
no code implementations • 23 Nov 2023 • Shicheng Xu, Danyang Hou, Liang Pang, Jingcheng Deng, Jun Xu, HuaWei Shen, Xueqi Cheng
Furthermore, our subsequent exploration reveals that the inclusion of AI-generated images in the training data of the retrieval models exacerbates the invisible relevance bias.
1 code implementation • 22 Nov 2023 • Qifan Yu, Juncheng Li, Longhui Wei, Liang Pang, Wentao Ye, Bosheng Qin, Siliang Tang, Qi Tian, Yueting Zhuang
Multi-modal Large Language Models (MLLMs) tuned on machine-generated instruction-following data have demonstrated remarkable performance in various multi-modal understanding and generation tasks.
no code implementations • 21 Nov 2023 • Minghe Gao, Juncheng Li, Hao Fei, Liang Pang, Wei Ji, Guoming Wang, Wenqiao Zhang, Siliang Tang, Yueting Zhuang
Visual programming, a modular and generalizable paradigm, integrates different modules and Python operators to solve various vision-language tasks.
no code implementations • 13 Nov 2023 • Chen Xu, Wenjie Wang, Yuxin Li, Liang Pang, Jun Xu, Tat-Seng Chua
Recently, Large Language Models (LLMs) have enhanced user interaction, enabling seamless information retrieval and recommendations.
1 code implementation • 13 Nov 2023 • Junkai Zhou, Liang Pang, HuaWei Shen, Xueqi Cheng
The emergence of large language models (LLMs) further improves the capabilities of open-domain dialogue systems and can generate fluent, coherent, and diverse responses.
no code implementations • 3 Nov 2023 • Shicheng Xu, Liang Pang, Jiangnan Li, Mo Yu, Fandong Meng, HuaWei Shen, Xueqi Cheng, Jie zhou
Readers usually give only an abstract and vague description as the query, based on their own understanding, summaries, or speculations about the plot, which requires the retrieval model to have a strong ability to estimate the abstract semantic associations between the query and candidate plots.
1 code implementation • 31 Oct 2023 • Sunhao Dai, Yuqi Zhou, Liang Pang, Weihao Liu, Xiaolin Hu, Yong liu, Xiao Zhang, Gang Wang, Jun Xu
We refer to this category of biases in neural retrieval models towards LLM-generated text as the source bias.
1 code implementation • 16 Oct 2023 • Jingcheng Deng, Liang Pang, HuaWei Shen, Xueqi Cheng
It encodes the text corpus into a latent space, capturing current and future information from both source and target text.
no code implementations • 13 Oct 2023 • Chenxu Yang, Zheng Lin, Lanrui Wang, Chong Tian, Liang Pang, Jiangnan Li, Qirong Ho, Yanan Cao, Weiping Wang
Knowledge-grounded dialogue generation aims to mitigate the issue of text degeneration by incorporating external knowledge to supplement the context.
1 code implementation • 24 May 2023 • Kangxi Wu, Liang Pang, HuaWei Shen, Xueqi Cheng, Tat-Seng Chua
By jointly analyzing the proxy perplexities of LLMs, we can determine the source of the generated text.
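The core idea of attributing generated text via proxy perplexities can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names and the per-token negative log-likelihood values are hypothetical, and in practice the NLLs would come from scoring the text with each candidate LLM's proxy model.

```python
import math

def perplexity(neg_log_likelihoods):
    """Perplexity from per-token negative log-likelihoods (natural log)."""
    return math.exp(sum(neg_log_likelihoods) / len(neg_log_likelihoods))

def attribute_source(proxy_nlls):
    """Attribute a text to the candidate model whose proxy assigns it the
    lowest perplexity, i.e. finds the text the most predictable."""
    ppls = {name: perplexity(nlls) for name, nlls in proxy_nlls.items()}
    return min(ppls, key=ppls.get), ppls

# Hypothetical per-token NLLs from two proxy models scoring the same text.
nlls = {
    "model_a": [1.2, 0.8, 1.0, 0.9],  # lower loss -> more likely the source
    "model_b": [2.5, 2.1, 2.8, 2.3],
}
source, ppls = attribute_source(nlls)
```

Under this sketch, the text is attributed to `model_a`, whose proxy perplexity is lowest.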
1 code implementation • 22 May 2023 • Hanxing Ding, Liang Pang, Zihao Wei, HuaWei Shen, Xueqi Cheng, Tat-Seng Chua
Multi-aspect controllable text generation aims to generate fluent sentences that possess multiple desired attributes simultaneously.
1 code implementation • 18 May 2023 • Junkai Zhou, Liang Pang, HuaWei Shen, Xueqi Cheng
Language models trained on large-scale corpora can generate remarkably fluent results in open-domain dialogue.
no code implementations • 18 May 2023 • Shicheng Xu, Liang Pang, HuaWei Shen, Xueqi Cheng
Dense retrieval has shown promise in the first-stage retrieval process when trained on in-domain labeled datasets.
no code implementations • 3 May 2023 • Xin Hong, Yanyan Lan, Liang Pang, Jiafeng Guo, Xueqi Cheng
In this paper, we propose a new visual reasoning task, called Visual Transformation Telling (VTT).
1 code implementation • 2 May 2023 • Xin Hong, Yanyan Lan, Liang Pang, Jiafeng Guo, Xueqi Cheng
Such state-driven visual reasoning has limitations in reflecting the ability to infer the dynamics between different states, which has been shown to be equally important for human cognition in Piaget's theory.
1 code implementation • 28 Apr 2023 • Shicheng Xu, Liang Pang, HuaWei Shen, Xueqi Cheng, Tat-Seng Chua
This paper proposes a novel framework named Search-in-the-Chain (SearChain) for the interaction between LLM and IR to solve the challenges.
no code implementations • 29 Jan 2023 • Danyang Hou, Liang Pang, Yanyan Lan, HuaWei Shen, Xueqi Cheng
In this paper, we focus on improving two problems of the two-stage method: (1) Moment prediction bias: the predicted moments for most queries come from the top retrieved videos, ignoring the possibility that the target moment is in the bottom retrieved videos, which is caused by the inconsistency of Shared Normalization between training and inference.
1 code implementation • 10 Jan 2023 • Yunchang Zhu, Liang Pang, Kangxi Wu, Yanyan Lan, HuaWei Shen, Xueqi Cheng
Comparative loss is essentially a ranking loss on top of the task-specific losses of the full and ablated models, with the expectation that the task-specific loss of the full model is minimal.
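The comparative-loss idea described above can be sketched as a hinge-style ranking term on top of the task losses. This is a simplified illustration under assumed conventions (the function name, the margin parameter, and the ordering of losses are hypothetical), not the paper's exact formulation.

```python
def comparative_loss(task_losses, margin=0.0):
    """task_losses[0] is the full model's task loss; the rest come from
    ablated models. The ranking term penalizes any ablated model whose
    loss falls below the full model's (hinge-style pairwise ranking),
    encouraging the full model's task loss to be minimal."""
    full = task_losses[0]
    rank_term = sum(max(0.0, full - ablated + margin)
                    for ablated in task_losses[1:])
    return sum(task_losses) + rank_term
```

When the full model already has the smallest loss, the ranking term vanishes and only the task losses remain; otherwise the violating pairs contribute a penalty.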
1 code implementation • 1 Dec 2022 • Shicheng Xu, Liang Pang, HuaWei Shen, Xueqi Cheng
Different needs correspond to different IR tasks such as document retrieval, open-domain question answering, retrieval-based dialogue, etc., while they share the same schema to estimate the relationship between texts.
1 code implementation • 25 Apr 2022 • Yunchang Zhu, Liang Pang, Yanyan Lan, HuaWei Shen, Xueqi Cheng
Ideally, if a PRF model can distinguish between irrelevant and relevant information in the feedback, the more feedback documents there are, the better the revised query will be.
1 code implementation • 6 Apr 2022 • Shicheng Xu, Liang Pang, HuaWei Shen, Xueqi Cheng
In the generalization stage, the matching model explores the essential matching signals by being trained on diverse matching tasks.
no code implementations • NeurIPS 2021 • Ruibin Xiong, Yimeng Chen, Liang Pang, Xueqi Cheng, Yanyan Lan
Ensemble-based debiasing methods have been shown effective in mitigating the reliance of classifiers on specific dataset bias, by exploiting the output of a bias-only model to adjust the learning target.
1 code implementation • EMNLP 2021 • Fei Xiao, Liang Pang, Yanyan Lan, Yan Wang, HuaWei Shen, Xueqi Cheng
The proposed transductive learning approach is general and effective for the task of unsupervised style transfer, and we plan to apply it to the other two typical methods in the future.
1 code implementation • EMNLP 2021 • Yunchang Zhu, Liang Pang, Yanyan Lan, HuaWei Shen, Xueqi Cheng
Information seeking is an essential step for open-domain question answering to efficiently gather evidence from a large corpus.
Ranked #3 on Question Answering on HotpotQA
no code implementations • 16 Aug 2021 • Lijuan Chen, Yanyan Lan, Liang Pang, Jiafeng Guo, Xueqi Cheng
We further extend these constraints to the semantic settings, which are shown to be better satisfied for all the deep text matching models.
no code implementations • 12 Aug 2021 • Lin Bo, Liang Pang, Gang Wang, Jun Xu, Xiuqiang He, Ji-Rong Wen
Experimental results based on three publicly available benchmarks show that in both implementations, Pre-Rank outperforms the underlying ranking models and achieves state-of-the-art performance.
1 code implementation • 2 Apr 2021 • Changying Hao, Liang Pang, Yanyan Lan, Yan Wang, Jiafeng Guo, Xueqi Cheng
In the sketch stage, a skeleton is extracted from the original ending by removing words that conflict with the counterfactual condition.
1 code implementation • 16 Jan 2021 • Liang Pang, Yanyan Lan, Xueqi Cheng
However, these models designed for short texts cannot well address the long-form text matching problem, because many contexts in long-form texts cannot be directly aligned with each other, and it is difficult for existing models to capture the key matching signals from such noisy data.
1 code implementation • COLING 2020 • Bin Jiang, Wanyue Zhou, Jingxu Yang, Chao Yang, Shihan Wang, Liang Pang
However, generating personalized responses is still a challenging task, since leveraging predefined persona information is often insufficient.
no code implementations • COLING 2020 • Bin Jiang, Jing Hou, Wanyue Zhou, Chao Yang, Shihan Wang, Liang Pang
Aspect-based sentiment analysis (ABSA) aims to determine the sentiment polarity of each specific aspect in a given sentence.
1 code implementation • CVPR 2021 • Xin Hong, Yanyan Lan, Liang Pang, Jiafeng Guo, Xueqi Cheng
Following this definition, a new dataset named TRANCE is constructed on the basis of CLEVR, including three levels of settings, i.e., Basic (single-step transformation), Event (multi-step transformation), and View (multi-step transformation with variant views).
1 code implementation • EMNLP 2020 • Weijie Yu, Chen Xu, Jun Xu, Liang Pang, Xiaopeng Gao, Xiaozhao Wang, Ji-Rong Wen
Four popular text matching methods are employed in the paper.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Wanqing Cui, Yanyan Lan, Liang Pang, Jiafeng Guo, Xueqi Cheng
This paper proposes a novel approach to learn commonsense from images, instead of limited raw texts or costly constructed knowledge bases, for the commonsense reasoning problem in NLP.
no code implementations • 27 Sep 2020 • Hainan Zhang, Yanyan Lan, Liang Pang, Hongshen Chen, Zhuoye Ding, Dawei Yin
Therefore, an ideal dialogue generation models should be able to capture the topic information of each context, detect the relevant context, and produce appropriate responses accordingly.
no code implementations • 13 Aug 2020 • Changying Hao, Liang Pang, Yanyan Lan, Fei Sun, Jiafeng Guo, Xue-Qi Cheng
To tackle this problem, we propose a Ranking Enhanced Dialogue generation framework in this paper.
no code implementations • 1 Jun 2020 • Linfang Hou, Liang Pang, Xin Hong, Yanyan Lan, Zhi-Ming Ma, Dawei Yin
Robust Reinforcement Learning aims to find the optimal policy with some extent of robustness to environmental dynamics.
1 code implementation • 22 May 2020 • Yunchang Zhu, Liang Pang, Yanyan Lan, Xue-Qi Cheng
To fill this gap, we switch to a ranking perspective that sorts the hypotheses in order of their plausibilities.
2 code implementations • 12 Dec 2019 • Liang Pang, Jun Xu, Qingyao Ai, Yanyan Lan, Xue-Qi Cheng, Ji-Rong Wen
In learning-to-rank for information retrieval, a ranking model is automatically learned from the data and then utilized to rank the sets of retrieved documents.
2 code implementations • ACL 2019 • Hainan Zhang, Yanyan Lan, Liang Pang, Jiafeng Guo, Xue-Qi Cheng
Then, the self-attention mechanism is utilized to update both the context and masked response representation.
no code implementations • 16 Mar 2019 • Jiafeng Guo, Yixing Fan, Liang Pang, Liu Yang, Qingyao Ai, Hamed Zamani, Chen Wu, W. Bruce Croft, Xue-Qi Cheng
Ranking models lie at the heart of research on information retrieval (IR).
no code implementations • 12 Jan 2019 • Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Lixin Su, Xue-Qi Cheng
However, the performance of such models is not as good as in the RC task.
no code implementations • 18 Dec 2018 • Peng Peng, Liang Pang, Yufeng Yuan, Chao GAO
We show in the experiments that Pommerman is a perfect environment for studying continual learning, and the agent can improve its performance by continually learning new skills without forgetting the old ones.
1 code implementation • 22 Nov 2017 • Liang Pang, Yanyan Lan, Jun Xu, Jiafeng Guo, Xue-Qi Cheng
The main idea is to represent the weight matrix of the locally connected layer as the product of the kernel and the smoother, where the kernel is shared over different local receptive fields, and the smoother is for determining the importance and relations of different local receptive fields.
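The weight factorization described above can be illustrated with a small NumPy sketch, assuming (hypothetically) a rank-one form in which each local receptive field's weight vector is the shared kernel scaled by a per-field smoother coefficient; the actual model may use a richer smoother.

```python
import numpy as np

rng = np.random.default_rng(0)
num_fields, field_size = 4, 3  # hypothetical layer dimensions

kernel = rng.standard_normal(field_size)      # shared over all local fields
smoother = rng.standard_normal(num_fields)    # per-field importance weights

# Weight matrix of the locally connected layer as the product of the
# shared kernel and the per-field smoother (outer product).
weights = smoother[:, None] * kernel[None, :]  # shape (num_fields, field_size)

x = rng.standard_normal((num_fields, field_size))  # one input patch per field
out = (weights * x).sum(axis=1)                    # locally connected response
```

The point of the factorization is parameter sharing: instead of `num_fields * field_size` free weights, the layer learns `field_size + num_fields` parameters while the smoother still modulates each local receptive field.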
2 code implementations • 26th ACM International Conference on Information and Knowledge Management (CIKM '17) 2017 • Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Jingfang Xu, Xue-Qi Cheng
This paper concerns a deep learning approach to relevance ranking in information retrieval (IR).
no code implementations • 24 Jul 2017 • Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xue-Qi Cheng
Therefore, it is necessary to identify the difference between automatically learned features by deep IR models and hand-crafted features used in traditional learning to rank approaches.
1 code implementation • 23 Jul 2017 • Yixing Fan, Liang Pang, Jianpeng Hou, Jiafeng Guo, Yanyan Lan, Xue-Qi Cheng
In recent years, deep neural models have been widely adopted for text matching tasks, such as question answering and information retrieval, showing improved performance as compared with previous methods.
1 code implementation • 15 Jun 2016 • Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xue-Qi Cheng
Although ad-hoc retrieval can also be formalized as a text matching task, few deep models have been tested on it.
1 code implementation • 15 Apr 2016 • Shengxian Wan, Yanyan Lan, Jun Xu, Jiafeng Guo, Liang Pang, Xue-Qi Cheng
In this paper, we propose to view the generation of the global interaction between two texts as a recursive process: i.e., the interaction of two texts at each position is a composition of the interactions between their prefixes as well as the word-level interaction at the current position.
7 code implementations • 20 Feb 2016 • Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Shengxian Wan, Xue-Qi Cheng
An effective way is to extract meaningful matching patterns from words, phrases, and sentences to produce the matching score.
1 code implementation • 26 Nov 2015 • Shengxian Wan, Yanyan Lan, Jiafeng Guo, Jun Xu, Liang Pang, Xue-Qi Cheng
Our model has several advantages: (1) By using Bi-LSTM, rich context of the whole sentence is leveraged to capture the contextualized local information in each positional sentence representation; (2) By matching with multiple positional sentence representations, it is flexible to aggregate different important contextualized local information in a sentence to support the matching; (3) Experiments on different tasks such as question answering and sentence completion demonstrate the superiority of our model.
no code implementations • 27 Aug 2014 • Yuyu Zhang, Liang Pang, Lei Shi, Bin Wang
This paper describes the solution of Bazinga Team for Tmall Recommendation Prize 2014.
no code implementations • 29 Nov 2013 • Xudong Liu, Bing Xu, Yuyu Zhang, Qiang Yan, Liang Pang, Qiang Li, Hanxiao Sun, Bin Wang
The ICDM Challenge 2013 is to apply machine learning to the problem of hotel ranking, aiming to maximize purchases according to given hotel characteristics, location attractiveness of hotels, user's aggregated purchase history and competitive online travel agency information for each potential hotel choice.