no code implementations • EMNLP 2021 • Haoran Xu, Hainan Zhang, Yanyan Zou, Hongshen Chen, Zhuoye Ding, Yanyan Lan
Although exposure bias has been widely studied in some NLP tasks, it faces unique challenges in dialogue response generation, a representative one-to-various generation scenario. In real human dialogue, there are many appropriate responses to the same context, differing not only in expression but also in topic.
no code implementations • NAACL 2022 • Yue Fang, Hainan Zhang, Hongshen Chen, Zhuoye Ding, Bo Long, Yanyan Lan, Yanquan Zhou
First, an utterance rewriter is applied to recover the elided content of the dialogue and obtain the rewritten utterances.
no code implementations • 4 Mar 2024 • Bowen Gao, Minsi Ren, Yuyan Ni, Yanwen Huang, Bo Qiang, Zhi-Ming Ma, Wei-Ying Ma, Yanyan Lan
In the field of Structure-based Drug Design (SBDD), deep learning-based generative models have achieved outstanding performance in terms of docking score.
no code implementations • 21 Feb 2024 • Han Tang, Shikun Feng, Bicheng Lin, Yuyan Ni, Jingjing Liu, Wei-Ying Ma, Yanyan Lan
REMO offers a novel solution to MRL by exploiting the underlying shared patterns in chemical reactions as "context" for pre-training, which effectively infers meaningful representations of common chemistry knowledge.
1 code implementation • 12 Dec 2023 • Yuxuan Song, Jingjing Gong, Minkai Xu, Ziyao Cao, Yanyan Lan, Stefano Ermon, Hao Zhou, Wei-Ying Ma
The generation of 3D molecules requires simultaneously deciding the categorical features~(atom types) and continuous features~(atom coordinates).
no code implementations • 9 Nov 2023 • Shikun Feng, Minghao Li, Yinjun Jia, WeiYing Ma, Yanyan Lan
The binding between proteins and ligands plays a crucial role in the realm of drug discovery.
no code implementations • Mathematics 2022 • Yuyan Ni, Yanyan Lan, Ao Liu, ZhiMing Ma
Comparing IB and DIB on these terms, we prove that DIB's SG bound is tighter than IB's while DIB's RD is larger than IB's.
no code implementations • 3 Nov 2023 • Yuyan Ni, Shikun Feng, Wei-Ying Ma, Zhi-Ming Ma, Yanyan Lan
By aligning with physical principles, SliDe shows a 42% improvement in the accuracy of estimated force fields compared to current state-of-the-art denoising methods, and thus outperforms traditional baselines on various molecular property prediction tasks.
no code implementations • 1 Nov 2023 • Minsi Ren, Bowen Gao, Bo Qiang, Yanyan Lan
Structure-based drug design (SBDD) stands at the forefront of drug discovery, emphasizing the creation of molecules that target specific binding pockets.
1 code implementation • 22 Oct 2023 • Shikun Feng, Lixin Yang, WeiYing Ma, Yanyan Lan
Molecular representation learning is fundamental for many drug related applications.
no code implementations • 11 Oct 2023 • Bowen Gao, Yinjun Jia, Yuanle Mo, Yuyan Ni, WeiYing Ma, ZhiMing Ma, Yanyan Lan
Pocket representations play a vital role in various biomedical applications, such as druggability estimation, ligand affinity prediction, and de novo drug design.
1 code implementation • 10 Oct 2023 • Bowen Gao, Bo Qiang, Haichuan Tan, Minsi Ren, Yinjun Jia, Minsi Lu, Jingjing Liu, WeiYing Ma, Yanyan Lan
Virtual screening, which identifies potential drugs from vast compound databases to bind with a particular protein pocket, is a critical step in AI-assisted drug discovery.
1 code implementation • 20 Jul 2023 • Shikun Feng, Yuyan Ni, Yanyan Lan, Zhi-Ming Ma, Wei-Ying Ma
Theoretically, the objective is equivalent to learning the force field, which has been shown to be helpful for downstream tasks.
no code implementations • 19 Jul 2023 • Qingyao Ai, Ting Bai, Zhao Cao, Yi Chang, Jiawei Chen, Zhumin Chen, Zhiyong Cheng, Shoubin Dong, Zhicheng Dou, Fuli Feng, Shen Gao, Jiafeng Guo, Xiangnan He, Yanyan Lan, Chenliang Li, Yiqun Liu, Ziyu Lyu, Weizhi Ma, Jun Ma, Zhaochun Ren, Pengjie Ren, Zhiqiang Wang, Mingwen Wang, Ji-Rong Wen, Le Wu, Xin Xin, Jun Xu, Dawei Yin, Peng Zhang, Fan Zhang, Weinan Zhang, Min Zhang, Xiaofei Zhu
The research field of Information Retrieval (IR) has evolved significantly, expanding beyond traditional search to meet diverse user information needs.
no code implementations • 12 Jul 2023 • Qiying Yu, Yudi Zhang, Yuyan Ni, Shikun Feng, Yanyan Lan, Hao Zhou, Jingjing Liu
Self-supervised learning has recently gained growing interest in molecular modeling for scientific tasks such as AI-assisted drug discovery.
1 code implementation • 5 May 2023 • Bo Qiang, Yuxuan Song, Minkai Xu, Jingjing Gong, Bowen Gao, Hao Zhou, WeiYing Ma, Yanyan Lan
Generating desirable molecular structures in 3D is a fundamental problem for drug discovery.
no code implementations • 3 May 2023 • Xin Hong, Yanyan Lan, Liang Pang, Jiafeng Guo, Xueqi Cheng
In this paper, we propose a new visual reasoning task, called Visual Transformation Telling (VTT).
1 code implementation • 2 May 2023 • Xin Hong, Yanyan Lan, Liang Pang, Jiafeng Guo, Xueqi Cheng
Such state-driven visual reasoning has limitations in reflecting the ability to infer the dynamics between different states, which has been shown to be equally important for human cognition in Piaget's theory.
no code implementations • 29 Jan 2023 • Danyang Hou, Liang Pang, Yanyan Lan, HuaWei Shen, Xueqi Cheng
In this paper, we focus on improving two problems of the two-stage method: (1) Moment prediction bias: the predicted moments for most queries come from the top retrieved videos, ignoring the possibility that the target moment lies in lower-ranked retrieved videos, which is caused by the inconsistency of Shared Normalization between training and inference.
1 code implementation • 10 Jan 2023 • Yunchang Zhu, Liang Pang, Kangxi Wu, Yanyan Lan, HuaWei Shen, Xueqi Cheng
Comparative loss is essentially a ranking loss on top of the task-specific losses of the full and ablated models, with the expectation that the task-specific loss of the full model is minimal.
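As a rough sketch of this idea (the function name and the hinge-style formulation below are illustrative, not taken from the paper), the comparative term can be written as a margin penalty applied whenever an ablated model's task-specific loss beats the full model's:

```python
def comparative_loss(full_loss, ablated_losses, margin=0.0):
    """Ranking-style regularizer: penalize whenever an ablated model's
    task-specific loss is lower than the full model's (plus a margin).
    Illustrative sketch; not the paper's exact formulation."""
    penalty = sum(max(full_loss - ablated + margin, 0.0)
                  for ablated in ablated_losses)
    # total objective: the full model's task loss plus the ranking term
    return full_loss + penalty
```

If the full model already achieves the smallest task loss, the penalty vanishes and the objective reduces to the ordinary task loss.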
no code implementations • 18 Oct 2022 • Yuancheng Sun, Yimeng Chen, Weizhi Ma, Wenhao Huang, Kang Liu, ZhiMing Ma, Wei-Ying Ma, Yanyan Lan
In our implementation, we adopt state-of-the-art molecule embedding models from both the supervised learning paradigm and the pretraining paradigm as the molecule representation module of PEMP.
1 code implementation • 29 Jun 2022 • Yimeng Chen, Ruibin Xiong, ZhiMing Ma, Yanyan Lan
Motivated by this, we design a new group invariant learning method, which constructs groups with statistical independence tests, and reweights samples by group label proportion to meet the criteria.
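A minimal sketch of the reweighting step, assuming a simple inverse-proportion scheme (the helper name and the exact weighting formula are hypothetical, not from the paper):

```python
from collections import Counter

def group_balanced_weights(group_labels):
    """Reweight each sample inversely to its group's proportion so that
    every group contributes equal total mass to the training loss.
    Illustrative sketch of proportion-based reweighting."""
    counts = Counter(group_labels)
    n, k = len(group_labels), len(counts)
    # each group receives total mass n/k regardless of its size
    return [n / (k * counts[g]) for g in group_labels]
```

Samples in small groups get up-weighted, so a majority group cannot dominate the averaged loss.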
1 code implementation • 25 Apr 2022 • Yunchang Zhu, Liang Pang, Yanyan Lan, HuaWei Shen, Xueqi Cheng
Ideally, if a PRF model can distinguish between irrelevant and relevant information in the feedback, the more feedback documents there are, the better the revised query will be.
no code implementations • NeurIPS 2021 • Ruibin Xiong, Yimeng Chen, Liang Pang, Xueqi Cheng, Yanyan Lan
Ensemble-based debiasing methods have been shown effective in mitigating the reliance of classifiers on specific dataset bias, by exploiting the output of a bias-only model to adjust the learning target.
no code implementations • 11 Oct 2021 • Shentong Mo, Xi Fu, Chenyang Hong, Yizhen Chen, Yuxuan Zheng, Xiangru Tang, Zhiqiang Shen, Eric P Xing, Yanyan Lan
The core problem is to model how regulatory elements interact with each other and their variability across different cell types.
no code implementations • NeurIPS Workshop AI4Scien 2021 • Shentong Mo, Xi Fu, Chenyang Hong, Yizhen Chen, Yuxuan Zheng, Xiangru Tang, Yanyan Lan, Zhiqiang Shen, Eric Xing
In this work, we propose a simple yet effective approach for pre-training genome data in a multi-modal and self-supervised manner, which we call GeneBERT.
no code implementations • Findings (EMNLP) 2021 • Xu Wang, Hainan Zhang, Shuai Zhao, Yanyan Zou, Hongshen Chen, Zhuoye Ding, Bo Cheng, Yanyan Lan
Furthermore, the consistency signals between each candidate and the speaker's own history are considered to drive the model to prefer a candidate that is logically consistent with the speaker's history.
1 code implementation • EMNLP 2021 • Fei Xiao, Liang Pang, Yanyan Lan, Yan Wang, HuaWei Shen, Xueqi Cheng
The proposed transductive learning approach is general and effective for the task of unsupervised style transfer, and we will apply it to the other two typical methods in the future.
1 code implementation • EMNLP 2021 • Yunchang Zhu, Liang Pang, Yanyan Lan, HuaWei Shen, Xueqi Cheng
Information seeking is an essential step for open-domain question answering to efficiently gather evidence from a large corpus.
Ranked #3 on Question Answering on HotpotQA
no code implementations • 16 Aug 2021 • Lijuan Chen, Yanyan Lan, Liang Pang, Jiafeng Guo, Xueqi Cheng
We further extend these constraints to the semantic settings, which are shown to be better satisfied for all the deep text matching models.
no code implementations • 18 Jul 2021 • Yinqiong Cai, Yixing Fan, Jiafeng Guo, Ruqing Zhang, Yanyan Lan, Xueqi Cheng
However, these methods often lack the discriminative power of term-based methods, and thus introduce noise during retrieval and hurt recall performance.
no code implementations • 14 Jun 2021 • Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, Yuqi Huo, Jiezhong Qiu, Yuan YAO, Ao Zhang, Liang Zhang, Wentao Han, Minlie Huang, Qin Jin, Yanyan Lan, Yang Liu, Zhiyuan Liu, Zhiwu Lu, Xipeng Qiu, Ruihua Song, Jie Tang, Ji-Rong Wen, Jinhui Yuan, Wayne Xin Zhao, Jun Zhu
Large-scale pre-trained models (PTMs) such as BERT and GPT have recently achieved great success and become a milestone in the field of artificial intelligence (AI).
no code implementations • NAACL 2021 • Haolan Zhan, Hainan Zhang, Hongshen Chen, Zhuoye Ding, Yongjun Bao, Yanyan Lan
In particular, a sequential knowledge transition model equipped with a pre-trained knowledge-aware response generator (SKT-KG) formulates the high-level knowledge transition and fully utilizes the limited knowledge data.
1 code implementation • 2 Apr 2021 • Changying Hao, Liang Pang, Yanyan Lan, Yan Wang, Jiafeng Guo, Xueqi Cheng
In the sketch stage, a skeleton is extracted from the original ending by removing words that conflict with the counterfactual condition.
2 code implementations • 11 Mar 2021 • Yuqi Huo, Manli Zhang, Guangzhen Liu, Haoyu Lu, Yizhao Gao, Guoxing Yang, Jingyuan Wen, Heng Zhang, Baogui Xu, Weihao Zheng, Zongzheng Xi, Yueqian Yang, Anwen Hu, Jinming Zhao, Ruichen Li, Yida Zhao, Liang Zhang, Yuqing Song, Xin Hong, Wanqing Cui, Danyang Hou, Yingyan Li, Junyi Li, Peiyu Liu, Zheng Gong, Chuhao Jin, Yuchong Sun, ShiZhe Chen, Zhiwu Lu, Zhicheng Dou, Qin Jin, Yanyan Lan, Wayne Xin Zhao, Ruihua Song, Ji-Rong Wen
We further construct a large Chinese multi-source image-text dataset called RUC-CAS-WenLan for pre-training our BriVL model.
Ranked #1 on Image Retrieval on RUC-CAS-WenLan
no code implementations • 2 Mar 2021 • Haolan Zhan, Hainan Zhang, Hongshen Chen, Lei Shen, Zhuoye Ding, Yongjun Bao, Weipeng Yan, Yanyan Lan
To tackle this problem, we propose an adaptive posterior network based on Transformer architecture that can utilize user-cared information from customer reviews.
no code implementations • 1 Mar 2021 • Yixing Fan, Jiafeng Guo, Xinyu Ma, Ruqing Zhang, Yanyan Lan, Xueqi Cheng
We employ 16 linguistic tasks to probe a unified retrieval model over these three retrieval tasks to answer this question.
no code implementations • 25 Feb 2021 • Chen Wu, Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, Xueqi Cheng
One is the widely adopted metric such as F1 which acts as a balanced objective, and the other is the best F1 under some minimal recall constraint which represents a typical objective in professional search.
no code implementations • 16 Feb 2021 • Haolan Zhan, Hainan Zhang, Hongshen Chen, Lei Shen, Yanyan Lan, Zhuoye Ding, Dawei Yin
A simple and effective way is to extract keywords directly from the knowledge-base of products, i.e., attributes or title, as the recommendation reason.
1 code implementation • 16 Jan 2021 • Liang Pang, Yanyan Lan, Xueqi Cheng
However, these models designed for short texts cannot well address the long-form text matching problem, because many contexts in long-form texts cannot be directly aligned with each other, and it is difficult for existing models to capture the key matching signals from such noisy data.
no code implementations • 25 Dec 2020 • Yan Gao, Jiafeng Guo, Yanyan Lan, Huaming Liao
The ranking objective is the same as existing methods, i.e., to create a ranking list of items according to users' interests.
1 code implementation • CVPR 2021 • Xin Hong, Yanyan Lan, Liang Pang, Jiafeng Guo, Xueqi Cheng
Following this definition, a new dataset namely TRANCE is constructed on the basis of CLEVR, including three levels of settings, i.e., Basic (single-step transformation), Event (multi-step transformation), and View (multi-step transformation with variant views).
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Wanqing Cui, Yanyan Lan, Liang Pang, Jiafeng Guo, Xueqi Cheng
This paper proposes a novel approach to learn commonsense from images, instead of limited raw texts or costly constructed knowledge bases, for the commonsense reasoning problem in NLP.
no code implementations • 27 Sep 2020 • Hainan Zhang, Yanyan Lan, Liang Pang, Hongshen Chen, Zhuoye Ding, Dawei Yin
Therefore, an ideal dialogue generation model should be able to capture the topic information of each context, detect the relevant context, and produce appropriate responses accordingly.
no code implementations • 25 Aug 2020 • Lixin Su, Jiafeng Guo, Ruqing Zhang, Yixing Fan, Yanyan Lan, Xue-Qi Cheng
To tackle such a challenge, in this work, we introduce the Continual Domain Adaptation (CDA) task for MRC.
1 code implementation • 25 Aug 2020 • Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, Xue-Qi Cheng
To address this new task, we propose a novel Contrastive Generation model, namely CtrsGen for short, to generate the intent description by contrasting the relevant documents with the irrelevant documents given a query.
no code implementations • 13 Aug 2020 • Changying Hao, Liang Pang, Yanyan Lan, Fei Sun, Jiafeng Guo, Xue-Qi Cheng
To tackle this problem, we propose a Ranking Enhanced Dialogue generation framework in this paper.
no code implementations • ICML 2020 • Jianing Li, Yanyan Lan, Jiafeng Guo, Xue-Qi Cheng
We prove that under certain conditions, a linear combination of quality and diversity constitutes a divergence metric between the generated distribution and the real distribution.
no code implementations • 21 Jun 2020 • Zizhen Wang, Yixing Fan, Jiafeng Guo, Liu Yang, Ruqing Zhang, Yanyan Lan, Xue-Qi Cheng, Hui Jiang, Xiaozhao Wang
However, it has long been a challenge to properly measure the similarity between two questions due to the inherent variation of natural language, i.e., there can be different ways to ask the same question, or different questions sharing similar expressions.
no code implementations • 1 Jun 2020 • Linfang Hou, Liang Pang, Xin Hong, Yanyan Lan, Zhi-Ming Ma, Dawei Yin
Robust Reinforcement Learning aims to find the optimal policy with some extent of robustness to environmental dynamics.
1 code implementation • 22 May 2020 • Yunchang Zhu, Liang Pang, Yanyan Lan, Xue-Qi Cheng
To fill this gap, we switch to a ranking perspective that sorts the hypotheses in order of their plausibilities.
8 code implementations • ICML 2020 • Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Li-Wei Wang, Tie-Yan Liu
This motivates us to remove the warm-up stage for the training of Pre-LN Transformers.
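The Pre-LN versus Post-LN distinction can be sketched as follows, on plain vectors with an abstract sublayer for brevity (only the placement of layer normalization reflects the paper; everything else is illustrative):

```python
import math

def layer_norm(x, eps=1e-5):
    # normalize a vector to zero mean and (near) unit variance
    mu = sum(x) / len(x)
    var = sum((v - mu) ** 2 for v in x) / len(x)
    return [(v - mu) / math.sqrt(var + eps) for v in x]

def post_ln_block(x, sublayer):
    # original (Post-LN) Transformer layer: normalize AFTER the residual sum
    return layer_norm([a + b for a, b in zip(x, sublayer(x))])

def pre_ln_block(x, sublayer):
    # Pre-LN layer: normalize the sublayer INPUT; the residual path stays an
    # identity, which keeps gradients well-scaled at initialization and is
    # what allows dropping the learning-rate warm-up stage
    return [a + b for a, b in zip(x, sublayer(layer_norm(x)))]
```

Note how in the Pre-LN block the input passes through unchanged along the residual path, whereas Post-LN renormalizes the whole sum at every layer.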
2 code implementations • 12 Dec 2019 • Liang Pang, Jun Xu, Qingyao Ai, Yanyan Lan, Xue-Qi Cheng, Ji-Rong Wen
In learning-to-rank for information retrieval, a ranking model is automatically learned from the data and then utilized to rank the sets of retrieved documents.
2 code implementations • ACL 2019 • Hainan Zhang, Yanyan Lan, Liang Pang, Jiafeng Guo, Xue-Qi Cheng
Then, the self-attention mechanism is utilized to update both the context and masked response representation.
no code implementations • 9 Jul 2019 • Hainan Zhang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xue-Qi Cheng
Chinese input recommendation plays an important role in alleviating human cost in typing Chinese words, especially in the scenario of mobile applications.
no code implementations • 24 May 2019 • Lixin Su, Jiafeng Guo, Yixing Fan, Yanyan Lan, Xue-Qi Cheng
Web question answering (QA) has become an indispensable component in modern search systems, which can significantly improve users' search experience by providing a direct answer to users' information need.
no code implementations • 24 May 2019 • Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, Xue-Qi Cheng
To generate a sound outline, an ideal OG model should be able to capture three levels of coherence, namely the coherence between context paragraphs, that between a section and its heading, and that between context headings.
no code implementations • 12 Jan 2019 • Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Lixin Su, Xue-Qi Cheng
However, the performance of such models is not as good as in the RC task.
no code implementations • ACL 2018 • Hainan Zhang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xue-Qi Cheng
In this paper, we propose two tailored optimization criteria for Seq2Seq to different conversation scenarios, i.e., the maximum generated likelihood for the specific-requirement scenario, and the conditional value-at-risk for the diverse-requirement scenario.
no code implementations • ACL 2018 • Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, Jun Xu, Xue-Qi Cheng
In conversation, a general response (e.g., "I don't know") could correspond to a large variety of input utterances.
2 code implementations • SIGIR '18 2018 • Yixing Fan, Jiafeng Guo, Yanyan Lan, Jun Xu, ChengXiang Zhai, Xue-Qi Cheng
The local matching layer focuses on producing a set of local relevance signals by modeling the semantic matching between a query and each passage of a document.
1 code implementation • 29 Apr 2018 • Yadi Lao, Jun Xu, Yanyan Lan, Jiafeng Guo, Sheng Gao, Xue-Qi Cheng
Inspired by the success and methodology of AlphaGo Zero, MM-Tag formalizes sequence tagging as a Monte Carlo tree search (MCTS) enhanced Markov decision process (MDP), in which the time steps correspond to the positions of words in a sentence from left to right, and each action corresponds to assigning a tag to a word.
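The MDP framing can be sketched as follows; the `tagging_episode` helper and the toy rule-based policy are illustrative stand-ins for the paper's MCTS-guided policy, and the tagset here is hypothetical:

```python
def tagging_episode(words, policy):
    """Run the tagging MDP: time steps move over word positions from left
    to right, and each action assigns a tag to the current word given the
    state (the full sentence plus the tags chosen so far)."""
    tags = []
    for t in range(len(words)):
        state = (words, tuple(tags))  # sentence + partial tag sequence
        tags.append(policy(state, t))
    return tags

# toy deterministic policy: capitalized words get a proper-noun tag
toy_policy = lambda state, t: "NNP" if state[0][t][0].isupper() else "O"
```

In the paper, the action at each step would instead be selected by an MCTS lookahead over possible tag sequences.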
no code implementations • 22 Apr 2018 • Guoxin Cui, Jun Xu, Wei Zeng, Yanyan Lan, Jiafeng Guo, Xue-Qi Cheng
One of the most significant bottlenecks in training large-scale machine learning models on a parameter server (PS) is the communication overhead, because the model gradients need to be frequently exchanged between workers and servers during the training iterations.
1 code implementation • 22 Nov 2017 • Liang Pang, Yanyan Lan, Jun Xu, Jiafeng Guo, Xue-Qi Cheng
The main idea is to represent the weight matrix of the locally connected layer as the product of the kernel and the smoother, where the kernel is shared over different local receptive fields, and the smoother is for determining the importance and relations of different local receptive fields.
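A minimal sketch of this factorization, assuming a scalar smoother per receptive field (the helper names and shapes are illustrative, not the paper's exact parameterization):

```python
def factorized_local_weights(kernel, smoother):
    """Weights of a locally connected layer written as the product of a
    kernel shared across receptive fields and a per-field smoother that
    scales each field's contribution. Illustrative scalar smoother."""
    return [[s * w for w in kernel] for s in smoother]

def locally_connected(patches, kernel, smoother):
    # patches: one input patch per local receptive field
    weights = factorized_local_weights(kernel, smoother)
    return [sum(w * v for w, v in zip(wf, patch))
            for wf, patch in zip(weights, patches)]
```

The factorization cuts the parameter count from one full weight vector per field to a single shared kernel plus one smoother value per field.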
2 code implementations • 26th ACM International Conference on Information and Knowledge Management (CIKM '17) 2017 • Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Jingfang Xu, Xue-Qi Cheng
This paper concerns a deep learning approach to relevance ranking in information retrieval (IR).
no code implementations • 24 Jul 2017 • Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xue-Qi Cheng
Therefore, it is necessary to identify the difference between automatically learned features by deep IR models and hand-crafted features used in traditional learning to rank approaches.
1 code implementation • 23 Jul 2017 • Yixing Fan, Liang Pang, Jianpeng Hou, Jiafeng Guo, Yanyan Lan, Xue-Qi Cheng
In recent years, deep neural models have been widely adopted for text matching tasks, such as question answering and information retrieval, showing improved performance as compared with previous methods.
no code implementations • 18 Jul 2017 • Ruqing Zhang, Jiafeng Guo, Yanyan Lan, Jun Xu, Xue-Qi Cheng
Representing texts as fixed-length vectors is central to many language processing tasks.
no code implementations • 23 Nov 2016 • Jia Zhang, Zheng Wang, Qian Li, Jialin Zhang, Yanyan Lan, Qiang Li, Xiaoming Sun
In the guaranteed delivery scenario, ad exposures (which are also called impressions in some works) to users are guaranteed by contracts signed in advance between advertisers and publishers.
1 code implementation • 15 Jun 2016 • Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xue-Qi Cheng
Although ad-hoc retrieval can also be formalized as a text matching task, few deep models have been tested on it.
1 code implementation • 15 Apr 2016 • Shengxian Wan, Yanyan Lan, Jun Xu, Jiafeng Guo, Liang Pang, Xue-Qi Cheng
In this paper, we propose to view the generation of the global interaction between two texts as a recursive process: i.e., the interaction of two texts at each position is a composition of the interactions between their prefixes as well as the word-level interaction at the current position.
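The recursion can be sketched as a 2D dynamic program over prefix pairs; the `sim` and `compose` functions below are illustrative plug-ins, not the paper's learned recurrent gating:

```python
def recursive_interaction(s1, s2, sim, compose):
    """2D-recursive interaction: the interaction h[i][j] of two prefixes is
    composed from h[i-1][j], h[i][j-1], h[i-1][j-1] and the word-level
    similarity of the current pair of positions."""
    n, m = len(s1), len(s2)
    h = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            h[i][j] = compose(h[i - 1][j], h[i][j - 1], h[i - 1][j - 1],
                              sim(s1[i - 1], s2[j - 1]))
    return h[n][m]  # interaction score of the two full texts

# with exact-match similarity and a max-composition, the recursion reduces
# to the classic longest-common-subsequence dynamic program
score = recursive_interaction(
    "abcde", "ace",
    sim=lambda a, b: 1.0 if a == b else 0.0,
    compose=lambda up, left, diag, s: max(up, left, diag + s))
```

Replacing the hand-written `compose` with a learned recurrent unit gives the model its flexibility over such fixed DP matchers.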
no code implementations • 24 Mar 2016 • Fei Sun, Jiafeng Guo, Yanyan Lan, Jun Xu, Xue-Qi Cheng
Recent work has shown that distributed word representations are good at capturing linguistic regularities in language.
7 code implementations • 20 Feb 2016 • Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Shengxian Wan, Xue-Qi Cheng
An effective way is to extract meaningful matching patterns from words, phrases, and sentences to produce the matching score.
1 code implementation • 26 Nov 2015 • Shengxian Wan, Yanyan Lan, Jiafeng Guo, Jun Xu, Liang Pang, Xue-Qi Cheng
Our model has several advantages: (1) By using Bi-LSTM, rich context of the whole sentence is leveraged to capture the contextualized local information in each positional sentence representation; (2) By matching with multiple positional sentence representations, it is flexible to aggregate different important contextualized local information in a sentence to support the matching; (3) Experiments on different tasks such as question answering and sentence completion demonstrate the superiority of our model.
no code implementations • 26 Sep 2013 • Shuzi Niu, Yanyan Lan, Jiafeng Guo, Xue-Qi Cheng
Traditional rank aggregation methods are deterministic, and can be categorized into explicit and implicit methods depending on whether rank information is explicitly or implicitly utilized.
no code implementations • NeurIPS 2012 • Yanyan Lan, Jiafeng Guo, Xueqi Cheng, Tie-Yan Liu
This paper is concerned with the statistical consistency of ranking methods.
no code implementations • NeurIPS 2009 • Wei Chen, Tie-Yan Liu, Yanyan Lan, Zhi-Ming Ma, Hang Li
We show that these loss functions are upper bounds of the measure-based ranking errors.