1 code implementation • 14 Nov 2023 • Xiaonan Li, Changtai Zhu, Linyang Li, Zhangyue Yin, Tianxiang Sun, Xipeng Qiu
Thus, the LLM can iteratively provide feedback to the retriever, guiding the retrieval results so that they fully support verifiable generation.
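The iterative feedback loop described here can be sketched as follows. This is a minimal illustration, not the paper's implementation: `retrieve`, `supports`, and `refine` are hypothetical callables standing in for the retriever, the LLM's sufficiency judgment, and the LLM's query rewriting.

```python
def iterative_retrieve(query, retrieve, supports, refine, max_rounds=3):
    """Retrieve evidence, let the LLM judge whether it supports the claim,
    and refine the query until it does (sketch of the feedback loop)."""
    docs = retrieve(query)
    for _ in range(max_rounds):
        if supports(docs):               # LLM feedback: is the evidence sufficient?
            return docs
        query = refine(query, docs)      # LLM rewrites the query based on what was found
        docs = retrieve(query)
    return docs
```

In practice each callable would wrap an LLM or retriever call; here they are left abstract so the control flow is visible.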
1 code implementation • 9 May 2023 • Xiaonan Li, Xipeng Qiu
Specifically, MoT is divided into two stages: 1. before the test stage, the LLM pre-thinks on the unlabeled dataset and saves the high-confidence thoughts as external memory; 2. during the test stage, the LLM recalls relevant memory to help itself reason about and answer the test question.
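The two stages above can be sketched as a save-then-recall memory. This is a toy illustration under loose assumptions: `think_fn` is a hypothetical stand-in for the LLM's thought generation with a confidence score, and bag-of-words cosine similarity replaces whatever retriever the paper actually uses.

```python
from collections import Counter
import math

def cosine_sim(a, b):
    # Bag-of-words cosine similarity between two texts (stand-in for a real encoder).
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

class MemoryOfThought:
    """Sketch of the two-stage scheme: pre-think, then recall at test time."""

    def __init__(self, confidence_threshold=0.8):
        self.threshold = confidence_threshold
        self.memory = []  # (question, thought) pairs

    def pre_think(self, unlabeled_questions, think_fn):
        # Stage 1: generate thoughts and keep only the high-confidence ones.
        for q in unlabeled_questions:
            thought, confidence = think_fn(q)
            if confidence >= self.threshold:
                self.memory.append((q, thought))

    def recall(self, test_question, k=1):
        # Stage 2: retrieve the k saved thoughts most similar to the test question.
        ranked = sorted(self.memory,
                        key=lambda qt: cosine_sim(test_question, qt[0]),
                        reverse=True)
        return [t for _, t in ranked[:k]]
```

The recalled thoughts would then be prepended to the test prompt; that step is omitted here.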
1 code implementation • 7 May 2023 • Xiaonan Li, Kai Lv, Hang Yan, Tianyang Lin, Wei Zhu, Yuan Ni, Guotong Xie, Xiaoling Wang, Xipeng Qiu
To train UDR, we cast the training signals of various tasks into a unified list-wise ranking formulation using the language model's feedback.
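One way to picture the list-wise signal: score each candidate demonstration by how much it helps the language model produce the target, then sort. This is a hedged sketch, not UDR's actual training pipeline; `lm_logprob_fn` is a hypothetical scorer standing in for the LM's log-likelihood feedback.

```python
def rank_candidates_by_lm_feedback(candidates, test_input, target, lm_logprob_fn):
    """Rank candidate in-context examples by how much each one helps the LM
    predict the target, yielding a list-wise signal for retriever training."""
    scored = []
    for cand in candidates:
        prompt = cand + "\n" + test_input
        # Hypothetical scorer: log-probability of the target given the prompt.
        scored.append((cand, lm_logprob_fn(prompt, target)))
    scored.sort(key=lambda cs: cs[1], reverse=True)
    return [c for c, _ in scored]
```

The resulting ranked list could then supervise a retriever with a list-wise loss; that part is omitted.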
no code implementations • 27 Feb 2023 • Xiaonan Li, Xipeng Qiu
Additionally, the strong dependency among in-context examples makes selection an NP-hard combinatorial optimization problem, and enumerating all permutations is infeasible.
1 code implementation • 18 Oct 2022 • Xiaonan Li, Daya Guo, Yeyun Gong, Yun Lin, Yelong Shen, Xipeng Qiu, Daxin Jiang, Weizhu Chen, Nan Duan
In this paper, we present SCodeR, a Soft-labeled contrastive pre-training framework with two positive sample construction methods to learn functional-level Code Representation.
1 code implementation • 9 Aug 2022 • Hang Yan, Yu Sun, Xiaonan Li, Xipeng Qiu
In this paper, we propose using a Convolutional Neural Network (CNN) to model these spatial relations in the score matrix.
Ranked #3 on Nested Named Entity Recognition on ACE 2005
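The idea of modeling spatial relations in the span score matrix can be illustrated with a small 2-D convolution: if `score[i][j]` scores the span from token i to token j, neighboring cells are spans that differ by one boundary token, so a convolution lets adjacent spans refine each other. A minimal dependency-free sketch (the paper's actual architecture will differ):

```python
def conv2d_scores(score, kernel):
    """Apply a 3x3 convolution over an n-by-n span score matrix so each
    span's score is influenced by its spatially adjacent spans (sketch)."""
    n = len(score)
    out = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            acc = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < n and 0 <= jj < n:  # zero-pad at the borders
                        acc += kernel[di + 1][dj + 1] * score[ii][jj]
            out[i][j] = acc
    return out
```

A real implementation would use a framework convolution layer with learned kernels; the loop form above just makes the neighborhood structure explicit.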
1 code implementation • 26 Jan 2022 • Xiaonan Li, Yeyun Gong, Yelong Shen, Xipeng Qiu, Hang Zhang, Bolun Yao, Weizhen Qi, Daxin Jiang, Weizhu Chen, Nan Duan
For bimodal contrastive learning, we leverage the documentation and in-line comments of code to build code-text pairs.
no code implementations • EMNLP 2021 • Linyang Li, Demin Song, Xiaonan Li, Jiehang Zeng, Ruotian Ma, Xipeng Qiu
Pre-Trained Models (PTMs) have been widely applied and recently proved vulnerable to backdoor attacks: the released pre-trained weights can be maliciously poisoned with certain triggers.
1 code implementation • ACL 2021 • Xiaonan Li, Yunfan Shao, Tianxiang Sun, Hang Yan, Xipeng Qiu, Xuanjing Huang
To alleviate this problem, we extend the recently successful early-exit mechanism to accelerate the inference of PTMs on sequence labeling tasks.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Hang Yan, Xiaonan Li, Xipeng Qiu
Reverse dictionary is the task of finding the proper target word given its description.
1 code implementation • ACL 2020 • Xiaonan Li, Hang Yan, Xipeng Qiu, Xuanjing Huang
Recently, the character-word lattice structure has proven effective for Chinese named entity recognition (NER) by incorporating word information.
Ranked #5 on Chinese Named Entity Recognition on MSRA
1 code implementation • 12 Nov 2019 • Tianxiang Sun, Yunfan Shao, Xiaonan Li, PengFei Liu, Hang Yan, Xipeng Qiu, Xuanjing Huang
Most existing deep multi-task learning models are based on parameter sharing, such as hard sharing, hierarchical sharing, and soft sharing.
7 code implementations • 10 Nov 2019 • Hang Yan, Bocao Deng, Xiaonan Li, Xipeng Qiu
Bidirectional long short-term memory networks (BiLSTMs) have been widely used as encoders in models solving the named entity recognition (NER) task.
Ranked #11 on Chinese Named Entity Recognition on Resume NER
2 code implementations • ACL 2019 • Cunxiang Wang, Shuailong Liang, Yue Zhang, Xiaonan Li, Tian Gao
Introducing common sense to natural language understanding systems has received increasing research attention.