1 code implementation • ACL 2022 • Yu Bao, Hao Zhou, ShuJian Huang, Dongqi Wang, Lihua Qian, Xinyu Dai, Jiajun Chen, Lei LI
Recently, parallel text generation has received widespread attention due to its advantage in generation efficiency.
1 code implementation • Findings (NAACL) 2022 • Xiang Chen, Ningyu Zhang, Lei LI, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen
Multimodal named entity recognition and relation extraction (MNER and MRE) are fundamental and crucial branches of information extraction.
no code implementations • EMNLP (sdp) 2020 • Lei LI, Yang Xie, Wei Liu, Yinan Liu, Yafei Jiang, Siya Qi, Xingyuan Li
In the LongSumm shared task, we integrate both extractive and abstractive summarization approaches.
1 code implementation • Findings (EMNLP) 2021 • Hua Zheng, Lei LI, Damai Dai, Deli Chen, Tianyu Liu, Xu sun, Yang Liu
In this paper, we propose to leverage word-formation knowledge to enhance Chinese WSD.
no code implementations • ICML 2020 • Wenxian Shi, Hao Zhou, Ning Miao, Lei LI
Interpretability is important in text generation for guiding the generation with interpretable attributes.
1 code implementation • COLING 2022 • Dugang Liu, Weihao Du, Lei LI, Weike Pan, Zhong Ming
Existing legal judgment prediction methods usually only consider one single case fact description as input, which may not fully utilize the information in the data such as case relations and frequency.
no code implementations • ACL 2022 • Shijie Geng, Zuohui Fu, Yingqiang Ge, Lei LI, Gerard de Melo, Yongfeng Zhang
In modern recommender systems, there are usually comments or reviews from users that justify their ratings for different items.
no code implementations • EMNLP 2021 • Zhiyuan Zeng, Jiaze Chen, Weiran Xu, Lei LI
Based on the artificial dataset, we train an evaluation model that not only makes accurate and robust factual consistency discriminations but also traces factual errors interpretably via the backpropagated gradient distribution on token embeddings.
no code implementations • Findings (ACL) 2022 • Lei LI, Kai Fan, Hongjia Li, Chun Yuan
Syntactic structure has long been argued to be potentially useful for enforcing accurate word alignment and improving generalization performance of machine translation.
no code implementations • FNP (COLING) 2020 • Lei LI, Yafei Jiang, Yinan Liu
We participate in the FNS-Summarisation 2020 shared task to be held at FNP 2020 workshop at COLING 2020.
1 code implementation • 3 May 2024 • Piotr Padlewski, Max Bain, Matthew Henderson, Zhongkai Zhu, Nishant Relan, Hai Pham, Donovan Ong, Kaloyan Aleksiev, Aitor Ormazabal, Samuel Phua, Ethan Yeo, Eugenie Lamprecht, Qi Liu, Yuqi Wang, Eric Chen, Deyu Fu, Lei LI, Che Zheng, Cyprien de Masson d'Autume, Dani Yogatama, Mikel Artetxe, Yi Tay
We introduce Vibe-Eval: a new open benchmark and framework for evaluating multimodal chat models.
no code implementations • 18 Apr 2024 • Aitor Ormazabal, Che Zheng, Cyprien de Masson d'Autume, Dani Yogatama, Deyu Fu, Donovan Ong, Eric Chen, Eugenie Lamprecht, Hai Pham, Isaac Ong, Kaloyan Aleksiev, Lei LI, Matthew Henderson, Max Bain, Mikel Artetxe, Nishant Relan, Piotr Padlewski, Qi Liu, Ren Chen, Samuel Phua, Yazheng Yang, Yi Tay, Yuqi Wang, Zhongkai Zhu, Zhihui Xie
On text benchmarks, Core not only performs competitively with other frontier models on a set of well-established benchmarks (e.g., MMLU, GSM8K) but also outperforms GPT4-0613 on human evaluation.
1 code implementation • 17 Apr 2024 • Xin Li, Kun Yuan, Yajing Pei, Yiting Lu, Ming Sun, Chao Zhou, Zhibo Chen, Radu Timofte, Wei Sun, HaoNing Wu, ZiCheng Zhang, Jun Jia, Zhichao Zhang, Linhan Cao, Qiubo Chen, Xiongkuo Min, Weisi Lin, Guangtao Zhai, Jianhui Sun, Tianyi Wang, Lei LI, Han Kong, Wenxuan Wang, Bing Li, Cheng Luo, Haiqiang Wang, Xiangguang Chen, Wenhui Meng, Xiang Pan, Huiying Shi, Han Zhu, Xiaozhong Xu, Lei Sun, Zhenzhong Chen, Shan Liu, Fangyuan Kong, Haotian Fan, Yifang Xu, Haoran Xu, Mengduo Yang, Jie zhou, Jiaze Li, Shijie Wen, Mai Xu, Da Li, Shunyu Yao, Jiazhi Du, WangMeng Zuo, Zhibo Li, Shuai He, Anlong Ming, Huiyuan Fu, Huadong Ma, Yong Wu, Fie Xue, Guozhi Zhao, Lina Du, Jie Guo, Yu Zhang, huimin zheng, JunHao Chen, Yue Liu, Dulan Zhou, Kele Xu, Qisheng Xu, Tao Sun, Zhixiang Ding, Yuhang Hu
This paper reviews the NTIRE 2024 Challenge on Short-form UGC Video Quality Assessment (S-UGC VQA), in which various excellent solutions were submitted and evaluated on KVQ, a dataset collected from a popular short-form video platform, i.e., the Kuaishou/Kwai platform.
no code implementations • 16 Apr 2024 • Jingze Chen, Junfeng Yao, Qiqin Lin, Lei LI
Occlusions hinder point cloud frame alignment in LiDAR data, a challenge inadequately addressed by scene flow models tested mainly on occlusion-free datasets.
1 code implementation • 31 Mar 2024 • Jingzhe Shi, Jialuo Li, Qinwei Ma, Zaiwen Yang, Huan Ma, Lei LI
We have conducted extensive experiments to validate the performance of our proposed CHOPS architecture using the CPHOS-dataset, with the aim of demonstrating how LLMs can enhance or serve as alternatives to human customer service.
no code implementations • 29 Mar 2024 • Yazheng Yang, Yuqi Wang, Sankalok Sen, Lei LI, Qi Liu
Despite their proficiency in comprehending natural language, LLMs fall short in dealing with structured tabular data.
1 code implementation • 28 Mar 2024 • Sishuo Chen, Lei LI, Shuhuai Ren, Rundong Gao, Yuanxin Liu, Xiaohan Bi, Xu sun, Lu Hou
Video paragraph captioning (VPC) involves generating detailed narratives for long videos, utilizing supportive modalities such as speech and event boundaries.
no code implementations • 26 Mar 2024 • Souhail Hadgi, Lei LI, Maks Ovsjanikov
Unfortunately, its applicability in 3D data processing has been relatively limited.
no code implementations • 24 Mar 2024 • Yang Jing, Lei LI
Second, since the loss function is approximated by the Monte Carlo method during training, we also establish the convergence of the discrete loss function to the continuous one as the sample number $N$ goes to infinity.
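The convergence of a Monte Carlo approximation to its continuous counterpart can be illustrated numerically. The sketch below uses a made-up toy objective (the expectation of $x^2$ under a uniform distribution), not the paper's actual loss:

```python
import math
import random

def continuous_loss():
    # Closed-form expectation of f(x) = x^2 for x ~ Uniform(0, 1): 1/3.
    return 1.0 / 3.0

def monte_carlo_loss(n_samples, rng):
    # Discrete loss: average of f over n_samples uniform draws.
    return sum(rng.random() ** 2 for _ in range(n_samples)) / n_samples

# The Monte Carlo error typically decays like O(1/sqrt(N)) as N grows.
for n in (10, 1000, 100000):
    est = monte_carlo_loss(n, random.Random(0))
    print(n, abs(est - continuous_loss()))
```

With 100,000 samples the estimate is within about 0.001 of the true value 1/3, consistent with the claimed convergence as the sample count goes to infinity.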
1 code implementation • 21 Mar 2024 • Wei Chen, Yuxuan Liang, Yuanshao Zhu, Yanchuan Chang, Kang Luo, Haomin Wen, Lei LI, Yanwei Yu, Qingsong Wen, Chao Chen, Kai Zheng, Yunjun Gao, Xiaofang Zhou, Yu Zheng
In this paper, we present a comprehensive review of the development and recent advances in deep learning for trajectory computing (DL4Traj).
no code implementations • 19 Mar 2024 • JieLin Qiu, William Han, Winfred Wang, Zhengyuan Yang, Linjie Li, JianFeng Wang, Christos Faloutsos, Lei LI, Lijuan Wang
Open-domain real-world entity recognition is essential yet challenging, involving identifying various entities in diverse environments.
no code implementations • 18 Mar 2024 • Qinghua Zhao, Jiaang Li, Lei LI, Zenghui Zhou, Junfeng Liu
Existing works have studied the impacts of the order of words within natural text.
no code implementations • 15 Mar 2024 • Chen Chen, Lei LI, Marcel Beetz, Abhirup Banerjee, Ramneek Gupta, Vicente Grau
We present a novel, lightweight dual-attention ECG network designed to capture complex ECG features essential for early HF risk prediction, despite the notable imbalance between low and high-risk groups.
no code implementations • 7 Mar 2024 • JieLin Qiu, Andrea Madotto, Zhaojiang Lin, Paul A. Crook, Yifan Ethan Xu, Xin Luna Dong, Christos Faloutsos, Lei LI, Babak Damavandi, Seungwhan Moon
We have developed the SnapNTell Dataset, distinct from traditional VQA datasets: (1) it encompasses a wide range of categorized entities, each represented by images and explicitly named in the answers; (2) it features QA pairs that require extensive knowledge for accurate responses.
no code implementations • 7 Mar 2024 • Lei LI, Tianfang Zhang, Xinglin Zhang, Jiaqi Liu, Bingqi Ma, Yan Luo, Tao Chen
Within the domain of medical analysis, extensive research has explored the potential of mutual learning between Masked Autoencoders (MAEs) and multimodal data.
1 code implementation • 5 Mar 2024 • Xijia Tao, Shuai Zhong, Lei LI, Qi Liu, Lingpeng Kong
In this paper, we propose a novel jailbreaking attack against VLMs, aiming to bypass their safety barrier when a user inputs harmful instructions.
no code implementations • 4 Mar 2024 • Lei LI, Tianfang Zhang, Zhongyu Jiang, Cheng-Yen Yang, Jenq-Neng Hwang, Stefan Oehmcke, Dimitri Pierre Johannes Gominski, Fabian Gieseke, Christian Igel
We leverage the fusion of three-dimensional LiDAR measurements and 2D imagery to facilitate the accurate counting of trees.
1 code implementation • 1 Mar 2024 • Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, Lei LI, Sishuo Chen, Xu sun, Lu Hou
Motivated by these two problems, we propose the TempCompass benchmark, which introduces a diversity of temporal aspects and task formats.
no code implementations • 1 Mar 2024 • Lei LI, Yuqi Wang, Runxin Xu, Peiyi Wang, Xiachong Feng, Lingpeng Kong, Qi Liu
To fill this gap, we introduce Multimodal ArXiv, consisting of ArXivCap and ArXivQA, for enhancing LVLMs' scientific comprehension.
no code implementations • 28 Feb 2024 • Kexun Zhang, Yee Man Choi, Zhenqiao Song, Taiqi He, William Yang Wang, Lei LI
On the contrary, we observe that 2000 endangered languages, though without a large corpus, have a grammar book or a dictionary.
no code implementations • 22 Feb 2024 • Yuwei Yang, Siqi Ouyang, Xueyu Hu, Mingyue Zheng, Hao Zhou, Lei LI
We develop a novel 3D graph editing model to generate molecules using fragments, and pre-train this model on abundant 3D ligands for learning target-independent properties.
no code implementations • 19 Feb 2024 • Sameer Jain, Sedrick Scott Keh, Shova Chettri, Karun Dewan, Pablo Izquierdo, Johanna Prussman, Pooja Shreshtha, Cesar Suarez, Zheyuan Ryan Shi, Lei LI, Fei Fang
Environmental conservation organizations routinely monitor news content on conservation in protected areas to maintain situational awareness of developments that can have an environmental impact.
no code implementations • 18 Feb 2024 • Wenda Xu, Guanglei Zhu, Xuandong Zhao, Liangming Pan, Lei LI, William Yang Wang
Recent studies show that self-feedback improves large language models (LLMs) on certain tasks while worsening performance on others.
1 code implementation • 15 Feb 2024 • André V. Duarte, Xuandong Zhao, Arlindo L. Oliveira, Lei LI
We are motivated by the premise that a language model is likely to identify verbatim excerpts from its training text.
1 code implementation • 13 Feb 2024 • Yiyang Li, Lei LI, Dingxin Hu, Xueyi Hao, Marina Litvak, Natalia Vanetik, Yanquan Zhou
Improving factual consistency in abstractive summarization has been a focus of current research.
1 code implementation • 8 Feb 2024 • Xuandong Zhao, Lei LI, Yu-Xiang Wang
In this paper, we propose a new decoding method called Permute-and-Flip (PF) decoder.
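The PF decoder builds on the permute-and-flip mechanism from the differential privacy literature (McKenna & Sheldon, 2020). Below is a minimal sketch of that underlying sampler only, not the paper's full decoding method; the scores and epsilon value are made-up for illustration:

```python
import math
import random

def permute_and_flip(scores, epsilon, sensitivity=1.0, rng=random):
    """Sample an index via the permute-and-flip mechanism.

    Visits candidates in random order and accepts candidate i with
    probability exp(epsilon * (scores[i] - max_score) / (2 * sensitivity)).
    The maximizer is accepted with probability 1, so a pass always
    terminates, and lower-scoring candidates are exponentially less likely.
    """
    max_score = max(scores)
    order = list(range(len(scores)))
    rng.shuffle(order)
    while True:
        for i in order:
            p = math.exp(epsilon * (scores[i] - max_score) / (2.0 * sensitivity))
            if rng.random() <= p:
                return i

# With a very large epsilon, PF behaves like greedy argmax.
print(permute_and_flip([0.1, 5.0, 0.3], epsilon=100.0))
```

As epsilon shrinks, the acceptance probabilities flatten and the sampler becomes more exploratory, which is the knob trading off utility against randomness.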
1 code implementation • 7 Feb 2024 • Ziyang Wang, Jian-Qing Zheng, Yichi Zhang, Ge Cui, Lei LI
Mamba-UNet adopts a pure Visual Mamba (VMamba)-based encoder-decoder structure, infused with skip connections to preserve spatial information across different scales of the network.
no code implementations • 6 Feb 2024 • Qing Li, Zhihang Hu, YiXuan Wang, Lei LI, Yimin Fan, Irwin King, Le Song, Yu Li
Central to our focus is the application of FMs to specific biological problems, aiming to guide the research community in choosing appropriate FMs for their research needs.
no code implementations • 5 Feb 2024 • Fei Yuan, Chang Ma, Shuai Yuan, Qiushi Sun, Lei LI
We further theoretically prove that KS-Lottery can find the certified winning tickets in the embedding layer; fine-tuning on the found parameters is guaranteed to perform as well as full fine-tuning.
3 code implementations • 5 Feb 2024 • Yixin Ou, Ningyu Zhang, Honghao Gui, Ziwen Xu, Shuofei Qiao, Yida Xue, Runnan Fang, Kangwei Liu, Lei LI, Zhen Bi, Guozhou Zheng, Huajun Chen
In recent years, instruction tuning has gained increasing attention and emerged as a crucial technique to enhance the capabilities of Large Language Models (LLMs).
1 code implementation • 30 Jan 2024 • Xuandong Zhao, Xianjun Yang, Tianyu Pang, Chao Du, Lei LI, Yu-Xiang Wang, William Yang Wang
In this paper, we propose the weak-to-strong jailbreaking attack, an efficient method to attack aligned LLMs to produce harmful text.
no code implementations • 23 Jan 2024 • Mukai Li, Lei LI, Yuwei Yin, Masood Ahmed, Zhenguang Liu, Qi Liu
Additionally, we simply apply red-teaming alignment to LLaVA-v1.5 with Supervised Fine-tuning (SFT) using RTVLM; this bolsters the model's performance by 10% on the RTVLM test set and 13% on MM-Hal, with no noticeable decline on MM-Bench, surpassing other LLaVA-based models trained with regular alignment data.
no code implementations • 15 Jan 2024 • Qing Li, Lei LI, Yu Li
Central to our focus is the utilizing of language models and multimodal paradigms for medical question answering, aiming to guide the research community in selecting appropriate mechanisms for their specific medical research requirements.
1 code implementation • 12 Jan 2024 • Lei LI, Jianxun Lian, Xiao Zhou, Xing Xie
However, most existing retrieval models employ a single-round inference paradigm, which may not adequately capture the dynamic nature of user preferences and may get stuck in one area of the item space.
no code implementations • 28 Dec 2023 • Jiazhang Zheng, Lei LI, Qiuping Liao, Cheng Li, Li Li, Yangxing Liu
This paper proposes a lightweight network that outperforms existing state-of-the-art (SOTA) methods in low-light enhancement tasks while minimizing computation.
no code implementations • 23 Dec 2023 • Jingze Chen, Junfeng Yao, Qiqin Lin, Rongzhou Zhou, Lei LI
This paper introduces SSFlowNet, a semi-supervised approach for scene flow estimation, that utilizes a blend of labeled and unlabeled data, optimizing the balance between the cost of labeling and the precision of model training.
no code implementations • 17 Dec 2023 • Lei LI, Zhihui Xie, Mukai Li, Shunian Chen, Peiyi Wang, Liang Chen, Yazheng Yang, Benyou Wang, Lingpeng Kong
This paper explores preference distillation for large vision language models (LVLMs), improving their ability to generate helpful and faithful responses anchoring the visual context.
Ranked #18 on Visual Question Answering on MM-Vet
1 code implementation • 14 Dec 2023 • Peiyi Wang, Lei LI, Zhihong Shao, R. X. Xu, Damai Dai, Yifei Li, Deli Chen, Y. Wu, Zhifang Sui
In this paper, we present an innovative process-oriented math process reward model called Math-Shepherd, which assigns a reward score to each step of a math problem's solution.
Ranked #14 on Arithmetic Reasoning on GSM8K (using extra training data)
no code implementations • 30 Nov 2023 • Tianli Liao, Chenyang Zhao, Lei LI, Heling Cao
However, the effectiveness of seam-cutting usually depends on the images being roughly aligned, such that there exists a local region where a plausible seam can be found.
no code implementations • 29 Nov 2023 • Lei LI, Angela Dai
Given a natural language description and a coarse point location of the desired interaction in a 3D scene, we first leverage VLMs to imagine plausible 2D human interactions inpainted into multiple rendered views of the scene.
1 code implementation • 29 Nov 2023 • Shicheng Li, Lei LI, Shuhuai Ren, Yuanxin Liu, Yi Liu, Rundong Gao, Xu sun, Lu Hou
The ability to perceive how objects change over time is a crucial ingredient in human intelligence.
no code implementations • 24 Nov 2023 • Zhongyu Jiang, Wenhao Chai, Lei LI, Zhuoran Zhou, Cheng-Yen Yang, Jenq-Neng Hwang
In this paper, we propose UniHPE, a unified Human Pose Estimation pipeline, which aligns features from all three modalities, i.e., 2D human pose estimation, lifting-based and image-based 3D human pose estimation, in the same pipeline.
1 code implementation • 17 Nov 2023 • Zhuoran Zhou, Zhongyu Jiang, Wenhao Chai, Cheng-Yen Yang, Lei LI, Jenq-Neng Hwang
We further apply a guided diffusion model to domain adapt 3D adult pose to infant pose to supplement small datasets.
no code implementations • 15 Nov 2023 • Fei Yuan, Shuai Yuan, Zhiyong Wu, Lei LI
Large Language Models (LLMs), trained predominantly on extensive English data, often exhibit limitations when applied to other languages.
no code implementations • 15 Nov 2023 • Wenda Xu, Daniel Deutsch, Mara Finkelstein, Juraj Juraska, Biao Zhang, Zhongtao Liu, William Yang Wang, Lei LI, Markus Freitag
Recent large language models (LLMs) are leveraging human feedback to improve their generation quality.
no code implementations • 9 Nov 2023 • Lei LI, Alexander Liniger, Mario Millhaeusler, Vagia Tsiminaki, Yuanyou Li, Dengxin Dai
In this paper, we develop a novel knowledge distillation approach to shrink the performance gap between these two modalities.
1 code implementation • NeurIPS 2023 • Yuanxin Liu, Lei LI, Shuhuai Ren, Rundong Gao, Shicheng Li, Sishuo Chen, Xu sun, Lu Hou
The multi-aspect categorization of FETV enables fine-grained analysis of the metrics' reliability in different scenarios.
1 code implementation • 2 Nov 2023 • Fengyi Wu, Tianfang Zhang, Lei LI, Yian Huang, Zhenming Peng
Deep learning (DL) networks have achieved remarkable performance in infrared small target detection (ISTD).
no code implementations • 24 Oct 2023 • Lei LI
In light of this, we introduce CPSeg (Chain-of-Thought Language Prompting for Finer-grained Semantic Segmentation), an innovative framework designed to augment image segmentation performance by integrating a novel "Chain-of-Thought" process that harnesses the textual information associated with images.
1 code implementation • 10 Oct 2023 • Kexun Zhang, Hongqiao Chen, Lei LI, William Wang
Large language models (LLMs) have shown promising capabilities in using external tools to solve complex problems.
1 code implementation • 6 Oct 2023 • Zhenqiao Song, Yunlong Zhao, Wenxian Shi, Yang Yang, Lei LI
In this paper, we propose NAEPro, a model to jointly design protein sequence and structure based on automatically detected functional sites.
no code implementations • 5 Oct 2023 • Danqing Wang, Kevin Yang, Hanlin Zhu, Xiaomeng Yang, Andrew Cohen, Lei LI, Yuandong Tian
We further develop a personalized story evaluation model PERSE to infer reviewer preferences and provide a personalized evaluation.
no code implementations • 4 Oct 2023 • Zhenqiao Song, Yunlong Zhao, Yufei Song, Wenxian Shi, Yang Yang, Lei LI
Designing novel proteins with desired functions is crucial in biology and chemistry.
no code implementations • 2 Oct 2023 • Lei LI
The task of identifying and segmenting buildings within remote sensing imagery has perennially stood at the forefront of scholarly investigations.
1 code implementation • 2 Oct 2023 • Lei LI, Yekun Chai, Shuohuan Wang, Yu Sun, Hao Tian, Ningyu Zhang, Hua Wu
We validate our approach across a wide range of domains, incorporating seven distinct external tools.
no code implementations • 23 Sep 2023 • Lei LI
This paper proposes an innovative approach to Hierarchical Edge Aware 3D Point Cloud Learning (HEA-Net) that seeks to address the challenges of noise in point cloud data, and improve object recognition and segmentation by focusing on edge features.
no code implementations • 5 Sep 2023 • Peiyi Wang, Lei LI, Liang Chen, Feifan Song, Binghuai Lin, Yunbo Cao, Tianyu Liu, Zhifang Sui
To address this problem, we introduce an Alignment Fine-Tuning (AFT) paradigm, which involves three steps: 1) fine-tuning LLMs with COT training data; 2) generating multiple COT responses for each question and categorizing them into positive and negative ones based on whether they reach the correct answer; and 3) calibrating the scores that LLMs assign to positive and negative responses with a novel constraint alignment loss.
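Step 2 of this paradigm (splitting sampled chain-of-thought responses by answer correctness) can be sketched as follows; the answer-extraction convention and the sample responses here are illustrative stand-ins, not the paper's implementation:

```python
def extract_answer(response):
    # Stub: assume the final answer follows an "Answer:" marker in the
    # chain-of-thought response (a hypothetical formatting convention).
    return response.rsplit("Answer:", 1)[-1].strip()

def categorize_responses(responses, gold_answer):
    """Split sampled CoT responses into positive/negative by correctness."""
    positive, negative = [], []
    for r in responses:
        (positive if extract_answer(r) == gold_answer else negative).append(r)
    return positive, negative

samples = [
    "3 apples plus 4 apples. Answer: 7",
    "3 * 4 = 12. Answer: 12",
    "Add them: 3 + 4. Answer: 7",
]
pos, neg = categorize_responses(samples, gold_answer="7")
print(len(pos), len(neg))  # → 2 1
```

The resulting positive/negative sets are what step 3's constraint alignment loss would then calibrate against each other.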
no code implementations • 3 Sep 2023 • Lei LI, Yongfeng Zhang, Dugang Liu, Li Chen
Large language models (LLMs) have not only revolutionized the field of natural language processing (NLP) but also have the potential to reshape many other fields, e.g., recommender systems (RS).
2 code implementations • 9 Aug 2023 • Wenhao Zhu, Yunzhe Lv, Qingxiu Dong, Fei Yuan, Jingjing Xu, ShuJian Huang, Lingpeng Kong, Jiajun Chen, Lei LI
We start from targeting individual languages by performing cross-lingual instruction-tuning (CoIT) on LLaMA, i.e., tuning it with translation task data and cross-lingual general task data to obtain cross-lingual models (x-LLaMAs), and formulate underlying scaling laws to investigate the advantages of using scalable translation data.
no code implementations • 14 Jul 2023 • Marcel Beetz, Yilong Yang, Abhirup Banerjee, Lei LI, Vicente Grau
Myocardial infarction (MI) is one of the most prevalent cardiovascular diseases with associated clinical decision-making typically based on single-valued imaging biomarkers.
no code implementations • 10 Jul 2023 • Lei LI, Julia Camps, Zhinuo Wang, Abhirup Banerjee, Marcel Beetz, Blanca Rodriguez, Vicente Grau
In this work, we investigate the feasibility of inferring myocardial tissue properties from the electrocardiogram (ECG) within a CDT platform.
1 code implementation • 7 Jul 2023 • Zhongyu Jiang, Zhuoran Zhou, Lei LI, Wenhao Chai, Cheng-Yen Yang, Jenq-Neng Hwang
Learning-based methods have dominated the 3D human pose estimation (HPE) tasks with significantly better performance in most benchmarks than traditional optimization-based methods.
Ranked #10 on 3D Human Pose Estimation on 3DPW (PA-MPJPE metric)
4 code implementations • 30 Jun 2023 • Xuandong Zhao, Prabhanjan Ananth, Lei LI, Yu-Xiang Wang
We propose a robust and high-quality watermark method, Unigram-Watermark, by extending an existing approach with a simplified fixed grouping strategy.
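The fixed-grouping idea can be sketched on the detection side: with a single context-independent green/red split of the vocabulary, watermarked text over-represents green tokens, and a z-score flags it. This is a minimal illustration with a toy vocabulary and made-up parameters, not the paper's implementation:

```python
import math
import random

def green_list(vocab_size, gamma=0.5, seed=42):
    """Fixed (context-independent) green/red split of the vocabulary."""
    rng = random.Random(seed)
    ids = list(range(vocab_size))
    rng.shuffle(ids)
    return set(ids[: int(gamma * vocab_size)])

def detection_z_score(token_ids, vocab_size, gamma=0.5, seed=42):
    """z-score of the green-token count; large values suggest a watermark."""
    green = green_list(vocab_size, gamma, seed)
    hits = sum(1 for t in token_ids if t in green)
    n = len(token_ids)
    return (hits - gamma * n) / math.sqrt(n * gamma * (1 - gamma))

# A text consisting entirely of green tokens yields a high z-score.
vocab_size = 100
all_green = sorted(green_list(vocab_size))[:25]
print(round(detection_z_score(all_green, vocab_size), 2))  # → 5.0
```

Because the split is fixed rather than re-derived from each token's context, detection needs no access to the generating model, only the shared seed.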
1 code implementation • 21 Jun 2023 • Wentao Liu, Tong Tian, Lemeng Wang, Weijin Xu, Lei LI, Haoyuan Li, Wenyi Zhao, Siyu Tian, Xipeng Pan, Huihua Yang, Feng Gao, Yiming Deng, Ruisheng Su
In this paper, we introduce DIAS, a dataset specifically developed for IA segmentation in DSA sequences.
1 code implementation • 8 Jun 2023 • Xinhang Li, Yiying Yang, Zheng Yuan, Zhe Wang, Qinwen Wang, Chen Xu, Lei LI, Jianhua He, Lin Zhang
For the more challenging problem of pursuing multiple evading vehicles, these algorithms typically select a fixed target evading vehicle for the pursuing vehicles without considering the dynamic traffic situation, which significantly reduces the pursuit success rate.
no code implementations • 7 Jun 2023 • Lei LI, Yuwei Yin, Shicheng Li, Liang Chen, Peiyi Wang, Shuhuai Ren, Mukai Li, Yazheng Yang, Jingjing Xu, Xu sun, Lingpeng Kong, Qi Liu
To tackle this challenge and promote research in the vision-language field, we introduce the Multi-Modal, Multilingual Instruction Tuning (M$^3$IT) dataset, designed to optimize VLM alignment with human instructions.
1 code implementation • 2 Jun 2023 • Xuandong Zhao, Kexun Zhang, Zihao Su, Saastha Vasan, Ilya Grishchenko, Christopher Kruegel, Giovanni Vigna, Yu-Xiang Wang, Lei LI
However, if we do not require the watermarked image to look the same as the original one, watermarks that keep the image semantically similar can be an alternative defense against our attack.
1 code implementation • 29 May 2023 • Peiyi Wang, Lei LI, Liang Chen, Zefan Cai, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, Zhifang Sui
In this paper, we uncover a systematic bias in the evaluation paradigm of adopting large language models (LLMs), e.g., GPT-4, as a referee to score and compare the quality of responses generated by candidate models.
no code implementations • 28 May 2023 • Lei LI, Kai Fan, Lingyu Yang, Hongjia Li, Chun Yuan
Existing wisdom demonstrates the significance of syntactic knowledge for the improvement of neural machine translation models.
1 code implementation • 24 May 2023 • Heming Xia, Qingxiu Dong, Lei LI, Jingjing Xu, Tianyu Liu, Ziwei Qin, Zhifang Sui
Recently, Large Language Models (LLMs) have been serving as general-purpose interfaces, posing a significant demand for comprehensive visual knowledge.
1 code implementation • NeurIPS 2023 • Kexun Zhang, Danqing Wang, Jingtao Xia, William Yang Wang, Lei LI
To address these challenges, we propose ALGO, a framework that synthesizes Algorithmic programs with LLM-Generated Oracles to guide the generation and verify their correctness.
1 code implementation • 24 May 2023 • Siqi Ouyang, Lei LI
However, LLMs frequently fail in complex decision-making tasks due to the misalignment between the pre-trained knowledge in LLMs and the actual rules in the environment.
1 code implementation • 23 May 2023 • Wenda Xu, Danqing Wang, Liangming Pan, Zhenqiao Song, Markus Freitag, William Yang Wang, Lei LI
By harnessing both explicit human instruction and the implicit knowledge of GPT-4, we fine-tune a text evaluation metric based on LLaMA, producing both a score for generated text and a human readable diagnostic report.
1 code implementation • 23 May 2023 • Lean Wang, Lei LI, Damai Dai, Deli Chen, Hao Zhou, Fandong Meng, Jie zhou, Xu sun
In-context learning (ICL) emerges as a promising capability of large language models (LLMs) by providing them with demonstration examples to perform diverse tasks.
1 code implementation • 23 May 2023 • Lei LI, Jingjing Xu, Qingxiu Dong, Ce Zheng, Qi Liu, Lingpeng Kong, Xu sun
Language models (LMs) are gradually becoming general-purpose interfaces in the interactive and embodied world, where the understanding of physical concepts is an essential prerequisite.
1 code implementation • 23 May 2023 • Danqing Wang, Lei LI
In this paper, we propose Study Assistant for Large LAnguage Model (SALAM), a novel framework with an auxiliary agent to assist the main LLM in learning from mistakes through interactive cooperation.
no code implementations • 22 May 2023 • Bohong Wu, Fei Yuan, Hai Zhao, Lei LI, Jingjing Xu
Considering that encoder-based models have the advantage of efficient generation and self-correction abilities, this paper explores methods to endow multilingual understanding models with generation abilities, yielding a unified model.
2 code implementations • 22 May 2023 • Ce Zheng, Lei LI, Qingxiu Dong, Yuxuan Fan, Zhiyong Wu, Jingjing Xu, Baobao Chang
Inspired by in-context learning (ICL), a new paradigm based on demonstration contexts without parameter updating, we explore whether ICL can edit factual knowledge.
1 code implementation • 21 May 2023 • Yi Liu, Xiaohan Bi, Lei LI, Sishuo Chen, Wenkai Yang, Xu sun
However, as pre-trained language models (PLMs) continue to increase in size, the communication cost for transmitting parameters during synchronization has become a training speed bottleneck.
1 code implementation • 10 May 2023 • Jiangjie Chen, Wei Shi, Ziquan Fu, Sijie Cheng, Lei LI, Yanghua Xiao
Large language models (LLMs) have been widely studied for their ability to store and utilize positive knowledge.
no code implementations • 7 May 2023 • Yijun Wang, Changzhi Sun, Yuanbin Wu, Lei LI, Junchi Yan, Hao Zhou
Entity relation extraction consists of two sub-tasks: entity recognition and relation extraction.
1 code implementation • 30 Apr 2023 • Zhenqiao Song, Lei LI
How can we efficiently generate diverse and novel protein sequences with high fitness?
1 code implementation • 28 Apr 2023 • Yuhao Huang, Xin Yang, Lian Liu, Han Zhou, Ao Chang, Xinrui Zhou, Rusi Chen, Junxuan Yu, Jiongquan Chen, Chaoyu Chen, Sijing Liu, Haozhe Chi, Xindi Hu, Kejuan Yue, Lei LI, Vicente Grau, Deng-Ping Fan, Fajin Dong, Dong Ni
To fully validate SAM's performance on medical data, we collected and sorted 53 open-source datasets and built a large medical segmentation dataset with 18 modalities, 84 objects, 125 object-modality paired targets, 1050K 2D images, and 6033K masks.
1 code implementation • 18 Apr 2023 • Lei LI, Jing Chen, Bozhong Tian, Ningyu Zhang
Pre-trained Language Models (PLMs), as parametric-based eager learners, have become the de-facto choice for current paradigms of Natural Language Processing (NLP).
2 code implementations • 10 Apr 2023 • Wenhao Zhu, Hongyi Liu, Qingxiu Dong, Jingjing Xu, ShuJian Huang, Lingpeng Kong, Jiajun Chen, Lei LI
Large language models (LLMs) have demonstrated remarkable potential in handling multilingual machine translation (MMT).
no code implementations • 4 Apr 2023 • Lei LI, Julia Camps, Zhinuo Wang, Abhirup Banerjee, Blanca Rodriguez, Vicente Grau
However, the influence of various MI properties on the QRS is not intuitively predictable. In this work, we have systematically investigated the effects of 17 post-MI scenarios, varying the location, size, transmural extent, and conductive level of scarring and border zone area, on the forward-calculated QRS.
1 code implementation • CVPR 2023 • Souhaib Attaiki, Lei LI, Maks Ovsjanikov
We observe that with proper training, learned features can be useful in such tasks, but, crucially, only with an appropriate choice of the receptive field size.
no code implementations • 9 Feb 2023 • Pengfei Zhu, Chao Pang, Yekun Chai, Lei LI, Shuohuan Wang, Yu Sun, Hao Tian, Hua Wu
In response to this lacuna, this paper introduces a pioneering contribution in the form of a text-to-waveform music generation model, underpinned by the utilization of diffusion models.
no code implementations • 7 Feb 2023 • Wangbin Ding, Lei LI, Junyi Qiu, Sihan Wang, Liqin Huang, Yinyin Chen, Shan Yang, Xiahai Zhuang
For instance, balanced steady-state free precession cine sequences present clear anatomical boundaries, while late gadolinium enhancement and T2-weighted CMR sequences visualize myocardial scar and edema of MI, respectively.
2 code implementations • 6 Feb 2023 • Xuandong Zhao, Yu-Xiang Wang, Lei LI
We can then detect the secret message by probing a suspect model to tell if it is distilled from the protected one.
1 code implementation • 5 Feb 2023 • Kexun Zhang, Xianjun Yang, William Yang Wang, Lei LI
Diffusion models show promising generation capability for a variety of data.
no code implementations • 31 Jan 2023 • Lei LI, Tianfang Zhang, Zhongfeng Kang, Wenhan Zhang
This paper designs and implements a football detection system that uses multiple cameras to detect and capture targets in real-time matches.
2 code implementations • 25 Jan 2023 • Xiang Chen, Lei LI, Shuofei Qiao, Ningyu Zhang, Chuanqi Tan, Yong Jiang, Fei Huang, Huajun Chen
Previous typical solutions mainly obtain a NER model by pre-trained language models (PLMs) with data from a rich-resource domain and adapt it to the target domain.
no code implementations • 17 Jan 2023 • Lei LI, Jian-Guo Liu, Yuliang Wang
We consider the geometric ergodicity of the Stochastic Gradient Langevin Dynamics (SGLD) algorithm under nonconvexity settings.
no code implementations • 15 Jan 2023 • Lei LI, Tianfang Zhang, Stefan Oehmcke, Fabian Gieseke, Christian Igel
Building segmentation from aerial images and 3D laser scanning (LiDAR) is a challenging task due to the diversity of backgrounds, building textures, and image quality.
no code implementations • 15 Jan 2023 • Sihan Wang, Fuping Wu, Lei LI, Zheyao Gao, Byung-Woo Hong, Xiahai Zhuang
In this work, we propose an unsupervised framework for multi-class segmentation with both intensity and shape constraints.
no code implementations • 13 Jan 2023 • Kaiwen Wan, Lei LI, Dengqiang Jia, Shangqi Gao, Wei Qian, Yingzhi Wu, Huandong Lin, Xiongzheng Mu, Xin Gao, Sijia Wang, Fuping Wu, Xiahai Zhuang
This is particularly evident for learning-based multi-target landmark detection, where algorithms can be misled into primarily learning the variation of the background caused by the varying FOV, failing to detect the targets.
no code implementations • 9 Jan 2023 • Huanyu Bian, Zhilong Jia, Menghan Dou, Yuan Fang, Lei LI, Yiming Zhao, Hanchao Wang, Zhaohui Zhou, Wei Wang, Wenyu Zhu, Ye Li, Yang Yang, Weiming Zhang, Nenghai Yu, Zhaoyun Chen, Guoping Guo
Therefore, based on VQNet 1.0, we further propose VQNet 2.0, a new generation of unified classical and quantum machine learning framework that supports hybrid optimization.
1 code implementation • 31 Dec 2022 • Qingxiu Dong, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu sun, Jingjing Xu, Lei LI, Zhifang Sui
With the increasing ability of large language models (LLMs), in-context learning (ICL) has become a new paradigm for natural language processing (NLP), where LLMs make predictions only based on contexts augmented with a few examples.
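In-context learning requires no parameter updates: the model simply conditions on a prompt containing a few demonstrations. A minimal sketch of prompt construction (the template, instruction, and examples below are illustrative assumptions, not the paper's protocol):

```python
def build_icl_prompt(demonstrations, query, instruction=""):
    """Assemble a few-shot prompt: an optional instruction, then
    input/output demonstration pairs, then the unanswered query."""
    parts = [instruction] if instruction else []
    for x, y in demonstrations:
        parts.append(f"Input: {x}\nOutput: {y}")
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

# Hypothetical sentiment-classification demonstrations.
demos = [("great movie!", "positive"), ("a total bore.", "negative")]
prompt = build_icl_prompt(demos, "I loved every minute.",
                          instruction="Classify the sentiment.")
```

The resulting string would be fed to an LLM, whose completion after the final "Output:" is taken as the prediction.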
1 code implementation • 20 Dec 2022 • Fei Yuan, Yinquan Lu, Wenhao Zhu, Lingpeng Kong, Lei LI, Yu Qiao, Jingjing Xu
To address the needs of learning representations for all languages in a unified space, we propose a novel efficient training recipe, upon which we build an effective detachable model, Lego-MT.
no code implementations • 20 Dec 2022 • Jingjing Xu, Qingxiu Dong, Hongyi Liu, Lei LI
With increasing scale, large language models demonstrate both quantitative improvement and new qualitative capabilities, especially as zero-shot learners, like GPT-3.
1 code implementation • 19 Dec 2022 • Siqi Ouyang, Rong Ye, Lei LI
In this paper, we propose Word-Aligned COntrastive learning (WACO), a simple and effective method for extremely low-resource speech-to-text translation.
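The contrastive ingredient in such methods is typically an InfoNCE-style objective that pulls paired speech and text representations together while pushing mismatched pairs apart. A generic sketch in plain Python (not WACO's exact word-level formulation; the temperature value is a common but assumed default):

```python
import math

def info_nce(sim_matrix, temperature=0.07):
    """InfoNCE loss for a batch of paired embeddings.
    sim_matrix[i][j] is the similarity between speech i and text j;
    the diagonal entries are the positive (matched) pairs."""
    n = len(sim_matrix)
    total = 0.0
    for i in range(n):
        logits = [s / temperature for s in sim_matrix[i]]
        m = max(logits)  # subtract max for numerical stability
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        total += log_z - logits[i]  # -log softmax of the positive pair
    return total / n
```

When the diagonal similarities dominate, the loss approaches zero; for uninformative (uniform) similarities, it equals log of the batch size.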
1 code implementation • 19 Dec 2022 • Wenda Xu, Xian Qian, Mingxuan Wang, Lei LI, William Yang Wang
In this paper, we propose SESCORE2, a self-supervised approach for training a model-based metric for text generation evaluation.
no code implementations • 18 Dec 2022 • Tianfang Zhang, Lei LI, Christian Igel, Stefan Oehmcke, Fabian Gieseke, Zhenming Peng
In this work, we propose a DUN called low-rank CS network (LR-CSNet) for natural image CS.
no code implementations • 18 Dec 2022 • Lei LI, Tianfang Zhang, Zhongfeng Kang, Xikun Jiang
Fine-grained semantic segmentation of a person's face and head, including facial parts and head components, has progressed a great deal in recent years.
2 code implementations • 14 Dec 2022 • Xuandong Zhao, Siqi Ouyang, Zhiguo Yu, Ming Wu, Lei LI
How can we extend a pre-trained model to many language understanding tasks, without labeled or additional unlabeled data?
1 code implementation • 28 Nov 2022 • Danqing Wang, Zeyu Wen, Fei Ye, Lei LI, Hao Zhou
By sampling in the latent space, LSSAMP can simultaneously generate peptides with ideal sequence attributes and secondary structures.
1 code implementation • 22 Nov 2022 • Jiangjie Chen, Rui Xu, Wenxuan Zeng, Changzhi Sun, Lei LI, Yanghua Xiao
Given a possibly false claim sentence, how can we automatically correct it with minimal editing?
2 code implementations • 14 Nov 2022 • Lei LI, Xiang Chen, Shuofei Qiao, Feiyu Xiong, Huajun Chen, Ningyu Zhang
Multimodal relation extraction is an essential task for knowledge graph construction.
no code implementations • 6 Nov 2022 • Junyi Qiu, Lei LI, Sihan Wang, Ke Zhang, Yinyin Chen, Shan Yang, Xiahai Zhuang
We therefore conducted extensive experiments to investigate the performance of the proposed method in dealing with such complex combinations of different CMR sequences.
1 code implementation • 2 Nov 2022 • Lean Wang, Lei LI, Xu Sun
Knowledge distillation (KD) is an effective framework to transfer knowledge from a large-scale teacher to a compact yet well-performing student.
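For background, the standard Hinton-style distillation loss is the KL divergence between temperature-softened teacher and student distributions, scaled by the squared temperature. A generic sketch (the specific variant this paper proposes may differ):

```python
import math

def softmax(logits, t=1.0):
    """Temperature-softened softmax, computed stably."""
    m = max(logits)
    exps = [math.exp((l - m) / t) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in standard knowledge distillation."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl
```

The loss is zero when student and teacher agree and strictly positive otherwise; in practice it is combined with the usual cross-entropy on hard labels.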
1 code implementation • 12 Oct 2022 • Lei LI, Nicolas Donati, Maks Ovsjanikov
Our approach is not only accurate with near-isometric input, for which a high spectral resolution is typically preferred, but also robust and able to produce reasonable matching even in the presence of significant non-isometric distortion, which poses great challenges to existing methods.
1 code implementation • 11 Oct 2022 • Lei LI, Yankai Lin, Xuancheng Ren, Guangxiang Zhao, Peng Li, Jie Zhou, Xu Sun
We then design a Model Uncertainty-aware Knowledge Integration (MUKI) framework to recover the golden supervision for the student.
1 code implementation • 10 Oct 2022 • Wenda Xu, YiLin Tuan, Yujie Lu, Michael Saxon, Lei LI, William Yang Wang
Is it possible to build a general and automatic natural language generation (NLG) evaluation metric?
1 code implementation • 7 Oct 2022 • Qingxiu Dong, Damai Dai, YiFan Song, Jingjing Xu, Zhifang Sui, Lei LI
However, we find that facts stored in the PLMs are not always correct.
1 code implementation • 7 Oct 2022 • Jiangtao Feng, Yi Zhou, Jun Zhang, Xian Qian, Liwei Wu, Zhexi Zhang, Yanming Liu, Mingxuan Wang, Lei LI, Hao Zhou
PARAGEN is a PyTorch-based NLP toolkit for further development on parallel generation.
1 code implementation • 7 Oct 2022 • Xuandong Zhao, Lei LI, Yu-Xiang Wang
We prove that a protected model still retains the original accuracy within a certain bound.
no code implementations • 6 Oct 2022 • Meng Yuan, Ye Wang, Lei LI, Tianyou Chai, Wei Tech Ang
Electric-powered wheelchairs play an important role in providing accessibility for people with mobility impairments.
no code implementations • 6 Oct 2022 • Fei Jie, Chunpai Wang, Feng Chen, Lei LI, Xindong Wu
We propose a generalized framework for block-structured nonconvex optimization, which can be applied to structured subgraph detection in interdependent networks, such as multi-layer networks, temporal networks, networks of networks, and many others.
1 code implementation • 6 Oct 2022 • Yiyang Li, Lei LI, Marina Litvak, Natalia Vanetik, Dingxin Hu, Yuze Li, Yanquan Zhou
The issue of factual consistency in abstractive summarization has received extensive attention in recent years, and the evaluation of factual consistency between summary and document has become an important and urgent task.
2 code implementations • 1 Oct 2022 • Ningyu Zhang, Lei LI, Xiang Chen, Xiaozhuan Liang, Shumin Deng, Huajun Chen
Analogical reasoning is fundamental to human cognition and holds an important place in various fields.
1 code implementation • 16 Sep 2022 • Lei LI, Souhaib Attaiki, Maks Ovsjanikov
In this work, we present a novel learning-based framework that combines the local accuracy of contrastive learning with the global consistency of geometric approaches, for robust non-rigid matching.
2 code implementations • 15 Sep 2022 • Gongping Chen, Lei LI, Jianxun Zhang, Yu Dai
However, variable tumor morphology, blurred boundaries, and similar intensity distributions pose challenges for the accurate segmentation of breast tumors.
1 code implementation • 7 Sep 2022 • Lei LI, Zhizheng Liu, Weining Ren, Liudi Yang, Fangjinhua Wang, Marc Pollefeys, Songyou Peng
3D textured shape recovery from partial scans is crucial for many real-world applications.
no code implementations • 26 Aug 2022 • Lei LI, Wangbin Ding, Liqun Huang, Xiahai Zhuang, Vicente Grau
Multi-modality cardiac imaging plays a key role in the management of patients with cardiovascular diseases.
no code implementations • 25 Aug 2022 • Yang Jing, Jiaheng Chen, Lei LI, Jianfeng Lu
In this paper, we develop a deep learning framework to compute the geodesics under the spherical WFR metric, and the learned geodesics can be adopted to generate weighted samples.
3 code implementations • 23 Aug 2022 • Ren Yang, Radu Timofte, Qi Zhang, Lin Zhang, Fanglong Liu, Dongliang He, Fu Li, He Zheng, Weihang Yuan, Pavel Ostyakov, Dmitry Vyal, Magauiya Zhussip, Xueyi Zou, Youliang Yan, Lei LI, Jingzhu Tang, Ming Chen, Shijie Zhao, Yu Zhu, Xiaoran Qin, Chenghua Li, Cong Leng, Jian Cheng, Claudio Rota, Marco Buzzelli, Simone Bianco, Raimondo Schettini, Dafeng Zhang, Feiyu Huang, Shizhuo Liu, Xiaobing Wang, Zhezhu Jin, Bingchen Li, Xin Li, Mingxi Li, Ding Liu, Wenbin Zou, Peijie Dong, Tian Ye, Yunchen Zhang, Ming Tan, Xin Niu, Mustafa Ayazoglu, Marcos Conde, Ui-Jin Choi, Zhuang Jia, Tianyu Xu, Yijian Zhang, Mao Ye, Dengyan Luo, Xiaofeng Pan, Liuhan Peng
The homepage of this challenge is at https://github.com/RenYang-home/AIM22_CompressSR.
no code implementations • 8 Aug 2022 • Lei LI, Julia Camps, Abhirup Banerjee, Marcel Beetz, Blanca Rodriguez, Vicente Grau
Cardiac digital twins can provide non-invasive characterizations of cardiac functions for individual patients, and therefore are promising for the patient-specific diagnosis and therapy stratification.
1 code implementation • 4 Aug 2022 • Lei LI, Zhiyuan Zhang, Ruihan Bao, Keiko Harimoto, Xu Sun
Traditional knowledge distillation in classification problems transfers the knowledge via class correlations in the soft label produced by teacher models, which are not available in regression problems like stock trading volume prediction.
no code implementations • 19 Jul 2022 • Lei LI, Yuliang Wang
We establish a sharp uniform-in-time error estimate for the Stochastic Gradient Langevin Dynamics (SGLD), which is a popular sampling algorithm.
no code implementations • 11 Jul 2022 • Lei LI, Yuliang Wang
The main technique is to establish the exponential decay rates of the derivatives of the solution to the backward Kolmogorov equation.
1 code implementation • IWSLT (ACL) 2022 • Siqi Ouyang, Rong Ye, Lei LI
Training speech translation (ST) models requires large and high-quality datasets.
no code implementations • 13 Jun 2022 • Fei Huang, Tianhua Tao, Hao Zhou, Lei LI, Minlie Huang
Non-autoregressive Transformer (NAT) is a family of text generation models, which aims to reduce the decoding latency by predicting whole sentences in parallel.
1 code implementation • 10 Jun 2022 • Zheyao Gao, Lei LI, Fuping Wu, Sihan Wang, Xiahai Zhuang
In this work, we propose a new framework of distributed learning that bridges the gap between two groups, and improves the performance for both generic and local data.
1 code implementation • 4 Jun 2022 • Shuhuai Ren, Lei LI, Xuancheng Ren, Guangxiang Zhao, Xu Sun
However, evaluating the openness of CLIP-like models is challenging, as the models are open to arbitrary vocabulary in theory, but their accuracy varies in practice.
2 code implementations • 29 May 2022 • Xiang Chen, Lei LI, Ningyu Zhang, Xiaozhuan Liang, Shumin Deng, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen
Specifically, vanilla prompt learning may struggle to utilize atypical instances by rote during fully-supervised training or overfit shallow patterns with low-shot data.
1 code implementation • ICLR 2022 • Huiyun Yang, Huadong Chen, Hao Zhou, Lei LI
Based on large-scale pre-trained multilingual representations, recent cross-lingual transfer methods have achieved impressive transfer performances.
1 code implementation • 7 May 2022 • Xiang Chen, Ningyu Zhang, Lei LI, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen
To deal with these issues, we propose a novel Hierarchical Visual Prefix fusion NeTwork (HVPNeT) for visual-enhanced entity and relation extraction, aiming to achieve more effective and robust performance.
1 code implementation • NAACL 2022 • Rong Ye, Mingxuan Wang, Lei LI
Learning similar representations for semantically similar speech and text is important for speech translation.
1 code implementation • NAACL 2022 • Xuandong Zhao, Lei LI, Yu-Xiang Wang
Large language models are shown to memorize privacy information such as social security numbers in training data.
1 code implementation • 4 May 2022 • Xiang Chen, Lei LI, Ningyu Zhang, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen
Note that the previous parametric learning paradigm can be viewed as memorization, treating training data as a book and inference as a closed-book test.
1 code implementation • 4 May 2022 • Xiang Chen, Ningyu Zhang, Lei LI, Shumin Deng, Chuanqi Tan, Changliang Xu, Fei Huang, Luo Si, Huajun Chen
Since most MKGs are far from complete, extensive knowledge graph completion studies have been proposed focusing on the multimodal entity, relation extraction and link prediction.
1 code implementation • 30 Apr 2022 • Chengyu Wang, Minghui Qiu, Chen Shi, Taolin Zhang, Tingting Liu, Lei LI, Jianing Wang, Ming Wang, Jun Huang, Wei Lin
The success of Pre-Trained Models (PTMs) has reshaped the development of Natural Language Processing (NLP).
1 code implementation • 12 Apr 2022 • Yunfei Li, Tao Kong, Lei LI, Yi Wu
Can a robot autonomously learn to design and construct a bridge from varying-sized blocks without a blueprint?
1 code implementation • 10 Apr 2022 • Xinhang Li, Zihao Li, Nan Yang, Zheng Yuan, Qinwen Wang, Yiying Yang, Yupeng Huang, Xuri Song, Lei LI, Lin Zhang
The expansion of renewable energy could help realize the goals of peaking carbon dioxide emissions and achieving carbon neutrality.
1 code implementation • ACL 2022 • Zhiyi Fu, Wangchunshu Zhou, Jingjing Xu, Hao Zhou, Lei LI
How do masked language models (MLMs) such as BERT learn contextual representations?
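The MLM pre-training objective this question concerns corrupts inputs before prediction. BERT's standard 80/10/10 masking scheme can be sketched as follows (a generic illustration of the objective, not this paper's analysis code):

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, vocab, mask_prob=0.15, rng=random):
    """Select ~15% of positions; of those, replace 80% with [MASK],
    10% with a random vocabulary token, and keep 10% unchanged.
    Returns (inputs, labels), where labels is None at unselected
    positions and the original token at selected ones."""
    inputs, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            labels.append(tok)  # the model must predict the original token
            r = rng.random()
            if r < 0.8:
                inputs.append(MASK)
            elif r < 0.9:
                inputs.append(rng.choice(vocab))
            else:
                inputs.append(tok)
        else:
            labels.append(None)
            inputs.append(tok)
    return inputs, labels
```

The cross-entropy loss is computed only at positions where labels is not None, which is what forces the encoder to build contextual representations.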
1 code implementation • 5 Apr 2022 • Yu Bao, Hao Zhou, ShuJian Huang, Dongqi Wang, Lihua Qian, Xinyu Dai, Jiajun Chen, Lei LI
Recently, parallel text generation has received widespread attention due to its success in generation efficiency.
1 code implementation • ACL 2022 • Qingkai Fang, Rong Ye, Lei LI, Yang Feng, Mingxuan Wang
How to learn a better speech representation for end-to-end speech-to-text translation (ST) with limited labeled data?
no code implementations • Findings (ACL) 2022 • Jiangjie Chen, Rui Xu, Ziquan Fu, Wei Shi, Zhongqiao Li, Xinbo Zhang, Changzhi Sun, Lei LI, Yanghua Xiao, Hao Zhou
Holding the belief that models capable of reasoning should be right for the right reasons, we propose a first-of-its-kind Explainable Knowledge-intensive Analogical Reasoning benchmark (E-KAR).
1 code implementation • Findings (ACL) 2022 • Xuandong Zhao, Zhiguo Yu, Ming Wu, Lei LI
How to learn highly compact yet effective sentence representation?
2 code implementations • 4 Mar 2022 • Bing Zou, Yubo Zheng, Mu Shen, Yingying Luo, Lei LI, Lin Zhang
The hardware and software of commonly used EEG acquisition systems are usually closed-source.
1 code implementation • 1 Mar 2022 • Zheng Yuan, Tianhao Wu, Qinwen Wang, Yiying Yang, Lei LI, Lin Zhang
Although there are some achievements in the field of MVP in the open space environment, the urban area brings complicated road structures and restricted moving spaces as challenges to the resolution of MVP games.
no code implementations • 28 Feb 2022 • Daniel Gao, Yantao Jia, Lei LI, Chengzhen Fu, Zhicheng Dou, Hao Jiang, Xinyu Zhang, Lei Chen, Zhao Cao
However, to figure out whether PLMs can be reliable knowledge sources and used as alternative knowledge bases (KBs), we need to further explore some critical features of PLMs.
1 code implementation • 28 Feb 2022 • Tianyun Yang, Ziyao Huang, Juan Cao, Lei LI, Xirong Li
With the rapid progress of generation technology, it has become necessary to attribute the origin of fake images.
1 code implementation • 15 Feb 2022 • Lei LI, Yongfeng Zhang, Li Chen
In the latter case, ID vectors are randomly initialized but the model is trained in advance on large corpora, so they are actually in different learning stages.
1 code implementation • 4 Feb 2022 • Wangbin Ding, Lei LI, Xiahai Zhuang, Liqin Huang
For the label fusion, we design a similarity estimation network (SimNet), which estimates the fusion weight of each atlas by measuring its similarity to the target image.
1 code implementation • 14 Jan 2022 • Kai-Ni Wang, Xin Yang, Juzheng Miao, Lei LI, Jing Yao, Ping Zhou, Wufeng Xue, Guang-Quan Zhou, Xiahai Zhuang, Dong Ni
Extensive experimental results on a publicly available dataset from Myocardial pathology segmentation combining multi-sequence CMR (MyoPS 2020) demonstrate our method can achieve promising performance compared with other state-of-the-art methods.
no code implementations • 10 Jan 2022 • Lei LI, Fuping Wu, Sihan Wang, Xinzhe Luo, Carlos Martin-Isla, Shuwei Zhai, Jianpeng Zhang, Yanfei Liu, Zhen Zhang, Markus J. Ankenbrand, Haochuan Jiang, Xiaoran Zhang, Linhong Wang, Tewodros Weldebirhan Arega, Elif Altunok, Zhou Zhao, Feiyan Li, Jun Ma, Xiaoping Yang, Elodie Puybareau, Ilkay Oksuz, Stephanie Bricq, Weisheng Li, Kumaradevan Punithakumar, Sotirios A. Tsaftaris, Laura M. Schreiber, Mingjing Yang, Guocai Liu, Yong Xia, Guotai Wang, Sergio Escalera, Xiahai Zhuang
Assessment of myocardial viability is essential in diagnosis and treatment management of patients suffering from myocardial infarction, and classification of pathology on myocardium is the key to this assessment.
1 code implementation • 10 Jan 2022 • Ningyu Zhang, Xin Xu, Liankuan Tao, Haiyang Yu, Hongbin Ye, Shuofei Qiao, Xin Xie, Xiang Chen, Zhoubo Li, Lei LI, Xiaozhuan Liang, Yunzhi Yao, Shumin Deng, Peng Wang, Wen Zhang, Zhenru Zhang, Chuanqi Tan, Qiang Chen, Feiyu Xiong, Fei Huang, Guozhou Zheng, Huajun Chen
We present an open-source and extensible knowledge extraction toolkit DeepKE, supporting complicated low-resource, document-level and multimodal scenarios in the knowledge base population.
no code implementations • 21 Dec 2021 • Stefan Oehmcke, Lei LI, Katerina Trepekli, Jaime Revenga, Thomas Nord-Larsen, Fabian Gieseke, Christian Igel
Quantification of forest biomass stocks and their dynamics is important for implementing effective climate change mitigation measures.
no code implementations • 14 Dec 2021 • Lei LI, Yankai Lin, Xuancheng Ren, Guangxiang Zhao, Peng Li, Jie Zhou, Xu Sun
As many fine-tuned pre-trained language models (PLMs) with promising performance are generously released, investigating better ways to reuse these models is vital, as it can greatly reduce the retraining computational cost and the potential environmental side-effects.
1 code implementation • 10 Dec 2021 • Jiangjie Chen, Chun Gan, Sijie Cheng, Hao Zhou, Yanghua Xiao, Lei LI
We also propose a new metric to alleviate the shortcomings of current automatic metrics and better evaluate the trade-off.
no code implementations • 23 Nov 2021 • Lei LI, Kai Fan, Chun Yuan
Scene text detection is still a challenging task, as there may be extremely small or low-resolution strokes and closely spaced or arbitrarily shaped text.
1 code implementation • EMNLP 2021 • Dongyu Ru, Changzhi Sun, Jiangtao Feng, Lin Qiu, Hao Zhou, Weinan Zhang, Yong Yu, Lei LI
LogiRE treats logic rules as latent variables and consists of two modules: a rule generator and a relation extractor.
Ranked #21 on Relation Extraction on DocRED
no code implementations • 8 Nov 2021 • Lei LI, Fuping Wu, Sihang Wang, Xiahai Zhuang
Accurate cardiac computing, analysis and modeling from multi-modality images are important for the diagnosis and treatment of cardiac disease.
no code implementations • 8 Nov 2021 • Jingjing Xu, Wangchunshu Zhou, Zhiyi Fu, Hao Zhou, Lei LI
In recent years, larger and deeper models are springing up and continuously pushing state-of-the-art (SOTA) results across various fields like natural language processing (NLP) and computer vision (CV).
1 code implementation • 30 Oct 2021 • Jiasong Wu, Qingchun Li, Guanyu Yang, Lei LI, Lotfi Senhadji, Huazhong Shu
The first module adopts a random audio sub-sampler on each noisy audio to generate training pairs.
no code implementations • 21 Oct 2021 • Danqing Wang, Jiaze Chen, Xianze Wu, Hao Zhou, Lei LI
In this paper, we present a large-scale Chinese news summarization dataset CNewSum, which consists of 304,307 documents and human-written summaries for the news feed.
2 code implementations • 14 Oct 2021 • Chenyang Huang, Hao Zhou, Osmar R. Zaïane, Lili Mou, Lei LI
How do we perform efficient inference while retaining high translation quality?
1 code implementation • 13 Oct 2021 • Guangxiang Zhao, Wenkai Yang, Xuancheng Ren, Lei LI, Yunfang Wu, Xu Sun
The conventional wisdom behind learning deep classification models is to focus on badly classified examples and ignore well-classified examples that are far from the decision boundary.
1 code implementation • 12 Oct 2021 • Xiaohui Wang, Yang Wei, Ying Xiong, Guyue Huang, Xian Qian, Yufei Ding, Mingxuan Wang, Lei LI
In this paper, we present LightSeq2, a system to accelerate training for a general family of Transformer models on GPUs.
no code implementations • 29 Sep 2021 • Xinbo Zhang, Changzhi Sun, Yue Zhang, Lei LI, Hao Zhou
Logical reasoning over natural text is an important capability towards human level intelligence.
no code implementations • 29 Sep 2021 • Danqing Wang, Zeyu Wen, Lei LI, Hao Zhou
By sampling in the latent secondary structure space, we can generate peptides with ideal amino acids and secondary structures at the same time.
no code implementations • 29 Sep 2021 • Yuwei Yang, Siqi Ouyang, Meihua Dang, Mingyue Zheng, Lei LI, Hao Zhou
Deep learning models have been widely used in automatic drug design.
no code implementations • ICLR 2022 • Zhenqiao Song, Hao Zhou, Lihua Qian, Jingjing Xu, Shanbo Cheng, Mingxuan Wang, Lei LI
Multilingual machine translation aims to develop a single model for multiple language directions.
no code implementations • WMT (EMNLP) 2021 • Lihua Qian, Yi Zhou, Zaixiang Zheng, Yaoming Zhu, Zehui Lin, Jiangtao Feng, Shanbo Cheng, Lei LI, Mingxuan Wang, Hao Zhou
This paper describes the Volctrans' submission to the WMT21 news translation shared task for German->English translation.
1 code implementation • EMNLP 2021 • Lei LI, Yankai Lin, Shuhuai Ren, Peng Li, Jie Zhou, Xu Sun
Knowledge distillation (KD) has been proven effective for compressing large-scale pre-trained language models.
2 code implementations • EMNLP 2021 • Qingnan Jiang, Mingxuan Wang, Jun Cao, Shanbo Cheng, ShuJian Huang, Lei LI
How to effectively adapt neural machine translation (NMT) models according to emerging cases without retraining?
1 code implementation • ACL 2022 • Qianqian Dong, Yaoming Zhu, Mingxuan Wang, Lei LI
Given a usually long speech sequence, we develop an efficient monotonic segmentation module inside an encoder-decoder model to accumulate acoustic information incrementally and detect proper speech unit boundaries for the input in the speech translation task.
1 code implementation • Findings (EMNLP) 2021 • Zewei Sun, Mingxuan Wang, Lei LI
Can pre-trained BERT for one language and GPT for another be glued together to translate texts?
1 code implementation • 5 Sep 2021 • Lei LI, Wangbin Ding, Liqun Huang, Xiahai Zhuang
In this work, we propose an automatic RV segmentation framework, where the information from long-axis (LA) views is utilized to assist the segmentation of short-axis (SA) views via information transition.
1 code implementation • EMNLP 2021 • Shuhuai Ren, Jinchao Zhang, Lei LI, Xu Sun, Jie Zhou
Data augmentation aims to enrich training samples for alleviating the overfitting issue in low-resource or class-imbalanced situations.
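One of the simplest text-augmentation baselines is random token deletion; a minimal sketch (an illustrative baseline only, not necessarily the method this paper proposes):

```python
import random

def random_deletion(tokens, p=0.1, rng=random):
    """Drop each token independently with probability p, keeping at
    least one token so the augmented sample is never empty."""
    kept = [t for t in tokens if rng.random() > p]
    return kept if kept else [rng.choice(tokens)]
```

Applying such perturbations to training sentences yields additional, label-preserving samples, which is what alleviates overfitting in low-resource or class-imbalanced settings.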
1 code implementation • COLING 2022 • Xiang Chen, Lei LI, Shumin Deng, Chuanqi Tan, Changliang Xu, Fei Huang, Luo Si, Huajun Chen, Ningyu Zhang
Most NER methods rely on extensive labeled data for model training, which struggles in the low-resource scenarios with limited training data.
no code implementations • Findings (EMNLP) 2021 • Tao Wang, Chengqi Zhao, Mingxuan Wang, Lei LI, Hang Li, Deyi Xiong
This paper presents Self-correcting Encoding (Secoco), a framework that effectively deals with input noise for robust neural machine translation by introducing self-correcting predictors.
no code implementations • 26 Aug 2021 • Ruihang Chu, Yukang Chen, Tao Kong, Lu Qi, Lei LI
Separating 3D point clouds into individual instances is an important task for 3D vision.
1 code implementation • Findings (NAACL) 2022 • Yiran Chen, Zhenqiao Song, Xianze Wu, Danqing Wang, Jingjing Xu, Jiaze Chen, Hao Zhou, Lei LI
We introduce MTG, a new benchmark suite for training and evaluating multilingual text generation.
1 code implementation • 5 Aug 2021 • Lei LI, Hongbo Fu, Maks Ovsjanikov
Instead of using a predefined fixed-size local support in voxelization, we propose to learn the optimal support in a data-driven manner.
no code implementations • 5 Aug 2021 • Yiming Li, Tao Kong, Ruihang Chu, Yifeng Li, Peng Wang, Lei LI
In a unified framework, we jointly predict the feasible 6-DoF grasp poses, instance semantic segmentation, and collision information.
no code implementations • 5 Aug 2021 • Yunfei Li, Tao Kong, Lei LI, Yifeng Li, Yi Wu
In this task, the robot needs to first design a feasible bridge architecture for arbitrarily wide cliffs and then manipulate the blocks reliably to construct a stable bridge according to the proposed design.