1 code implementation • EMNLP 2020 • Jiaqi Guo, Qian Liu, Jian-Guang Lou, Zhenwen Li, Xueqing Liu, Tao Xie, Ting Liu
Thus, the impact of meaning representation on semantic parsing is less understood.
no code implementations • EMNLP 2020 • Yuntao Li, Bei Chen, Qian Liu, Yan Gao, Jian-Guang Lou, Yan Zhang, Dongmei Zhang
In Natural Language Interfaces to Databases systems, the text-to-SQL technique allows users to query databases by using natural language questions.
no code implementations • LREC 2022 • Erik Cambria, Qian Liu, Sergio Decherchi, Frank Xing, Kenneth Kwok
In recent years, AI research has demonstrated enormous potential for the benefit of humanity and society.
1 code implementation • COLING 2022 • Libo Qin, Qiguang Chen, Tianbao Xie, Qian Liu, Shijue Huang, Wanxiang Che, Zhou Yu
Consistency identification in task-oriented dialog (CI-ToD) usually consists of three subtasks, aiming to identify inconsistency between current system response and current user response, dialog history and the corresponding knowledge base.
1 code implementation • 28 May 2024 • Jundong Xu, Hao Fei, Liangming Pan, Qian Liu, Mong-Li Lee, Wynne Hsu
Technically, building upon an LLM, SymbCoT 1) first translates the natural language context into the symbolic format, and then 2) derives a step-by-step plan to solve the problem with symbolic logical rules, 3) followed by a verifier to check the translation and reasoning chain.
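As a rough illustration, the three-stage structure described above can be sketched as follows. All names here are hypothetical stand-ins, not the authors' code; `call_llm` is a placeholder for an actual LLM call:

```python
# Hypothetical sketch of the SymbCoT three-stage pipeline described above.
# `call_llm` is a stand-in for any LLM completion function, not a real API.

def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would query an actual LLM.
    return f"<llm output for: {prompt[:30]}...>"

def symbcot(context: str, question: str) -> str:
    # 1) Translate the natural-language context into a symbolic format
    #    (e.g. first-order logic clauses).
    symbolic = call_llm(f"Translate to symbolic logic:\n{context}")
    # 2) Derive a step-by-step plan using symbolic logical rules.
    reasoning = call_llm(
        f"Solve step by step with logical rules:\n{symbolic}\nQ: {question}"
    )
    # 3) Verify both the translation and the reasoning chain; revise if
    #    the verifier rejects them.
    verdict = call_llm(f"Check this translation and reasoning:\n{symbolic}\n{reasoning}")
    return reasoning if "valid" in verdict.lower() else call_llm(f"Revise:\n{reasoning}")
```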
no code implementations • 2 May 2024 • Dongfu Jiang, Xuan He, Huaye Zeng, Cong Wei, Max Ku, Qian Liu, Wenhu Chen
We further evaluate Mantis on single-image benchmarks and demonstrate that Mantis also maintains a strong single-image performance on par with CogVLM and Emu2.
1 code implementation • 20 Apr 2024 • Zekai Li, Yanxia Qin, Qian Liu, Min-Yen Kan
We propose Iterative Factuality Refining on Informative Scientific Question-Answering (ISQA) feedback\footnote{Code is available at \url{https://github.com/lizekai-richard/isqa}}, a method following human learning theories that employs model-generated feedback consisting of both positive and negative information.
2 code implementations • 4 Apr 2024 • Longxu Dou, Qian Liu, Guangtao Zeng, Jia Guo, Jiahui Zhou, Wei Lu, Min Lin
We present Sailor, a family of open language models ranging from 0.5B to 7B parameters, tailored for South-East Asian (SEA) languages.
1 code implementation • 12 Mar 2024 • Tongyao Zhu, Qian Liu, Liang Pang, Zhengbao Jiang, Min-Yen Kan, Min Lin
Through carefully-designed synthetic tasks, covering the scenarios of full recitation, selective recitation and grounded question answering, we reveal that LMs manage to sequentially access their memory while encountering challenges in randomly accessing memorized content.
no code implementations • 29 Feb 2024 • Anton Lozhkov, Raymond Li, Loubna Ben Allal, Federico Cassano, Joel Lamy-Poirier, Nouamane Tazi, Ao Tang, Dmytro Pykhtar, Jiawei Liu, Yuxiang Wei, Tianyang Liu, Max Tian, Denis Kocetkov, Arthur Zucker, Younes Belkada, Zijian Wang, Qian Liu, Dmitry Abulkhanov, Indraneil Paul, Zhuang Li, Wen-Ding Li, Megan Risdal, Jia Li, Jian Zhu, Terry Yue Zhuo, Evgenii Zheltonozhskii, Nii Osae Osae Dade, Wenhao Yu, Lucas Krauß, Naman Jain, Yixuan Su, Xuanli He, Manan Dey, Edoardo Abati, Yekun Chai, Niklas Muennighoff, Xiangru Tang, Muhtasham Oblokulov, Christopher Akiki, Marc Marone, Chenghao Mou, Mayank Mishra, Alex Gu, Binyuan Hui, Tri Dao, Armel Zebaze, Olivier Dehaene, Nicolas Patry, Canwen Xu, Julian McAuley, Han Hu, Torsten Scholak, Sebastien Paquet, Jennifer Robinson, Carolyn Jane Anderson, Nicolas Chapados, Mostofa Patwary, Nima Tajbakhsh, Yacine Jernite, Carlos Muñoz Ferrandis, Lingming Zhang, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, Harm de Vries
Our large model, StarCoder2-15B, significantly outperforms other models of comparable size.
Ranked #25 on Code Generation on MBPP
1 code implementation • 21 Feb 2024 • Zhaorui Yang, Tianyu Pang, Haozhe Feng, Han Wang, Wei Chen, Minfeng Zhu, Qian Liu
The surge in Large Language Models (LLMs) has revolutionized natural language processing, but fine-tuning them for specific tasks often encounters challenges in balancing performance and preserving general instruction-following abilities.
no code implementations • 19 Feb 2024 • Tianlin Li, XiaoYu Zhang, Chao Du, Tianyu Pang, Qian Liu, Qing Guo, Chao Shen, Yang Liu
Building on this insight and observation, we develop FairThinking, a pipeline designed to automatically generate roles that enable LLMs to articulate diverse perspectives for fair expressions.
no code implementations • 19 Feb 2024 • Tianlin Li, Qian Liu, Tianyu Pang, Chao Du, Qing Guo, Yang Liu, Min Lin
The emerging success of large language models (LLMs) heavily relies on collecting abundant training data from external (untrusted) sources.
1 code implementation • 19 Feb 2024 • Hongjin Su, Shuyang Jiang, Yuhang Lai, Haoyuan Wu, Boao Shi, Che Liu, Qian Liu, Tao Yu
Recently the retrieval-augmented generation (RAG) paradigm has raised much attention for its potential in incorporating external knowledge into large language models (LLMs) without further training.
1 code implementation • 13 Feb 2024 • Xiangming Gu, Xiaosen Zheng, Tianyu Pang, Chao Du, Qian Liu, Ye Wang, Jing Jiang, Min Lin
A multimodal large language model (MLLM) agent can receive instructions, capture images, retrieve histories from memory, and decide which tools to use.
1 code implementation • 13 Feb 2024 • Dong Lu, Tianyu Pang, Chao Du, Qian Liu, Xianjun Yang, Min Lin
Backdoor attacks are commonly executed by contaminating training data, such that a trigger can activate predetermined harmful effects during the test phase.
1 code implementation • 12 Feb 2024 • Mingzhe Du, Anh Tuan Luu, Bin Ji, Qian Liu, See-Kiong Ng
Based on the distribution, we introduce a new metric Beyond, which computes a runtime-percentile-weighted Pass score to reflect functional correctness and computational efficiency simultaneously.
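As one possible reading of this description (the paper's exact weighting may differ), a runtime-percentile-weighted pass score can be sketched as:

```python
import bisect

def beyond_score(passed: bool, runtime: float, reference_runtimes: list) -> float:
    """Illustrative runtime-percentile-weighted pass score: an interpretation
    of the Beyond metric described above, not the paper's exact definition.
    A failing solution scores 0; a passing one is weighted by the fraction
    of reference runtimes it beats (faster => closer to 1)."""
    if not passed:
        return 0.0
    ref = sorted(reference_runtimes)
    # Number of reference solutions that are at least as slow as this one.
    slower = len(ref) - bisect.bisect_left(ref, runtime)
    return slower / len(ref)
```

For example, a correct solution whose runtime beats three of four reference solutions would score 0.75, while any incorrect solution scores 0 regardless of speed.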
no code implementations • 1 Feb 2024 • Ran Elgedawy, John Sadik, Senjuti Dutta, Anuj Gautam, Konstantinos Georgiou, Farzin Gholamrezae, Fujiao Ji, Kyungchan Lim, Qian Liu, Scott Ruoti
Large Language Models (LLMs) are being increasingly utilized in various applications, with code generation being a notable example.
no code implementations • 13 Jan 2024 • Yu Hong, Qian Liu, Huayuan Cheng, Danjiao Ma, Hang Dai, Yu Wang, Guangzhi Cao, Yong Ding
The past few years have witnessed the rapid development of vision-centric 3D perception in autonomous driving.
no code implementations • 11 Jan 2024 • Yixian Zheng, Rang Liu, Ming Li, Qian Liu
Integrated sensing and communication (ISAC) is an encouraging wireless technology which can simultaneously perform both radar and communication functionalities by sharing the same transmit waveform, spectral resource, and hardware platform.
no code implementations • 8 Jan 2024 • Yun Yang, Zhiping Lu, Ming Li, Rang Liu, Qian Liu
Motivated by this fact, in this paper we first investigate the amplification principle of typical active RIS and propose a more accurate amplification model based on amplifier hardware characteristics.
2 code implementations • 1 Jan 2024 • Terry Yue Zhuo, Armel Zebaze, Nitchakarn Suppattarachai, Leandro von Werra, Harm de Vries, Qian Liu, Niklas Muennighoff
Through investigations across 5 tasks and 8 different datasets encompassing both code comprehension and code generation tasks, we find that FFT generally leads to the best downstream performance across all scales, and PEFT methods differ significantly in their efficacy based on the model scale.
no code implementations • 17 Nov 2023 • Haocheng Zhang, Rang Liu, Ming Li, Wei Wang, Qian Liu
Extensive experimental results confirm the effectiveness of our proposed MADRL framework in improving both sensing and communication performance through the utilization of target-mounted STARS.
2 code implementations • 23 Oct 2023 • Fangyu Lei, Qian Liu, Yiming Huang, Shizhu He, Jun Zhao, Kang Liu
The rapid development of Large Language Models (LLMs) has led to great strides in model capabilities like long-context understanding and reasoning.
2 code implementations • 16 Oct 2023 • Tianbao Xie, Fan Zhou, Zhoujun Cheng, Peng Shi, Luoxuan Weng, Yitao Liu, Toh Jing Hua, Junning Zhao, Qian Liu, Che Liu, Leo Z. Liu, Yiheng Xu, Hongjin Su, Dongchan Shin, Caiming Xiong, Tao Yu
Language agents show potential in being capable of utilizing natural language for varied and intricate tasks in diverse environments, particularly when built upon large language models (LLMs).
1 code implementation • 10 Oct 2023 • Yiheng Xu, Hongjin Su, Chen Xing, Boyu Mi, Qian Liu, Weijia Shi, Binyuan Hui, Fan Zhou, Yitao Liu, Tianbao Xie, Zhoujun Cheng, Siheng Zhao, Lingpeng Kong, Bailin Wang, Caiming Xiong, Tao Yu
We introduce Lemur and Lemur-Chat, openly accessible language models optimized for both natural language and coding capabilities to serve as the backbone of versatile language agents.
no code implementations • 17 Sep 2023 • Qi Zhu, Ming Li, Rang Liu, Qian Liu
Integrated sensing and communication (ISAC), which simultaneously performs sensing and communication functions within a shared frequency band and hardware platform, has emerged as a promising technology for future wireless systems.
2 code implementations • 14 Aug 2023 • Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs on the HumanEval Python benchmark (46.2% pass@1).
Ranked #5 on Code Generation on HumanEval
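Benchmark figures like the 46.2% pass@1 above are typically computed with the standard unbiased pass@k estimator (Chen et al., 2021): given n generated samples of which c are correct, pass@k = 1 - C(n-c, k) / C(n, k).

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one of k
    samples drawn without replacement from n generations (c of them
    correct) passes the tests."""
    if n - c < k:
        # Fewer incorrect samples than k: a correct one is always drawn.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```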
no code implementations • 7 Aug 2023 • Zichao Xiao, Rang Liu, Ming Li, Qian Liu
Therefore, the proposed joint estimation algorithm can achieve larger processing gains and higher resolution by fully exploiting echo signals and jointly estimating the angle-range-velocity information.
2 code implementations • 3 Aug 2023 • Keyu Duan, Qian Liu, Tat-Seng Chua, Shuicheng Yan, Wei Tsang Ooi, Qizhe Xie, Junxian He
More recently, with the rapid development of language models (LMs), researchers have focused on leveraging LMs to facilitate the learning of TGs, either by jointly training them in a computationally intensive framework (merging the two stages), or designing complex self-supervised training tasks for feature extraction (enhancing the first stage).
Ranked #1 on Node Property Prediction on ogbn-arxiv
no code implementations • 26 Jul 2023 • Shuxian Niu, Qian Liu, Yang Zhou, Min Li
In this paper, we apply a Threshold-Decreasing Algorithm to maximize $k$-submodular functions under a matroid constraint, which reduces the query complexity of the algorithm compared to the greedy algorithm with little loss in approximation ratio.
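To illustrate the threshold-decreasing idea in a simpler setting (monotone submodular maximization under a cardinality constraint, not the paper's $k$-submodular matroid setting), a sketch might look like:

```python
def threshold_greedy(ground_set, f, k, eps=0.1):
    """Threshold-decreasing greedy for monotone submodular maximization
    under a cardinality constraint: a simplified illustration of the idea,
    not the paper's k-submodular matroid algorithm. Instead of scanning
    for the single best element each round, elements are added whenever
    their marginal gain exceeds a threshold that decays by (1 - eps),
    which reduces the number of value-oracle queries."""
    S = set()
    d = max(f({e}) for e in ground_set)   # largest singleton value
    tau = d
    lower = (eps / len(ground_set)) * d   # stop once thresholds are tiny
    while tau >= lower and len(S) < k:
        for e in ground_set:
            if e in S or len(S) >= k:
                continue
            gain = f(S | {e}) - f(S)      # marginal gain of adding e
            if gain >= tau:
                S.add(e)
        tau *= (1 - eps)                  # decrease the threshold
    return S
```

For instance, with a coverage function over small sets, the sketch adds high-gain elements early at large thresholds and sweeps up remaining useful elements as the threshold decays.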
1 code implementation • 25 Jul 2023 • Chengsong Huang, Qian Liu, Bill Yuchen Lin, Tianyu Pang, Chao Du, Min Lin
This paper investigates LoRA composability for cross-task generalization and introduces LoraHub, a simple framework devised for the purposive assembly of LoRA modules trained on diverse given tasks, with the objective of achieving adaptable performance on unseen tasks.
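The core composition step behind this idea can be sketched as a weighted sum of low-rank updates. This is a minimal illustration with hypothetical shapes, not the LoraHub implementation, which additionally searches for good coefficients with a gradient-free optimizer:

```python
import numpy as np

def compose_loras(lora_modules, weights):
    """Compose several LoRA modules for the same base layer. Each module
    is a pair of low-rank factors (A, B); the composed weight update is
    the coefficient-weighted sum of the individual updates B @ A."""
    return sum(w * (B @ A) for w, (A, B) in zip(weights, lora_modules))

# Hypothetical shapes: base layer d_out x d_in, LoRA rank r.
rng = np.random.default_rng(0)
d_in, d_out, r = 8, 6, 2
modules = [(rng.normal(size=(r, d_in)), rng.normal(size=(d_out, r)))
           for _ in range(3)]
delta = compose_loras(modules, [0.5, 0.3, 0.2])  # shape (d_out, d_in)
```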
no code implementations • 24 Jul 2023 • Hu Zhang, Yanchen Li, Luziwei Leng, Kaiwei Che, Qian Liu, Qinghai Guo, Jianxing Liao, Ran Cheng
Traditional object detection techniques that utilize Artificial Neural Networks (ANNs) face challenges due to the sparse and asynchronous nature of the events these sensors capture.
no code implementations • 30 May 2023 • Peishi Li, Zichao Xiao, Ming Li, Rang Liu, Qian Liu
Integrated sensing and communication (ISAC) is a promising technology in future wireless systems owing to its efficient hardware and spectrum utilization.
1 code implementation • 22 May 2023 • Xinyuan Lu, Liangming Pan, Qian Liu, Preslav Nakov, Min-Yen Kan
Current scientific fact-checking benchmarks exhibit several shortcomings, such as biases arising from crowd-sourced claims and an over-reliance on text-based evidence.
1 code implementation • 20 May 2023 • Hao Fei, Qian Liu, Meishan Zhang, Min Zhang, Tat-Seng Chua
In this work, we investigate a more realistic unsupervised multimodal machine translation (UMMT) setup, inference-time image-free UMMT, where the model is trained with source-text image pairs, and tested with only source-text inputs.
2 code implementations • 18 May 2023 • Hao Fei, Bobo Li, Qian Liu, Lidong Bing, Fei Li, Tat-Seng Chua
While sentiment analysis systems try to determine the sentiment polarities of given targets based on the key opinion expressions in input texts, in implicit sentiment analysis (ISA) the opinion cues come in an implicit and obscure manner.
1 code implementation • 16 May 2023 • Tianping Zhang, Shaowen Wang, Shuicheng Yan, Jian Li, Qian Liu
Recently, the topic of table pre-training has attracted considerable research interest.
1 code implementation • 11 May 2023 • Zhengbao Jiang, Frank F. Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, Graham Neubig
In this work, we provide a generalized view of active retrieval augmented generation, methods that actively decide when and what to retrieve across the course of the generation.
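A toy sketch of this retrieve-when-uncertain loop (all callables here are hypothetical stand-ins, not the paper's method):

```python
def active_rag_generate(question, generate_step, retrieve, confidence,
                        threshold=0.6):
    """Toy active retrieval-augmented generation loop: the model generates
    one step at a time, and whenever its confidence in a step falls below
    a threshold, it retrieves evidence and regenerates that step.
    `generate_step`, `retrieve`, and `confidence` are caller-supplied
    stand-ins for an LM, a retriever, and a confidence estimate."""
    context, output = [], []
    while True:
        step = generate_step(question, context, output)
        if step is None:                      # generation finished
            break
        if confidence(step) < threshold:      # low confidence: retrieve, retry
            context.extend(retrieve(step))
            step = generate_step(question, context, output)
        output.append(step)
    return " ".join(output)
```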
4 code implementations • 9 May 2023 • Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, Qian Liu, Evgenii Zheltonozhskii, Terry Yue Zhuo, Thomas Wang, Olivier Dehaene, Mishig Davaadorj, Joel Lamy-Poirier, João Monteiro, Oleh Shliazhko, Nicolas Gontier, Nicholas Meade, Armel Zebaze, Ming-Ho Yee, Logesh Kumar Umapathi, Jian Zhu, Benjamin Lipkin, Muhtasham Oblokulov, Zhiruo Wang, Rudra Murthy, Jason Stillerman, Siva Sankalp Patel, Dmitry Abulkhanov, Marco Zocca, Manan Dey, Zhihan Zhang, Nour Fahmy, Urvashi Bhattacharyya, Wenhao Yu, Swayam Singh, Sasha Luccioni, Paulo Villegas, Maxim Kunakov, Fedor Zhdanov, Manuel Romero, Tony Lee, Nadav Timor, Jennifer Ding, Claire Schlesinger, Hailey Schoelkopf, Jan Ebert, Tri Dao, Mayank Mishra, Alex Gu, Jennifer Robinson, Carolyn Jane Anderson, Brendan Dolan-Gavitt, Danish Contractor, Siva Reddy, Daniel Fried, Dzmitry Bahdanau, Yacine Jernite, Carlos Muñoz Ferrandis, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, Harm de Vries
The BigCode community, an open-scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder and StarCoderBase: 15.5B parameter models with 8K context length, infilling capabilities, and fast large-batch inference enabled by multi-query attention.
Ranked #43 on Code Generation on MBPP
1 code implementation • 17 Apr 2023 • Qian Liu, Fan Zhou, Zhengbao Jiang, Longxu Dou, Min Lin
Empirical results on various benchmarks validate that the integration of SQL execution leads to significant improvements in zero-shot scenarios, particularly in table reasoning.
no code implementations • 13 Apr 2023 • Ole Richter, Yannan Xing, Michele De Marchi, Carsten Nielsen, Merkourios Katsimpris, Roberto Cattaneo, Yudi Ren, Yalun Hu, Qian Liu, Sadique Sheik, Tugba Demirci, Ning Qiao
This is due to the increasing number of smart devices that require sensory processing for their application on the edge.
no code implementations • 22 Feb 2023 • Honghao Luo, Rang Liu, Ming Li, Qian Liu
Integrated sensing and communication (ISAC) has been envisioned as a promising technique to alleviate the spectrum congestion problem.
no code implementations • 21 Feb 2023 • Qi Zhu, Ming Li, Rang Liu, Qian Liu
Integrated sensing and communication (ISAC) is recognized as a promising technology with great potential in saving hardware and spectrum resources, since it simultaneously realizes radar detection and user communication functions in the fully-shared platform.
1 code implementation • 9 Feb 2023 • Weichen Yu, Tianyu Pang, Qian Liu, Chao Du, Bingyi Kang, Yan Huang, Min Lin, Shuicheng Yan
With the advance of language models, privacy protection is receiving more attention.
no code implementations • 26 Jan 2023 • Rang Liu, Ming Li, Qian Liu, A. Lee Swindlehurst
Two optimization problems are formulated for maximizing the achievable sum-rate of the multi-user communications under either an SNR constraint for target detection or a CRB constraint for parameter estimation, together with the transmit power budget and the unit-modulus constraint on the RIS reflection coefficients.
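In generic form, the two designs described above can be written schematically as follows (an illustrative formulation; the paper's exact objective and constraint expressions may differ):

```latex
\max_{\mathbf{W},\,\boldsymbol{\phi}} \;
  \sum_{k=1}^{K} \log_2\!\left(1 + \mathrm{SINR}_k(\mathbf{W},\boldsymbol{\phi})\right)
\quad \text{s.t.} \quad
  \underbrace{\mathrm{SNR}_{\mathrm{radar}}(\mathbf{W},\boldsymbol{\phi}) \ge \gamma}_{\text{detection}}
  \;\;\text{or}\;\;
  \underbrace{\mathrm{CRB}(\mathbf{W},\boldsymbol{\phi}) \le \epsilon}_{\text{estimation}},
\qquad
  \|\mathbf{W}\|_F^2 \le P,
\qquad
  |\phi_n| = 1,\;\forall n,
```

where $\mathbf{W}$ is the transmit beamforming matrix, $\boldsymbol{\phi}$ collects the unit-modulus RIS reflection coefficients, $P$ is the power budget, and $\gamma$ and $\epsilon$ are the sensing thresholds.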
5 code implementations • 9 Jan 2023 • Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy-Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra
The BigCode project is an open-scientific collaboration working on the responsible development of large language models for code.
no code implementations • 1 Dec 2022 • Jinjin Chu, Rang Liu, Ming Li, Yang Liu, Qian Liu
Integrated sensing and communication (ISAC), which allows individual radar and communication systems to share the same spectrum bands, is an emerging and promising technique for alleviating spectrum congestion problems.
2 code implementations • 22 Nov 2022 • Tianping Zhang, Zheyu Zhang, Zhiyuan Fan, Haoyan Luo, Fengyuan Liu, Qian Liu, Wei Cao, Jian Li
In the two competitions, features generated by OpenFE with a simple baseline model can beat 99.3% and 99.6% of data science teams, respectively.
2 code implementations • 26 Oct 2022 • Jianan Zhao, Meng Qu, Chaozhuo Li, Hao Yan, Qian Liu, Rui Li, Xing Xie, Jian Tang
In this paper, we propose an efficient and effective solution to learning on large text-attributed graphs by fusing graph structure and language learning with a variational Expectation-Maximization (EM) framework, called GLEM.
Ranked #1 on Node Property Prediction on ogbn-papers100M
no code implementations • 25 Oct 2022 • Rang Liu, Zhu Bo, Ming Li, Qian Liu
To overcome the performance bottleneck of these approaches, in this letter we propose an end-to-end learning based approach to jointly optimize the modulation orders, the transmit precoding and the receive detection for an SLP communication system.
1 code implementation • 11 Oct 2022 • JunJie Huang, Wanjun Zhong, Qian Liu, Ming Gong, Daxin Jiang, Nan Duan
However, training an effective dense table-text retriever is difficult due to the challenges of table-text discrepancy and data sparsity.
no code implementations • 11 Oct 2022 • Fan Zhou, Haoyu Dong, Qian Liu, Zhoujun Cheng, Shi Han, Dongmei Zhang
Numerical reasoning over natural language has been a long-standing goal for the research community.
no code implementations • 5 Oct 2022 • Qi Zhu, Ming Li, Rang Liu, Yang Liu, Qian Liu
Affected by the "double fading" effect, however, conventional passive RIS cannot bring considerable performance improvement when users are not close enough to the RIS.
no code implementations • 5 Sep 2022 • Yanan Ma, Ming Li, Yang Liu, Qingqing Wu, Qian Liu
Reconfigurable intelligent surface (RIS) has been deemed one of the potential components of future wireless communication systems because it can adaptively manipulate the wireless propagation environment with low-cost passive devices.
no code implementations • 1 Sep 2022 • Wenhao Cai, Ming Li, Qian Liu
Intelligent reflecting surface (IRS) has emerged as a promising and revolutionizing technology for future wireless networks.
no code implementations • 1 Sep 2022 • Wenhao Cai, Ming Li, Yang Liu, Qingqing Wu, Qian Liu
Intelligent reflecting surface (IRS) has been widely considered as one of the key enabling techniques for future wireless communication networks owing to its ability of dynamically controlling the phase shift of reflected electromagnetic (EM) waves to construct a favorable propagation environment.
no code implementations • 29 Aug 2022 • Bill Yuchen Lin, Chengsong Huang, Qian Liu, Wenda Gu, Sam Sommerer, Xiang Ren
Language models (LMs) have demonstrated their capability in possessing commonsense knowledge of the physical world, a crucial aspect of performing tasks in everyday life.
no code implementations • 10 Aug 2022 • Pengfei Ni, Ming Li, Rang Liu, Qian Liu
Cell-free networks are regarded as a promising technology to meet higher rate requirements for beyond fifth-generation (5G) communications.
no code implementations • 10 Aug 2022 • Pengfei Ni, Rang Liu, Ming Li, Qian Liu
In an effort to further exploit multiple-antenna diversities, we also consider the dynamic subarray architecture and propose a novel antenna design algorithm for the analog beamforming design.
no code implementations • 3 Aug 2022 • Honghao Luo, Rang Liu, Ming Li, Yang Liu, Qian Liu
Integrated sensing and communication (ISAC) has been envisioned as a promising technology to tackle the spectrum congestion problem for future networks.
no code implementations • 17 Jun 2022 • Rang Liu, Ming Li, Honghao Luo, Qian Liu, A. Lee Swindlehurst
Integrated sensing and communication (ISAC) is emerging as a key enabler to address the growing spectrum congestion problem and satisfy increasing demands for ubiquitous sensing and communication.
no code implementations • 13 Apr 2022 • Wenhao Cai, Rang Liu, Ming Li, Yang Liu, Qingqing Wu, Qian Liu
Intelligent reflecting surface (IRS) has been regarded as a promising and revolutionary technology for future wireless communication systems owing to its capability of tailoring signal propagation environment in an energy/spectrum/hardware-efficient manner.
no code implementations • 7 Mar 2022 • Shengnan An, Yifei Li, Zeqi Lin, Qian Liu, Bei Chen, Qiang Fu, Weizhu Chen, Nanning Zheng, Jian-Guang Lou
This motivates us to propose input-tuning, which fine-tunes both the continuous prompts and the input representations, leading to a more effective way to adapt unfamiliar inputs to frozen PLMs.
1 code implementation • 27 Jan 2022 • Xinyu Pi, Qian Liu, Bei Chen, Morteza Ziyadi, Zeqi Lin, Qiang Fu, Yan Gao, Jian-Guang Lou, Weizhu Chen
Reasoning over natural language is a long-standing goal for the research community.
Ranked #2 on Question Answering on DROP Test (using extra training data)
2 code implementations • 20 Jan 2022 • Qi Shi, Qian Liu, Bei Chen, Yu Zhang, Ting Liu, Jian-Guang Lou
In this work, we propose LEMON, a general framework for language-based environment manipulation tasks.
no code implementations • 15 Jan 2022 • Wanjun Zhong, JunJie Huang, Qian Liu, Ming Zhou, Jiahai Wang, Jian Yin, Nan Duan
CARP utilizes a hybrid chain to model the explicit intermediate reasoning process across table and text for question answering.
Ranked #2 on Question Answering on OTT-QA
no code implementations • 27 Dec 2021 • Qian Liu, Yongpeng Li, Zhihang Wang
In computer vision, image processing, and computer graphics, image smoothing is a basic and important task that is expected to possess a good edge-preserving property.
no code implementations • 16 Dec 2021 • Rang Liu, Ming Li, Yang Liu, Qingqing Wu, Qian Liu
Reconfigurable intelligent surface (RIS) is a promising technology for 6G networks owing to its superior ability to enhance the capacity and coverage of wireless communications by smartly creating a favorable propagation environment.
no code implementations • 17 Nov 2021 • Xinxing Wu, Tao Wang, Qian Liu, Peide Liu, Guanrong Chen, Xu Zhang
By introducing a new operator for IFVs via the linear order based on a score function and an accuracy function, we show that such an operator is a strong negation on IFVs.
no code implementations • 28 Sep 2021 • Yiyu Liu, Qian Liu, Yu Tian, Changping Wang, Yanan Niu, Yang Song, Chenliang Li
In this paper, we propose a novel concept-aware denoising graph neural network (named CONDE) for micro-video recommendation.
1 code implementation • Findings (ACL) 2021 • Qian Liu, Dejian Yang, Jiahui Zhang, Jiaqi Guo, Bin Zhou, Jian-Guang Lou
In recent years, pretrained language models (PLMs) have achieved success on several downstream tasks, showing their power in modeling language.
no code implementations • 11 Aug 2021 • Rang Liu, Ming Li, Qian Liu, A. Lee Swindlehurst
In this paper, we consider multi-input multi-output (MIMO) DFRC systems and focus on transmit beamforming designs to provide both radar sensing and multi-user communications.
no code implementations • ACL 2021 • Jiaqi Guo, Ziliang Si, Yu Wang, Qian Liu, Ming Fan, Jian-Guang Lou, Zijiang Yang, Ting Liu
However, we identify two biases in existing datasets for XDTS: (1) a high proportion of context-independent questions and (2) a high proportion of easy SQL queries.
1 code implementation • ACL 2021 • Shuang Chen, Qian Liu, Zhiwei Yu, Chin-Yew Lin, Jian-Guang Lou, Feng Jiang
We present Retriever-Transducer-Checker (ReTraCk), a neural semantic parsing framework for large scale knowledge base question answering (KBQA).
Ranked #1 on Knowledge Base Question Answering on GrailQA
no code implementations • 24 Jul 2021 • Yanan Ma, Rang Liu, Yang Liu, Ming Li, Qian Liu
Reconfigurable intelligent surfaces (RISs) have been deemed one of the potential components of future wireless communication systems because they can adaptively manipulate the wireless propagation environment with low-cost passive devices.
2 code implementations • ICLR 2022 • Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou
TAPEX addresses the data scarcity challenge via guiding the language model to mimic a SQL executor on the diverse, large-scale and high-quality synthetic corpus.
Ranked #1 on Semantic Parsing on WikiSQL (Denotation accuracy (test) metric)
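The kind of synthetic supervision this describes, pairing a table and a SQL query with the query's execution result, can be sketched with sqlite3 (a minimal illustration with a hypothetical schema, not the TAPEX corpus-generation code):

```python
import sqlite3

def execute_on_table(rows, sql):
    """Run a SQL query over a small in-memory table and return the result,
    so that the (table, SQL, answer) triple can serve as one synthetic
    training example for a model learning to mimic a SQL executor.
    The schema here is hypothetical."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (city TEXT, population INTEGER)")
    con.executemany("INSERT INTO t VALUES (?, ?)", rows)
    result = con.execute(sql).fetchall()
    con.close()
    return result

rows = [("Oslo", 700_000), ("Bergen", 290_000), ("Trondheim", 210_000)]
answer = execute_on_table(rows, "SELECT city FROM t WHERE population > 250000")
# `rows`, the query, and `answer` together form one synthetic example.
```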
2 code implementations • Findings (ACL) 2021 • Chenyao Liu, Shengnan An, Zeqi Lin, Qian Liu, Bei Chen, Jian-Guang Lou, Lijie Wen, Nanning Zheng, Dongmei Zhang
In this paper, we propose LeAR, an end-to-end neural model to learn algebraic recombination for compositional generalization.
Ranked #2 on Semantic Parsing on CFQ
no code implementations • 27 Jun 2021 • Sifan Liu, Pengfei Ni, Rang Liu, Yang Liu, Ming Li, Qian Liu
During the dynamic access process, an iterative algorithm is proposed to alternately obtain the active and passive beamforming.
no code implementations • 16 Feb 2021 • Mengzhi Wu, Qian Liu, Ping Li, Shi Chen, Binlong Wang, Wenhan Shen, Shiping Chen, Yangheng Zheng, Yigang Xie, Jin Li
The IBF and the electron transparency rate are two essential indicators of a TPC, which affect the energy resolution and the counting rate, respectively.
no code implementations • 5 Jan 2021 • Wenhao Cai, Rang Liu, Yang Liu, Ming Li, Qian Liu
Therefore, the practical phase shift model, which can describe the difference of IRS phase shift responses for the signals with different frequencies, should be utilized in the IRS optimization for wideband and multi-band systems.
no code implementations • 25 Dec 2020 • Wanning Yang, Hongyu Li, Ming Li, Yang Liu, Qian Liu
Different from prior works, which assume that the IRS has an ideal reflection model, we perform channel estimation by considering the amplitude-phase shift-frequency relationship for the response of a practical IRS.
1 code implementation • EMNLP 2020 • Qian Liu, Bei Chen, Jian-Guang Lou, Bin Zhou, Dongmei Zhang
In recent years, the task of incomplete utterance rewriting has attracted considerable attention.
Ranked #1 on Dialogue Rewriting on Rewrite
no code implementations • 29 Jul 2020 • Rang Liu, Ming Li, Qian Liu, A. Lee Swindlehurst, Qingqing Wu
Intelligent reflecting surfaces (IRSs) have been proposed as a revolutionary technology owing to their capability of adaptively reconfiguring the propagation environment in a cost-effective and hardware-efficient fashion.
no code implementations • 26 Jul 2020 • Hongyu Li, Wenhao Cai, Yang Liu, Ming Li, Qian Liu, Qingqing Wu
Simulation results demonstrate that the proposed algorithm can offer significant average sum-rate enhancement compared to that achieved using the ideal IRS reflection model, which confirms the importance of the use of the practical model for the design of wideband systems.
1 code implementation • NeurIPS 2020 • Qian Liu, Shengnan An, Jian-Guang Lou, Bei Chen, Zeqi Lin, Yan Gao, Bin Zhou, Nanning Zheng, Dongmei Zhang
Compositional generalization is a basic and essential intellective capability of human beings, which allows us to recombine known parts readily.
no code implementations • 2 Jun 2020 • Wenhao Cai, Hongyu Li, Ming Li, Qian Liu
In this letter, we aim to investigate the phase-amplitude-frequency relationship of the reflected signals and propose a practical model of reflection coefficient for an IRS-aided wideband system.
no code implementations • 2 Jun 2020 • Jie Cai, Zhengzhou Zhu, Ping Nie, Qian Liu
In this paper, inspired by the observation that most probing tasks involve identifying matched pairs of phrases (e.g., coreference requires matching an entity and a pronoun), we propose a pairwise probe to understand BERT fine-tuning on the machine reading comprehension (MRC) task.
1 code implementation • ACL 2020 • Qian Liu, Yihong Chen, Bei Chen, Jian-Guang Lou, Zixuan Chen, Bin Zhou, Dongmei Zhang
Despite the continuing efforts to improve the engagingness and consistency of chit-chat dialogue systems, the majority of current work simply focuses on mimicking human-like responses, leaving the modeling of understanding between interlocutors understudied.
Ranked #2 on Dialogue Generation on Persona-Chat (using extra training data)
no code implementations • 8 Feb 2020 • Qian Liu, Tao Wang, Jie Liu, Yang Guan, Qi Bu, Longfei Yang
In order to learn powerful features of videos, we propose a Collaborative Temporal Modeling (CTM) block (Figure 1) to learn temporal information for action recognition.
no code implementations • 7 Feb 2020 • Qian Liu, Dongyang Cai, Jie Liu, Nan Ding, Tao Wang
The standard non-local (NL) module is effective in aggregating frame-level features on the task of video classification but presents low parameters efficiency and high computational cost.
1 code implementation • 3 Feb 2020 • Qian Liu, Bei Chen, Jiaqi Guo, Jian-Guang Lou, Bin Zhou, Dongmei Zhang
Recently semantic parsing in context has received considerable attention, which is challenging since there are complex contextual phenomena.
2 code implementations • 3 Dec 2019 • Martino Sorbaro, Qian Liu, Massimo Bortone, Sadique Sheik
We demonstrate first that quantization-aware training of CNNs leads to better accuracy in SNNs.
no code implementations • IJCNLP 2019 • Haoyan Liu, Lei Fang, Qian Liu, Bei Chen, Jian-Guang Lou, Zhoujun Li
One key component in text-to-SQL is to predict the comparison relations between columns and their values.
1 code implementation • IJCNLP 2019 • Qian Liu, Bei Chen, Haoyan Liu, Lei Fang, Jian-Guang Lou, Bin Zhou, Dongmei Zhang
To leverage the advances in context-independent semantic parsing, we propose to perform follow-up query analysis, aiming to restate context-dependent natural language queries with contextual information.
no code implementations • 3 Sep 2019 • Rang Liu, Hongyu Li, Ming Li, Qian Liu
In this paper we investigate the problem of precoder design for a low-resolution IRS-based transmitter to implement multi-user MISO/MIMO wireless communications.
1 code implementation • 24 Jan 2019 • Qian Liu, Bei Chen, Jian-Guang Lou, Ge Jin, Dongmei Zhang
NLIDBs allow users to search databases using natural language instead of SQL-like query languages.
1 code implementation • COLING 2018 • Qian Liu, He-Yan Huang, Yang Gao, Xiaochi Wei, Yuxin Tian, Luyang Liu
In this paper, we propose a task-oriented word embedding method and apply it to the text classification task.
Ranked #21 on Text Classification on AG News
no code implementations • 31 Mar 2017 • Qian Liu, Yunhua Chen, Steve Furber
We extend previous work on the Noisy Softplus activation function to fit it into the training of multi-layer spiking neural networks (SNNs).
no code implementations • 11 Jul 2014 • Qian Liu, Guanhua Chen, Michael R. Kosorok, Eric Bair
This framework can be used to identify biclusters that differ with respect to the means of the features, the variance of the features, or more general differences.