no code implementations • COLING 2022 • Li Gao, Lingyun Song, Jie Liu, Bolin Chen, Xuequn Shang
However, little attention has been paid to the authenticity of the relationships and the topology imbalance in the structure of NPG, both of which mislead existing methods and thus lead to incorrect prediction results.
1 code implementation • ECCV 2020 • Miao Zhang, Sun Xiao Fei, Jie Liu, Shuang Xu, Yongri Piao, Huchuan Lu
In this paper, we propose an asymmetric two-stream architecture taking account of the inherent differences between RGB and depth data for saliency detection.
Ranked #19 on Thermal Image Segmentation on RGB-T-Glass-Segmentation
1 code implementation • 14 May 2024 • Zhimin Li, Jianwei Zhang, Qin Lin, Jiangfeng Xiong, Yanxin Long, Xinchi Deng, Yingfang Zhang, Xingchao Liu, Minbin Huang, Zedong Xiao, Dayou Chen, Jiajun He, Jiahao Li, Wenyue Li, Chen Zhang, Rongwei Quan, Jianxiang Lu, Jiabin Huang, Xiaoyan Yuan, Xiaoxiao Zheng, Yixuan Li, Jihong Zhang, Chao Zhang, Meng Chen, Jie Liu, Zheng Fang, Weiyan Wang, Jinbao Xue, Yangyu Tao, Jianchen Zhu, Kai Liu, Sihuan Lin, Yifu Sun, Yun Li, Dongdong Wang, Mingtao Chen, Zhichao Hu, Xiao Xiao, Yan Chen, Yuhong Liu, Wei Liu, Di Wang, Yong Yang, Jie Jiang, Qinglin Lu
For fine-grained language understanding, we train a Multimodal Large Language Model to refine the captions of the images.
2 code implementations • 16 Apr 2024 • Bin Ren, Nancy Mehta, Radu Timofte, Hongyuan Yu, Cheng Wan, Yuxin Hong, Bingnan Han, Zhuoyuan Wu, Yajun Zou, Yuqing Liu, Jizhe Li, Keji He, Chao Fan, Heng Zhang, Xiaolin Zhang, Xuanwu Yin, Kunlong Zuo, Bohao Liao, Peizhe Xia, Long Peng, Zhibo Du, Xin Di, Wangkai Li, Yang Wang, Wei Zhai, Renjing Pei, Jiaming Guo, Songcen Xu, Yang Cao, ZhengJun Zha, Yan Wang, Yi Liu, Qing Wang, Gang Zhang, Liou Zhang, Shijie Zhao, Long Sun, Jinshan Pan, Jiangxin Dong, Jinhui Tang, Xin Liu, Min Yan, Menghan Zhou, Yiqiang Yan, Yixuan Liu, Wensong Chan, Dehua Tang, Dong Zhou, Li Wang, Lu Tian, Barsoum Emad, Bohan Jia, Junbo Qiao, Yunshuai Zhou, Yun Zhang, Wei Li, Shaohui Lin, Shenglong Zhou, Binbin Chen, Jincheng Liao, Suiyi Zhao, Zhao Zhang, Bo wang, Yan Luo, Yanyan Wei, Feng Li, Mingshen Wang, Yawei Li, Jinhan Guan, Dehua Hu, Jiawei Yu, Qisheng Xu, Tao Sun, Long Lan, Kele Xu, Xin Lin, Jingtong Yue, Lehan Yang, Shiyi Du, Lu Qi, Chao Ren, Zeyu Han, YuHan Wang, Chaolin Chen, Haobo Li, Mingjun Zheng, Zhongbao Yang, Lianhong Song, Xingzhuo Yan, Minghan Fu, Jingyi Zhang, Baiang Li, Qi Zhu, Xiaogang Xu, Dan Guo, Chunle Guo, Jiadi Chen, Huanhuan Long, Chunjiang Duanmu, Xiaoyan Lei, Jie Liu, Weilin Jia, Weifeng Cao, Wenlong Zhang, Yanyu Mao, Ruilong Guo, Nihao Zhang, Qian Wang, Manoj Pandey, Maksym Chernozhukov, Giang Le, Shuli Cheng, Hongyuan Wang, Ziyan Wei, Qingting Tang, Liejun Wang, Yongming Li, Yanhui Guo, Hao Xu, Akram Khatami-Rizi, Ahmad Mahmoudi-Aznaveh, Chih-Chung Hsu, Chia-Ming Lee, Yi-Shiuan Chou, Amogh Joshi, Nikhil Akalwadi, Sampada Malagi, Palani Yashaswini, Chaitra Desai, Ramesh Ashok Tabib, Ujwala Patil, Uma Mudenagudi
In sub-track 1, the practical runtime performance of the submissions was evaluated, and the corresponding score was used to determine the ranking.
no code implementations • 8 Apr 2024 • Jie Liu, Tao Feng, Yan Jiang, Peizheng Wang, Chao Wu
However, the inherent replicability and privacy concerns of data make it challenging to directly apply traditional trading theories to data markets.
no code implementations • 14 Mar 2024 • Jie Liu, Xuequn Shang, Xiaolin Han, Wentao Zhang, Hongzhi Yin
Then STRIPE incorporates separate spatial and temporal memory networks, which capture and store prototypes of normal patterns, thereby preserving the uniqueness of spatial and temporal normality.
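The read-out of such a prototype memory can be sketched in a few lines of pure Python (hypothetical names; a softmax-weighted attention over stored prototypes, with the reconstruction distance serving as the anomaly score — an illustration of the general mechanism, not the STRIPE implementation):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def memory_read(query, memory):
    """Reconstruct a query via softmax-weighted attention over memory slots."""
    sims = [cosine(query, m) for m in memory]
    exps = [math.exp(s) for s in sims]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(query)
    return [sum(w * m[d] for w, m in zip(weights, memory)) for d in range(dim)]

def anomaly_score(query, memory):
    """Distance between the query and its reconstruction: normal patterns
    are well covered by the stored prototypes, anomalies are not."""
    recon = memory_read(query, memory)
    return math.sqrt(sum((q - r) ** 2 for q, r in zip(query, recon)))

# Two "normal" prototypes stored in the memory.
memory = [[1.0, 0.0], [0.0, 1.0]]
normal_q = [0.9, 0.1]      # close to a stored prototype
abnormal_q = [-1.0, -1.0]  # unlike anything in memory

assert anomaly_score(normal_q, memory) < anomaly_score(abnormal_q, memory)
```

Separate spatial and temporal memories would each hold their own prototype set and contribute their own reconstruction error to the final score.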
1 code implementation • 9 Mar 2024 • Jie Liu, Zhongyuan Zhao, Zijian Ding, Benjamin Brock, Hongbo Rong, Zhiru Zhang
The ongoing trend of hardware specialization has led to a growing use of custom data formats when processing sparse workloads, which are typically memory-bound.
no code implementations • 6 Mar 2024 • SiQi Zhou, Ling Wang, Jie Liu, Jinshan Tang
However, there are large differences between the simulation results obtained by crop models and the actual measurements. In this paper, we therefore propose to combine the simulation results with the collected crop data for data assimilation, thereby improving the prediction accuracy.
1 code implementation • 27 Feb 2024 • Tao Tang, Guangrun Wang, Yixing Lao, Peng Chen, Jie Liu, Liang Lin, Kaicheng Yu, Xiaodan Liang
Through extensive experiments across various datasets and scenes, we demonstrate the effectiveness of our approach in facilitating better interaction between LiDAR and camera modalities within a unified neural field.
no code implementations • 22 Feb 2024 • Ge Bai, Jie Liu, Xingyuan Bu, Yancheng He, Jiaheng Liu, Zhanhui Zhou, Zhuoran Lin, Wenbo Su, Tiezheng Ge, Bo Zheng, Wanli Ouyang
By conducting a detailed analysis of real multi-turn dialogue data, we construct a three-tier hierarchical ability taxonomy comprising 4208 turns across 1388 multi-turn dialogues in 13 distinct tasks.
1 code implementation • 22 Feb 2024 • Yanan Wu, Jie Liu, Xingyuan Bu, Jiaheng Liu, Zhanhui Zhou, Yuanxing Zhang, Chenchen Zhang, Zhiqi Bai, Haibin Chen, Tiezheng Ge, Wanli Ouyang, Wenbo Su, Bo Zheng
This paper introduces ConceptMath, a bilingual (English and Chinese), fine-grained benchmark that evaluates concept-wise mathematical reasoning of Large Language Models (LLMs).
no code implementations • 21 Feb 2024 • Jianghui Zhou, Ya Gao, Jie Liu, Xuemin Zhao, Zhaohua Yang, Yue Wu, Lirong Shi
Large language models (LLMs) such as ChatGPT have substantially simplified the generation of marketing copy, yet producing content that satisfies domain-specific requirements, such as effectively engaging customers, remains a significant challenge.
1 code implementation • 21 Feb 2024 • Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Leng Thai, Junhao Shen, Jinyi Hu, Xu Han, Yujie Huang, Yuxiang Zhang, Jie Liu, Lei Qi, Zhiyuan Liu, Maosong Sun
Notably, the best-performing model, GPT-4V, attains an average score of 17.23% on OlympiadBench, with a mere 11.28% in physics, highlighting the benchmark's rigor and the intricacy of physical reasoning.
1 code implementation • 19 Feb 2024 • Zhanhui Zhou, Jie Liu, Zhichen Dong, Jiaheng Liu, Chao Yang, Wanli Ouyang, Yu Qiao
Large language models (LLMs) need to undergo safety alignment to ensure safe conversations with humans.
no code implementations • 17 Feb 2024 • Wenxuan Wang, Yihang Su, Jingyuan Huan, Jie Liu, WenTing Chen, Yudi Zhang, Cheng-Yi Li, Kao-Jung Chang, Xiaohan Xin, Linlin Shen, Michael R. Lyu
However, these models are often evaluated on benchmarks that are unsuitable for the Med-MLLMs due to the intricate nature of the real-world diagnostic frameworks, which encompass diverse medical specialties and involve complex clinical decisions.
no code implementations • 29 Jan 2024 • Jie Liu, Wenzhe Yin, Haochen Wang, Yunlu Chen, Jan-Jakob Sonke, Efstratios Gavves
Existing prototype-based methods rely on support prototypes to guide the segmentation of query point clouds, but they encounter challenges when significant object variations exist between the support prototypes and query features.
1 code implementation • 26 Jan 2024 • Chao Chen, Jie Liu, Chang Zhou, Jie Tang, Gangshan Wu
At the "Sketch" stage, local directions of keypoints can be easily estimated by fast convolutional layers.
1 code implementation • 8 Jan 2024 • Youbing Hu, Yun Cheng, Anqi Lu, Zhiqiang Cao, Dawei Wei, Jie Liu, Zhijun Li
To address this, we present the Localization and Focus Vision Transformer (LF-ViT).
no code implementations • 21 Dec 2023 • Zheshun Wu, Zenglin Xu, Dun Zeng, Junfan Li, Jie Liu
To address these challenges, we conduct a thorough theoretical convergence analysis for DFL and derive a convergence bound.
no code implementations • 19 Dec 2023 • Jie Liu, Yijia Cao, Yong Li, Yixiu Guo, Wei Deng
Accurately predicting line loss rates is vital for effective line loss management in distribution networks, especially over short-term multi-horizons ranging from one hour to one week.
1 code implementation • 12 Dec 2023 • Yinmin Zhang, Jie Liu, Chuming Li, Yazhe Niu, Yaodong Yang, Yu Liu, Wanli Ouyang
In this paper, from a novel perspective, we systematically study the challenges that remain in O2O RL and identify that the reason behind the slow improvement of the performance and the instability of online finetuning lies in the inaccurate Q-value estimation inherited from offline pretraining.
no code implementations • 4 Dec 2023 • Jie Liu, Qilin Li, Senjian An, Bradley Ezard, Ling Li
Transformer-based models for anomaly detection in multivariate time series can benefit from the self-attention mechanism due to its advantage in modeling long-term dependencies.
no code implementations • 25 Oct 2023 • Zheshun Wu, Zenglin Xu, Hongfang Yu, Jie Liu
In FEEL, both mobile devices transmitting model parameters over noisy channels and collecting data in diverse environments pose challenges to the generalization of trained models.
no code implementations • 18 Oct 2023 • Jie Liu, Yinmin Zhang, Chuming Li, Chao Yang, Yaodong Yang, Yu Liu, Wanli Ouyang
Building a single generalist agent with strong zero-shot capability has recently sparked significant advancements.
no code implementations • 13 Oct 2023 • Lu Li, Yuxin Pan, RuoBing Chen, Jie Liu, Zilin Wang, Yu Liu, Zhiheng Li
Considering that obtaining expert demonstrations can be costly, the focus of current IRL techniques is on learning a better-than-demonstrator policy using a reward function derived from sub-optimal demonstrations.
no code implementations • 11 Oct 2023 • Zheshun Wu, Zenglin Xu, Dun Zeng, Qifan Wang, Jie Liu
Federated Learning (FL) has surged in prominence due to its capability of collaborative model training without direct data sharing.
1 code implementation • 5 Oct 2023 • Zhanhui Zhou, Jie Liu, Chao Yang, Jing Shao, Yu Liu, Xiangyu Yue, Wanli Ouyang, Yu Qiao
A single language model (LM), despite aligning well with an average labeler through reinforcement learning from human feedback (RLHF), may not universally suit diverse human preferences.
no code implementations • 29 Sep 2023 • Hongfei Xue, Qijie Shao, Kaixun Huang, Peikun Chen, Jie Liu, Lei Xie
Multilingual automatic speech recognition (ASR) systems have garnered attention for their potential to extend language coverage globally.
Automatic Speech Recognition (ASR) +2
no code implementations • 24 Sep 2023 • Zhichao Wang, Xinhai Chen, Junjun Yan, Jie Liu
With a lightweight model, GMSNet can effectively smooth mesh nodes of varying degrees and remains unaffected by the order of the input data.
1 code implementation • 11 Sep 2023 • Jinzuomu Zhong, Yang Li, Hui Huang, Jie Liu, Zhiba Su, Jing Guo, Benlai Tang, Fengjie Zhu
While human prosody annotation contributes a lot to the performance, it is a labor-intensive and time-consuming process, often resulting in inconsistent outcomes.
no code implementations • ICCV 2023 • Jiacong Xu, Yi Zhang, Jiawei Peng, Wufei Ma, Artur Jesslen, Pengliang Ji, Qixin Hu, Jiehua Zhang, Qihao Liu, Jiahao Wang, Wei Ji, Chen Wang, Xiaoding Yuan, Prakhar Kaushik, Guofeng Zhang, Jie Liu, Yushan Xie, Yawen Cui, Alan Yuille, Adam Kortylewski
Animal3D consists of 3379 images collected from 40 mammal species, high-quality annotations of 26 keypoints, and importantly the pose and shape parameters of the SMAL model.
Ranked #1 on Animal Pose Estimation on Animal3D
1 code implementation • ICCV 2023 • Yidong Cai, Jie Liu, Jie Tang, Gangshan Wu
To enjoy the merits of both methods, we propose a robust object modeling framework for visual tracking (ROMTrack), which simultaneously models the inherent template and the hybrid template features.
no code implementations • 5 Aug 2023 • Jie Liu, Tao Zhang, Shuyu Sun
In this way, the computational complexity of this method is greatly reduced compared to that of traditional pixel-based extraction methods, enabling large-scale pore-network extraction.
1 code implementation • 31 Jul 2023 • Haonan Wang, Jie Liu, Jie Tang, Gangshan Wu
We first propose the SR head, which predicts heatmaps with a spatial resolution higher than the input feature maps (or even consistent with the input image) by super-resolution, to effectively reduce the quantization error and the dependence on further post-processing.
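Why a higher-resolution heatmap reduces quantization error can be seen with a toy round-trip calculation (an illustrative sketch with assumed names, not the paper's decoding pipeline): quantizing a ground-truth coordinate onto a heatmap grid and decoding the argmax bin back loses up to half a stride of precision.

```python
def decode_with_stride(true_x, stride):
    """Quantize a ground-truth coordinate onto a heatmap grid with the
    given stride, then decode the argmax bin back to input coordinates.
    The round-trip error is the quantization error of heatmap decoding."""
    bin_idx = round(true_x / stride)   # argmax bin of an ideal heatmap
    decoded = bin_idx * stride         # map the bin back to input pixels
    return abs(decoded - true_x)

true_x = 10.3  # ground-truth keypoint x-coordinate in input pixels

coarse_err = decode_with_stride(true_x, stride=4)  # conventional low-res head
fine_err = decode_with_stride(true_x, stride=1)    # SR head at input resolution

# The finer grid strictly reduces the round-trip quantization error.
assert fine_err < coarse_err
```

With stride 4 the decoded coordinate can be off by up to 2 pixels, while at input resolution the error is bounded by 0.5 pixels, which is why post-processing tricks become less necessary.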
no code implementations • 28 Jul 2023 • Jie Liu, Mengting He, Xuequn Shang, Jieming Shi, Bin Cui, Hongzhi Yin
By swapping the context embeddings between nodes and edges and measuring the agreement in the embedding space, we enable the mutual detection of node and edge anomalies.
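The agreement-based scoring idea can be illustrated with a minimal sketch (a hypothetical `agreement_score` helper: cosine similarity between an element's own embedding and the mean context aggregated from the other side — nodes for an edge, edges for a node. This is an illustration of the mechanism, not the paper's actual model):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def context_embedding(embeddings):
    """Mean of the embeddings on the other side (edges for a node, nodes for an edge)."""
    dim = len(embeddings[0])
    n = len(embeddings)
    return [sum(e[d] for e in embeddings) / n for d in range(dim)]

def agreement_score(own, neighbour_embeddings):
    """Agreement between an element's own embedding and its swapped context.
    Low agreement flags a potential anomaly."""
    return cosine(own, context_embedding(neighbour_embeddings))

# A normal node whose embedding matches its incident edges ...
normal = agreement_score([1.0, 0.0], [[0.9, 0.1], [1.0, 0.2]])
# ... and an anomalous node that disagrees with its context.
anomalous = agreement_score([-1.0, 0.0], [[0.9, 0.1], [1.0, 0.2]])

assert normal > anomalous
```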
no code implementations • 24 Jul 2023 • Chuming Li, Ruonan Jia, Jie Liu, Yinmin Zhang, Yazhe Niu, Yaodong Yang, Yu Liu, Wanli Ouyang
Model-based reinforcement learning (RL) has demonstrated remarkable successes on a range of continuous control tasks due to its high sample efficiency.
1 code implementation • 12 Jul 2023 • Junjun Yan, Xinhai Chen, Zhichao Wang, Enqiang Zhou, Jie Liu
To alleviate these issues, we propose auxiliary-task learning-based physics-informed neural networks (ATL-PINNs), which provide four different auxiliary-task learning modes, and investigate their performance compared with the original PINNs.
1 code implementation • 11 Jul 2023 • Long Chen, Jian Jiang, Bozheng Dou, Hongsong Feng, Jie Liu, Yueying Zhu, Bengong Zhang, Tianshou Zhou, Guo-Wei Wei
Pain is a significant global health issue, and the current treatment options for pain management have limitations in terms of effectiveness, side effects, and potential for addiction.
no code implementations • 27 Jun 2023 • Jie Liu, Zhiba Su, Hui Huang, Caiyan Wan, Quanxiu Wang, Jiangli Hong, Benlai Tang, Fengjie Zhu
We propose our novel TranssionADD system as a solution to the challenging problem of model robustness and audio segment outliers in the trace competition.
1 code implementation • 15 Jun 2023 • Junjun Yan, Xinhai Chen, Zhichao Wang, Enqiang Zhoui, Jie Liu
To address the issue of low accuracy and convergence problems of existing PINNs, we propose a self-training physics-informed neural network, ST-PINN.
no code implementations • 2 Jun 2023 • Ziyang Zhang, Yang Zhao, Huan Li, Changyao Lin, Jie Liu
Due to limited resources on edge and different characteristics of deep neural network (DNN) models, it is a big challenge to optimize DNN inference performance in terms of energy consumption and end-to-end latency on edge devices.
no code implementations • 23 May 2023 • Hongfei Xue, Qijie Shao, Peikun Chen, Pengcheng Guo, Lei Xie, Jie Liu
Different from UniSpeech, UniData2vec replaces the quantized discrete representations with continuous and contextual representations from a teacher model for phonetically-aware pre-training.
Automatic Speech Recognition (ASR) +3
1 code implementation • NeurIPS 2023 • Chongyu Qu, Tiezheng Zhang, Hualin Qiao, Jie Liu, Yucheng Tang, Alan Yuille, Zongwei Zhou
Annotating medical images, particularly for organ segmentation, is laborious and time-consuming.
no code implementations • 15 May 2023 • Peipei Liu, Hong Li, Yimo Ren, Jie Liu, Shuaizong Si, Hongsong Zhu, Limin Sun
Mining structured knowledge from tweets using named entity recognition (NER) can be beneficial for many downstream applications such as recommendation and intention understanding.
1 code implementation • 8 May 2023 • Letian Wang, Jie Liu, Hao Shao, Wenshuo Wang, RuoBing Chen, Yu Liu, Steven L. Waslander
Inspired by this, we propose ASAP-RL, an efficient reinforcement learning algorithm for autonomous driving that simultaneously leverages motion skills and expert priors.
1 code implementation • 2 May 2023 • Jie Liu, Peizheng Wang, Chao Wu
Data valuation using Shapley value has emerged as a prevalent research domain in machine learning applications.
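For readers new to the topic, the Shapley value underlying such data valuation can be computed exactly for tiny player sets (a self-contained sketch with a toy utility function; a real data market would replace `utility` with, e.g., the validation accuracy of a model trained on the coalition):

```python
import itertools

def shapley_values(players, utility):
    """Exact Shapley values: average each player's marginal contribution
    over all orderings (feasible only for small player sets)."""
    values = {p: 0.0 for p in players}
    perms = list(itertools.permutations(players))
    for order in perms:
        coalition = []
        for p in order:
            before = utility(frozenset(coalition))
            coalition.append(p)
            after = utility(frozenset(coalition))
            values[p] += after - before
    return {p: v / len(perms) for p, v in values.items()}

# Toy data-valuation utility: points 'a' and 'b' carry duplicated
# information (replicability!), point 'c' is unique and informative.
def utility(coalition):
    score = 0.0
    if 'a' in coalition or 'b' in coalition:
        score += 1.0  # duplicates share a single unit of value
    if 'c' in coalition:
        score += 2.0  # unique point keeps its full value
    return score

vals = shapley_values(['a', 'b', 'c'], utility)

# Efficiency: the values sum to the utility of the full set.
assert abs(sum(vals.values()) - utility(frozenset('abc'))) < 1e-9
# Symmetry: redundant points split their shared value equally.
assert abs(vals['a'] - vals['b']) < 1e-9
assert abs(vals['a'] - 0.5) < 1e-9
```

Note how replication halves the value of each duplicated point, which is one concrete way the replicability concern mentioned above shows up in Shapley-based pricing.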
no code implementations • 1 May 2023 • Ziyang Zhang, Huan Li, Yang Zhao, Changyao Lin, Jie Liu
As deep neural networks (DNNs) are being applied to a wide range of edge intelligent applications, it is critical for edge inference platforms to have both high-throughput and low-latency at the same time.
no code implementations • 28 Apr 2023 • Jie Liu, Mengting He, Guangtao Wang, Nguyen Quoc Viet Hung, Xuequn Shang, Hongzhi Yin
minority classes to balance the label and topology distribution.
1 code implementation • 26 Apr 2023 • Chang Zhou, Jie Liu, Jie Tang, Gangshan Wu
To better model correlations and to produce more accurate motion fields, we propose the Densely Queried Bilateral Correlation (DQBC) that gets rid of the receptive field dependency problem and thus is more friendly to small and fast-moving objects.
Ranked #1 on Video Frame Interpolation on MSU Video Frame Interpolation (VMAF metric)
1 code implementation • 23 Apr 2023 • Cilin Yan, Haochen Wang, Jie Liu, XiaoLong Jiang, Yao Hu, Xu Tang, Guoliang Kang, Efstratios Gavves
Click-based interactive segmentation aims to generate target masks via human clicking, which facilitates efficient pixel-level annotation and image editing.
no code implementations • 16 Apr 2023 • Zhifeng Ma, Hao Zhang, Jie Liu
The drastic variation of motion in spatial and temporal dimensions makes the video prediction task extremely challenging.
no code implementations • 26 Mar 2023 • Zhuoying Zhao, Ziling Tan, Pinghui Mo, Xiaonan Wang, Dan Zhao, Xin Zhang, Ming Tao, Jie Liu
This paper proposes a special-purpose system to achieve high-accuracy and high-efficiency machine learning (ML) molecular dynamics (MD) calculations.
no code implementations • 12 Mar 2023 • Hao Chen, Zhe-Ming Lu, Jie Liu
This paper focuses on proposing a deep learning-based monkey swing counting algorithm.
no code implementations • 9 Mar 2023 • Jie Liu, Yixuan Liu, Xue Han, Chao Deng, Junlan Feng
Previous contrastive learning methods for sentence representations often focus on insensitive transformations to produce positive pairs, but neglect the role of sensitive transformations that are harmful to semantic representations.
no code implementations • 8 Feb 2023 • Xubo Qin, Xiyuan Liu, Xiongfeng Zheng, Jie Liu, Yutao Zhu
Specifically, when the student models are in cross-encoder architecture, a pairwise loss of hard labels is critical for training student models, whereas the distillation objectives of intermediate Transformer layers may hurt performance.
no code implementations • 16 Jan 2023 • Shanshan Chen, Jie Liu, Yixiang Wu
In this paper, we study a three-patch two-species Lotka-Volterra competition patch model over a stream network.
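For orientation, a generic three-patch two-species competition system of this type can be written as follows (notation and coefficients are assumed for illustration, not taken from the paper):

```latex
\begin{aligned}
\frac{du_i}{dt} &= \sum_{j=1}^{3}\bigl(d_{ij}u_j - d_{ji}u_i\bigr) + u_i\bigl(r_i - u_i - b_i v_i\bigr),\\
\frac{dv_i}{dt} &= \sum_{j=1}^{3}\bigl(e_{ij}v_j - e_{ji}v_i\bigr) + v_i\bigl(r_i - v_i - c_i u_i\bigr), \qquad i = 1,2,3,
\end{aligned}
```

where $u_i$ and $v_i$ are the densities of the two competing species in patch $i$, the coefficients $d_{ij}$ and $e_{ij}$ encode movement (diffusion plus stream-driven drift) from patch $j$ to patch $i$, and $r_i$, $b_i$, $c_i$ are growth and interspecific competition rates.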
no code implementations • 16 Jan 2023 • Shanshan Chen, Jie Liu, Yixiang Wu
In this paper, we study a Lotka-Volterra competition patch model for two stream species, with the patches aligned along a line.
no code implementations • 9 Jan 2023 • Jie Liu, Yanqi Bao, Wenzhe Yin, Haochen Wang, Yang Gao, Jan-Jakob Sonke, Efstratios Gavves
However, the appearance variations between objects from the same category could be extremely large, leading to unreliable feature matching and query mask prediction.
Ranked #40 on Few-Shot Semantic Segmentation on PASCAL-5i (1-Shot)
2 code implementations • ICCV 2023 • Jie Liu, Yixiao Zhang, Jie-Neng Chen, Junfei Xiao, Yongyi Lu, Bennett A. Landman, Yixuan Yuan, Alan Yuille, Yucheng Tang, Zongwei Zhou
The proposed model is developed from an assembly of 14 datasets, using a total of 3,410 CT scans for training, and then evaluated on 6,162 external CT scans from 3 additional datasets.
Ranked #1 on Organ Segmentation on BTCV
1 code implementation • CVPR 2023 • Wuyang Li, Jie Liu, Bo Han, Yixuan Yuan
In a nutshell, ANNA consists of Front-Door Adjustment (FDA) to correct the biased learning in the source domain and Decoupled Causal Alignment (DCA) to transfer the model unbiasedly.
no code implementations • 28 Dec 2022 • Hao Zhang, Tingting Wu, Siyao Cheng, Jie Liu
Federated learning (FL) is an emerging paradigm to train model with distributed data from numerous Internet of Things (IoT) devices.
no code implementations • 30 Nov 2022 • Yue Li, Li Zhang, Namin Wang, Jie Liu, Lei Xie
Specifically, the weight transfer fine-tuning aims to constrain the distance of the weights between the pre-trained model and the fine-tuned model, which takes advantage of the previously acquired discriminative ability from the large-scale out-domain datasets and avoids catastrophic forgetting and overfitting at the same time.
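The weight-transfer constraint can be illustrated on a scalar toy problem (an assumed quadratic task loss, not the paper's speaker-verification setup): a penalty of the form lam/2 * (w - w_pre)^2 keeps the fine-tuned weight near the pre-trained one.

```python
def finetune_with_weight_transfer(w_pre, grad_task, lr=0.1, lam=1.0, steps=200):
    """Gradient descent on the task loss plus lam/2 * (w - w_pre)^2,
    a scalar sketch of weight-transfer fine-tuning."""
    w = w_pre
    for _ in range(steps):
        g = grad_task(w) + lam * (w - w_pre)  # task gradient + penalty gradient
        w -= lr * g
    return w

# Task loss (w - 3)^2 pulls the weight towards 3; pre-trained weight is 0.
grad_task = lambda w: 2.0 * (w - 3.0)

w_free = finetune_with_weight_transfer(0.0, grad_task, lam=0.0)  # unconstrained
w_reg = finetune_with_weight_transfer(0.0, grad_task, lam=2.0)   # constrained

# Unconstrained fine-tuning reaches the task optimum; the penalty keeps
# the regularized weight closer to the pre-trained value 0.
assert abs(w_free - 3.0) < 1e-3
assert 0.0 < w_reg < w_free
```

The regularized solution settles between the pre-trained weight and the task optimum, which is the mechanism that trades off new-task fit against forgetting.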
1 code implementation • 30 Nov 2022 • Jie Liu, Chao Chen, Jie Tang, Gangshan Wu
In the fine area, we use an Intra-Patch Self-Attention (IPSA) module to model long-range pixel dependencies in a local patch, and then a $3\times3$ convolution is applied to process the finest details.
1 code implementation • 29 Nov 2022 • Chuming Li, Jie Liu, Yinmin Zhang, Yuhong Wei, Yazhe Niu, Yaodong Yang, Yu Liu, Wanli Ouyang
In the learning phase, each agent minimizes the TD error that is dependent on how the subsequent agents have reacted to their chosen action.
Ranked #1 on SMAC on SMAC 3s5z_vs_3s6z
2 code implementations • 7 Nov 2022 • Andrey Ignatov, Radu Timofte, Maurizio Denna, Abdel Younes, Ganzorig Gankhuyag, Jingang Huh, Myeong Kyun Kim, Kihwan Yoon, Hyeon-Cheol Moon, Seungho Lee, Yoonsik Choe, Jinwoo Jeong, Sungjei Kim, Maciej Smyl, Tomasz Latkowski, Pawel Kubik, Michal Sokolski, Yujie Ma, Jiahao Chao, Zhou Zhou, Hongfan Gao, Zhengfeng Yang, Zhenbing Zeng, Zhengyang Zhuge, Chenghua Li, Dan Zhu, Mengdi Sun, Ran Duan, Yan Gao, Lingshun Kong, Long Sun, Xiang Li, Xingdong Zhang, Jiawei Zhang, Yaqi Wu, Jinshan Pan, Gaocheng Yu, Jin Zhang, Feng Zhang, Zhe Ma, Hongbin Wang, Hojin Cho, Steve Kim, Huaen Li, Yanbo Ma, Ziwei Luo, Youwei Li, Lei Yu, Zhihong Wen, Qi Wu, Haoqiang Fan, Shuaicheng Liu, Lize Zhang, Zhikai Zong, Jeremy Kwon, Junxi Zhang, Mengyuan Li, Nianxiang Fu, Guanchen Ding, Han Zhu, Zhenzhong Chen, Gen Li, Yuanfan Zhang, Lei Sun, Dafeng Zhang, Neo Yang, Fitz Liu, Jerry Zhao, Mustafa Ayazoglu, Bahri Batuhan Bilecen, Shota Hirose, Kasidis Arunruangsirilert, Luo Ao, Ho Chun Leung, Andrew Wei, Jie Liu, Qiang Liu, Dahai Yu, Ao Li, Lei Luo, Ce Zhu, Seongmin Hong, Dongwon Park, Joonhee Lee, Byeong Hyun Lee, Seunggyu Lee, Se Young Chun, Ruiyuan He, Xuhao Jiang, Haihang Ruan, Xinjian Zhang, Jing Liu, Garas Gendy, Nabil Sabor, Jingchao Hou, Guanghui He
While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs having many computational and memory constraints.
no code implementations • 6 Nov 2022 • Jixun Yao, Qing Wang, Yi Lei, Pengcheng Guo, Lei Xie, Namin Wang, Jie Liu
By directly scaling the formant and F0, the speaker distinguishability degradation of the anonymized speech caused by the introduction of other speakers is prevented.
no code implementations • 28 Oct 2022 • Peipei Liu, Xin Zheng, Hong Li, Jie Liu, Yimo Ren, Hongsong Zhu, Limin Sun
At the second stage, self-supervised contrastive learning is designed to improve the distilled unimodal representations after cross-modal interaction.
1 code implementation • 19 Oct 2022 • Peipei Liu, Gaosheng Wang, Hong Li, Jie Liu, Yimo Ren, Hongsong Zhu, Limin Sun
With social media posts tending to be multimodal, Multimodal Named Entity Recognition (MNER) for the text with its accompanying image is attracting more and more attention since some textual components can only be understood in combination with visual information.
no code implementations • 19 Oct 2022 • Peipei Liu, Hong Li, Zhiyu Wang, Yimo Ren, Jie Liu, Fei Lyu, Hongsong Zhu, Limin Sun
Enterprise relation extraction aims to detect pairs of enterprise entities and identify the business relations between them from unstructured or semi-structured text data, and it is crucial for several real-world applications such as risk analysis, rating research and supply chain security.
no code implementations • 18 Oct 2022 • Xinhai Chen, Jie Liu, Junjun Yan, Zhichao Wang, Chunye Gong
To improve the prediction accuracy of the neural network, we also introduce a novel auxiliary line strategy and an efficient network model during meshing.
no code implementations • 17 Oct 2022 • Joey Wang, Yingcan Wei, Minseok Lee, Matthias Langer, Fan Yu, Jie Liu, Alex Liu, Daniel Abel, Gems Guo, Jianbing Dong, Jerry Shi, Kunlun Li
In this talk, we introduce Merlin HugeCTR.
no code implementations • 8 Oct 2022 • Jie Liu, Jingjing Wang, Peng Zhang, Chunmao Wang, Di Xie, ShiLiang Pu
To overcome these limitations, we propose a multi-scale wavelet transformer framework for face forgery detection.
no code implementations • 23 Sep 2022 • Tan Yu, Zhipeng Jin, Jie Liu, Yi Yang, Hongliang Fei, Ping Li
To overcome the limitations of behavior ID features in modeling new ads, we exploit the visual content in ads to boost the performance of CTR prediction models.
no code implementations • 19 Sep 2022 • Tan Yu, Jie Liu, Yi Yang, Yi Li, Hongliang Fei, Ping Li
How to pair the video ads with the user search is the core task of Baidu video advertising.
1 code implementation • 1 Aug 2022 • Yilan Zhang, Fengying Xie, Xuedong Song, Hangning Zhou, Yiguang Yang, Haopeng Zhang, Jie Liu
As such, they have achieved great improvements in many dermoscopy image analysis tasks.
1 code implementation • 1 Jul 2022 • Peipei Liu, Hong Li, Zuoguang Wang, Jie Liu, Yimo Ren, Hongsong Zhu
Extracting cybersecurity entities such as attackers and vulnerabilities from unstructured network texts is an important part of security analysis.
1 code implementation • 7 Jun 2022 • Zhifeng Ma, Hao Zhang, Jie Liu
Spatiotemporal predictive learning, which predicts future frames through historical prior knowledge with the aid of deep learning, is widely used in many fields.
1 code implementation • ICLR 2022 • Wei Ji, Jingjing Li, Qi Bi, Chuan Guo, Jie Liu, Li Cheng
The laborious and time-consuming manual annotation has become a real bottleneck in various practical scenarios.
2 code implementations • 11 May 2022 • Yawei Li, Kai Zhang, Radu Timofte, Luc van Gool, Fangyuan Kong, Mingxi Li, Songwei Liu, Zongcai Du, Ding Liu, Chenhui Zhou, Jingyi Chen, Qingrui Han, Zheyuan Li, Yingqi Liu, Xiangyu Chen, Haoming Cai, Yu Qiao, Chao Dong, Long Sun, Jinshan Pan, Yi Zhu, Zhikai Zong, Xiaoxiao Liu, Zheng Hui, Tao Yang, Peiran Ren, Xuansong Xie, Xian-Sheng Hua, Yanbo Wang, Xiaozhong Ji, Chuming Lin, Donghao Luo, Ying Tai, Chengjie Wang, Zhizhong Zhang, Yuan Xie, Shen Cheng, Ziwei Luo, Lei Yu, Zhihong Wen, Qi Wu, Youwei Li, Haoqiang Fan, Jian Sun, Shuaicheng Liu, Yuanfei Huang, Meiguang Jin, Hua Huang, Jing Liu, Xinjian Zhang, Yan Wang, Lingshun Long, Gen Li, Yuanfan Zhang, Zuowei Cao, Lei Sun, Panaetov Alexander, Yucong Wang, Minjie Cai, Li Wang, Lu Tian, Zheyuan Wang, Hongbing Ma, Jie Liu, Chao Chen, Yidong Cai, Jie Tang, Gangshan Wu, Weiran Wang, Shirui Huang, Honglei Lu, Huan Liu, Keyan Wang, Jun Chen, Shi Chen, Yuchun Miao, Zimo Huang, Lefei Zhang, Mustafa Ayazoğlu, Wei Xiong, Chengyi Xiong, Fei Wang, Hao Li, Ruimian Wen, Zhijing Yang, Wenbin Zou, Weixin Zheng, Tian Ye, Yuncheng Zhang, Xiangzhen Kong, Aditya Arora, Syed Waqas Zamir, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Dandan Gao, Dengwen Zhou, Qian Ning, Jingzhu Tang, Han Huang, YuFei Wang, Zhangheng Peng, Haobo Li, Wenxue Guan, Shenghua Gong, Xin Li, Jun Liu, Wanjun Wang, Dengwen Zhou, Kun Zeng, Hanjiang Lin, Xinyu Chen, Jinsheng Fang
The aim was to design a network for single image super-resolution that achieved improvement of efficiency measured according to several metrics including runtime, parameters, FLOPs, activations, and memory consumption, while at least maintaining a PSNR of 29.00 dB on the DIV2K validation set.
no code implementations • CVPR 2022 • Jie Liu, Yanqi Bao, Guo-Sen Xie, Huan Xiong, Jan-Jakob Sonke, Efstratios Gavves
Specifically, in DPCN, a dynamic convolution module (DCM) is firstly proposed to generate dynamic kernels from support foreground, then information interaction is achieved by convolution operations over query features using these kernels.
Ranked #32 on Few-Shot Semantic Segmentation on PASCAL-5i (1-Shot)
1 code implementation • 18 Apr 2022 • Zongcai Du, Ding Liu, Jie Liu, Jie Tang, Gangshan Wu, Lean Fu
Besides, FMEN-S achieves the lowest memory consumption and the second shortest runtime in NTIRE 2022 challenge on efficient super-resolution.
1 code implementation • 7 Apr 2022 • Hao Zhang, Tingting Wu, Siyao Cheng, Jie Liu
On the other hand, it enlarges the distances between local models, resulting in an aggregated global model with poor performance.
1 code implementation • CVPR 2022 • Xiaoqing Guo, Jie Liu, Tongliang Liu, Yixuan Yuan
By exploiting computational geometry analysis and properties of segmentation, we design three complementary regularizers, i.e., volume regularization, anchor guidance, and convex guarantee, to approximate the true SimT.
1 code implementation • 16 Mar 2022 • Feiyang Cai, Zhenkai Zhang, Jie Liu, Xenofon Koutsoukos
However, in a more realistic open set scenario, traditional classifiers with incomplete knowledge cannot tackle test data that are not from the training classes.
no code implementations • 13 Feb 2022 • Hao Wang, Yu Bai, Guangmin Sun, Jie Liu
Powerful recognition algorithms are widely used in the Internet or important medical systems, which poses a serious threat to personal privacy.
no code implementations • 7 Dec 2021 • Huiling Zhou, Jie Liu, Zhikang Li, Jin Yu, Hongxia Yang
With user history represented by a domain-aware sequential model, a frequency encoder is applied to the underlying tags for user content preference learning.
1 code implementation • 27 Nov 2021 • Jie Liu, Jie Tang, Gangshan Wu
We found that the standard deviation of the residual feature shrinks a lot after normalization layers, which causes the performance degradation in SR networks.
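That observation is easy to reproduce in miniature (a toy pure-Python check, not the paper's experiment; `layer_norm` here is the standard zero-mean, unit-variance feature normalization):

```python
import math

def std(xs):
    """Population standard deviation of a list of floats."""
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

def layer_norm(xs, eps=1e-5):
    """Standard normalization: subtract the mean, divide by the std."""
    m = sum(xs) / len(xs)
    s = std(xs)
    return [(x - m) / (s + eps) for x in xs]

# A residual feature with a large dynamic range, as found in SR networks.
residual = [40.0, -25.0, 10.0, -5.0, 60.0, -80.0]

before = std(residual)
after = std(layer_norm(residual))

# Normalization forces the standard deviation to ~1, discarding the
# magnitude information that SR residual branches rely on.
assert before > 10.0
assert abs(after - 1.0) < 1e-3
```

Whatever the scale of the incoming residual, the normalized feature always has unit standard deviation, which is why removing such layers can help SR networks preserve residual magnitudes.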
no code implementations • 24 Nov 2021 • Shiqi Liu, Lu Wang, Jie Lian, Ting Chen, Cong Liu, Xuchen Zhan, Jintao Lu, Jie Liu, Ting Wang, Dong Geng, Hongwei Duan, Yuze Tian
Relative radiometric normalization (RRN) of different satellite images of the same terrain is necessary for change detection, object classification/segmentation, and map-making tasks.
no code implementations • 9 Nov 2021 • Shang Li, GuiXuan Zhang, Zhengxiong Luo, Jie Liu, Zhi Zeng, Shuwu Zhang
In this paper, instead of directly applying the LR guidance, we propose an additional invertible flow guidance module (FGM), which can transform the downscaled representation to the visually plausible image during downscaling and transform it back during upscaling.
no code implementations • 24 Sep 2021 • Tai-Hsien Wu, Chunfeng Lian, Sanghee Lee, Matthew Pastewait, Christian Piers, Jie Liu, Fang Wang, Li Wang, Chiung-Ying Chiu, Wenchi Wang, Christina Jackson, Wei-Lun Chao, Dinggang Shen, Ching-Chang Ko
Our TS-MDL first adopts an end-to-end iMeshSegNet method (i.e., a variant of the existing MeshSegNet with both improved accuracy and efficiency) to label each tooth on the downsampled scan.
no code implementations • 29 Aug 2021 • Zhiqiang Cao, Zhijun Li, Pan Heng, Yongrui Chen, Daqi Xie, Jie Liu
To address this challenge, we propose a small-big model framework that deploys a big model in the cloud and a small model on the edge devices.
no code implementations • 9 Jul 2021 • Zhenhou Hong, Jianzong Wang, Xiaoyang Qu, Jie Liu, Chendong Zhao, Jing Xiao
Text to speech (TTS) is a crucial task for user interaction, but TTS model training relies on a sizable set of high-quality original datasets.
no code implementations • 6 Jul 2021 • Shang Li, GuiXuan Zhang, Zhengxiong Luo, Jie Liu, Zhi Zeng, Shuwu Zhang
As a result, most previous methods may suffer a performance drop when the degradations of test images are unknown and varied (i.e., the case of blind SR).
1 code implementation • CVPR 2021 • Guo-Sen Xie, Jie Liu, Huan Xiong, Ling Shao
However, they fail to fully leverage the high-order appearance relationships between multi-scale features among the support-query image pairs, thus leading to an inaccurate localization of the query objects.
no code implementations • 9 Jun 2021 • Chunzhi Yi, Feng Jiang, Baichun Wei, Chifu Yang, Zhen Ding, Jubo Jin, Jie Liu
The results demonstrate our method is a promising solution to detecting and correcting IMU movements during JAE.
3 code implementations • 20 May 2021 • Zongcai Du, Jie Liu, Jie Tang, Gangshan Wu
Along with the rapid development of real-world applications, higher requirements on the accuracy and efficiency of image super-resolution (SR) are brought forward.
1 code implementation • 17 May 2021 • Andrey Ignatov, Radu Timofte, Maurizio Denna, Abdel Younes, Andrew Lek, Mustafa Ayazoglu, Jie Liu, Zongcai Du, Jiaming Guo, Xueyi Zhou, Hao Jia, Youliang Yan, Zexin Zhang, Yixin Chen, Yunbo Peng, Yue Lin, Xindong Zhang, Hui Zeng, Kun Zeng, Peirong Li, Zhihuang Liu, Shiqi Xue, Shengpeng Wang
Image super-resolution is one of the most popular computer vision problems with many important applications to mobile devices.
no code implementations • 26 Apr 2021 • Jie Chen, Jie Liu, Chang Liu, Jian Zhang, Bing Han
To overcome this issue and to further improve the recognition performance, we adopt a deep learning approach for underwater target recognition and propose a LOFAR spectrum enhancement (LSE)-based underwater target recognition scheme, which consists of preprocessing, offline training, and online testing.
no code implementations • 16 Apr 2021 • Weiqi Shu, Ling Wang, Bolong Liu, Jie Liu
How to measure LAI accurately and efficiently is the key to the crop yield estimation problem.
2 code implementations • 13 Mar 2021 • Shaowei Chen, Yu Wang, Jie Liu, Yuelin Wang
Aspect sentiment triplet extraction (ASTE), which aims to identify aspects from review sentences along with their corresponding opinion expressions and sentiments, is an emerging task in fine-grained opinion mining.
Aspect Sentiment Triplet Extraction Machine Reading Comprehension +2
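For readers new to the task, ASTE output takes the form of (aspect, opinion, sentiment) triplets extracted from a review sentence; a made-up illustration (the sentence and spans below are hypothetical, not from the paper's data):

```python
# Illustrative only: the (aspect, opinion, sentiment) triplet format that
# ASTE systems are expected to produce; sentence and spans are made up.
sentence = "The battery life is great but the screen is too dim"

# Each triplet pairs an aspect term with its opinion expression and polarity.
triplets = [
    ("battery life", "great", "positive"),
    ("screen", "too dim", "negative"),
]

for aspect, opinion, sentiment in triplets:
    # Both spans appear verbatim in the source sentence.
    assert aspect in sentence and opinion in sentence
    print(f"{aspect!r} -> {opinion!r} ({sentiment})")
```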
no code implementations • 1 Mar 2021 • Junyang Lin, Rui Men, An Yang, Chang Zhou, Ming Ding, Yichang Zhang, Peng Wang, Ang Wang, Le Jiang, Xianyan Jia, Jie Zhang, Jianwei Zhang, Xu Zou, Zhikang Li, Xiaodong Deng, Jie Liu, Jinbao Xue, Huiling Zhou, Jianxin Ma, Jin Yu, Yong Li, Wei Lin, Jingren Zhou, Jie Tang, Hongxia Yang
In this work, we construct the largest dataset for multimodal pretraining in Chinese, which consists of over 1.9TB of images and 292GB of text covering a wide range of domains.
no code implementations • 18 Feb 2021 • Jin Li, Jie Liu, Shangzhou Li, Yao Xu, Ran Cao, Qi Li, Biye Jiang, Guan Wang, Han Zhu, Kun Gai, Xiaoqiang Zhu
When receiving a user request, the matching system (i) finds the crowds that the user belongs to, and (ii) retrieves all ads that have targeted those crowds.
1 code implementation • ICCV 2021 • Miao Zhang, Jie Liu, Yifei Wang, Yongri Piao, Shunyu Yao, Wei Ji, Jingjing Li, Huchuan Lu, Zhongxuan Luo
Our bidirectional dynamic fusion strategy encourages the interaction of spatial and temporal information in a dynamic manner.
Ranked #12 on Video Polyp Segmentation on SUN-SEG-Easy (Unseen)
no code implementations • ICCV 2021 • Guo-Sen Xie, Huan Xiong, Jie Liu, Yazhou Yao, Ling Shao
Specifically, we first generate N pairs (key and value) of multi-resolution query features guided by the support feature and its mask.
no code implementations • 31 Dec 2020 • Zhi-Qin Zhan, Huazhu Fu, Yan-Yao Yang, Jingjing Chen, Jie Liu, Yu-Gang Jiang
However, there are several issues between the image-based training and video-based inference, including domain differences, lack of positive samples, and temporal smoothness.
1 code implementation • CVPR 2021 • Jie Liu, Chuming Li, Feng Liang, Chen Lin, Ming Sun, Junjie Yan, Wanli Ouyang, Dong Xu
To develop a practical method for learning complex inception convolution based on the data, a simple but effective search algorithm, referred to as efficient dilation optimization (EDO), is developed.
no code implementations • 27 Oct 2020 • Yitong Meng, Jie Liu, Xiao Yan, James Cheng
When a new user just signs up on a website, we usually have no information about him/her, i.e., no interaction with items, no user profile, and no social links with other users.
no code implementations • 21 Oct 2020 • Jie Liu, Chen Lin, Chuming Li, Lu Sheng, Ming Sun, Junjie Yan, Wanli Ouyang
Several variants of stochastic gradient descent (SGD) have been proposed to improve the learning effectiveness and efficiency when training deep neural networks, among which some recent influential attempts would like to adaptively control the parameter-wise learning rate (e.g., Adam and RMSProp).
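Adaptive methods such as Adam, mentioned in this entry, maintain per-parameter learning rates from running estimates of the gradient's first and second moments. A minimal NumPy sketch of the standard Adam update on a toy quadratic (the learning rate and problem are arbitrary illustrative choices, not from the paper):

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: per-parameter step sizes from moment estimates."""
    m = b1 * m + (1 - b1) * g            # biased first-moment estimate
    v = b2 * v + (1 - b2) * g * g        # biased second-moment estimate
    m_hat = m / (1 - b1 ** t)            # bias corrections (t starts at 1)
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Minimize f(w) = 0.5 * ||w - target||^2, whose gradient is (w - target).
target = np.array([1.0, -2.0])
w, m, v = np.zeros(2), np.zeros(2), np.zeros(2)
for t in range(1, 501):
    g = w - target
    w, m, v = adam_step(w, g, m, v, t)
```

Note that, unlike plain SGD, the effective step here is roughly `lr * sign(g)` when gradients are steady, which is exactly the parameter-wise adaptivity the entry refers to.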
no code implementations • 15 Oct 2020 • Ling Wang, Cheng Zhang, Zejian Luo, ChenGuang Liu, Jie Liu, Xi Zheng, Athanasios Vasilakos
To reduce the computational cost without loss of generality, we present a defense strategy called progressive defense against adversarial attacks (PDAAA) for efficiently and effectively filtering out adversarial pixel mutations, which could mislead the neural network toward erroneous outputs, without a priori knowledge of the attack type.
2 code implementations • 24 Sep 2020 • Jie Liu, Jie Tang, Gangshan Wu
Thanks to FDC, we can rethink the information multi-distillation network (IMDN) and propose a lightweight and accurate SISR model called residual feature distillation network (RFDN).
3 code implementations • 15 Sep 2020 • Kai Zhang, Martin Danelljan, Yawei Li, Radu Timofte, Jie Liu, Jie Tang, Gangshan Wu, Yu Zhu, Xiangyu He, Wenjie Xu, Chenghua Li, Cong Leng, Jian Cheng, Guangyang Wu, Wenyi Wang, Xiaohong Liu, Hengyuan Zhao, Xiangtao Kong, Jingwen He, Yu Qiao, Chao Dong, Maitreya Suin, Kuldeep Purohit, A. N. Rajagopalan, Xiaochuan Li, Zhiqiang Lang, Jiangtao Nie, Wei Wei, Lei Zhang, Abdul Muqeet, Jiwon Hwang, Subin Yang, JungHeum Kang, Sung-Ho Bae, Yongwoo Kim, Geun-Woo Jeon, Jun-Ho Choi, Jun-Hyuk Kim, Jong-Seok Lee, Steven Marty, Eric Marty, Dongliang Xiong, Siang Chen, Lin Zha, Jiande Jiang, Xinbo Gao, Wen Lu, Haicheng Wang, Vineeth Bhaskara, Alex Levinshtein, Stavros Tsogkas, Allan Jepson, Xiangzhen Kong, Tongtong Zhao, Shanshan Zhao, Hrishikesh P. S, Densen Puthussery, Jiji C. V, Nan Nan, Shuai Liu, Jie Cai, Zibo Meng, Jiaming Ding, Chiu Man Ho, Xuehui Wang, Qiong Yan, Yuzhi Zhao, Long Chen, Jiangtao Zhang, Xiaotong Luo, Liang Chen, Yanyun Qu, Long Sun, Wenhao Wang, Zhenbing Liu, Rushi Lan, Rao Muhammad Umer, Christian Micheloni
This paper reviews the AIM 2020 challenge on efficient single image super-resolution with focus on the proposed solutions and results.
no code implementations • 26 Aug 2020 • Wenqian Dong, Jie Liu, Zhen Xie, Dong Li
Evaluating with 20,480 input problems, we show that Smartfluidnet achieves 1.46x and 590x speedups compared with a state-of-the-art neural network model and the original fluid simulation, respectively, on an NVIDIA Titan X Pascal GPU, while providing better simulation quality than the state-of-the-art model.
1 code implementation • 16 Aug 2020 • Shengyu Zhang, Ziqi Tan, Jin Yu, Zhou Zhao, Kun Kuang, Jie Liu, Jingren Zhou, Hongxia Yang, Fei Wu
Then, based on the aspects of the video-associated product, we perform knowledge-enhanced spatial-temporal inference on those graphs for capturing the dynamic change of fine-grained product-part characteristics.
1 code implementation • ACL 2020 • Shaowei Chen, Jie Liu, Yu Wang, Wenzheng Zhang, Ziming Chi
The opinion entity extraction unit and the relation detection unit are developed as two channels to extract opinion entities and relations simultaneously.
no code implementations • ACL 2020 • Liting Liu, Jie Liu, Wenzheng Zhang, Ziming Chi, Wenxuan Shi, YaLou Huang
To deal with this task, we devise a data-driven global Skill-Aware Multi-Attention generation model, named SAMA.
no code implementations • CVPR 2020 • Jie Liu, Wenjie Zhang, Yuting Tang, Jie Tang, Gangshan Wu
To maximize the power of the RFA framework, we further propose an enhanced spatial attention (ESA) block to make the residual features more focused on critical spatial contents.
Ranked #21 on Image Super-Resolution on Manga109 - 4x upscaling
no code implementations • 24 May 2020 • Zhongxu Hu, Yang Xing, Chen Lv, Peng Hang, Jie Liu
This paper proposes a novel Bernoulli heatmap for head pose estimation from a single RGB image.
no code implementations • 13 May 2020 • Forrest Sheng Bao, Youbiao He, Jie Liu, Yuanfang Chen, Qian Li, Christina R. Zhang, Lei Han, Baoli Zhu, Yaorong Ge, Shi Chen, Ming Xu, Liu Ouyang
COVID-19 is sweeping the world with deadly consequences.
no code implementations • 12 May 2020 • Sinong Geng, Zhaobin Kuang, Jie Liu, Stephen Wright, David Page
We study the $L_1$-regularized maximum likelihood estimator/estimation (MLE) problem for discrete Markov random fields (MRFs), where efficient and scalable learning requires both sparse regularization and approximate inference.
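The paper's estimator is MRF-specific, but the standard workhorse for $L_1$-regularized problems in general is the soft-thresholding proximal operator. A generic proximal-gradient sketch (this is the textbook tool, not the authors' algorithm; the demo objective is made up):

```python
import numpy as np

def soft_threshold(w, t):
    """Proximal operator of t * ||w||_1: shrink each coordinate toward zero."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def prox_grad_step(w, grad_f, lr, lam):
    """One proximal-gradient step for f(w) + lam * ||w||_1."""
    return soft_threshold(w - lr * grad_f(w), lr * lam)

# Tiny demo: f(w) = 0.5 * ||w - c||^2, so the regularized minimizer is
# exactly soft_threshold(c, lam).
c = np.array([3.0, 0.2, -1.5])
lam, lr = 0.5, 1.0
w = np.zeros(3)
for _ in range(50):
    w = prox_grad_step(w, lambda w: w - c, lr, lam)
# w converges to soft_threshold(c, 0.5) = [2.5, 0.0, -1.0]; note the middle
# coordinate is driven exactly to zero, which is the sparsity L1 induces.
```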
no code implementations • 2 May 2020 • Yu Wang, Yuelin Wang, Jie Liu, Zhuo Liu
More importantly, we discuss four kinds of basic approaches (statistical machine translation based, neural machine translation based, classification based, and language model based), six commonly applied performance-boosting techniques for GEC systems, and two data augmentation methods.
1 code implementation • ACL 2021 • He Bai, Peng Shi, Jimmy Lin, Luchen Tan, Kun Xiong, Wen Gao, Jie Liu, Ming Li
Experimental results show that the Chinese GPT2 can generate better essay endings with \eop.
no code implementations • 1 Apr 2020 • Jie Liu, Xiaotian Wu, Kai Zhang, Bing Liu, Renyi Bao, Xiao Chen, Yiran Cai, Yiming Shen, Xinjun He, Jun Yan, Weixing Ji
With the booming of next generation sequencing technology and its implementation in clinical practice and life science research, the need for faster and more efficient data analysis methods becomes pressing in the field of sequencing.
no code implementations • 30 Mar 2020 • Junyang Lin, An Yang, Yichang Zhang, Jie Liu, Jingren Zhou, Hongxia Yang
We pretrain the model with three pretraining tasks, including masked segment modeling (MSM), masked region modeling (MRM) and image-text matching (ITM); and finetune the model on a series of vision-and-language downstream tasks.
no code implementations • 3 Mar 2020 • Jie Liu, Jiawen Liu, Zhen Xie, Dong Li
How to accurately and efficiently label data on a mobile device is critical for the success of training machine learning models on mobile devices.
no code implementations • 8 Feb 2020 • Qian Liu, Tao Wang, Jie Liu, Yang Guan, Qi Bu, Longfei Yang
In order to learn powerful feature of videos, we propose a Collaborative Temporal Modeling (CTM) block (Figure 1) to learn temporal information for action recognition.
no code implementations • 7 Feb 2020 • Qian Liu, Dongyang Cai, Jie Liu, Nan Ding, Tao Wang
The standard non-local (NL) module is effective in aggregating frame-level features on the task of video classification but presents low parameters efficiency and high computational cost.
no code implementations • 1 Dec 2019 • Tinghao Zhang, Jingxu Li, Jingfeng Li, Ling Wang, Feng Li, Jie Liu
The greenhouse environment is a key factor influencing crop production.
no code implementations • 1 Dec 2019 • Tinghao Zhang, Jing Luo, Ping Chen, Jie Liu
At high latitudes, many cities adopt a centralized heating system to improve the energy generation efficiency and to reduce pollution.
no code implementations • 18 Nov 2019 • XiaoQian Li, Jie Liu, Shuwu Zhang, GuiXuan Zhang
At present, multi-oriented text detection methods based on deep neural network have achieved promising performances on various benchmarks.
2 code implementations • 12 Nov 2019 • Xinyan Dai, Xiao Yan, Kelvin K. W. Ng, Jie Liu, James Cheng
In this paper, we present a new angle to analyze the quantization error, which decomposes the quantization error into norm error and direction error.
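The norm/direction decomposition can be checked numerically: for a vector x and its quantization q, the squared error splits exactly into a norm term and an angular term. A sketch (the rounding quantizer below is a toy stand-in, not the paper's method):

```python
import numpy as np

x = np.array([0.93, -1.41, 2.58])      # original vector
q = np.round(x * 2) / 2                # toy quantizer: round to nearest 0.5

# Norm error: mismatch in lengths; direction error: angle between x and q.
norm_err = abs(np.linalg.norm(x) - np.linalg.norm(q))
cos_theta = x @ q / (np.linalg.norm(x) * np.linalg.norm(q))
direction_err = np.arccos(np.clip(cos_theta, -1.0, 1.0))

# Exact identity relating total error to the two components:
#   ||x - q||^2 = (||x|| - ||q||)^2 + 2 ||x|| ||q|| (1 - cos theta)
lhs = np.sum((x - q) ** 2)
rhs = norm_err ** 2 + 2 * np.linalg.norm(x) * np.linalg.norm(q) * (1 - cos_theta)
assert np.isclose(lhs, rhs)
```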
no code implementations • 12 Nov 2019 • Liangyi Kang, Jie Liu, Lingqiao Liu, Qinfeng Shi, Dan Ye
Thus, we propose to create auxiliary fact representations from charge definitions to augment fact descriptions representation.
no code implementations • 30 Sep 2019 • Jie Liu, Xiao Yan, Xinyan Dai, Zhirong Li, James Cheng, Ming-Chang Yang
Then we explain the good performance of ip-NSW as matching the norm bias of the MIPS problem: large-norm items have large in-degrees in the ip-NSW proximity graph, and a walk on the graph spends the majority of its computation on these items, thus effectively avoiding unnecessary computation on small-norm items.
no code implementations • 20 Aug 2019 • Yuan Liu, Zhongwei Cheng, Jie Liu, Bourhan Yassin, Zhe Nan, Jiebo Luo
Saving rainforests is key to halting adverse climate change.
BIG-bench Machine Learning Environmental Sound Classification +2
1 code implementation • 1 Jul 2019 • Dimitrios Stamoulis, Ruizhou Ding, Di Wang, Dimitrios Lymberopoulos, Bodhi Priyantha, Jie Liu, Diana Marculescu
In this work, we alleviate the NAS search cost down to less than 3 hours, while achieving state-of-the-art image classification results under mobile latency constraints.
no code implementations • 10 Jun 2019 • Jie Liu, Jiawen Liu, Wan Du, Dong Li
In this paper, we perform a variety of experiments on a representative mobile device (the NVIDIA TX2) to study the performance of training deep learning models.
no code implementations • 10 May 2019 • Dimitrios Stamoulis, Ruizhou Ding, Di Wang, Dimitrios Lymberopoulos, Bodhi Priyantha, Jie Liu, Diana Marculescu
Can we automatically design a Convolutional Network (ConvNet) with the highest image classification accuracy under the latency constraint of a mobile device?
9 code implementations • 5 Apr 2019 • Dimitrios Stamoulis, Ruizhou Ding, Di Wang, Dimitrios Lymberopoulos, Bodhi Priyantha, Jie Liu, Diana Marculescu
Can we automatically design a Convolutional Network (ConvNet) with the highest image classification accuracy under the runtime constraint of a mobile device?
Ranked #892 on Image Classification on ImageNet
no code implementations • 23 Mar 2019 • Zhixin Zhang, Xudong Chen, Jie Liu, Kaibo Zhou
General detectors follow the pipeline that feature maps extracted from ConvNets are shared between classification and regression tasks.
1 code implementation • 7 Jan 2019 • Baoyuan Wu, Weidong Chen, Yanbo Fan, Yong Zhang, Jinlong Hou, Jie Liu, Tong Zhang
In this work, we propose to train CNNs from images annotated with multiple tags, to enhance the quality of visual representation of the trained CNN model.
1 code implementation • 22 Oct 2018 • Xiao Yan, Xinyan Dai, Jie Liu, Kaiwen Zhou, James Cheng
Recently, locality sensitive hashing (LSH) was shown to be effective for MIPS and several algorithms including $L_2$-ALSH, Sign-ALSH and Simple-LSH have been proposed.
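Simple-LSH, mentioned above, reduces MIPS to angular similarity search via an asymmetric augmentation: items scaled to norm at most 1 get `sqrt(1 - ||x||^2)` appended so they all become unit vectors, while the query gets a zero appended. A sketch under that construction (the dataset, dimensions, and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
X = X / np.max(np.linalg.norm(X, axis=1))   # scale so max item norm <= 1

# Augment each item to unit norm; clip guards against tiny negative values.
extra = np.sqrt(np.clip(1 - np.sum(X**2, axis=1, keepdims=True), 0.0, None))
X_aug = np.hstack([X, extra])

q = rng.normal(size=8)
q_aug = np.append(q / np.linalg.norm(q), 0.0)  # normalized query, zero appended

# The augmented inner product is proportional to the original one, so the
# MIPS argmax is preserved, and all items now live on the unit sphere where
# standard angular LSH applies.
assert np.argmax(X @ q) == np.argmax(X_aug @ q_aug)
```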
no code implementations • 3 Oct 2018 • Andrey Ignatov, Radu Timofte, Thang Van Vu, Tung Minh Luu, Trung X. Pham, Cao Van Nguyen, Yongwoo Kim, Jae-Seok Choi, Munchurl Kim, Jie Huang, Jiewen Ran, Chen Xing, Xingguang Zhou, Pengfei Zhu, Mingrui Geng, Yawei Li, Eirikur Agustsson, Shuhang Gu, Luc van Gool, Etienne de Stoutz, Nikolay Kobyshev, Kehui Nie, Yan Zhao, Gen Li, Tong Tong, Qinquan Gao, Liu Hanwen, Pablo Navarrete Michelini, Zhu Dan, Hu Fengshuo, Zheng Hui, Xiumei Wang, Lirui Deng, Rang Meng, Jinghui Qin, Yukai Shi, Wushao Wen, Liang Lin, Ruicheng Feng, Shixiang Wu, Chao Dong, Yu Qiao, Subeesh Vasu, Nimisha Thekke Madam, Praveen Kandula, A. N. Rajagopalan, Jie Liu, Cheolkon Jung
This paper reviews the first challenge on efficient perceptual image enhancement with the focus on deploying deep learning models on smartphones.
no code implementations • 14 Jul 2018 • Jie Liu, Yu Rong, Martin Takac, Junzhou Huang
This paper proposes a framework of L-BFGS based on the (approximate) second-order information with stochastic batches, as a novel approach to the finite-sum minimization problems.
no code implementations • 3 Jul 2018 • Jie Liu, Cheng Sun, Xiang Xu, Baomin Xu, Shuangyuan Yu
In this paper we propose a novel Spatial and Temporal Features Mixture Model (STFMM) based on convolutional neural network (CNN) and recurrent neural network (RNN), in which the human body is split into $N$ parts in the horizontal direction so that we can obtain more specific features.
no code implementations • 10 Apr 2018 • Yingqi Qu, Jie Liu, Liangyi Kang, Qinfeng Shi, Dan Ye
To preserve more original information, we propose an attentive recurrent neural network with similarity matrix based convolutional neural network (AR-SMCNN) model, which is able to capture comprehensive hierarchical information utilizing the advantages of both RNN and CNN.
no code implementations • 27 Mar 2018 • Jie Liu, Hao Zheng
In particular, as the size of the MRF increases, both the numerical performance and the computational cost of our approach remain consistently satisfactory, whereas Laplace approximation deteriorates and pseudolikelihood becomes computationally unbearable.
no code implementations • 20 May 2017 • Lam M. Nguyen, Jie Liu, Katya Scheinberg, Martin Takáč
In this paper, we study and analyze the mini-batch version of StochAstic Recursive grAdient algoritHm (SARAH), a method employing the stochastic recursive gradient, for solving empirical loss minimization for the case of nonconvex losses.
no code implementations • ICML 2017 • Lam M. Nguyen, Jie Liu, Katya Scheinberg, Martin Takáč
In this paper, we propose a StochAstic Recursive grAdient algoritHm (SARAH), as well as its practical variant SARAH+, as a novel approach to the finite-sum minimization problems.
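SARAH's core is the recursive gradient estimate `v_t = grad_i(w_t) - grad_i(w_{t-1}) + v_t-1`, refreshed by one full-gradient pass per outer loop. A minimal sketch on a synthetic least-squares problem (the step size, loop counts, and problem are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
b = A @ w_star                          # consistent problem: minimizer is w_star

def grad_i(w, i):                       # gradient of f_i(w) = 0.5 * (a_i.w - b_i)^2
    return A[i] * (A[i] @ w - b[i])

def full_grad(w):
    return A.T @ (A @ w - b) / n

eta, m = 0.02, 200
w_prev = np.zeros(d)
for _ in range(30):                     # outer loops
    v = full_grad(w_prev)               # one full-gradient pass per outer loop
    w = w_prev - eta * v
    for _ in range(m):                  # inner loop: recursive gradient updates
        i = rng.integers(n)
        v = grad_i(w, i) - grad_i(w_prev, i) + v
        w_prev, w = w, w - eta * v
    w_prev = w
```

Unlike SVRG, the correction term is anchored at the previous iterate rather than at a fixed snapshot, which is what makes the estimator recursive.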
no code implementations • 16 Dec 2016 • Jie Liu, Martin Takac
We propose a projected semi-stochastic gradient descent method with mini-batch for improving both the theoretical complexity and practical performance of the general stochastic gradient descent method (SGD).
1 code implementation • 21 Jun 2016 • Chen Xing, Wei Wu, Yu Wu, Jie Liu, YaLou Huang, Ming Zhou, Wei-Ying Ma
We consider incorporating topic information into the sequence-to-sequence framework to generate informative and interesting responses for chatbots.
no code implementations • 16 Apr 2015 • Jakub Konečný, Jie Liu, Peter Richtárik, Martin Takáč
Our method first performs a deterministic step (computation of the gradient of the objective function at the starting point), followed by a large number of stochastic steps.
no code implementations • 17 Oct 2014 • Jakub Konečný, Jie Liu, Peter Richtárik, Martin Takáč
Our method first performs a deterministic step (computation of the gradient of the objective function at the starting point), followed by a large number of stochastic steps.
no code implementations • BioMedical Engineering OnLine 2014 • Huifang Huang, Jie Liu, Qiang Zhu, Ruiping Wang, Guangshu Hu
This was done in order to improve the classification performance of these two classes of heartbeats by using different features and classification methods.
Ranked #2 on Heartbeat Classification on MIT-BIH AR
no code implementations • 20 Mar 2014 • Yunpeng Li, Ya Li, Jie Liu, Yong Deng
The results of defuzzification at the first step do not coincide with the results of defuzzification at the final step. This suggests that the better alternative is to defuzzify at the final step in fuzzy DEMATEL.
no code implementations • NeurIPS 2013 • Jie Liu, David Page
In large-scale applications of undirected graphical models, such as social networks and biological networks, similar patterns occur frequently and give rise to similar parameters.
no code implementations • 23 Nov 2013 • Yunpeng Li, Jie Liu, Yong Deng
In this paper, we present an illustration of the history of Artificial Intelligence (AI) with a statistical analysis of publications since 1940.