1 code implementation • ECCV 2020 • Zheng Xie, Zhiquan Wen, Jing Liu, Zhi-Qiang Liu, Xixian Wu, Mingkui Tan
Specifically, we propose a method named deep transferring quantization (DTQ) to effectively exploit the knowledge in a pre-trained full-precision model.
1 code implementation • ACL 2019 • Zhi-Qiang Liu, Zuohui Fu, Jie Cao, Gerard de Melo, Yik-Cheung Tam, Cheng Niu, Jie zhou
Rhetoric is a vital element in modern poetry, and plays an essential role in improving its aesthetics.
1 code implementation • 4 May 2019 • Yushu Chen, Hao Jing, Wenlai Zhao, Zhi-Qiang Liu, Ouyi Li, Liang Qiao, Wei Xue, Guangwen Yang
RSG is further combined with adaptive methods to construct ARSG for acceleration.
no code implementations • 16 Jul 2018 • Ruiguo Yu, Zhi-Qiang Liu, Xuewei Li, Wenhuan Lu, Mei Yu, Jianrong Wang, Bin Li
Much research has been based on time series of wind power or wind speed, but such time series cannot express the temporal and spatial variation of wind, which fundamentally hinders progress in wind power prediction.
no code implementations • 13 Jun 2014 • Sijin Li, Zhi-Qiang Liu, Antoni B. Chan
We propose a heterogeneous multi-task learning framework for human pose estimation from monocular images using a deep convolutional neural network.
no code implementations • 19 May 2014 • Wei Feng, Jiaya Jia, Zhi-Qiang Liu
From our study, we make practical recommendations for combining the existing methods that perform best in different situations for this challenging problem.
no code implementations • 17 Nov 2013 • Jian-Feng Yan, Jia Zeng, Zhi-Qiang Liu, Yang Gao
Although parallel LDA algorithms on the multi-processor architecture have low time and space complexities, their communication costs among processors often scale linearly with the vocabulary size and the number of topics, leading to a serious scalability problem.
no code implementations • 8 Oct 2012 • Jia Zeng, Zhi-Qiang Liu, Xiao-Qin Cao
The expectation-maximization (EM) algorithm computes maximum-likelihood (ML) or maximum a posteriori (MAP) point estimates for mixture models and latent variable models such as latent Dirichlet allocation (LDA), which has been one of the most popular probabilistic topic modeling methods of the past decade.
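As background for this entry, the EM iteration the abstract refers to can be illustrated on the simplest case, a two-component 1-D Gaussian mixture. This is a generic sketch of EM for ML estimation, not the paper's LDA-specific algorithm; the function name and NumPy implementation are illustrative choices.

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """Minimal EM for a two-component 1-D Gaussian mixture (illustrative sketch)."""
    # Initialize mixing weights, means, and variances.
    w = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])
    var = np.array([x.var(), x.var()])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        dens = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities.
        n_k = resp.sum(axis=0)
        w = n_k / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / n_k
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k
    return w, mu, var

# Synthetic data from two well-separated Gaussians.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(3.0, 1.0, 500)])
w, mu, var = em_gmm_1d(x)
```

In LDA the latent variables are per-word topic assignments rather than component labels, but the alternation is the same: infer posterior responsibilities given current parameters, then re-estimate parameters from those responsibilities.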
no code implementations • 1 Apr 2012 • Jia Zeng, Zhi-Qiang Liu, Xiao-Qin Cao
To accelerate training, ABP actively scans a subset of the corpus and searches a subset of the topic space for topic modeling, thereby saving substantial training time in each iteration.