no code implementations • ICML 2020 • Mao Ye, Chengyue Gong, Lizhen Nie, Denny Zhou, Adam Klivans, Qiang Liu
Theoretically, we show that the small networks pruned using our method achieve provably lower loss than small networks of the same size trained from scratch.
1 code implementation • 12 Mar 2024 • Song Tang, Wenxin Su, Mao Ye, Jianwei Zhang, Xiatian Zhu
To tackle this unified SFDA problem, we propose a novel approach called Latent Causal Factors Discovery (LCFD).
1 code implementation • 27 Nov 2023 • Song Tang, Wenxin Su, Mao Ye, Xiatian Zhu
We find that directly applying the ViL model to the target domain in a zero-shot fashion is unsatisfactory, as it is not specialized for this particular task but largely generic.
no code implementations • 21 Sep 2023 • Yanbo Gao, Wenjia Huang, Shuai Li, Hui Yuan, Mao Ye, Siwei Ma
Similar to traditional video coding, LVC inherits motion estimation/compensation, residual coding, and other modules, all of which are implemented with neural networks (NNs).
no code implementations • 31 May 2023 • Mao Ye, Haitao Wang, Zheqian Chen
To solve the problem of poor performance of deep neural network models due to insufficient data, a simple yet effective interpolation-based data augmentation method is proposed: MSMix (Manifold Swap Mixup).
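The abstract describes MSMix only as an interpolation-based augmentation in the style of mixup (the actual method mixes hidden-layer manifolds; the details below are an assumption). A minimal sketch of the underlying interpolation idea, with `mixup_pair` as a hypothetical helper name:

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup_pair(x1, x2, y1, y2, alpha=0.2):
    # Interpolation-based augmentation in the style of mixup.
    # MSMix itself swaps/mixes intermediate (manifold) representations;
    # here we mix raw inputs and labels purely for illustration.
    lam = rng.beta(alpha, alpha)          # mixing coefficient in [0, 1]
    x_mixed = lam * x1 + (1 - lam) * x2   # convex combination of inputs
    y_mixed = lam * y1 + (1 - lam) * y2   # same combination of labels
    return x_mixed, y_mixed
```

The mixed pair is then treated as an ordinary training example, which regularizes the model when labeled data is scarce.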
1 code implementation • IJCAI 2023 • Qichen He, Siying Xiao, Mao Ye, Xiatian Zhu, Ferrante Neri and Dongde Hou
Existing Unsupervised Domain Adaptation (UDA) methods typically attempt to perform knowledge transfer in a domain-invariant space explicitly or implicitly.
no code implementations • ICCV 2023 • Mao Ye, Gregory P. Meyer, Yuning Chai, Qiang Liu
Although halting a token is a non-differentiable operation, our method allows for differentiable end-to-end learning by leveraging an equivalent differentiable forward-pass.
1 code implementation • ICCV 2023 • Lihua Zhou, Mao Ye, Xiatian Zhu, Siying Xiao, Xu-Qian Fan, Ferrante Neri
With distribution alignment, it is challenging to acquire a common space that fully maintains the discriminative structure of both domains.
1 code implementation • 20 Sep 2022 • Tongda Xu, Han Gao, Chenjian Gao, Yuanyuan Wang, Dailan He, Jinyong Pi, Jixiang Luo, Ziyu Zhu, Mao Ye, Hongwei Qin, Yan Wang, Jingjing Liu, Ya-Qin Zhang
In this paper, we consider the problem of bit allocation in Neural Video Compression (NVC).
no code implementations • 19 Sep 2022 • Mao Ye, Bo Liu, Stephen Wright, Peter Stone, Qiang Liu
Bilevel optimization (BO) is useful for solving a variety of important machine learning problems including but not limited to hyperparameter optimization, meta-learning, continual learning, and reinforcement learning.
no code implementations • 2 Sep 2022 • Mao Ye, Ruichen Jiang, Haoxiang Wang, Dhruv Choudhary, Xiaocong Du, Bhargav Bhushanam, Aryan Mokhtari, Arun Kejariwal, Qiang Liu
One of the key challenges of learning an online recommendation model is the temporal domain shift, which causes the mismatch between the training and testing data distribution and hence domain generalization error.
no code implementations • 2 Sep 2022 • Lemeng Wu, Chengyue Gong, Xingchao Liu, Mao Ye, Qiang Liu
AI-based molecule generation provides a promising approach to a large area of biomedical sciences and engineering, such as antibody design, hydrolase engineering, or vaccine development.
no code implementations • 2 Sep 2022 • Mao Ye, Lemeng Wu, Qiang Liu
We propose a family of First Hitting Diffusion Models (FHDM), deep generative models that generate data with a diffusion process that terminates at a random first hitting time.
no code implementations • 31 Aug 2022 • Xingchao Liu, Lemeng Wu, Mao Ye, Qiang Liu
Diffusion-based generative models have achieved promising results recently, but raise an array of open questions in terms of conceptual understanding, theoretical analysis, algorithm improvement and extensions to discrete, structured, non-Euclidean domains.
3 code implementations • 23 Aug 2022 • Ren Yang, Radu Timofte, Qi Zhang, Lin Zhang, Fanglong Liu, Dongliang He, Fu Li, He Zheng, Weihang Yuan, Pavel Ostyakov, Dmitry Vyal, Magauiya Zhussip, Xueyi Zou, Youliang Yan, Lei LI, Jingzhu Tang, Ming Chen, Shijie Zhao, Yu Zhu, Xiaoran Qin, Chenghua Li, Cong Leng, Jian Cheng, Claudio Rota, Marco Buzzelli, Simone Bianco, Raimondo Schettini, Dafeng Zhang, Feiyu Huang, Shizhuo Liu, Xiaobing Wang, Zhezhu Jin, Bingchen Li, Xin Li, Mingxi Li, Ding Liu, Wenbin Zou, Peijie Dong, Tian Ye, Yunchen Zhang, Ming Tan, Xin Niu, Mustafa Ayazoglu, Marcos Conde, Ui-Jin Choi, Zhuang Jia, Tianyu Xu, Yijian Zhang, Mao Ye, Dengyan Luo, Xiaofeng Pan, Liuhan Peng
The homepage of this challenge is at https://github.com/RenYang-home/AIM22_CompressSR.
1 code implementation • The 31st International Joint Conference On Artificial Intelligence 2022 • Hu Wang, Mao Ye, Xiatian Zhu, Shuai Li, Ce Zhu, Xue Li
Recently, with the rise of high dynamic range (HDR) display devices, there is a great demand to transfer traditional low dynamic range (LDR) images into HDR versions.
no code implementations • 11 May 2022 • Mao Ye, Chenxi Liu, Maoqing Yao, Weiyue Wang, Zhaoqi Leng, Charles R. Qi, Dragomir Anguelov
While multi-class 3D detectors are needed in many robotics applications, training them with fully labeled datasets can be expensive in labeling cost.
no code implementations • 29 Apr 2022 • Long Chen, Mao Ye, Alistair Milne, John Hillier, Frances Oglesby
This report, commissioned by the WTW research network, investigates the use of AI in property risk assessment.
2 code implementations • 20 Apr 2022 • Ren Yang, Radu Timofte, Meisong Zheng, Qunliang Xing, Minglang Qiao, Mai Xu, Lai Jiang, Huaida Liu, Ying Chen, Youcheng Ben, Xiao Zhou, Chen Fu, Pei Cheng, Gang Yu, Junyi Li, Renlong Wu, Zhilu Zhang, Wei Shang, Zhengyao Lv, Yunjin Chen, Mingcai Zhou, Dongwei Ren, Kai Zhang, WangMeng Zuo, Pavel Ostyakov, Vyal Dmitry, Shakarim Soltanayev, Chervontsev Sergey, Zhussip Magauiya, Xueyi Zou, Youliang Yan, Pablo Navarrete Michelini, Yunhua Lu, Diankai Zhang, Shaoli Liu, Si Gao, Biao Wu, Chengjian Zheng, Xiaofeng Zhang, Kaidi Lu, Ning Wang, Thuong Nguyen Canh, Thong Bach, Qing Wang, Xiaopeng Sun, Haoyu Ma, Shijie Zhao, Junlin Li, Liangbin Xie, Shuwei Shi, Yujiu Yang, Xintao Wang, Jinjin Gu, Chao Dong, Xiaodi Shi, Chunmei Nian, Dong Jiang, Jucai Lin, Zhihuai Xie, Mao Ye, Dengyan Luo, Liuhan Peng, Shengjie Chen, Qian Wang, Xin Liu, Boyang Liang, Hang Dong, Yuhao Huang, Kai Chen, Xingbei Guo, Yujing Sun, Huilei Wu, Pengxu Wei, Yulin Huang, Junying Chen, Ik Hyun Lee, Sunder Ali Khowaja, Jiseok Yoon
This challenge includes three tracks.
1 code implementation • CVPR 2022 • Shuaifeng Li, Mao Ye, Xiatian Zhu, Lihua Zhou, Lin Xiong
This approach suffers from both unsatisfactory accuracy of pseudo labels due to the presence of domain shift and limited use of target domain training data.
no code implementations • NeurIPS 2021 • Chengyue Gong, Mao Ye, Qiang Liu
We propose a general method to construct centroid approximation for the distribution of maximum points of a random function (a.k.a.
no code implementations • 17 Oct 2021 • Mao Ye, Qiang Liu
The notion of the Pareto set allows us to focus on the set of (often infinite number of) models that cannot be strictly improved.
no code implementations • 17 Oct 2021 • Mao Ye, Qiang Liu
In this work, we propose an efficient method to explicitly \emph{optimize} a small set of high quality ``centroid'' points to better approximate the ideal bootstrap distribution.
no code implementations • 29 Sep 2021 • Mao Ye, Qiang Liu
The notion of the Pareto set allows us to focus on the set of (often infinite number of) models that cannot be strictly improved.
no code implementations • CVPR 2021 • Chengyue Gong, Tongzheng Ren, Mao Ye, Qiang Liu
The idea is to generate a set of augmented data with some random perturbations or transforms, and minimize the maximum, or worst case loss over the augmented data.
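The described idea, generate several randomly perturbed copies of each example and minimize the maximum loss over them, can be sketched in a few lines. This is an illustrative toy (squared-error loss on a linear model), not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w, x, y):
    # Simple squared-error loss for a linear model (illustrative only).
    return float((w @ x - y) ** 2)

def worst_case_loss(w, x, y, n_aug=4, noise=0.1):
    # Generate a set of randomly perturbed copies of the input and keep
    # only the maximum (worst-case) loss over the augmented set; training
    # then minimizes this worst-case value instead of the average.
    losses = [loss(w, x + noise * rng.standard_normal(x.shape), y)
              for _ in range(n_aug)]
    return max(losses)
```

Minimizing the max (rather than the mean) over augmented copies acts as a smoothness regularizer around each training point.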
1 code implementation • 14 Mar 2021 • Lizhen Nie, Mao Ye, Qiang Liu, Dan Nicolae
Motivated by the rising abundance of observational data with continuous treatments, we investigate the problem of estimating the average dose-response curve (ADRF).
no code implementations • 1 Feb 2021 • Yong Zhang, Mao Ye, Lin Guan
The original contributions of this paper are summarized as follows: (1) Model the packet collision probability of broadcast or NACK transmission in VANET using combinatorial theory, and investigate the potential influence of the "miss my packets" (MMP) problem.
Networking and Internet Architecture
no code implementations • ICLR 2021 • Lizhen Nie, Mao Ye, Qiang Liu, Dan Nicolae
With the rising abundance of observational data with continuous treatments, we investigate the problem of estimating the average dose-response curve (ADRF).
1 code implementation • NeurIPS 2020 • Mao Ye, Lemeng Wu, Qiang Liu
Despite the great success of deep learning, recent works show that large deep neural networks are often highly redundant and can be significantly reduced in size.
no code implementations • 16 Oct 2020 • Mao Ye, Dhruv Choudhary, Jiecao Yu, Ellie Wen, Zeliang Chen, Jiyan Yang, Jongsoo Park, Qiang Liu, Arun Kejariwal
To the best of our knowledge, this is the first work to provide in-depth analysis and discussion of applying pruning to online recommendation systems with non-stationary data distribution.
no code implementations • ICML 2020 • Denny Zhou, Mao Ye, Chen Chen, Tianjian Meng, Mingxing Tan, Xiaodan Song, Quoc Le, Qiang Liu, Dale Schuurmans
This is achieved by layerwise imitation, that is, forcing the thin network to mimic the intermediate outputs of the wide network from layer to layer.
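The layerwise-imitation objective described above, matching the thin network's intermediate outputs to the wide teacher's layer by layer, reduces to a sum of per-layer discrepancies. A minimal sketch (assuming matching activation shapes; in practice a projection may be needed, and `layerwise_imitation_loss` is a hypothetical name):

```python
import numpy as np

def layerwise_imitation_loss(thin_acts, wide_acts):
    # Sum of per-layer mean squared errors between the thin network's
    # intermediate outputs and the wide teacher's intermediate outputs.
    # Minimizing this forces the thin network to mimic the teacher
    # layer by layer rather than only at the final output.
    return sum(float(np.mean((t - w) ** 2))
               for t, w in zip(thin_acts, wide_acts))
```

This per-layer supervision gives the thin network a richer training signal than matching the final logits alone.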
no code implementations • 29 May 2020 • Yan Min, Mao Ye, Liang Tian, Yulin Jian, Ce Zhu, Shangming Yang
Our main contributions are a novel feature selection approach that uses multi-step transition probabilities to characterize the data structure, and three algorithms, proposed from the positive and negative aspects, for preserving the data structure.
1 code implementation • ACL 2020 • Mao Ye, Chengyue Gong, Qiang Liu
For security reasons, it is of critical importance to develop models with certified robustness that can provably guarantee that the prediction cannot be altered by any possible synonymous word substitution.
no code implementations • 28 May 2020 • Chenpeng Zhang, Shuai Li, Mao Ye, Ce Zhu, Xue Li
Many variants of RNN have been proposed to solve the gradient problems of training RNNs and process long sequences.
no code implementations • 28 May 2020 • Lihua Zhou, Mao Ye, Xinpeng Li, Ce Zhu, Yiguang Liu, Xue Li
By this reconstructor, we can construct prototypes for the original features using class prototypes and domain prototypes correspondingly.
no code implementations • 23 Mar 2020 • Lemeng Wu, Mao Ye, Qi Lei, Jason D. Lee, Qiang Liu
Recently, Liu et al.[19] proposed a splitting steepest descent (S2D) method that jointly optimizes the neural parameters and architectures based on progressively growing network structures by splitting neurons into multiple copies in a steepest descent fashion.
no code implementations • 13 Mar 2020 • Hanbin Dai, Liangbo Zhou, Feng Zhang, Zhengyu Zhang, Hong Hu, Xiatian Zhu, Mao Ye
Taking them together, we formulate a novel Distribution-Aware coordinate Representation for Keypoint (DARK) method.
1 code implementation • 3 Mar 2020 • Mao Ye, Chengyue Gong, Lizhen Nie, Denny Zhou, Adam Klivans, Qiang Liu
This differs from the existing methods based on backward elimination, which remove redundant neurons from the large network.
1 code implementation • NeurIPS 2020 • Mao Ye, Tongzheng Ren, Qiang Liu
Our idea is to introduce Stein variational gradient as a repulsive force to push the samples of Langevin dynamics away from the past trajectories.
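The repulsive mechanism described above, a Stein-variational kernel term pushing Langevin samples away from past trajectory points, can be sketched with an RBF kernel on a toy Gaussian target. This is an illustrative sketch under those assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_log_p(x):
    # Score function of a standard Gaussian target (illustrative choice).
    return -x

def rbf_repulsion(x, history, h=1.0):
    # RBF-kernel gradient term pushing the current sample x away from
    # the past trajectory points stored in `history` (shape (n, d)).
    diffs = x - history
    w = np.exp(-np.sum(diffs ** 2, axis=1) / h)   # kernel weights
    return (2.0 / h) * (w[:, None] * diffs).sum(axis=0)

def repulsive_langevin(n_steps=200, step=0.05, alpha=0.1, d=2):
    # Langevin dynamics with an added repulsive drift away from the
    # sampler's own past trajectory, encouraging diverse samples.
    x = rng.standard_normal(d)
    history = [x.copy()]
    for _ in range(n_steps):
        drift = grad_log_p(x) + alpha * rbf_repulsion(x, np.array(history))
        x = x + step * drift + np.sqrt(2 * step) * rng.standard_normal(d)
        history.append(x.copy())
    return np.array(history)
```

With `alpha = 0` this reduces to plain unadjusted Langevin dynamics; the repulsive term trades a small amount of bias for better coverage of the target's modes.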
no code implementations • NeurIPS 2020 • Dinghuai Zhang, Mao Ye, Chengyue Gong, Zhanxing Zhu, Qiang Liu
Randomized classifiers have been shown to provide a promising approach for achieving certified robustness against adversarial attacks in deep learning.
1 code implementation • 20 Feb 2020 • Chengyue Gong, Tongzheng Ren, Mao Ye, Qiang Liu
The idea is to generate a set of augmented data with some random perturbations or transforms and minimize the maximum, or worst case loss over the augmented data.
Ranked #188 on Image Classification on ImageNet
no code implementations • 20 Feb 2020 • Xingchao Liu, Mao Ye, Dengyong Zhou, Qiang Liu
We propose multipoint quantization, a quantization method that approximates a full-precision weight vector using a linear combination of multiple vectors of low-bit numbers; this is in contrast to typical quantization methods that approximate each weight using a single low precision number.
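The abstract specifies the approximation target, a linear combination of multiple low-bit vectors, but not the fitting procedure. A hedged sketch using a uniform symmetric quantizer and greedy least-squares residual fitting (both assumptions; `multipoint_quantize` is a hypothetical name):

```python
import numpy as np

def quantize_low_bit(v, bits=2):
    # Round a vector onto a small symmetric uniform grid of 2**bits
    # levels (assumption: the paper's exact quantizer may differ).
    levels = 2 ** bits
    scale = np.max(np.abs(v)) / (levels // 2) + 1e-12
    return np.clip(np.round(v / scale),
                   -(levels // 2), levels // 2 - 1) * scale

def multipoint_quantize(w, n_points=3, bits=2):
    # Greedily approximate w by a linear combination of low-bit vectors:
    # quantize the current residual, then solve for the least-squares
    # coefficient of that quantized direction.
    residual = w.copy()
    approx = np.zeros_like(w)
    for _ in range(n_points):
        q = quantize_low_bit(residual, bits)
        if np.allclose(q, 0):
            break
        a = (residual @ q) / (q @ q)   # 1-D least-squares coefficient
        approx += a * q
        residual = w - approx
    return approx
```

Each greedy step can only shrink the residual, so using more low-bit vectors never increases the approximation error, which is the contrast with single-point quantization the abstract draws.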
1 code implementation • 7 Feb 2020 • Qifan Song, Yan Sun, Mao Ye, Faming Liang
Stochastic gradient Markov chain Monte Carlo (MCMC) algorithms have received much attention in Bayesian computing for big data problems, but they are only applicable to a small class of problems for which the parameter space has a fixed dimension and the log-posterior density is differentiable with respect to the parameters.
6 code implementations • CVPR 2020 • Feng Zhang, Xiatian Zhu, Hanbin Dai, Mao Ye, Ce Zhu
Interestingly, we found that the process of decoding the predicted heatmaps into the final joint coordinates in the original image space is surprisingly significant for human pose estimation performance, which nevertheless was not recognised before.
Ranked #2 on Multi-Person Pose Estimation on MS COCO (using extra training data)
no code implementations • 3 May 2019 • Xiong Deng, Chao Chen, Deyang Chen, Xiangbin Cai, Xiaozhe Yin, Chao Xu, Fei Sun, Caiwen Li, Yan Li, Han Xu, Mao Ye, Guo Tian, Zhen Fan, Zhipeng Hou, Minghui Qin, Yu Chen, Zhenlin Luo, Xubing Lu, Guofu Zhou, Lang Chen, Ning Wang, Ye Zhu, Xingsen Gao, Jun-Ming Liu
The limitation of commercially available single-crystal substrates and the lack of continuous strain tunability preclude the ability to take full advantage of strain engineering for further exploring novel properties and exhaustively studying fundamental physics in complex oxides.
Materials Science
1 code implementation • CVPR 2019 • Feng Zhang, Xiatian Zhu, Mao Ye
In this work, we investigate the under-studied but practically critical pose model efficiency problem.
Ranked #9 on Pose Estimation on Leeds Sports Poses
1 code implementation • 8 Oct 2018 • Tianyang Hu, Zixiang Chen, Hanxi Sun, Jincheng Bai, Mao Ye, Guang Cheng
We propose two novel samplers to generate high-quality samples from a given (un-normalized) probability density.
no code implementations • ICML 2018 • Mao Ye, Yan Sun
We propose a variable selection method for high dimensional regression models, which allows for complex, nonlinear, and high-order interactions among variables.
no code implementations • 17 Oct 2017 • Bilal Alsallakh, Amin Jourabloo, Mao Ye, Xiaoming Liu, Liu Ren
We present visual-analytics methods to reveal and analyze this hierarchy of similar classes in relation with CNN-internal data.
no code implementations • ICCV 2017 • Amin Jourabloo, Mao Ye, Xiaoming Liu, Liu Ren
Face alignment has witnessed substantial progress in the last decade.
Ranked #12 on Facial Landmark Detection on 300W
no code implementations • CVPR 2015 • Mao Ye, Yu Zhang, Ruigang Yang, Dinesh Manocha
We present a novel sensor fusion algorithm that first segments the depth map into different categories such as opaque/transparent/infinity (e.g., too far to measure) and then updates the depth map based on the segmentation outcome.
no code implementations • CVPR 2014 • Qing Zhang, Bo Fu, Mao Ye, Ruigang Yang
In this paper we present a novel autonomous pipeline to build a personalized parametric model (pose-driven avatar) using a single depth sensor.
no code implementations • CVPR 2014 • Mao Ye, Ruigang Yang
In this paper we present a novel real-time algorithm for simultaneous pose and shape estimation for articulated objects, such as human beings and animals.
no code implementations • CVPR 2014 • Chenxi Zhang, Mao Ye, Bo Fu, Ruigang Yang
Each segmented petal is then fitted with a scale-invariant morphable petal shape model, which is constructed from individually scanned exemplar petals.
no code implementations • CVPR 2013 • Mao Ye, Cha Zhang, Ruigang Yang
With the widespread adoption of consumer 3D-TV technology, stereoscopic videoconferencing systems are emerging.
no code implementations • 4 Sep 2011 • Mao Ye, Xingjie Liu, Wang-Chien Lee
The experimental results also confirm that our social influence based group recommendation algorithm outperforms the state-of-the-art algorithms for group recommendation.