1 code implementation • 18 Jan 2024 • Xuangeng Chu, Yu Li, Ailing Zeng, Tianyu Yang, Lijian Lin, Yunfei Liu, Tatsuya Harada
Head avatar reconstruction, crucial for applications in virtual reality, online meetings, gaming, and film industries, has garnered substantial attention within the computer vision community.
no code implementations • 9 Jan 2024 • Junming Chen, Yunfei Liu, Jianan Wang, Ailing Zeng, Yu Li, Qifeng Chen
We propose DiffSHEG, a Diffusion-based approach for Speech-driven Holistic 3D Expression and Gesture generation with arbitrary length.
no code implementations • 10 Dec 2023 • Maomao Li, Yu Li, Tianyu Yang, Yunfei Liu, Dongxu Yue, Zhihui Lin, Dong Xu
This paper presents a video inversion approach for zero-shot video editing, which aims to model the input video with a low-rank representation during the inversion process.
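The core idea of a low-rank video representation can be sketched with a generic truncated SVD over stacked frames — a minimal illustration of low-rank modeling, not the paper's specific inversion method:

```python
import numpy as np

def low_rank_approx(frames, rank=2):
    """Stack flattened video frames as rows of a matrix and keep only the
    top singular components, yielding a low-rank approximation of the clip.
    Illustrative only; the paper's method operates inside a diffusion
    inversion process rather than on raw pixels."""
    t = frames.shape[0]
    mat = frames.reshape(t, -1)                       # (frames, pixels)
    u, s, vt = np.linalg.svd(mat, full_matrices=False)
    approx = (u[:, :rank] * s[:rank]) @ vt[:rank]     # truncated reconstruction
    return approx.reshape(frames.shape)

# toy clip: three frames that are scalar multiples of one base frame (rank 1)
base = np.array([[1.0, 2.0], [3.0, 4.0]])
clip = np.stack([base * c for c in (1.0, 2.0, 3.0)])
recon = low_rank_approx(clip, rank=1)
```

Because the toy clip is exactly rank one, a rank-1 truncation reconstructs it perfectly; real videos are only approximately low-rank, which is what makes the representation compact.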
no code implementations • 28 Nov 2023 • Jintang Li, Jiawang Dan, Ruofan Wu, Jing Zhou, Sheng Tian, Yunfei Liu, Baokun Wang, Changhua Meng, Weiqiang Wang, Yuchang Zhu, Liang Chen, Zibin Zheng
Over the past few years, graph neural networks (GNNs) have become powerful and practical tools for learning on (static) graph-structured data.
no code implementations • 1 Aug 2023 • Chaoqun Zhuang, Yunfei Liu, Sijia Wen, Feng Lu
The simulation generates a dataset with ground truths using the proposed low-light hazy imaging model.
no code implementations • ICCV 2023 • Yunfei Liu, Lijian Lin, Fei Yu, Changyin Zhou, Yu Li
Audio-driven portrait animation aims to synthesize portrait videos conditioned on given audio.
no code implementations • 20 Jun 2023 • Liying Lu, Tianke Zhang, Yunfei Liu, Xuangeng Chu, Yu Li
Consequently, their generalization capability remains limited.
no code implementations • ICCV 2023 • Tianke Zhang, Xuangeng Chu, Yunfei Liu, Lijian Lin, Zhendong Yang, Zhengzhuo Xu, Chengkun Cao, Fei Yu, Changyin Zhou, Chun Yuan, Yu Li
However, current deep learning-based methods for 3D face tracking on video data face significant challenges: achieving accurate reconstruction with disentangled facial parameters, and ensuring temporal stability when single-frame methods are applied.
no code implementations • 5 Oct 2022 • Ruicong Liu, Yiwei Bao, Mingjie Xu, Haofei Wang, Yunfei Liu, Feng Lu
We evaluate the proposed method on four cross-domain gaze estimation tasks, and experimental results demonstrate that it significantly reduces the gaze jitter and improves the gaze estimation performance in target domains.
no code implementations • 25 Sep 2022 • Zongji Wang, Yunfei Liu, Feng Lu
Intrinsic image decomposition is an important and long-standing computer vision problem.
1 code implementation • 10 Aug 2022 • Huarui He, Jie Wang, Yunfei Liu, Feng Wu
The goal of single-step retrosynthesis is to identify the possible reactants that lead to the synthesis of the target product in one reaction.
no code implementations • CVPR 2022 • Mingfang Zhang, Yunfei Liu, Feng Lu
In this paper, we propose the first one-stage end-to-end gaze estimation method, GazeOnce, which is capable of simultaneously predicting gaze directions for multiple faces (>10) in an image.
no code implementations • CVPR 2022 • Yiwei Bao, Yunfei Liu, Haofei Wang, Feng Lu
Consequently, we propose the Rotation-enhanced Unsupervised Domain Adaptation (RUDA) for gaze estimation.
no code implementations • 21 Dec 2021 • Zongji Wang, Yunfei Liu, Feng Lu
In the area of 3D shape analysis, the geometric properties of a shape have long been studied.
1 code implementation • 27 Oct 2021 • Yunfei Liu, Haofei Wang, Yang Yue, Feng Lu
Unsupervised image-to-image translation aims to learn the mapping between two visual domains with unpaired samples.
1 code implementation • ICCV 2021 • Yunfei Liu, Ruicong Liu, Haofei Wang, Feng Lu
Deep neural networks have significantly improved appearance-based gaze estimation accuracy.
no code implementations • 24 Mar 2021 • Mingjie Xu, Haofei Wang, Yunfei Liu, Feng Lu
However, many deep learning-based methods suffer from a vulnerability: perturbing the raw image with noise confuses the gaze estimation models.
no code implementations • 22 Mar 2021 • Yunfei Liu, Chaoqun Zhuang, Feng Lu
Anomaly detection from a single image is challenging since anomalous data is rare and of highly unpredictable types.
Ranked #19 on Anomaly Detection on One-class CIFAR-10
no code implementations • 20 Mar 2021 • Yiwei Bao, Yihua Cheng, Yunfei Liu, Feng Lu
We also propose Adaptive Group Normalization to recalibrate eye features under the guidance of facial features.
1 code implementation • 9 Dec 2020 • Yunfei Liu, Yang Yang, Xianyu Chen, Jian Shen, Haifeng Zhang, Yong Yu
Knowledge tracing (KT) defines the task of predicting whether students can correctly answer questions based on their historical responses.
Ranked #3 on Knowledge Tracing on EdNet
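The KT task itself can be illustrated with a trivial baseline that scores the probability of a correct next answer from a student's response history — a deliberately minimal sketch (real KT models such as this paper's use learned sequence models, and the smoothing here is an assumption for illustration):

```python
def predict_next_correct(history, prior=0.5):
    """Estimate the probability that a student answers the next question
    correctly as a smoothed running accuracy over past responses
    (1 = correct, 0 = wrong). Laplace-style smoothing pulls the estimate
    toward `prior` when the history is short or empty."""
    n_correct = sum(history)
    return (n_correct + prior) / (len(history) + 1)

# a student who answered three questions correctly in a row
p = predict_next_correct([1, 1, 1])   # (3 + 0.5) / 4 = 0.875
```

With no history the baseline falls back to the prior, which is the behavior a learned KT model replaces with per-question and per-skill structure.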
3 code implementations • 13 Sep 2020 • Yang Yang, Jian Shen, Yanru Qu, Yunfei Liu, Kerong Wang, Yaoming Zhu, Wei-Nan Zhang, Yong Yu
With the rapid development in online education, knowledge tracing (KT) has become a fundamental problem which traces students' knowledge status and predicts their performance on new questions.
Ranked #7 on Knowledge Tracing on EdNet
3 code implementations • ECCV 2020 • Yunfei Liu, Xingjun Ma, James Bailey, Feng Lu
A backdoor attack installs a backdoor into the victim model by injecting a backdoor pattern into a small proportion of the training data.
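The poisoning setup described above — injecting a trigger pattern into a small fraction of the training data and relabeling it to an attacker-chosen class — can be sketched as follows (a generic patch-trigger example; the patch location, size, and parameter names are illustrative, not this paper's specific attack):

```python
import numpy as np

def poison_dataset(images, labels, target_label=0, poison_rate=0.05, seed=0):
    """Inject a small white trigger patch into a random fraction of the
    training images and relabel those samples to the attacker's target
    class. A model trained on this data learns to associate the trigger
    with the target label (the backdoor)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i, -4:, -4:] = 1.0    # 4x4 trigger in the bottom-right corner
        labels[i] = target_label     # flip the label to the target class
    return images, labels, idx

# toy usage: 100 blank 8x8 grayscale images with cyclic labels
imgs = np.zeros((100, 8, 8))
labs = np.arange(100) % 10
p_imgs, p_labs, idx = poison_dataset(imgs, labs, target_label=3, poison_rate=0.05)
```

At test time, any input stamped with the same trigger is misclassified as the target class, while clean inputs behave normally — which is what makes backdoors hard to detect.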
2 code implementations • CVPR 2020 • Yunfei Liu, Yu Li, ShaoDi You, Feng Lu
Intrinsic image decomposition, which is an essential task in computer vision, aims to infer the reflectance and shading of the scene.
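The intrinsic image model underlying this task is the standard per-pixel product of reflectance and shading, I = R × S; decomposition inverts this (ill-posed) relation. A minimal sketch of the forward model:

```python
import numpy as np

def reconstruct(reflectance, shading):
    """Intrinsic image model: the observed image is the per-pixel product
    of reflectance (albedo) and shading, I = R * S. Decomposition methods
    recover R and S given only I."""
    return reflectance * shading

# constant-albedo surface under spatially varying illumination
R = np.full((2, 2), 0.5)
S = np.array([[1.0, 0.8], [0.6, 0.4]])
I = reconstruct(R, S)
```

Because infinitely many (R, S) pairs multiply to the same I, learned priors over reflectance and shading are what make the inverse problem tractable.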
1 code implementation • 27 Jul 2019 • Yunfei Liu, Yu Li, ShaoDi You, Feng Lu
Reflection is common in images capturing scenes behind a glass window; it is not only visually disturbing but also degrades the performance of other computer vision algorithms.
no code implementations • 3 Jun 2019 • Yunfei Liu, Feng Lu
Many real world vision tasks, such as reflection removal from a transparent surface and intrinsic image decomposition, can be modeled as single image layer separation.
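Layer separation tasks like these are commonly modeled as recovering two layers from their additive mixture. A minimal sketch of that forward model, assuming a simple linear blend (the mixing coefficient is an illustrative assumption; real formation models can be more complex):

```python
import numpy as np

def compose_layers(background, reflection, alpha=0.7):
    """Additive single-image layer model often assumed in reflection
    removal: the observation mixes a transmitted background layer with a
    reflection layer. Layer separation inverts this mixing, which is
    ill-posed from a single image without learned priors."""
    return alpha * background + (1.0 - alpha) * reflection

# toy observation: bright background, dark reflection
obs = compose_layers(np.ones((2, 2)), np.zeros((2, 2)), alpha=0.7)
```

The same additive template covers intrinsic-style tasks when the mixture is taken in log space, which is one reason a single network can address several layer-separation problems.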
no code implementations • 16 Apr 2019 • Huangyue Yu, Minjie Cai, Yunfei Liu, Feng Lu
However, techniques for analyzing first-person video can be fundamentally different from those for third-person video, and it is even more difficult to exploit the information shared between the two viewpoints.