no code implementations • 20 Mar 2024 • Yu Deng, Duomin Wang, Baoyuan Wang
In this paper, we propose a novel learning approach for feed-forward one-shot 4D head avatar synthesis.
no code implementations • 7 Dec 2023 • Shuliang Ning, Duomin Wang, Yipeng Qin, Zirong Jin, Baoyuan Wang, Xiaoguang Han
Unlike prior work constrained to specific input types, our method allows flexible specification of both style (text or image) and texture (full garment, cropped sections, or texture patches) conditions.
no code implementations • 30 Nov 2023 • Yu Deng, Duomin Wang, Xiaohang Ren, Xingyu Chen, Baoyuan Wang
The key is to first learn a part-wise 4D generative model from monocular images via adversarial learning, synthesizing multi-view images of diverse identities and full motions as training data, and then leverage a transformer-based animatable triplane reconstructor to learn 4D head reconstruction from the synthetic data.
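As an illustration of the triplane representation such a reconstructor builds on, the minimal sketch below shows how features for 3D query points can be gathered from three axis-aligned feature planes by projection and bilinear sampling. The function name, tensor shapes, and PyTorch usage are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch (assumption, not the paper's code): sampling features for 3D
# query points from an EG3D-style triplane representation.
import torch
import torch.nn.functional as F

def sample_triplane(planes: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
    """planes: (B, 3, C, H, W) feature planes for the XY, XZ, and YZ planes.
    points: (B, N, 3) query coordinates, assumed normalized to [-1, 1].
    Returns (B, N, C) features, averaged over the three planes."""
    feats = []
    # Project each 3D point onto the three axis-aligned planes.
    for i, axes in enumerate(([0, 1], [0, 2], [1, 2])):
        grid = points[..., axes].unsqueeze(1)        # (B, 1, N, 2) for grid_sample
        plane = planes[:, i]                         # (B, C, H, W)
        sampled = F.grid_sample(plane, grid, mode="bilinear", align_corners=False)
        feats.append(sampled.squeeze(2).transpose(1, 2))  # (B, N, C)
    return torch.stack(feats, dim=0).mean(dim=0)

# Usage example with random tensors.
planes = torch.randn(2, 3, 32, 64, 64)
points = torch.rand(2, 1024, 3) * 2 - 1
features = sample_triplane(planes, points)           # (2, 1024, 32)
```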
no code implementations • 29 Nov 2023 • Duomin Wang, Bin Dai, Yu Deng, Baoyuan Wang
In this study, our goal is to create interactive avatar agents that can autonomously plan and animate nuanced facial movements that are realistic from both visual and behavioral perspectives.
no code implementations • ICCV 2023 • Zhentao Yu, Zixin Yin, Deyu Zhou, Duomin Wang, Finn Wong, Baoyuan Wang
In this paper, we introduce a simple and novel framework for one-shot audio-driven talking head generation.
1 code implementation • CVPR 2023 • Duomin Wang, Yu Deng, Zixin Yin, Heung-Yeung Shum, Baoyuan Wang
We present a novel one-shot talking head synthesis method that achieves disentangled and fine-grained control over lip motion, eye gaze and blink, head pose, and emotional expression.
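A hypothetical sketch of how such disentangled control signals could be composed is given below: each factor is kept as a separate code and only combined at the end, so any one factor can be edited independently. The module name, code dimensions, and concatenation scheme are assumptions for illustration, not the method described in the paper.

```python
# Hypothetical sketch (assumption): composing per-factor control codes into a
# single motion descriptor before driving an image generator.
import torch
import torch.nn as nn

class MotionComposer(nn.Module):
    def __init__(self, dims=(64, 16, 6, 32), out_dim=256):
        super().__init__()
        # Separate control codes: lip motion, eye gaze/blink, head pose, expression.
        self.mlp = nn.Sequential(
            nn.Linear(sum(dims), 512), nn.ReLU(),
            nn.Linear(512, out_dim),
        )

    def forward(self, lip, gaze_blink, pose, expression):
        # Concatenate the per-factor codes; each can be swapped or edited on its own.
        return self.mlp(torch.cat([lip, gaze_blink, pose, expression], dim=-1))

composer = MotionComposer()
motion_code = composer(
    torch.randn(1, 64),   # lip motion
    torch.randn(1, 16),   # eye gaze/blink
    torch.randn(1, 6),    # head pose
    torch.randn(1, 32),   # emotional expression
)                         # -> (1, 256)
```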