Search Results for author: Zeyi Zhang

Found 2 papers, 2 papers with code

Semantic Gesticulator: Semantics-Aware Co-Speech Gesture Synthesis

1 code implementation • 16 May 2024 • Zeyi Zhang, Tenglong Ao, Yuyao Zhang, Qingzhe Gao, Chuan Lin, Baoquan Chen, Libin Liu

In this work, we present Semantic Gesticulator, a novel framework designed to synthesize realistic gestures accompanying speech with strong semantic correspondence.

Language Modelling • Large Language Model • +1

GestureDiffuCLIP: Gesture Diffusion Model with CLIP Latents

1 code implementation • 26 Mar 2023 • Tenglong Ao, Zeyi Zhang, Libin Liu

We leverage the power of the large-scale Contrastive Language-Image Pre-training (CLIP) model and present a novel CLIP-guided mechanism that extracts efficient style representations from multiple input modalities, such as a piece of text, an example motion clip, or a video.

Contrastive Learning • Gesture Generation
