no code implementations • 30 Apr 2024 • Wenxun Dai, Ling-Hao Chen, Jingbo Wang, Jinpeng Liu, Bo Dai, Yansong Tang
By employing one-step (or few-step) inference, we further improve the runtime efficiency of the motion latent diffusion model for motion generation.
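The idea of few-step inference can be illustrated with a minimal sketch: starting from random noise, apply a (hypothetical) denoiser only a handful of times instead of hundreds, interpolating toward its predicted clean latent at each step. The `denoise` callable and the linear update rule here are illustrative assumptions, not the paper's actual sampler.

```python
import numpy as np

def few_step_sample(denoise, z_T, steps=4):
    """Sketch of few-step latent sampling: start from noise z_T and call
    the denoiser a small number of times, moving partway toward the
    predicted clean latent at each noise level t."""
    z = z_T
    for t in np.linspace(1.0, 0.0, steps, endpoint=False):
        z0_pred = denoise(z, t)          # predicted clean latent at level t
        z = t * z + (1.0 - t) * z0_pred  # interpolate toward the prediction
    return z

# Toy denoiser for demonstration: always predicts an all-zero clean latent.
rng = np.random.default_rng(0)
z_T = rng.standard_normal(8)
z0 = few_step_sample(lambda z, t: np.zeros_like(z), z_T, steps=4)
```

With only four denoiser calls, the sample is pulled steadily toward the predicted clean latent, which is where the runtime savings over many-step samplers come from.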
1 code implementation • 13 Dec 2023 • Ling-Hao Chen, Yuanshuo Zhang, Taohua Huang, Liangcai Su, Zeyi Lin, Xi Xiao, Xiaobo Xia, Tongliang Liu
To tackle this challenge and enhance the robustness of deep learning models against label noise in graph-based tasks, we propose a method called ERASE (Error-Resilient representation learning on graphs for lAbel noiSe tolerancE).
no code implementations • 19 Oct 2023 • Shunlin Lu, Ling-Hao Chen, Ailing Zeng, Jing Lin, Ruimao Zhang, Lei Zhang, Heung-Yeung Shum
This work targets a novel text-driven whole-body motion generation task, which takes a given textual description as input and aims to generate high-quality, diverse, and coherent facial expressions, hand gestures, and body motions simultaneously.
no code implementations • 16 Oct 2023 • Shaokun Zhang, Xiaobo Xia, Zhaoqing Wang, Ling-Hao Chen, Jiale Liu, Qingyun Wu, Tongliang Liu
However, since the prompts need to be sampled from a large volume of annotated examples, finding the right prompt may incur high annotation costs.
1 code implementation • ICCV 2023 • Ling-Hao Chen, Jiawei Zhang, Yewen Li, Yiren Pang, Xiaobo Xia, Tongliang Liu
In the training stage, we learn a motion diffusion model that generates motions from random noise.
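The standard diffusion training objective behind a sentence like this can be sketched as follows: corrupt a clean sample with Gaussian noise at a random level, then fit a model to predict the added noise. The scalar linear "model" and the specific noise schedule below are toy assumptions for illustration only, not the paper's architecture.

```python
import numpy as np

def diffusion_train_step(model_w, motion, rng, lr=0.1):
    """One denoising training step (sketch): corrupt `motion` with Gaussian
    noise at a random level t, then update the toy scalar predictor
    `model_w` to better predict the added noise (MSE objective)."""
    noise = rng.standard_normal(motion.shape)
    t = rng.uniform(0.1, 1.0)                      # random noise level
    noisy = np.sqrt(1.0 - t**2) * motion + t * noise
    pred = model_w * noisy                         # toy noise predictor
    grad = 2.0 * noisy * (pred - noise)            # d/dw of (pred - noise)^2
    return model_w - lr * grad.mean()

rng = np.random.default_rng(0)
w = 0.0
motion = np.zeros(16)  # degenerate "motion" sequence, for demonstration
for _ in range(200):
    w = diffusion_train_step(w, motion, rng)
```

At inference time, such a trained denoiser is applied to pure random noise to generate a new motion, which is the second half of the sentence above.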
no code implementations • 8 Jan 2022 • Ling-Hao Chen, He Li, Wanyuan Zhang, Jianbin Huang, Xiaoke Ma, Jiangtao Cui, Ning Li, Jaesoo Yoo
It remains challenging to jointly model all of these different kinds of interactions and detect anomalous instances on multi-view attributed networks.