no code implementations • 18 Apr 2024 • Yufei Ye, Abhinav Gupta, Kris Kitani, Shubham Tulsiani
We propose G-HOP, a denoising diffusion based generative prior for hand-object interactions that allows modeling both the 3D object and a human hand, conditioned on the object category.
no code implementations • ICCV 2023 • Yufei Ye, Poorvi Hebbar, Abhinav Gupta, Shubham Tulsiani
We tackle the task of reconstructing hand-object interactions from short video clips.
no code implementations • CVPR 2023 • Yufei Ye, Xueting Li, Abhinav Gupta, Shalini De Mello, Stan Birchfield, Jiaming Song, Shubham Tulsiani, Sifei Liu
In contrast, in this work we focus on synthesizing complex interactions (i.e., an articulated hand) with a given object.
1 code implementation • CVPR 2022 • Yufei Ye, Abhinav Gupta, Shubham Tulsiani
Our work aims to reconstruct hand-held objects given a single RGB image.
1 code implementation • CVPR 2021 • Yufei Ye, Shubham Tulsiani, Abhinav Gupta
We first infer a volumetric representation in a canonical frame, along with the camera pose.
1 code implementation • 8 Oct 2019 • Yufei Ye, Dhiraj Gandhi, Abhinav Gupta, Shubham Tulsiani
We present an approach to learn an object-centric forward model, and show that this allows us to plan for sequences of actions to achieve distant desired goals.
2 code implementations • ICCV 2019 • Yufei Ye, Maneesh Singh, Abhinav Gupta, Shubham Tulsiani
We present an approach for pixel-level future prediction given an input image of a scene.
no code implementations • 21 Jun 2018 • Yufei Ye, Xiaoqin Ren, Jin Wang, Lingxiao Xu, Wenxia Guo, Wenqiang Huang, Wenhong Tian
Building on previous DRL research in the literature, we introduce the online resource scheduling algorithm DeepRM2 and the offline resource scheduling algorithm DeepRM_Off.
3 code implementations • CVPR 2018 • Xiaolong Wang, Yufei Ye, Abhinav Gupta
Given a learned knowledge graph (KG), our approach takes as input semantic embeddings for each node (each representing a visual category).