Search Results for author: Haoran Lu

Found 3 papers, 0 papers with code

UniGarmentManip: A Unified Framework for Category-Level Garment Manipulation via Dense Visual Correspondence

no code implementations · 11 May 2024 · Ruihai Wu, Haoran Lu, Yiyan Wang, YuBo Wang, Hao Dong

Garment manipulation (e.g., unfolding, folding, and hanging clothes) is essential for future robots to accomplish home-assistant tasks, but is highly challenging due to the diversity of garment configurations, geometries, and deformations.

ImageManip: Image-based Robotic Manipulation with Affordance-guided Next View Selection

no code implementations · 13 Oct 2023 · Xiaoqi Li, Yanzi Wang, Yan Shen, Ponomarenko Iaroslav, Haoran Lu, Qianxu Wang, Boshi An, Jiaming Liu, Hao Dong

This framework is designed to capture multiple perspectives of the target object and infer depth information to complement its geometry.

Object · Robot Manipulation

Where2Explore: Few-shot Affordance Learning for Unseen Novel Categories of Articulated Objects

no code implementations · NeurIPS 2023 · Chuanruo Ning, Ruihai Wu, Haoran Lu, Kaichun Mo, Hao Dong

Our framework explicitly estimates geometric similarity across categories, identifying local areas that differ from shapes in the training categories for efficient exploration, while concurrently transferring affordance knowledge to similar parts of the objects.

Efficient Exploration · Few-Shot Learning
