no code implementations • 14 May 2024 • Lingdong Kong, Shaoyuan Xie, Hanjiang Hu, Yaru Niu, Wei Tsang Ooi, Benoit R. Cottereau, Lai Xing Ng, Yuexin Ma, Wenwei Zhang, Liang Pan, Kai Chen, Ziwei Liu, Weichao Qiu, Wei Zhang, Xu Cao, Hao Lu, Ying-Cong Chen, Caixin Kang, Xinning Zhou, Chengyang Ying, Wentao Shang, Xingxing Wei, Yinpeng Dong, Bo Yang, Shengyin Jiang, Zeliang Ma, Dengyi Ji, Haiwen Li, Xingliang Huang, Yu Tian, Genghua Kou, Fan Jia, Yingfei Liu, Tiancai Wang, Ying Li, Xiaoshuai Hao, Yifan Yang, Hui Zhang, Mengchuan Wei, Yi Zhou, Haimei Zhao, Jing Zhang, Jinke Li, Xiao He, Xiaoqiang Cheng, Bingyang Zhang, Lirong Zhao, Dianlei Ding, Fangsheng Liu, Yixiang Yan, Hongming Wang, Nanfei Ye, Lun Luo, Yubo Tian, Yiwei Zuo, Zhe Cao, Yi Ren, Yunfan Li, Wenjie Liu, Xun Wu, Yifan Mao, Ming Li, Jian Liu, Jiayang Liu, Zihan Qin, Cunxi Chu, Jialei Xu, Wenbo Zhao, Junjun Jiang, Xianming Liu, Ziyan Wang, Chiwei Li, Shilong Li, Chendong Yuan, Songyue Yang, Wentao Liu, Peng Chen, Bin Zhou, Yubo Wang, Chi Zhang, Jianhang Sun, Hai Chen, Xiao Yang, Lizhong Wang, Dongyi Fu, Yongchun Lin, Huitong Yang, Haoang Li, Yadan Luo, Xianjing Cheng, Yong Xu
In the realm of autonomous driving, robust perception under out-of-distribution conditions is paramount for the safe deployment of vehicles.
no code implementations • 7 May 2024 • Dingrui Wang, Zheyuan Lai, Yuda Li, Yi Wu, Yuexin Ma, Johannes Betz, Ruigang Yang, Wei Li
Furthermore, a new metric named clamped temporal error (CTE) is proposed to give a more comprehensive evaluation of prediction performance, especially for time-sensitive, sub-second emergency events.
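The snippet above only names the CTE metric; one plausible minimal form, assuming CTE clamps the per-event timing error at a threshold before averaging (the function name, signature, and default threshold below are all hypothetical), is:

```python
import numpy as np

def clamped_temporal_error(pred_times, gt_times, clamp=1.0):
    """Hypothetical sketch of a clamped temporal error (CTE).

    Assumes the metric measures the absolute timing error of predicted
    emergency events against ground truth, clamped at `clamp` seconds so
    that one badly mistimed event cannot dominate the average. The exact
    definition is given in the paper.
    """
    err = np.abs(np.asarray(pred_times, float) - np.asarray(gt_times, float))
    return float(np.minimum(err, clamp).mean())
```

Clamping keeps the metric sensitive to sub-second errors while bounding the penalty contributed by outliers.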
no code implementations • 2 May 2024 • Youquan Liu, Lingdong Kong, Xiaoyang Wu, Runnan Chen, Xin Li, Liang Pan, Ziwei Liu, Yuexin Ma
A unified and versatile LiDAR segmentation model with strong robustness and generalizability is desirable for safe autonomous driving perception.
no code implementations • 29 Mar 2024 • Yiteng Xu, Kecheng Ye, Xiao Han, Yiming Ren, Xinge Zhu, Yuexin Ma
Human-centric Point Cloud Video Understanding (PVU) is an emerging field focused on extracting and interpreting human-related features from sequences of human point clouds, further advancing downstream human-centric tasks and applications.
no code implementations • 28 Mar 2024 • Ming Yan, Yan Zhang, Shuqiang Cai, Shuqi Fan, Xincheng Lin, Yudi Dai, Siqi Shen, Chenglu Wen, Lan Xu, Yuexin Ma, Cheng Wang
Comprehensive capturing of human motions requires both accurate captures of complex poses and precise localization of the human within scenes.
no code implementations • 24 Mar 2024 • Jie Tian, Lingxiao Yang, Ran Ji, Yuexin Ma, Lan Xu, Jingyi Yu, Ye Shi, Jingya Wang
Here, the object motion diffusion model generates sequences of object motions based on gaze conditions, while the hand motion diffusion model produces hand motions based on the generated object motion.
1 code implementation • 20 Mar 2024 • Peishan Cong, Ziyi Wang, Zhiyang Dou, Yiming Ren, Wei Yin, Kai Cheng, Yujing Sun, Xiaoxiao Long, Xinge Zhu, Yuexin Ma
Language-guided scene-aware human motion generation has great significance for entertainment and robotics.
no code implementations • 18 Mar 2024 • Xiao Fu, Wei Yin, Mu Hu, Kaixuan Wang, Yuexin Ma, Ping Tan, Shaojie Shen, Dahua Lin, Xiaoxiao Long
We introduce GeoWizard, a new generative foundation model designed for estimating geometric attributes, e.g., depth and normals, from single images.
no code implementations • 5 Mar 2024 • Yichen Yao, Zimo Jiang, Yujing Sun, Zhencai Zhu, Xinge Zhu, Runnan Chen, Yuexin Ma
Human-centric 3D scene understanding has recently drawn increasing attention, driven by its critical impact on robotics.
no code implementations • 27 Feb 2024 • Yiming Ren, Xiao Han, Chengfeng Zhao, Jingya Wang, Lan Xu, Jingyi Yu, Yuexin Ma
For human-centric large-scale scenes, fine-grained modeling for 3D human global pose and shape is significant for scene understanding and can benefit many real-world applications.
no code implementations • 22 Feb 2024 • Kai Cheng, Xiaoxiao Long, Kaizhi Yang, Yao Yao, Wei Yin, Yuexin Ma, Wenping Wang, Xuejin Chen
The advent of 3D Gaussian Splatting (3DGS) has recently brought about a revolution in the field of neural rendering, facilitating high-quality renderings at real-time speed.
no code implementations • 21 Feb 2024 • Yumeng Liu, Yaxun Yang, Youzhuo Wang, Xiaofei Wu, Jiamin Wang, Yichen Yao, Sören Schwertfeger, Sibei Yang, Wenping Wang, Jingyi Yu, Xuming He, Yuexin Ma
In this paper, we introduce RealDex, a pioneering dataset capturing authentic dexterous hand grasping motions infused with human behavioral patterns, enriched by multi-view and multimodal visual data.
1 code implementation • 5 Feb 2024 • Yujing Sun, Caiyi Sun, Yuan Liu, Yuexin Ma, Siu Ming Yiu
Humans have an incredible ability to effortlessly perceive the viewpoint difference between two images containing the same object, even when the viewpoint change is astonishingly vast and there are no co-visible regions in the images.
1 code implementation • 30 Dec 2023 • Yilan Dong, Chunlin Yu, Ruiyang Ha, Ye Shi, Yuexin Ma, Lan Xu, Yanwei Fu, Jingya Wang
Existing gait recognition benchmarks mostly include minor clothing variations in the laboratory environments, but lack persistent changes in appearance over time and space.
1 code implementation • 6 Dec 2023 • Yuhang Lu, Xinge Zhu, Tai Wang, Yuexin Ma
Occupancy prediction has increasingly garnered attention in recent years for its fine-grained understanding of 3D scenes.
no code implementations • 29 Nov 2023 • Yingwenqi Jiang, Jiadong Tu, Yuan Liu, Xifeng Gao, Xiaoxiao Long, Wenping Wang, Yuexin Ma
In this paper, we present GaussianShader, a novel method that applies a simplified shading function on 3D Gaussians to enhance the neural rendering in scenes with reflective surfaces while preserving the training and rendering efficiency.
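As a rough intuition for a "simplified shading function" attached to each Gaussian, the illustrative sketch below adds a view-dependent specular term to a base diffuse color, assuming a light co-located with the camera; every name here is hypothetical, and the paper's actual shading model differs in its details.

```python
import numpy as np

def shaded_color(diffuse, specular_tint, normal, view_dir, shininess=32.0):
    """Illustrative per-Gaussian shading sketch (not the paper's model):
    base diffuse color plus a specular highlight, with the light assumed
    to sit at the camera ("headlight" approximation)."""
    n = normal / np.linalg.norm(normal)
    v = view_dir / np.linalg.norm(view_dir)
    # mirror reflection of the view direction about the surface normal
    r = 2.0 * np.dot(n, v) * n - v
    # headlight assumption: light direction equals the view direction
    spec = max(np.dot(r, v), 0.0) ** shininess
    return diffuse + specular_tint * spec
```

In practice such a term lets reflective surfaces vary with viewpoint while the diffuse base keeps training stable.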
no code implementations • 28 Nov 2023 • Kai Cheng, Xiaoxiao Long, Wei Yin, Jin Wang, Zhiqiang Wu, Yuexin Ma, Kaixuan Wang, Xiaozhi Chen, Xuejin Chen
Multi-camera setups find widespread use across various applications, such as autonomous driving, as they greatly expand sensing capabilities.
1 code implementation • 23 Oct 2023 • Xiaoxiao Long, Yuan-Chen Guo, Cheng Lin, Yuan Liu, Zhiyang Dou, Lingjie Liu, Yuexin Ma, Song-Hai Zhang, Marc Habermann, Christian Theobalt, Wenping Wang
In this work, we introduce Wonder3D, a novel method for efficiently generating high-fidelity textured meshes from single-view images. Recent methods based on Score Distillation Sampling (SDS) have shown the potential to recover 3D geometry from 2D diffusion priors, but they typically suffer from time-consuming per-shape optimization and inconsistent geometry.
no code implementations • 13 Oct 2023 • Xidong Peng, Runnan Chen, Feng Qiao, Lingdong Kong, Youquan Liu, Tai Wang, Xinge Zhu, Yuexin Ma
Unsupervised domain adaptation (UDA) in 3D segmentation tasks presents a formidable challenge, primarily stemming from the sparse and unordered nature of point cloud data.
no code implementations • 29 Sep 2023 • Runnan Chen, Xinge Zhu, Nenglun Chen, Dawei Wang, Wei Li, Yuexin Ma, Ruigang Yang, Tongliang Liu, Wenping Wang
In this paper, we propose Model2Scene, a novel paradigm that learns free 3D scene representation from Computer-Aided Design (CAD) models and languages.
no code implementations • 19 Sep 2023 • Jingyu Zhang, Huitong Yang, Dai-Jie Wu, Jacky Keung, Xuesong Li, Xinge Zhu, Yuexin Ma
Current state-of-the-art point cloud-based perception methods usually rely on large-scale labeled data, which requires expensive manual annotations.
1 code implementation • ICCV 2023 • Youquan Liu, Runnan Chen, Xin Li, Lingdong Kong, Yuchen Yang, Zhaoyang Xia, Yeqi Bai, Xinge Zhu, Yuexin Ma, Yikang Li, Yu Qiao, Yuenan Hou
Besides, we construct the OpenPCSeg codebase, which is the largest and most comprehensive outdoor LiDAR segmentation codebase.
Ranked #2 on 3D Semantic Segmentation on SemanticKITTI (using extra training data)
1 code implementation • ICCV 2023 • Yiteng Xu, Peishan Cong, Yichen Yao, Runnan Chen, Yuenan Hou, Xinge Zhu, Xuming He, Jingyi Yu, Yuexin Ma
Human-centric scene understanding is significant for real-world applications, but it is extremely challenging due to the existence of diverse human poses and actions, complex human-environment interactions, severe occlusions in crowds, etc.
no code implementations • ICCV 2023 • Yuhang Lu, Qi Jiang, Runnan Chen, Yuenan Hou, Xinge Zhu, Yuexin Ma
They typically align visual features with semantic features obtained from word embedding by the supervision of seen classes' annotations.
1 code implementation • NeurIPS 2023 • Runnan Chen, Youquan Liu, Lingdong Kong, Nenglun Chen, Xinge Zhu, Yuexin Ma, Tongliang Liu, Wenping Wang
For the nuImages and nuScenes datasets, the performance is 22.1% and 26.8%, with improvements of 3.5% and 6.0%, respectively.
no code implementations • 25 Apr 2023 • Xiangze Jia, Hui Zhou, Xinge Zhu, Yandong Guo, Ji Zhang, Yuexin Ma
In this paper, we propose a novel self-supervised motion estimator for LiDAR-based autonomous driving via BEV representation.
no code implementations • 12 Apr 2023 • Zhenxiang Lin, Xidong Peng, Peishan Cong, Yuenan Hou, Xinge Zhu, Sibei Yang, Yuexin Ma
We introduce the task of 3D visual grounding in large-scale dynamic scenes based on natural linguistic descriptions and online captured multi-modal visual data, including 2D images and 3D LiDAR point clouds.
no code implementations • 2 Apr 2023 • Huitong Yang, Xuyang Bai, Xinge Zhu, Yuexin Ma
Current on-board chips usually differ in computing power, which means multiple training processes are needed to adapt the same learning-based algorithm to different chips, consuming substantial computing resources.
no code implementations • CVPR 2023 • Ming Yan, Xin Wang, Yudi Dai, Siqi Shen, Chenglu Wen, Lan Xu, Yuexin Ma, Cheng Wang
The core of this dataset is a blending optimization process, which corrects the pose as it drifts and is distorted by magnetic conditions.
1 code implementation • CVPR 2023 • Yudi Dai, Yitai Lin, Xiping Lin, Chenglu Wen, Lan Xu, Hongwei Yi, Siqi Shen, Yuexin Ma, Cheng Wang
We present SLOPER4D, a novel scene-aware dataset collected in large urban environments to facilitate the research of global human pose estimation (GHPE) with human-scene interaction in the wild.
1 code implementation • CVPR 2023 • Zhaoyang Xia, Youquan Liu, Xin Li, Xinge Zhu, Yuexin Ma, Yikang Li, Yuenan Hou, Yu Qiao
We propose a simple yet effective label rectification strategy, which uses off-the-shelf panoptic segmentation labels to remove the traces of dynamic objects in completion labels, greatly improving the performance of deep models especially for those moving objects.
Ranked #1 on 3D Semantic Scene Completion on SemanticKITTI
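The rectification strategy described above can be sketched as a simple mask-and-reset over voxel labels: wherever an off-the-shelf panoptic segmentation marks a voxel as a dynamic class, the completion label is reset to free space. The class IDs and free-space label below are placeholders, not the dataset's real mapping.

```python
import numpy as np

# Placeholder class IDs; real datasets (e.g., SemanticKITTI) define their own.
DYNAMIC_CLASSES = {10, 11, 30}   # e.g., car, bicycle, person
FREE_LABEL = 0                   # label meaning "empty space"

def rectify_completion_labels(completion, panoptic_semantics):
    """Sketch of the label-rectification idea: remove the motion traces
    of dynamic objects from scene-completion labels, guided by panoptic
    segmentation labels over the same voxel grid."""
    completion = completion.copy()
    dynamic_mask = np.isin(panoptic_semantics, list(DYNAMIC_CLASSES))
    completion[dynamic_mask] = FREE_LABEL
    return completion
```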
no code implementations • ICCV 2023 • Lingdong Kong, Youquan Liu, Runnan Chen, Yuexin Ma, Xinge Zhu, Yikang Li, Yuenan Hou, Yu Qiao, Ziwei Liu
We show that, for the first time, a range view method is able to surpass the point, voxel, and multi-view fusion counterparts in the competing LiDAR semantic and panoptic segmentation benchmarks, i.e., SemanticKITTI, nuScenes, and ScribbleKITTI.
Ranked #4 on 3D Semantic Segmentation on SemanticKITTI
1 code implementation • 2 Feb 2023 • Juze Zhang, Ye Shi, Yuexin Ma, Lan Xu, Jingyi Yu, Jingya Wang
This paper presents an inverse kinematic optimization layer (IKOL) for 3D human pose and shape estimation that leverages the strength of both optimization- and regression-based methods within an end-to-end framework.
Ranked #25 on 3D Human Pose Estimation on 3DPW
1 code implementation • CVPR 2023 • Runnan Chen, Youquan Liu, Lingdong Kong, Xinge Zhu, Yuexin Ma, Yikang Li, Yuenan Hou, Yu Qiao, Wenping Wang
For the first time, our pre-trained network achieves annotation-free 3D semantic segmentation with 20.8% and 25.08% mIoU on nuScenes and ScanNet, respectively.
1 code implementation • 1 Dec 2022 • Xidong Peng, Xinge Zhu, Yuexin Ma
Second, we present Temporal Motion Alignment module to utilize motion features in sequential frames of data to match two domains.
no code implementations • 30 Nov 2022 • Peishan Cong, Yiteng Xu, Yiming Ren, Juze Zhang, Lan Xu, Jingya Wang, Jingyi Yu, Yuexin Ma
Motivated by this, we propose a monocular camera and single LiDAR-based method for 3D multi-person pose estimation in large-scale scenes, which is easy to deploy and insensitive to light.
no code implementations • 22 Nov 2022 • Xiao Han, Yiming Ren, Peishan Cong, Yujing Sun, Jingya Wang, Lan Xu, Yuexin Ma
In this paper, based on a single LiDAR, we present the Hierarchical Multi-representation Feature Interaction Network (HMRNet) for robust gait recognition.
no code implementations • 15 Nov 2022 • Wenxi Liu, Qi Li, Weixiang Yang, Jiaxin Cai, Yuanlong Yu, Yuexin Ma, Shengfeng He, Jia Pan
We propose a front-to-top view projection (FTVP) module, which takes the constraint of cycle consistency between views into account and makes full use of their correlation to strengthen the view transformation and scene understanding.
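The cycle-consistency constraint mentioned above amounts to requiring that projecting features front-to-top and back again reproduces the original front-view features. A minimal sketch of such a loss, with hypothetical projection functions `f2t` and `t2f` standing in for the FTVP module's learned transforms, is:

```python
import numpy as np

def cycle_consistency_loss(front_feat, f2t, t2f):
    """Sketch of a cycle-consistency loss between views: `f2t` projects
    front-view features to the top view, `t2f` projects back, and the
    round trip should reconstruct the input. Both callables are
    hypothetical stand-ins for learned view-transformation networks."""
    top_feat = f2t(front_feat)        # front -> top
    front_rec = t2f(top_feat)         # top -> front (round trip)
    return float(np.mean((front_rec - front_feat) ** 2))
```

A perfect round trip yields zero loss; in training, this term regularizes the two view transforms to stay mutually consistent.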
no code implementations • 18 Oct 2022 • Runnan Chen, Xinge Zhu, Nenglun Chen, Wei Li, Yuexin Ma, Ruigang Yang, Wenping Wang
To this end, we propose a novel framework to learn the geometric primitives shared in seen and unseen categories' objects and employ a fine-grained alignment between language and the learned geometric primitives.
1 code implementation • 20 Sep 2022 • Mingkun Wang, Xinge Zhu, Changqian Yu, Wei Li, Yuexin Ma, Ruochun Jin, Xiaoguang Ren, Dongchun Ren, Mingxu Wang, Wenjing Yang
In view of this, we propose a new goal area-based framework, named Goal Area Network (GANet), for motion forecasting, which models goal areas rather than exact goal coordinates as preconditions for trajectory prediction, performing more robustly and accurately.
Ranked #15 on Motion Forecasting on Argoverse CVPR 2020
1 code implementation • 4 Aug 2022 • Yuexin Ma, Tai Wang, Xuyang Bai, Huitong Yang, Yuenan Hou, Yaming Wang, Yu Qiao, Ruigang Yang, Dinesh Manocha, Xinge Zhu
In recent years, vision-centric Bird's Eye View (BEV) perception has garnered significant interest from both industry and academia due to its inherent advantages, such as providing an intuitive representation of the world and being conducive to data fusion.
no code implementations • CVPR 2022 • Yuenan Hou, Xinge Zhu, Yuexin Ma, Chen Change Loy, Yikang Li
This article addresses the problem of distilling knowledge from a large teacher model to a slim student network for LiDAR semantic segmentation.
Ranked #8 on LIDAR Semantic Segmentation on nuScenes (val mIoU metric)
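The article's distillation scheme adds LiDAR-specific structure that this sketch does not reproduce; shown below is only the generic soft-target distillation loss (Hinton-style temperature-scaled KL divergence) that teacher-to-student segmentation pipelines typically build on.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Generic soft-target distillation loss: temperature-softened
    KL(teacher || student), averaged over points, scaled by T^2 as is
    conventional so gradients keep a consistent magnitude across T."""
    p_t = softmax(teacher_logits / T)
    log_p_s = np.log(softmax(student_logits / T) + 1e-12)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - log_p_s), axis=-1)
    return float(T * T * kl.mean())
```

When student and teacher logits agree, the loss is zero; the student is pushed toward the teacher's full class distribution rather than its hard predictions.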
no code implementations • 30 May 2022 • Yiming Ren, Chengfeng Zhao, Yannan He, Peishan Cong, Han Liang, Jingyi Yu, Lan Xu, Yuexin Ma
We propose a multi-sensor fusion method for capturing challenging 3D human motions with accurate consecutive local poses and global trajectories in large-scale scenarios, using only a single LiDAR and four IMUs, which are convenient to set up and lightweight to wear.
1 code implementation • CVPR 2022 • Peishan Cong, Xinge Zhu, Feng Qiao, Yiming Ren, Xidong Peng, Yuenan Hou, Lan Xu, Ruigang Yang, Dinesh Manocha, Yuexin Ma
In addition, considering the property of sparse global distribution and density-varying local distribution of pedestrians, we further propose a novel method, Density-aware Hierarchical heatmap Aggregation (DHA), to enhance pedestrian perception in crowded scenes.
no code implementations • CVPR 2022 • Jialian Li, Jingyi Zhang, Zhiyong Wang, Siqi Shen, Chenglu Wen, Yuexin Ma, Lan Xu, Jingyi Yu, Cheng Wang
Quantitative and qualitative experiments show that our method outperforms the techniques based only on RGB images.
Ranked #3 on 3D Human Pose Estimation on SLOPER4D (using extra training data)
no code implementations • 20 Mar 2022 • Yiming Ren, Peishan Cong, Xinge Zhu, Yuexin Ma
In this paper, we propose a self-supervised point cloud completion method (TraPCC) for vehicles in real traffic scenes without any complete data.
no code implementations • 20 Mar 2022 • Runnan Chen, Xinge Zhu, Nenglun Chen, Dawei Wang, Wei Li, Yuexin Ma, Ruigang Yang, Wenping Wang
Promising performance has been achieved for visual perception on point clouds.
1 code implementation • CVPR 2022 • Yudi Dai, Yitai Lin, Chenglu Wen, Siqi Shen, Lan Xu, Jingyi Yu, Yuexin Ma, Cheng Wang
We propose Human-centered 4D Scene Capture (HSC4D) to accurately and efficiently create a dynamic digital world, containing large-scale indoor-outdoor scenes, diverse human motions, and rich interactions between humans and environments.
no code implementations • 9 Feb 2022 • Yuwei Li, Longwen Zhang, Zesong Qiu, Yingwenqi Jiang, Nianyi Li, Yuexin Ma, Yuyao Zhang, Lan Xu, Jingyi Yu
Emerging Metaverse applications demand reliable, accurate, and photorealistic reproductions of human hands to perform sophisticated operations as if in the physical world.
no code implementations • 9 Dec 2021 • Xiao Song, Guorun Yang, Xinge Zhu, Hui Zhou, Yuexin Ma, Zhe Wang, Jianping Shi
Compared to previous methods, our AdaStereo realizes a more standard, complete and effective domain adaptation pipeline.
no code implementations • 29 Sep 2021 • Runnan Chen, Xinge Zhu, Nenglun Chen, Dawei Wang, Wei Li, Yuexin Ma, Ruigang Yang, Wenping Wang
In this paper, we study a new problem named Referring Self-supervised Learning (RSL) on 3D scene understanding: Given the 3D synthetic models with labels and the unlabeled 3D real scene scans, our goal is to distinguish the identical semantic objects on an unseen scene according to the referring synthetic 3D models.
1 code implementation • 12 Sep 2021 • Xinge Zhu, Hui Zhou, Tai Wang, Fangzhou Hong, Wei Li, Yuexin Ma, Hongsheng Li, Ruigang Yang, Dahua Lin
In this paper, we benchmark our model on these three tasks.
no code implementations • 22 Aug 2021 • Xidong Peng, Xinge Zhu, Tai Wang, Yuexin Ma
Due to the information sparsity of local cost volume, we further introduce match reweighting and structure-aware attention, to make the depth information more concentrated.
1 code implementation • 21 Jul 2021 • Runnan Chen, Yuexin Ma, Nenglun Chen, Lingjie Liu, Zhiming Cui, Yanhong Lin, Wenping Wang
Detecting 3D landmarks on cone-beam computed tomography (CBCT) is crucial to assessing and quantifying the anatomical abnormalities in 3D cephalometric analysis.
1 code implementation • CVPR 2021 • Weixiang Yang, Qi Li, Wenxi Liu, Yuanlong Yu, Yuexin Ma, Shengfeng He, Jia Pan
Furthermore, our model runs at 35 FPS on a single GPU, which is efficient and applicable for real-time panorama HD map reconstruction.
no code implementations • 28 May 2021 • Runnan Chen, Yuexin Ma, Lingjie Liu, Nenglun Chen, Zhiming Cui, Guodong Wei, Wenping Wang
The global shape constraint is the inherent property of anatomical landmarks that provides valuable guidance for more consistent pseudo labelling of the unlabeled data, which is ignored in previous semi-supervised methods.
1 code implementation • 23 Apr 2021 • Xin Chen, Anqi Pang, Wei Yang, Yuexin Ma, Lan Xu, Jingyi Yu
In this paper, we propose SportsCap -- the first approach for simultaneously capturing 3D human motions and understanding fine-grained actions from monocular challenging sports video input.
no code implementations • 26 Mar 2021 • Peishan Cong, Xinge Zhu, Yuexin Ma
A thorough and holistic scene understanding is crucial for autonomous vehicles, where LiDAR semantic segmentation plays an indispensable role.
2 code implementations • CVPR 2021 • Yannan He, Anqi Pang, Xin Chen, Han Liang, Minye Wu, Yuexin Ma, Lan Xu
We propose a hybrid motion inference stage with a generation network, which utilizes a temporal encoder-decoder to extract the motion details from the pair-wise sparse-view reference, as well as a motion discriminator to utilize the unpaired marker-based references to extract specific challenging motion characteristics in a data-driven manner.
no code implementations • 1 Jan 2021 • Keke Tang, Guodong Wei, Jie Zhu, Yuexin Ma, Runnan Chen, Zhaoquan Gu, Wenping Wang
Deep neural networks have achieved great success in computer vision, thanks to their ability in extracting category-relevant semantic features.
2 code implementations • CVPR 2021 • Xinge Zhu, Hui Zhou, Tai Wang, Fangzhou Hong, Yuexin Ma, Wei Li, Hongsheng Li, Dahua Lin
However, we found that in the outdoor point cloud, the improvement obtained in this way is quite limited.
Ranked #3 on 3D Semantic Segmentation on ScribbleKITTI
3 code implementations • 4 Aug 2020 • Hui Zhou, Xinge Zhu, Xiao Song, Yuexin Ma, Zhe Wang, Hongsheng Li, Dahua Lin
A straightforward solution to tackle the issue of 3D-to-2D projection is to keep the 3D representation and process the points in the 3D space.
Ranked #11 on LIDAR Semantic Segmentation on nuScenes
no code implementations • ECCV 2020 • Yuexin Ma, Xinge Zhu, Xinjing Cheng, Ruigang Yang, Jiming Liu, Dinesh Manocha
Then we aggregate dynamic points to instance points, which stand for moving objects such as pedestrians in videos.
1 code implementation • 6 Apr 2020 • Sibo Zhang, Yuexin Ma, Ruigang Yang
This paper reviews the CVPR 2019 challenge on Autonomous Driving.
1 code implementation • 6 Apr 2020 • Xinge Zhu, Yuexin Ma, Tai Wang, Yan Xu, Jianping Shi, Dahua Lin
Multi-class 3D object detection aims to localize and classify objects of multiple categories from point clouds.
no code implementations • 27 Nov 2019 • Keke Tang, Peng Song, Yuexin Ma, Zhaoquan Gu, Yu Su, Zhihong Tian, Wenping Wang
High-level (e.g., semantic) features encoded in the latter layers of convolutional neural networks are extensively exploited for image classification, leaving low-level (e.g., color) features in the early layers underexplored.
no code implementations • 10 Oct 2019 • Runnan Chen, Yuexin Ma, Nenglun Chen, Daniel Lee, Wenping Wang
Marking anatomical landmarks in cephalometric radiography is a critical operation in cephalometric analysis.
2 code implementations • 23 Aug 2019 • Runnan Chen, Yuexin Ma, Nenglun Chen, Daniel Lee, Wenping Wang
Marking anatomical landmarks in cephalometric radiography is a critical operation in cephalometric analysis.
1 code implementation • 23 Jan 2019 • Wei Li, Chengwei Pan, Rong Zhang, Jiaping Ren, Yuexin Ma, Jin Fang, Feilong Yan, Qichuan Geng, Xinyu Huang, Huajun Gong, Weiwei Xu, Guoping Wang, Dinesh Manocha, Ruigang Yang
Our augmented approach combines the flexibility in a virtual environment (e.g., vehicle movements) with the richness of the real world to allow effective simulation of anywhere on earth.
1 code implementation • 6 Nov 2018 • Yuexin Ma, Xinge Zhu, Sibo Zhang, Ruigang Yang, Wenping Wang, Dinesh Manocha
To safely and efficiently navigate in complex urban traffic, autonomous vehicles must make responsible predictions in relation to surrounding traffic-agents (vehicles, bicycles, pedestrians, etc.).
Ranked #1 on Trajectory Prediction on Apolloscape Trajectory
no code implementations • 7 Apr 2018 • Yuexin Ma, Dinesh Manocha, Wenping Wang
We present a novel algorithm for reciprocal collision avoidance between heterogeneous agents of different shapes and sizes.