Motion Prediction Models

MotionNet is a system for joint perception and motion prediction in autonomous driving. It takes a sequence of LiDAR sweeps as input and outputs a bird's eye view (BEV) map, in which each grid cell encodes the object category and motion information extracted from the 3D point clouds. Its backbone is a spatio-temporal pyramid network that extracts deep spatial and temporal features in a hierarchical fashion. To enforce smoothness of the predictions over both space and time, training is further regularized with novel spatial and temporal consistency losses.

Source: MotionNet: Joint Perception and Motion Prediction for Autonomous Driving Based on Bird's Eye View Maps
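As a rough illustration of the idea (not the authors' implementation), the sketch below feeds a sequence of LiDAR sweeps rasterized into BEV pseudo-images through a small spatio-temporal backbone and predicts a per-cell category and displacement. All layer sizes, module names, input channel counts, and the toy spatial-smoothness penalty are assumptions made for the example.

```python
# Minimal MotionNet-style sketch (assumed shapes and layers, not the paper's code):
# BEV sweep sequence -> spatio-temporal backbone -> per-cell category + motion heads.
import torch
import torch.nn as nn


class SpatioTemporalBackbone(nn.Module):
    """Toy stand-in for MotionNet's spatio-temporal pyramid network (STPN)."""

    def __init__(self, in_channels: int = 13, base: int = 32, num_sweeps: int = 5):
        super().__init__()
        # 2D convolutions applied per BEV frame extract spatial features.
        self.spatial = nn.Sequential(
            nn.Conv2d(in_channels, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU(inplace=True),
        )
        # A 3D convolution over the time axis fuses the sweep sequence
        # (a crude analogue of the pyramid's temporal pooling).
        self.temporal = nn.Conv3d(base, base, kernel_size=(num_sweeps, 1, 1))

    def forward(self, bev_seq: torch.Tensor) -> torch.Tensor:
        # bev_seq: (B, T, C, H, W) -- T past sweeps rasterized into BEV grids.
        b, t, c, h, w = bev_seq.shape
        x = self.spatial(bev_seq.reshape(b * t, c, h, w))         # (B*T, F, H, W)
        x = x.reshape(b, t, -1, h, w).permute(0, 2, 1, 3, 4)      # (B, F, T, H, W)
        x = self.temporal(x).squeeze(2)                           # (B, F, H, W)
        return x


class MotionNetSketch(nn.Module):
    """Per-cell classification and motion (displacement) heads on the BEV map."""

    def __init__(self, num_classes: int = 5, num_future: int = 2):
        super().__init__()
        self.backbone = SpatioTemporalBackbone()
        self.cls_head = nn.Conv2d(32, num_classes, 1)        # cell category logits
        self.motion_head = nn.Conv2d(32, 2 * num_future, 1)  # (dx, dy) per future step

    def forward(self, bev_seq: torch.Tensor):
        feat = self.backbone(bev_seq)
        return self.cls_head(feat), self.motion_head(feat)


def spatial_smoothness_loss(motion: torch.Tensor) -> torch.Tensor:
    """Hypothetical stand-in for the spatial consistency idea: penalize large
    motion differences between neighbouring BEV cells."""
    dx = (motion[..., :, 1:] - motion[..., :, :-1]).abs().mean()
    dy = (motion[..., 1:, :] - motion[..., :-1, :]).abs().mean()
    return dx + dy


if __name__ == "__main__":
    model = MotionNetSketch()
    sweeps = torch.randn(1, 5, 13, 256, 256)   # 5 past sweeps, 13 height channels (assumed)
    cls_logits, motion = model(sweeps)
    print(cls_logits.shape, motion.shape)       # (1, 5, 256, 256), (1, 4, 256, 256)
    print(spatial_smoothness_loss(motion))
```

The consistency penalty shown here only covers the spatial case; the paper additionally regularizes predictions across time, which would compare displacement maps from consecutive frames in an analogous way.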

Tasks


Task                 Papers   Share
3D Object Detection  1        25.00%
Autonomous Driving   1        25.00%
Motion Prediction    1        25.00%
Object Detection     1        25.00%


Categories