no code implementations • 21 Apr 2021 • Ankit Laddha, Shivam Gautam, Stefan Palombo, Shreyash Pandey, Carlos Vallespi-Gonzalez
In this work, we propose \textit{MVFuseNet}, a novel end-to-end method for joint object detection and motion forecasting from a temporal sequence of LiDAR data.
no code implementations • CVPR 2022 • Zhaoen Su, Chao Wang, David Bradley, Carlos Vallespi-Gonzalez, Carl Wellington, Nemanja Djuric
In many different fields, interactions between objects play a critical role in determining their behavior.
no code implementations • 9 Jan 2021 • Abhishek Mohta, Fang-Chieh Chou, Brian C. Becker, Carlos Vallespi-Gonzalez, Nemanja Djuric
Detection of surrounding objects and prediction of their motion are critical components of a self-driving system.
no code implementations • 5 Nov 2020 • Henggang Cui, Fang-Chieh Chou, Jake Charland, Carlos Vallespi-Gonzalez, Nemanja Djuric
Object detection is a critical component of a self-driving system, tasked with inferring the current states of the surrounding traffic actors.
no code implementations • 1 Nov 2020 • Zhaoen Su, Chao Wang, Henggang Cui, Nemanja Djuric, Carlos Vallespi-Gonzalez, David Bradley
To address this issue, we propose a simple and general representation for temporally continuous probabilistic trajectory prediction based on polynomial trajectory parameterization.
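The idea of a polynomial trajectory parameterization can be illustrated with a minimal sketch (not taken from the paper; the polynomial degree and coefficient values are illustrative assumptions): a trajectory is stored as one set of polynomial coefficients per spatial dimension, so a position at any continuous time t is obtained by evaluating the polynomial rather than interpolating between discrete waypoints.

```python
import numpy as np

def eval_poly_trajectory(coeffs, t):
    """Evaluate a polynomial trajectory at continuous query times t.

    coeffs: (D, K) array with one row of K polynomial coefficients
            (lowest order first) per spatial dimension.
    t: (T,) array of query times.
    Returns: (T, D) positions.
    """
    t = np.asarray(t, dtype=float)
    # Vandermonde matrix: rows [1, t, t^2, ..., t^{K-1}]
    powers = np.vander(t, N=coeffs.shape[1], increasing=True)  # (T, K)
    return powers @ coeffs.T  # (T, D)

# Illustrative cubic trajectory in 2D: x(t) = 1 + 2t, y(t) = 0.5 t^2
coeffs = np.array([[1.0, 2.0, 0.0, 0.0],
                   [0.0, 0.0, 0.5, 0.0]])
pos = eval_poly_trajectory(coeffs, [0.0, 1.0, 2.0])
```

Because the representation is continuous in time, positions (and, by differentiating the coefficients, velocities) are defined at arbitrary query times, not just at fixed prediction horizons.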
no code implementations • 2 Oct 2020 • Meet Shah, Zhiling Huang, Ankit Laddha, Matthew Langford, Blake Barber, Sidney Zhang, Carlos Vallespi-Gonzalez, Raquel Urtasun
In this paper, we present LiRaNet, a novel end-to-end trajectory prediction method that utilizes radar sensor information along with widely used LiDAR and high-definition (HD) maps.
no code implementations • 27 Aug 2020 • Sudeep Fadadu, Shreyash Pandey, Darshan Hegde, Yi Shi, Fang-Chieh Chou, Nemanja Djuric, Carlos Vallespi-Gonzalez
Our model builds on a state-of-the-art Bird's-Eye View (BEV) network that fuses voxelized features from a sequence of historical LiDAR data as well as a rasterized high-definition (HD) map to perform detection and prediction tasks.
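As a rough illustration of the BEV voxelization step (a sketch under assumed grid extents and cell size, not the paper's implementation), LiDAR points can be binned into a top-down grid whose cells accumulate per-cell statistics — here simply a point count:

```python
import numpy as np

def voxelize_bev(points, x_range=(-50.0, 50.0), y_range=(-50.0, 50.0), cell=0.5):
    """Bin LiDAR points (N, 3) into a top-down (bird's-eye view) grid.

    Grid extents and cell size are illustrative assumptions.
    Returns an (H, W) array counting the points falling in each cell.
    """
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    # Discard points outside the grid.
    valid = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    grid = np.zeros((ny, nx), dtype=np.int32)
    # Unbuffered accumulation so repeated indices each add 1.
    np.add.at(grid, (iy[valid], ix[valid]), 1)
    return grid

pts = np.array([[0.0, 0.0, 1.0], [0.1, 0.1, 1.2], [200.0, 0.0, 0.0]])
grid = voxelize_bev(pts)
```

In a full pipeline, sweeps from several past timestamps would each be voxelized and their feature maps fused, with the rasterized map as additional channels.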
no code implementations • 3 Jun 2020 • Nemanja Djuric, Henggang Cui, Zhaoen Su, Shangxuan Wu, Huahua Wang, Fang-Chieh Chou, Luisa San Martin, Song Feng, Rui Hu, Yang Xu, Alyssa Dayan, Sidney Zhang, Brian C. Becker, Gregory P. Meyer, Carlos Vallespi-Gonzalez, Carl K. Wellington
One of the critical pieces of the self-driving puzzle is understanding the surroundings of a self-driving vehicle (SDV) and predicting how these surroundings will change in the near future.
no code implementations • 21 May 2020 • Ankit Laddha, Shivam Gautam, Gregory P. Meyer, Carlos Vallespi-Gonzalez, Carl K. Wellington
We show that our approach significantly improves motion forecasting performance over the existing state-of-the-art.
no code implementations • 12 Mar 2020 • Gregory P. Meyer, Jake Charland, Shreyash Pandey, Ankit Laddha, Shivam Gautam, Carlos Vallespi-Gonzalez, Carl K. Wellington
In this work, we present LaserFlow, an efficient method for 3D object detection and motion forecasting from LiDAR.
no code implementations • 9 Mar 2020 • Shivam Gautam, Gregory P. Meyer, Carlos Vallespi-Gonzalez, Brian C. Becker
Accurate motion state estimation of Vulnerable Road Users (VRUs) is a critical requirement for autonomous vehicles that navigate in urban environments.
no code implementations • 1 Mar 2020 • Siheng Chen, Baoan Liu, Chen Feng, Carlos Vallespi-Gonzalez, Carl Wellington
We present a review of 3D point cloud processing and learning for autonomous driving.
no code implementations • 25 Apr 2019 • Gregory P. Meyer, Jake Charland, Darshan Hegde, Ankit Laddha, Carlos Vallespi-Gonzalez
In this paper, we present an extension to LaserNet, an efficient and state-of-the-art LiDAR-based 3D object detector.
no code implementations • CVPR 2019 • Gregory P. Meyer, Ankit Laddha, Eric Kee, Carlos Vallespi-Gonzalez, Carl K. Wellington
The efficiency results from processing LiDAR data in the native range view of the sensor, where the input data is naturally compact.
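The range view referred to here is the sensor's native spherical image: each LiDAR return maps to an (elevation, azimuth) pixel storing its range, so the image is dense wherever the sensor sees returns. A minimal projection sketch (image resolution and vertical field of view are illustrative assumptions, not the detector's actual configuration):

```python
import numpy as np

def to_range_image(points, h=32, w=512, v_fov=(-25.0, 3.0)):
    """Project LiDAR points (N, 3) into an (h, w) range image.

    Rows index elevation within the assumed vertical field of view
    (degrees); columns index azimuth over 360 degrees. Each pixel
    stores the range of the point landing there (0 where empty).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    az = np.arctan2(y, x)                                  # [-pi, pi)
    el = np.degrees(np.arcsin(z / np.maximum(r, 1e-9)))    # elevation in degrees
    col = ((az / (2 * np.pi) + 0.5) * w).astype(int) % w
    row = ((v_fov[1] - el) / (v_fov[1] - v_fov[0]) * (h - 1)).astype(int)
    img = np.zeros((h, w), dtype=float)
    valid = (row >= 0) & (row < h)                         # clip points outside the FOV
    img[row[valid], col[valid]] = r[valid]
    return img

img = to_range_image(np.array([[10.0, 0.0, 0.0], [0.0, 10.0, -1.0]]))
```

Compared with a bird's-eye-view grid, the range image has no empty far-field cells, which is the compactness the entry alludes to.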