GPR-based Subsurface Object Detection and Reconstruction Using Random Motion and DepthNet

20 Aug 2020 · Jinglun Feng, Liang Yang, HaiYan Wang, Yifeng Song, Jizhong Xiao

Ground Penetrating Radar (GPR) is one of the most important non-destructive evaluation (NDE) devices for detecting subsurface objects (e.g., rebars, utility pipes) and revealing the underground scene. One of the biggest challenges in GPR-based inspection is the reconstruction of subsurface targets. To address this issue, this paper presents a 3D GPR migration and dielectric prediction system to detect and reconstruct underground targets. The system is composed of three modules: 1) a visual-inertial fusion (VIF) module that generates the pose information of the GPR device; 2) a deep neural network module (i.e., DepthNet) that detects hyperbola features in GPR B-scan images, removes the noise in the B-scan data, and predicts the dielectric constant to determine the depth of the objects; 3) a 3D GPR migration module that synchronizes the pose information with the GPR scan data processed by DepthNet to reconstruct and visualize the 3D underground targets. Our proposed DepthNet processes the GPR data by removing the noise in the B-scan image as well as predicting the depth of subsurface objects. For DepthNet model training and testing, we collect real GPR data in the concrete test pit at Geophysical Survey Systems, Inc. (GSSI) and create synthetic GPR data using the gprMax 3.0 simulator. The resulting dataset includes 350 labeled GPR images. DepthNet achieves an average accuracy of 92.64% for B-scan feature detection and a 0.112 average error for underground target depth prediction. In addition, the experimental results verify that our proposed method improves the migration accuracy and the quality of the generated 3D GPR image compared with traditional migration methods.
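The depth prediction in the second module hinges on the dielectric constant: once a relative permittivity for the medium is estimated, the two-way travel time of a hyperbola apex in the B-scan fixes the target depth via standard GPR wave physics. The sketch below illustrates only that conversion; it is not the paper's code, and the function name and example values are illustrative assumptions.

```python
# Minimal sketch: converting a GPR two-way travel time into a target depth
# given a predicted relative permittivity (dielectric constant).

C = 299_792_458.0  # speed of light in vacuum, m/s

def depth_from_travel_time(two_way_time_s: float, eps_r: float) -> float:
    """Estimate target depth from GPR two-way travel time.

    In a lossless medium the wave velocity is v = c / sqrt(eps_r),
    so the one-way depth is d = v * t / 2.
    """
    v = C / eps_r ** 0.5              # propagation velocity in the medium
    return v * two_way_time_s / 2.0   # halve: signal travels down and back

# Example: a hyperbola apex at 4 ns in concrete (eps_r ~ 6, a commonly
# assumed value) places the target at roughly 0.24 m depth.
if __name__ == "__main__":
    depth_m = depth_from_travel_time(4e-9, 6.0)
    print(f"estimated depth: {depth_m:.3f} m")
```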
