Markerless Motion Tracking with Noisy Video and IMU Data

Marker-based motion capture, considered the gold standard in human motion analysis, is expensive and requires trained personnel. Advances in inertial sensing and computer vision offer new opportunities to obtain research-grade assessments in clinics and natural environments. A challenge that discourages clinical adoption, however, is the need for careful sensor-to-body alignment, which slows the data collection process in clinics and is prone to errors when patients take the sensors home. We propose deep learning models to estimate human movement with noisy data from videos (VideoNet), inertial sensors (IMUNet), and a combination of the two (FusionNet), obviating the need for careful calibration. The video and inertial sensing data used to train the models were generated synthetically from a marker-based motion capture dataset covering a broad range of activities, and were augmented to account for sensor-misplacement and camera-occlusion errors. The models were tested using real data that included walking, jogging, squatting, sit-to-stand, and other activities. On calibrated data, IMUNet was as accurate as state-of-the-art models, while VideoNet and FusionNet reduced mean ± std root-mean-squared errors by 7.6 ± 5.4° and 5.9 ± 3.3°, respectively. Importantly, all the newly proposed models were less sensitive to noise than existing approaches, reducing errors by up to 14.0 ± 5.3° for sensor-misplacement errors of up to 30.0 ± 13.7° and by up to 7.4 ± 5.5° for joint-center-estimation errors of up to 101.1 ± 11.2 mm, across joints. These tools offer clinicians and patients the opportunity to estimate movement with research-grade accuracy, without the need for time-consuming calibration steps or the high costs associated with commercial products such as Theia3D or Xsens, helping democratize the diagnosis, prognosis, and treatment of neuromusculoskeletal conditions.
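
The abstract gives no implementation details, but the two noise augmentations it describes can be illustrated concretely. Below is a minimal sketch, assuming quaternion IMU orientation streams and 3D joint centers in millimetres: sensor misplacement is simulated by composing each synthetic IMU trace with one fixed random rotation offset, and joint-center estimation error by adding Gaussian noise to the video-derived joint positions. All function names, parameter values, and data shapes here are illustrative assumptions, not taken from the paper.

```python
# Hypothetical augmentation sketch; names and magnitudes are assumptions,
# loosely matched to the error ranges quoted in the abstract.
import numpy as np
from scipy.spatial.transform import Rotation as R

rng = np.random.default_rng(0)

def misplace_imu(quats, max_angle_deg=30.0):
    """Compose a (T, 4) xyzw quaternion stream with one random fixed
    rotation, mimicking a sensor mounted off its nominal body pose."""
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    angle = np.deg2rad(rng.uniform(0.0, max_angle_deg))
    offset = R.from_rotvec(angle * axis)
    return (R.from_quat(quats) * offset).as_quat()

def perturb_joint_centers(joints_mm, sigma_mm=30.0):
    """Add zero-mean Gaussian noise to (T, J, 3) joint centers in mm,
    mimicking keypoint/occlusion error in video-based pose estimates."""
    return joints_mm + rng.normal(scale=sigma_mm, size=joints_mm.shape)

# Example: augment one synthetic trial before training.
T, J = 200, 20                                   # frames, joints (assumed)
quats = R.random(T, random_state=0).as_quat()    # (T, 4) IMU orientations
joints = rng.normal(size=(T, J, 3)) * 100.0      # (T, J, 3) joint centers, mm
quats_aug = misplace_imu(quats)
joints_aug = perturb_joint_centers(joints)
```

Applying a single fixed offset per trial, rather than per-frame noise, reflects the physical scenario the abstract targets: a sensor strapped on at the wrong orientation stays wrong for the whole recording, so a model trained on such trials learns to be invariant to mounting error rather than to high-frequency jitter.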
