Multidimensional ground reaction forces and moments from wearable sensor accelerations via deep learning

Monitoring athlete internal workload exposure, including prevention of catastrophic non-contact knee injuries, relies on the existence of a custom early-warning detection system. This system must be able to estimate accurate, reliable, and valid musculoskeletal joint loads for sporting maneuvers in near real-time and during match play. However, current methods are constrained to laboratory instrumentation, are labor and cost intensive, and require highly trained specialist knowledge, thereby limiting their ecological validity and wider deployment. An informative next step towards this goal would be a new method to obtain ground kinetics in the field. Here we show that kinematic data obtained from wearable sensor accelerometers, in lieu of embedded force platforms, can leverage recent supervised learning techniques to predict near real-time multidimensional ground reaction forces and moments (GRF/M). Competing convolutional neural network (CNN) deep learning models were trained using laboratory-derived stance-phase GRF/M data and simulated sensor accelerations for running and sidestepping maneuvers, drawn from nearly half a million legacy motion trials. Predictions were then made from each model, driven by five sensor accelerations recorded during independent inter-laboratory data capture sessions. The proposed deep learning workbench achieved correlations to ground truth, by maximum discrete GRF component, of vertical Fz 0.97 and anterior Fy 0.96 (both running), and lateral Fx 0.87 (sidestepping), with the strongest mean correlation across GRF components 0.89, and across GRM components 0.65 (both sidestepping). These best-case correlations indicate the plausibility of the approach, although the range of results was disappointing. Progress toward the goal of accurately estimating near real-time on-field GRF/M will be informed by the lessons learned in this study [truncated].
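
The abstract describes a supervised regression task: stacked accelerometer time series in, stance-phase GRF/M waveforms out. The paper does not specify the network topology here, so the sketch below is only a hypothetical minimal 1D CNN regressor, assuming five tri-axial sensors (15 input channels), stance phase resampled to 100 time steps, and six output channels (Fx, Fy, Fz, Mx, My, Mz); the class name `AccelToGRFM` and all layer sizes are illustrative, not the authors' architecture.

```python
# Minimal sketch (not the authors' code): a 1D CNN mapping wearable sensor
# accelerations to multidimensional GRF/M waveforms.
import torch
import torch.nn as nn

class AccelToGRFM(nn.Module):
    def __init__(self, in_channels: int = 15, out_channels: int = 6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            # 1x1 convolution regresses the six GRF/M channels per time step.
            nn.Conv1d(128, out_channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 15, 100) accelerations -> (batch, 6, 100) GRF/M waveforms
        return self.net(x)

model = AccelToGRFM()
x = torch.randn(8, 15, 100)      # a batch of (simulated) sensor accelerations
target = torch.randn(8, 6, 100)  # laboratory-derived GRF/M ground truth
y_hat = model(x)
loss = nn.functional.mse_loss(y_hat, target)
loss.backward()                  # one standard supervised training step
print(y_hat.shape)               # torch.Size([8, 6, 100])
```

Under this framing, the correlations reported in the abstract (e.g. vertical Fz 0.97) would be computed per GRF/M component between the predicted and force-platform-measured waveforms on held-out trials.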
