Deep Filtering with DNN, CNN and RNN

19 Dec 2021  ·  Bin Xie, Qing Zhang ·

This paper studies a deep learning approach to linear and nonlinear filtering. The idea is to train a neural network on Monte Carlo samples generated from a nominal dynamic model and then apply the trained weights to Monte Carlo samples from the actual dynamic model. The main focus is on deep filters built with three major neural network architectures (DNN, CNN, RNN). Our deep filter compares favorably to the traditional Kalman filter in linear cases and outperforms the extended Kalman filter in nonlinear cases. A switching model with jumps is then studied to demonstrate the adaptiveness and power of deep filtering. Among the three architectures, the CNN outperforms the others on average, while the RNN does not appear to be well suited to the filtering problem. One advantage of the deep filter is its robustness when the nominal and actual models differ. Another advantage is that real data can be used directly to train the deep neural network, so model calibration can be bypassed altogether.
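The following is a minimal sketch of the deep-filtering idea described above, not the authors' exact setup: the state-space model, network size, window length, and all hyperparameters are illustrative assumptions. A small DNN is trained on Monte Carlo trajectories from a nominal linear model to map a window of past observations to the current state, and the trained weights are then applied unchanged to observations from a mismatched "actual" model.

```python
import numpy as np
import torch
import torch.nn as nn

def simulate(a, c, q, r, n_paths, T, rng):
    """Simulate x_{t+1} = a*x_t + q*w_t, y_t = c*x_t + r*v_t (hypothetical model)."""
    x = np.zeros((n_paths, T))
    for t in range(1, T):
        x[:, t] = a * x[:, t - 1] + q * rng.standard_normal(n_paths)
    y = c * x + r * rng.standard_normal((n_paths, T))
    return x, y

rng = np.random.default_rng(0)
window, T = 10, 60

# Monte Carlo training samples from the nominal model.
x_tr, y_tr = simulate(a=0.9, c=1.0, q=0.3, r=0.5, n_paths=2000, T=T, rng=rng)

def make_pairs(x, y):
    # Training pairs: sliding window of past observations -> current state.
    X = np.stack([y[:, t - window:t] for t in range(window, T)], axis=1)
    Z = np.stack([x[:, t] for t in range(window, T)], axis=1)
    return (torch.tensor(X.reshape(-1, window), dtype=torch.float32),
            torch.tensor(Z.reshape(-1, 1), dtype=torch.float32))

X_tr, Z_tr = make_pairs(x_tr, y_tr)

# Simple DNN filter; the paper also considers CNN and RNN variants.
net = nn.Sequential(nn.Linear(window, 32), nn.ReLU(),
                    nn.Linear(32, 32), nn.ReLU(),
                    nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for epoch in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(X_tr), Z_tr)
    loss.backward()
    opt.step()

# "Actual" model with slightly different dynamics: the trained weights are
# reused directly, illustrating robustness to model mismatch.
x_te, y_te = simulate(a=0.85, c=1.0, q=0.35, r=0.6, n_paths=500, T=T, rng=rng)
X_te, Z_te = make_pairs(x_te, y_te)
with torch.no_grad():
    mse = nn.functional.mse_loss(net(X_te), Z_te).item()
print(f"deep-filter test MSE on mismatched model: {mse:.4f}")
```

In a fuller experiment one would compare this test MSE against a Kalman (or extended Kalman) filter run on the same mismatched trajectories, which is the comparison the paper reports.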
