Monte Carlo Grid Dynamic Programming: Almost Sure Convergence and Probability Constraints
Dynamic Programming (DP) suffers from the well-known "curse of dimensionality", further exacerbated by the need to compute expectations over process noise in stochastic models. This paper presents a Monte Carlo sampling approach for the state space, together with an interpolation procedure for the resulting value function that depends on the process noise density in a "self-approximating" fashion, eliminating the need for ordering or set-membership tests. We prove almost sure convergence of the value iteration (and consequently, policy iteration) procedure. The proposed meshless sampling and interpolation algorithm alleviates the burden of gridding the state space, traditionally required in DP, and avoids constructing a piecewise constant value function over a grid. Moreover, we demonstrate that the proposed interpolation procedure is well-suited for handling probabilistic constraints by sampling both feasible and infeasible regions. While the curse of dimensionality cannot be entirely avoided, this approach offers a practical framework for addressing low-order stochastic nonlinear systems with probabilistic constraints, where traditional DP methods may be intractable or inefficient. Numerical examples are presented to illustrate the convergence and convenience of the proposed algorithms.
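To make the "self-approximating" idea concrete, the following is a minimal sketch (not the paper's algorithm or system) on a hypothetical 1-D problem: states are a Monte Carlo sample rather than a grid, and the expectation in the Bellman backup is replaced by a weighted sum whose weights are the process-noise density evaluated at the sampled states and normalized, so no ordering or set-membership tests are needed. The dynamics, cost, discount factor, and control set below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: x' = a*x + u + w with Gaussian noise w ~ N(0, s^2),
# quadratic stage cost c(x, u) = x^2 + u^2, discount factor g, finite control set U.
a, s, g = 0.9, 0.5, 0.95
N = 200
X = rng.uniform(-3.0, 3.0, N)        # Monte Carlo sample of the state space (no grid)
U = np.array([-1.0, 0.0, 1.0])       # finite set of admissible controls

def noise_density(w):
    """Gaussian process-noise density; it doubles as the interpolation kernel."""
    return np.exp(-w**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

V = np.zeros(N)                      # value function known only at the sampled states
for _ in range(300):                 # value iteration on the samples
    Q = np.empty((N, len(U)))
    for k, u in enumerate(U):
        mean_next = a * X + u        # deterministic part of the successor state
        # Density of landing at each sampled state, normalized row-wise:
        # these weights "self-approximate" the expectation E[V(x')].
        W = noise_density(X[None, :] - mean_next[:, None])
        W /= W.sum(axis=1, keepdims=True)
        Q[:, k] = X**2 + u**2 + g * (W @ V)   # Bellman backup as a weighted sum
    V_new = Q.min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:      # stop once the backup is a fixed point
        V = V_new
        break
    V = V_new
```

Because the normalized weights sum to one and g < 1, each backup is a contraction, so the iterates converge on the sampled states; the same weights give a value estimate at any query state without grid interpolation.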