
Efficient Stochastic Optimal Control through Approximate Bayesian Input Inference

Optimal control under uncertainty is a prevailing challenge, in large part because it is difficult to produce tractable solutions for the underlying stochastic optimization problem. We show how advanced approximate inference techniques can be used to handle the statistical approximations in a principled and practical manner by framing the control problem as one of input estimation. Analyzing the Gaussian setting, we present an inference-based solver that is effective in both stochastic and deterministic settings and was found to outperform popular baselines on simulated nonlinear tasks. We draw connections between this inference formulation and previous approaches to stochastic optimal control, and outline several advantages that the inference view offers due to its statistical nature.
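
To make the input-estimation view concrete, here is a minimal, hypothetical sketch (Python/NumPy) of Gaussian input inference for a single linear-Gaussian step: place a Gaussian prior on the input and condition on a desired next state, so the posterior over the input follows from standard Gaussian conditioning. The dynamics matrices, prior covariance Sigma_u, noise covariance Sigma_w, and the goal pseudo-observation are all illustrative assumptions, not the paper's actual model or solver.

```python
# Hypothetical one-step illustration of "control as Bayesian input inference"
# in the Gaussian setting (not the authors' implementation).
import numpy as np

def infer_input(A, B, x0, x_goal, Sigma_u, Sigma_w):
    """Posterior N(mean, cov) over the input u, conditioned on a goal state.

    Assumed model (single step, for illustration only):
        u        ~ N(0, Sigma_u)              # prior over the input
        x' | u   ~ N(A x0 + B u, Sigma_w)     # linear-Gaussian dynamics
    Conditioning on x' = x_goal yields a closed-form Gaussian posterior.
    """
    S = B @ Sigma_u @ B.T + Sigma_w           # marginal covariance of the next state
    K = Sigma_u @ B.T @ np.linalg.inv(S)      # gain mapping state error to input
    mean = K @ (x_goal - A @ x0)              # posterior mean shifted toward the goal
    cov = Sigma_u - K @ B @ Sigma_u           # posterior uncertainty over the input
    return mean, cov

# Tiny usage example with made-up double-integrator-like dynamics.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
x0 = np.array([0.0, 0.0])
x_goal = np.array([0.0, 1.0])
mean, cov = infer_input(A, B, x0, x_goal,
                        Sigma_u=np.eye(1) * 10.0,
                        Sigma_w=np.eye(2) * 1e-2)
print("posterior input mean:", mean, "posterior input covariance:", cov)
```

In this toy setting the posterior covariance quantifies how uncertain the inferred input is, which hints at the kind of statistical information the abstract attributes to the inference view of stochastic optimal control.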
