Provably Stabilizing Model-Free Q-Learning for Unknown Bilinear Systems
In this paper, we present a provably convergent model-free $Q$-learning algorithm that learns a stabilizing control policy for an unknown bilinear system from a single online run. Given an unknown bilinear system, we study the interplay between its equivalent control-affine linear time-varying and linear time-invariant representations to derive i) from Pontryagin's Minimum Principle, a pair of point-to-point model-free policy improvement and evaluation laws that iteratively solve for an optimal state-dependent control policy; and ii) the conditions under which state-input data are sufficient to characterize system behavior in a model-free manner. We demonstrate the performance of the proposed algorithm via illustrative numerical examples and compare it to the model-based case.
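To make the setting concrete, the sketch below shows the equivalence the abstract alludes to for a single-input discrete-time bilinear system: the bilinear dynamics can be rewritten in control-affine form with a state-dependent input direction, which along a fixed trajectory reads as a linear time-varying system. All matrices here are hypothetical and chosen only for illustration; this is not the paper's algorithm.

```python
import numpy as np

# Hypothetical single-input bilinear system (matrices for illustration only):
#   x_{k+1} = A x_k + b u_k + u_k * (N x_k)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
b = np.array([0.0, 0.1])
N = np.array([[0.0, 0.05],
              [0.05, 0.0]])

def bilinear_step(x, u):
    # Direct bilinear form: the u * (N x) term is what distinguishes
    # a bilinear system from a linear one.
    return A @ x + b * u + u * (N @ x)

def control_affine_step(x, u):
    # Equivalent control-affine view: the input direction depends on the state,
    #   x_{k+1} = A x_k + (b + N x_k) u_k,
    # so along a given state trajectory the system looks linear time-varying.
    return A @ x + (b + N @ x) * u

x = np.array([1.0, -1.0])
u = 0.3
# Both representations produce the same next state.
assert np.allclose(bilinear_step(x, u), control_affine_step(x, u))
```

A learning scheme can exploit this view because, at each visited state, the effective input matrix $b + N x_k$ is fixed, so linear-systems tools apply pointwise along the trajectory.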