Provably Stabilizing Model-Free Q-Learning for Unknown Bilinear Systems

29 Aug 2022 · Shanelle G. Clarke, Omanshu Thapliyal, Inseok Hwang

In this paper, we present a provably convergent model-free $Q$-learning algorithm that learns a stabilizing control policy for an unknown bilinear system from a single online run. Given an unknown bilinear system, we study the interplay between its equivalent control-affine linear time-varying and linear time-invariant representations to derive i) from Pontryagin's Minimum Principle, a pair of point-to-point model-free policy improvement and policy evaluation laws that iteratively solve for an optimal state-dependent control policy; and ii) the properties under which the state-input data is sufficient to characterize system behavior in a model-free manner. We demonstrate the performance of the proposed algorithm via illustrative numerical examples and compare it with the model-based case.
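To make the policy-iteration flavor of the abstract concrete, below is a minimal sketch of model-free Q-learning on a single-input discrete-time bilinear system $x_{k+1} = Ax_k + Bu_k + (Nx_k)u_k$. This is not the paper's point-to-point law derived from Pontryagin's Minimum Principle; it is a classical LQR-style Q-learning loop (Bradtke-type least-squares policy evaluation plus greedy improvement) used purely as an illustration. The system matrices `A`, `B`, `Nb`, the cost weights `Qc`, `Rc`, the initial gain, and the noise scale are all assumed for this sketch; the learner touches them only through the simulated state-input trajectory.

```python
import numpy as np

# Sketch only: classical LQR-style Q-learning run on a hypothetical
# single-input bilinear system  x+ = A x + B u + (N x) u.
# All matrices, weights, and gains below are assumptions for illustration,
# not values from the paper.

rng = np.random.default_rng(0)
n, m = 2, 1                                   # state / input dimensions
A = np.array([[1.0, 0.1], [0.0, 1.02]])       # "unknown", open-loop unstable
B = np.array([[0.0], [0.1]])
Nb = np.array([[0.0, 0.0], [0.05, 0.0]])      # bilinear coupling (assumed)
Qc, Rc = np.eye(n), 0.1 * np.eye(m)           # quadratic stage-cost weights

def step(x, u):
    """Single-input discrete-time bilinear dynamics."""
    return A @ x + B[:, 0] * u + (Nb @ x) * u

def features(x, u):
    """Quadratic basis so that Q(x,u) = z^T H z = theta . features(x,u)."""
    z = np.append(x, u)
    i, j = np.triu_indices(n + m)
    return np.where(i == j, 1.0, 2.0) * np.outer(z, z)[i, j]

def theta_to_H(theta):
    """Rebuild the symmetric Q-function matrix H from its upper triangle."""
    H = np.zeros((n + m, n + m))
    H[np.triu_indices(n + m)] = theta
    return H + np.triu(H, 1).T

K = np.array([[-1.0, -5.0]])                  # rough initial gain (assumed)
x = np.array([1.0, -1.0])
for _ in range(8):                            # policy-iteration sweeps
    Phi, c = [], []
    for _ in range(60):                       # data gathered along one run
        u = (K @ x).item() + 0.3 * rng.standard_normal()  # excitation noise
        xn = step(x, u)
        un = (K @ xn).item()                  # on-policy next input
        Phi.append(features(x, u) - features(xn, un))     # Bellman residual
        c.append(x @ Qc @ x + Rc[0, 0] * u**2)
        x = xn
    # Policy evaluation: least-squares fit of the Q-function parameters.
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(c), rcond=None)
    H = theta_to_H(theta)
    # Policy improvement: u = argmin_u Q(x,u)  =>  K = -Huu^{-1} Hux.
    K = -np.linalg.solve(H[n:, n:], H[n:, :n])

x = np.array([1.0, -1.0])                     # quick closed-loop check
for _ in range(50):
    x = step(x, (K @ x).item())
print("learned gain K =", K)
print("state norm after 50 closed-loop steps:", np.linalg.norm(x))
```

The injected excitation noise plays the role of the data-sufficiency condition in point ii) of the abstract: without persistently exciting inputs, the least-squares evaluation step is rank-deficient and the Q-function parameters cannot be identified from the trajectory alone.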
