A Parameter-Free Learning Automaton Scheme

28 Nov 2017 · Hao Ge

For a learning automaton (LA), properly configuring the learning parameters that govern its performance is difficult, because it requires manual tuning before deployment. Ensuring stable and reliable performance in stochastic environments makes this tuning a time-consuming procedure that also consumes many interactions with the environment, a serious limitation for LA-based applications where such interactions are expensive. In this paper, we propose a parameter-free learning automaton (PFLA) scheme that avoids parameter tuning by means of Bayesian inference. In contrast to existing schemes, whose parameters must be tuned carefully for each environment, the proposed scheme is insensitive to the external environment: a single parameter setting can be applied consistently across environments, which dramatically reduces the difficulty of applying a learning automaton to an unknown stochastic environment. A rigorous proof of the $\epsilon$-optimality of the proposed scheme is provided, and numerical experiments are carried out on benchmark environments to verify its effectiveness. The results show that, without any parameter-tuning cost, PFLA achieves performance competitive with well-tuned schemes and outperforms untuned schemes in consistency of performance.
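
To make the core idea concrete, the sketch below shows how Bayesian inference can stand in for a hand-tuned learning parameter. It is a minimal Thompson-sampling-style loop, not the paper's PFLA algorithm: each action's unknown reward probability gets a Beta posterior, and action selection follows posterior samples rather than an update governed by a resolution or step-size parameter. The two-action environment, the uninformative Beta(1, 1) priors, and the step count are illustrative assumptions.

    # Hypothetical sketch, NOT the paper's PFLA algorithm: a Bayesian
    # (Thompson-sampling-style) action selector with a Beta posterior
    # over each action's unknown reward probability. No learning rate
    # or resolution parameter needs tuning; priors are Beta(1, 1).
    import random

    def bayesian_automaton(reward_probs, n_steps=10000):
        n_actions = len(reward_probs)
        # Beta posterior counters: alpha = successes + 1, beta = failures + 1
        alpha = [1] * n_actions
        beta = [1] * n_actions
        for _ in range(n_steps):
            # Sample each action's reward probability from its posterior,
            # then play the action with the largest sampled value.
            samples = [random.betavariate(alpha[i], beta[i])
                       for i in range(n_actions)]
            action = max(range(n_actions), key=lambda i: samples[i])
            # Interact with the stochastic environment (Bernoulli feedback).
            if random.random() < reward_probs[action]:
                alpha[action] += 1
            else:
                beta[action] += 1
        # Report the action the posterior currently favours.
        means = [alpha[i] / (alpha[i] + beta[i]) for i in range(n_actions)]
        return max(range(n_actions), key=lambda i: means[i])

    # Example: a benchmark-style two-action stochastic environment.
    print("inferred optimal action:", bayesian_automaton([0.8, 0.6]))

The same settings can be run unchanged against any Bernoulli environment, which is the sense in which such a scheme is "parameter-free": the cost of tuning is replaced by posterior updates driven solely by the observed feedback.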
