Adaptive Rate of Convergence of Thompson Sampling for Gaussian Process Optimization

18 May 2017 · Kinjal Basu, Souvik Ghosh

We consider the problem of global optimization of a function over a continuous domain. In our setup, we can evaluate the function sequentially at points of our choice, and the evaluations are noisy. We frame this as a continuum-armed bandit problem with a Gaussian Process prior on the function. In this regime, most algorithms have been designed to minimize some form of regret. In this paper, we instead study the convergence of the sequence of query points $x^t$ to the global optimizer $x^*$ under the Thompson Sampling approach. Under certain assumptions and regularity conditions, we prove concentration bounds for $x^t$: the probability that $x^t$ is bounded away from $x^*$ decays exponentially fast in $t$. Moreover, the result allows us to derive adaptive convergence rates that depend on the structure of the function.
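
To make the sampling loop concrete, the following is a minimal sketch (not the authors' implementation) of Thompson Sampling with a Gaussian Process prior: at each round, draw one function from the GP posterior, query its maximizer, and update the posterior with the noisy observation. The RBF kernel, noise variance, discretized one-dimensional domain, and toy objective `f` below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(a, b, length_scale=0.2, variance=1.0):
    """Squared-exponential kernel k(a, b) for 1-D inputs."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def f(x):
    """Toy objective with a global maximum; evaluations are observed with noise."""
    return np.sin(3 * x) + 0.5 * np.cos(7 * x)

noise_var = 0.05                  # assumed observation-noise variance
grid = np.linspace(0, 1, 200)     # discretization of the continuous domain
X, y = [], []                     # history of queried points and noisy values

for t in range(50):
    if X:
        Xa = np.array(X)
        K = rbf_kernel(Xa, Xa) + noise_var * np.eye(len(Xa))
        Ks = rbf_kernel(grid, Xa)
        Kss = rbf_kernel(grid, grid)
        K_inv = np.linalg.inv(K)
        mu = Ks @ K_inv @ np.array(y)      # GP posterior mean on the grid
        cov = Kss - Ks @ K_inv @ Ks.T      # GP posterior covariance on the grid
    else:
        mu = np.zeros_like(grid)           # prior mean
        cov = rbf_kernel(grid, grid)       # prior covariance
    # Thompson step: draw one function from the posterior and maximize it.
    sample = rng.multivariate_normal(
        mu, cov + 1e-6 * np.eye(len(grid)), check_valid="ignore"
    )
    x_t = grid[np.argmax(sample)]
    # Evaluate the objective at x_t with Gaussian noise and update the history.
    X.append(x_t)
    y.append(f(x_t) + np.sqrt(noise_var) * rng.normal())

print("last queried point:", X[-1])
```

As more observations accumulate, the posterior concentrates around the objective and the queried points $x^t$ cluster near the maximizer, which is the behavior the paper's concentration bounds quantify.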
