Diffusion Approximations for Thompson Sampling

19 May 2021 · Lin Fan, Peter W. Glynn

We study the behavior of Thompson sampling from the perspective of weak convergence. In the regime where the gaps between arm means scale as $1/\sqrt{n}$ with the time horizon $n$, we show that the dynamics of Thompson sampling evolve according to discrete versions of stochastic differential equations (SDEs) and random ordinary differential equations (ODEs), and that, as $n \to \infty$, these dynamics converge weakly to solutions of the corresponding SDEs and random ODEs. (Recently, Wager and Xu (arXiv:2101.09855) independently proposed this regime and developed similar SDE and random ODE approximations for Thompson sampling in the multi-armed bandit setting.) Our weak convergence theory, which covers both multi-armed and linear bandit settings, is developed from first principles using the Continuous Mapping Theorem and can be directly adapted to analyze other sampling-based bandit algorithms, for example, algorithms using the bootstrap for exploration. We also establish an invariance principle for multi-armed bandits with gaps scaling as $1/\sqrt{n}$: for Thompson sampling and related algorithms involving posterior approximation or the bootstrap, the weak diffusion limits are in general the same regardless of the specifics of the reward distributions or the choice of prior. In particular, as suggested by the classical Bernstein-von Mises normal approximation for posterior distributions, the weak diffusion limits generally coincide with the limit for normally distributed rewards and priors.
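
To make the regime concrete, below is a minimal simulation sketch (not taken from the paper): Thompson sampling with conjugate normal priors on a two-armed Gaussian bandit whose gap scales as $\Delta/\sqrt{n}$. All names and parameter choices here (`delta`, `sigma`, the number of Monte Carlo replications) are illustrative assumptions. The qualitative content of the diffusion limit is that, as $n$ grows, the distribution of the arm-pull fraction stabilizes at a nondegenerate random limit rather than collapsing to 0 or 1.

```python
# Minimal sketch (illustrative assumptions, not the paper's code):
# Thompson sampling on a two-armed Gaussian bandit in the diffusion
# regime, where the gap between arm means scales as delta / sqrt(n).
import numpy as np

rng = np.random.default_rng(0)

def thompson_fraction(n, delta=1.0, sigma=1.0):
    """Run Thompson sampling with N(0, sigma^2) priors on a two-armed
    Gaussian bandit with means (delta / sqrt(n), 0); return the fraction
    of the n rounds in which arm 0 is pulled."""
    means = np.array([delta / np.sqrt(n), 0.0])
    counts = np.zeros(2)   # number of pulls per arm
    sums = np.zeros(2)     # running reward sums per arm
    pulls0 = 0
    for _ in range(n):
        # With an N(0, sigma^2) prior and known noise variance sigma^2,
        # the posterior for an arm mean after k pulls with reward sum s
        # is N(s / (k + 1), sigma^2 / (k + 1)).
        post_mean = sums / (counts + 1.0)
        post_std = sigma / np.sqrt(counts + 1.0)
        arm = int(np.argmax(rng.normal(post_mean, post_std)))
        reward = rng.normal(means[arm], sigma)
        counts[arm] += 1
        sums[arm] += reward
        pulls0 += (arm == 0)
    return pulls0 / n

# As n grows, the arm-0 pull fraction should remain random but stable,
# consistent with a nondegenerate diffusion limit.
for n in (100, 1000, 10000):
    fracs = [thompson_fraction(n) for _ in range(200)]
    print(f"n={n:6d}  mean fraction on arm 0: {np.mean(fracs):.3f}  "
          f"sd: {np.std(fracs):.3f}")
```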
