Variational Likelihood-Free Gradient Descent

In many scientific applications, we do not have explicit access to the likelihood function. However, simulations of the process of interest under different parameter settings may give us implicit access to it. Methods that approximate likelihoods or posterior distributions from simulated observations are collectively known as simulation-based inference. In this paper, we propose a simulation-based inference algorithm that iteratively updates a set of particles so that they more closely resemble the posterior. Our approach utilises simulations to estimate a density ratio function at each iteration and then uses that ratio to approximate the KL divergence between the particle density and the posterior density. By alternating between gradient descent and density ratio estimation, the approximated KL divergence is minimised. We benchmark our algorithm on a Gaussian mixture model and the M/G/1 queueing model and report promising results.
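To make the alternating scheme concrete, below is a minimal toy sketch of a classifier-based density-ratio particle flow, not the paper's exact algorithm. It assumes direct samples from a known 1-D Gaussian target (in the paper's setting, the target side would instead be informed by simulator output), and all names (`sample_target`, `features`, the step size, the feature map) are hypothetical choices for illustration.

```python
# Hedged toy sketch: alternate (1) density ratio estimation via a binary
# classifier with (2) gradient descent on the particles using the estimated
# log-ratio. This illustrates the general scheme, not the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample_target(n):
    # Stand-in target p(theta); in simulation-based inference these samples
    # would not be available directly and would come from simulations.
    return rng.normal(loc=2.0, scale=0.5, size=(n, 1))

def features(theta):
    # Quadratic features let the logistic classifier represent the exact
    # log-density ratio between two Gaussians.
    return np.hstack([theta, theta ** 2])

# Initialise particles far from the target.
particles = rng.normal(loc=-2.0, scale=1.0, size=(200, 1))
step_size, n_iters = 0.1, 200

for _ in range(n_iters):
    # --- Density ratio estimation step ---
    # A classifier between target draws (label 1) and particles (label 0)
    # recovers log p(theta) - log q(theta) in its logit at optimum.
    target = sample_target(len(particles))
    X = features(np.vstack([target, particles]))
    y = np.concatenate([np.ones(len(target)), np.zeros(len(particles))])
    clf = LogisticRegression().fit(X, y)
    w = clf.coef_[0]  # weights on [theta, theta^2]

    # Approximate KL(q || p) = E_q[log q - log p] = -E_q[logit].
    kl_est = -(features(particles) @ w + clf.intercept_[0]).mean()

    # --- Gradient descent step ---
    # d(logit)/d(theta) = w1 + 2 * w2 * theta; ascending the logit moves
    # particles so as to decrease the estimated KL divergence.
    grad_logit = w[0] + 2.0 * w[1] * particles
    particles = particles + step_size * grad_logit

print(f"final estimated KL: {kl_est:.3f}")
print(f"particle mean/std: {particles.mean():.2f} / {particles.std():.2f}")
```

Run as-is, the particles drift from their initial location toward the target, and the estimated KL shrinks toward zero; the refit classifier at each iteration plays the role of the density ratio estimator described in the abstract.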
