no code implementations • NeurIPS 2023 • Kyurae Kim, Jisu Oh, Kaiwen Wu, Yi-An Ma, Jacob R. Gardner
We provide the first convergence guarantee for full black-box variational inference (BBVI), also known as Monte Carlo variational inference.
no code implementations • 18 Mar 2023 • Kyurae Kim, Kaiwen Wu, Jisu Oh, Jacob R. Gardner
Understanding the gradient variance of black-box variational inference (BBVI) is a crucial step for establishing its convergence and developing algorithmic improvements.
1 code implementation • 13 Jun 2022 • Kyurae Kim, Jisu Oh, Jacob R. Gardner, Adji Bousso Dieng, HongSeok Kim
Minimizing the inclusive Kullback-Leibler (KL) divergence with stochastic gradient descent (SGD) is challenging since its gradient is defined as an integral over the posterior.
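The difficulty described above can be made concrete: the gradient of the inclusive KL is an expectation under the posterior itself, which is exactly the distribution one cannot sample from directly. A standard-notation sketch (symbols $p(z \mid x)$, $q_\lambda$, and $\lambda$ are generic placeholders, not taken from the paper):

```latex
\nabla_\lambda \, \mathrm{KL}\!\left(p \,\|\, q_\lambda\right)
  = \nabla_\lambda \int p(z \mid x) \log \frac{p(z \mid x)}{q_\lambda(z)} \, dz
  = -\,\mathbb{E}_{p(z \mid x)}\!\left[ \nabla_\lambda \log q_\lambda(z) \right]
```

Since the expectation is taken over the intractable posterior $p(z \mid x)$ rather than the tractable variational family $q_\lambda$, an unbiased SGD estimate of this gradient is not directly available, which is what makes the optimization challenging.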