1 code implementation • 6 Jun 2023 • Paul E. Chang, Prakhar Verma, S. T. John, Arno Solin, Mohammad Emtiyaz Khan
Sequential learning with Gaussian processes (GPs) is challenging when access to past data is limited, for example, in continual and active learning.
1 code implementation • 3 Jun 2023 • Prakhar Verma, Vincent Adam, Arno Solin
Diffusion processes are a class of stochastic differential equations (SDEs) providing a rich family of expressive models that arise naturally in dynamic modelling tasks.
2 code implementations • 9 Apr 2023 • Elizaveta Semenova, Prakhar Verma, Max Cairney-Leeming, Arno Solin, Samir Bhatt, Seth Flaxman
Recent advances have shown that GP priors, or their finite realisations, can be encoded using deep generative models such as variational autoencoders (VAEs).
no code implementations • 2 Nov 2022 • Paul E. Chang, Prakhar Verma, S. T. John, Victor Picheny, Henry Moss, Arno Solin
Gaussian processes (GPs) are the main surrogate models used in sequential modelling tasks such as Bayesian optimization and active learning.
1 code implementation • NeurIPS 2021 • Arno Solin, Ella Tamir, Prakhar Verma
Simulation-based techniques such as variants of stochastic Runge–Kutta are the de facto approach for inference with stochastic differential equations (SDEs) in machine learning.
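To make this class of methods concrete, the sketch below simulates an SDE with the Euler–Maruyama scheme, the simplest simulation-based approach (the stochastic Runge–Kutta variants mentioned above refine its discretization). The Ornstein–Uhlenbeck drift and diffusion choices are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, t_grid, rng):
    """Simulate one path of dX = f(X, t) dt + L(X, t) dB_t on a time grid."""
    xs = [x0]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        dt = t1 - t0
        x = xs[-1]
        dB = rng.normal(0.0, np.sqrt(dt))  # Brownian increment
        xs.append(x + drift(x, t0) * dt + diffusion(x, t0) * dB)
    return np.array(xs)

# Illustrative example: Ornstein–Uhlenbeck process, mean-reverting to 0
theta, sigma = 1.0, 0.5
rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 501)
path = euler_maruyama(lambda x, t: -theta * x,   # drift pulls toward 0
                      lambda x, t: sigma,        # constant diffusion
                      x0=2.0, t_grid=t, rng=rng)
```

Each call draws fresh Brownian increments, so expectations under the SDE are estimated by averaging over many simulated paths, which is what makes these schemes the default (but potentially costly) inference tool.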
no code implementations • NeurIPS Workshop DLDE 2021 • Prakhar Verma, Vincent Adam, Arno Solin
We frame the problem of learning stochastic differential equations (SDEs) from noisy observations as an inference problem and aim to maximize the marginal likelihood of the observations in a joint model of the latent paths and the noisy observations.