no code implementations • 8 Nov 2023 • Neil K. Chada, Benedict Leimkuhler, Daniel Paulin, Peter A. Whalley
We exhibit similar bounds using both approximate and stochastic gradients, and our method's computational cost is shown to scale logarithmically with the size of the dataset.
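The abstract above concerns sampling with stochastic (minibatch) gradients. As a generic illustration of that idea, here is a minimal stochastic-gradient Langevin dynamics (SGLD) sketch on a toy Gaussian posterior; this is a standard baseline, not the authors' specific scheme, and the target, step size, and batch size are assumptions for the example.

```python
import numpy as np

# Minimal SGLD sketch on a toy Gaussian posterior (illustrative only;
# NOT the scheme from the paper above).

rng = np.random.default_rng(0)

# Synthetic data: y_i ~ N(theta_true, 1), flat prior on theta.
theta_true = 2.0
data = theta_true + rng.standard_normal(10_000)
N = data.size

def stochastic_grad_log_post(theta, batch_size=100):
    """Unbiased minibatch estimate of the gradient of the log-posterior."""
    batch = rng.choice(data, size=batch_size, replace=False)
    # Gradient of sum_i log N(y_i | theta, 1), rescaled to the full dataset.
    return (N / batch_size) * np.sum(batch - theta)

def sgld(theta0, step=1e-5, n_iter=5_000):
    theta, samples = theta0, []
    for _ in range(n_iter):
        g = stochastic_grad_log_post(theta)
        # Euler-Maruyama step with injected Gaussian noise.
        theta = theta + 0.5 * step * g + np.sqrt(step) * rng.standard_normal()
        samples.append(theta)
    return np.array(samples)

samples = sgld(theta0=0.0)
print(samples[-2000:].mean())  # posterior mean is close to theta_true
```

The per-iteration cost depends only on the batch size, which is the usual motivation for stochastic-gradient samplers on large datasets.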
no code implementations • 29 Dec 2022 • Neil K. Chada, Quanjun Lang, Fei Lu, Xiong Wang
However, a fixed non-degenerate prior leads to a divergent posterior mean as the observation noise vanishes, whenever the data induce a perturbation in the zero-eigenvalue eigenspace of the inversion operator.
1 code implementation • 14 Jun 2022 • Hamza Ruzayqat, Neil K. Chada, Ajay Jasra
In this work we consider the unbiased estimation of expectations with respect to probability measures that have a non-negative Lebesgue density and are known point-wise up to a normalizing constant.
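For context on estimating expectations under a density known only up to a normalizing constant, here is the standard self-normalized importance sampling (SNIS) baseline. Note SNIS is consistent but biased at finite sample size; the paper's contribution is an unbiased estimator, which this sketch does not reproduce. The target and proposal are illustrative assumptions.

```python
import numpy as np

# Self-normalized importance sampling (SNIS) baseline for E_pi[f] when
# pi is known only up to a normalizing constant. Consistent but biased
# for finite n; illustrative assumptions throughout.

rng = np.random.default_rng(1)

def unnormalized_target(x):
    # pi(x) proportional to exp(-x^2 / 2): standard normal with the
    # normalizing constant treated as unknown.
    return np.exp(-0.5 * x**2)

def snis(f, n=100_000):
    # Proposal q = N(0, 2^2), which we can sample from directly.
    x = 2.0 * rng.standard_normal(n)
    q = np.exp(-0.5 * (x / 2.0) ** 2) / (2.0 * np.sqrt(2 * np.pi))
    w = unnormalized_target(x) / q        # unnormalized importance weights
    return np.sum(w * f(x)) / np.sum(w)   # self-normalization cancels the constant

print(snis(lambda x: x**2))  # E[x^2] under N(0, 1) is 1
```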
no code implementations • 24 Mar 2022 • Neil K. Chada, Ajay Jasra, Kody J. H. Law, Sumeetpal S. Singh
In this article we consider Bayesian inference associated to deep neural networks (DNNs) and in particular, trace-class neural network (TNN) priors which were proposed by Sell et al. [39].
no code implementations • 6 Jul 2020 • Neil K. Chada, Claudia Schillings, Xin T. Tong, Simon Weissmann
A fundamental problem in solving inverse problems is how to choose the regularization parameters.
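A classical baseline for choosing a regularization parameter is Morozov's discrepancy principle: pick the Tikhonov parameter alpha so that the data misfit matches the noise level. The sketch below illustrates this on a synthetic linear problem; it is a textbook method, not the adaptive approach of the paper, and the problem setup is an assumption.

```python
import numpy as np

# Morozov's discrepancy principle for Tikhonov regularization: choose
# alpha so that ||A x_alpha - y|| matches the noise level. Classical
# baseline on a synthetic linear inverse problem (illustrative setup).

rng = np.random.default_rng(2)
n = 50
A = rng.standard_normal((n, n)) / np.sqrt(n)   # forward operator
x_true = np.ones(n)
noise_level = 0.1
y = A @ x_true + noise_level * rng.standard_normal(n)

def tikhonov(alpha):
    # x_alpha = argmin ||A x - y||^2 + alpha ||x||^2
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def discrepancy(alpha):
    return np.linalg.norm(A @ tikhonov(alpha) - y)

# Expected noise norm: noise_level * sqrt(n).
target = noise_level * np.sqrt(n)
alphas = np.logspace(-6, 1, 200)
alpha_star = min(alphas, key=lambda a: abs(discrepancy(a) - target))
print(alpha_star, discrepancy(alpha_star))
```

The discrepancy is monotone in alpha, so a simple grid (or bisection) search suffices here.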
no code implementations • 26 Jul 2018 • Neil K. Chada, Jordan Franks, Ajay Jasra, Kody J. H. Law, Matti Vihola
The resulting estimator enables inference without time-discretisation bias as the number of Markov chain iterations increases.
Bayesian Inference • Methodology • Probability • Computation • MSC: 65C05 (primary), 60H35, 65C35, 65C40 (secondary)
no code implementations • 2 Jan 2018 • Neil K. Chada
We discuss properties of hierarchical Bayesian inversion through the ensemble Kalman filter (EnKF).
Numerical Analysis
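For readers unfamiliar with the EnKF, here is one analysis step of a basic stochastic (perturbed-observation) ensemble Kalman filter. This is the generic textbook update, not the hierarchical scheme studied in the paper; dimensions and values are illustrative assumptions.

```python
import numpy as np

# One analysis step of a stochastic (perturbed-observation) ensemble
# Kalman filter. Generic textbook update, NOT the hierarchical scheme
# from the paper; toy dimensions and values throughout.

rng = np.random.default_rng(3)

def enkf_update(ensemble, H, y, R):
    """ensemble: (d, J) states; H: (m, d) obs operator; y: (m,) data; R: (m, m) obs cov."""
    d, J = ensemble.shape
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    P = X @ X.T / (J - 1)                          # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    # Perturbed observations: each member sees y plus independent noise.
    Y = y[:, None] + np.linalg.cholesky(R) @ rng.standard_normal((len(y), J))
    return ensemble + K @ (Y - H @ ensemble)

# Toy problem: estimate a 2-vector from a noisy observation of its
# first component only.
J = 200
prior = rng.standard_normal((2, J))            # prior ensemble ~ N(0, I)
H = np.array([[1.0, 0.0]])
R = 0.01 * np.eye(1)
posterior = enkf_update(prior, H, np.array([1.5]), R)
print(posterior[0].mean())  # first component is pulled toward 1.5
```

Because the observation noise is small relative to the prior spread, the analysis ensemble mean for the observed component moves close to the data.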