no code implementations • 7 Jul 2022 • Ambrish Rawat, James Requeima, Wessel Bruinsma, Richard Turner
Machine unlearning refers to the task of removing a subset of training data, thereby removing its contributions to a trained model.
1 code implementation • NeurIPS 2021 • Marcin Tomczak, Siddharth Swaroop, Andrew Foong, Richard Turner
Recent interest in learning large variational Bayesian Neural Networks (BNNs) has been partly hampered by poor predictive performance caused by underfitting; BNN performance is also known to be very sensitive to the prior over weights.
1 code implementation • NeurIPS 2021 • Andrew Foong, Wessel Bruinsma, David Burt, Richard Turner
Interestingly, this lower bound recovers the Chernoff test set bound if the posterior is equal to the prior.
no code implementations • 22 Aug 2021 • Stratis Markou, James Requeima, Wessel Bruinsma, Richard Turner
Conditional Neural Processes (CNPs; Garnelo et al., 2018) are an attractive family of meta-learning models which produce well-calibrated predictions, enable fast inference at test time, and are trainable via a simple maximum likelihood procedure.
1 code implementation • NeurIPS 2020 • Marcin Tomczak, Siddharth Swaroop, Richard Turner
Bayesian neural networks are enjoying a renaissance driven in part by recent advances in variational inference (VI).
2 code implementations • NeurIPS 2020 • Chao Ma, Sebastian Tschiatschek, José Miguel Hernández-Lobato, Richard Turner, Cheng Zhang
Deep generative models often perform poorly in real-world applications due to the heterogeneity of natural data sets.
no code implementations • ICLR 2020 • Kamil Ciosek, Vincent Fortuin, Ryota Tomioka, Katja Hofmann, Richard Turner
Obtaining high-quality uncertainty estimates is essential for many applications of deep neural networks.
no code implementations • IJCNLP 2019 • Bo-Hsiang Tseng, Marek Rei, Paweł Budzianowski, Richard Turner, Bill Byrne, Anna Korhonen
Dialogue systems benefit greatly from optimizing on detailed annotations, such as transcribed utterances, internal dialogue state representations and dialogue act labels.
no code implementations • AABI Symposium 2019 • Chao Ma, Sebastian Tschiatschek, Yingzhen Li, Richard Turner, José Miguel Hernández-Lobato, Cheng Zhang
In this paper, we focus on improving VAEs for real-valued data with heterogeneous marginal distributions.
1 code implementation • 13 Aug 2019 • Wenbo Gong, Sebastian Tschiatschek, Richard Turner, Sebastian Nowozin, José Miguel Hernández-Lobato, Cheng Zhang
In this paper we introduce the ice-start problem, i.e., the challenge of deploying machine learning models when little or no training data is initially available and acquiring each feature element of the data incurs a cost.
no code implementations • ICLR 2019 • Jan Stühmer, Richard Turner, Sebastian Nowozin
Extensive quantitative and qualitative experiments demonstrate that the proposed prior mitigates the trade-off between reconstruction loss and disentanglement introduced by modified cost functions such as beta-VAE and TCVAE.
no code implementations • 18 Jan 2018 • Brian Trippe, Richard Turner
The motivations for using variational inference (VI) in neural networks differ significantly from those in latent variable models.
no code implementations • ICML 2017 • Nilesh Tripuraneni, Mark Rowland, Zoubin Ghahramani, Richard Turner
We establish a theoretical basis for the use of non-canonical Hamiltonian dynamics in MCMC, and construct a symplectic, leapfrog-like integrator allowing for the implementation of magnetic HMC.
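As a hedged illustration of the entry above: magnetic HMC generalises the standard leapfrog scheme by adding a magnetic-field term to the dynamics (the momentum update becomes an exact rotation). The sketch below shows only the canonical leapfrog integrator it builds on, with a unit mass matrix and a standard Gaussian target chosen for illustration; it is not the paper's magnetic variant.

```python
import numpy as np

def leapfrog(q, p, grad_U, eps, n_steps):
    """Canonical leapfrog integrator for HMC with unit mass matrix.

    Magnetic HMC modifies these updates with an antisymmetric
    (magnetic) term; this is the non-magnetic base case.
    """
    p = p - 0.5 * eps * grad_U(q)      # initial half-step in momentum
    for _ in range(n_steps - 1):
        q = q + eps * p                # full position step
        p = p - eps * grad_U(q)        # full momentum step
    q = q + eps * p                    # final position step
    p = p - 0.5 * eps * grad_U(q)      # final half-step in momentum
    return q, p

# Illustration: standard Gaussian target, U(q) = q^2 / 2, so grad_U(q) = q.
grad_U = lambda q: q
q, p = leapfrog(np.array([1.0]), np.array([0.0]), grad_U, eps=0.1, n_steps=50)
# Leapfrog is symplectic: the Hamiltonian H = U(q) + p^2 / 2 is
# approximately conserved along the trajectory (error O(eps^2)).
H_end = 0.5 * q[0] ** 2 + 0.5 * p[0] ** 2
```

The near-conservation of `H` is what gives leapfrog-based HMC its high Metropolis acceptance rates; the paper's contribution is preserving this property under non-canonical (magnetic) dynamics.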
1 code implementation • 19 Jul 2015 • Mateo Rojas-Carulla, Bernhard Schölkopf, Richard Turner, Jonas Peters
We focus on the problem of Domain Generalization, in which no examples from the test task are observed.
1 code implementation • 9 Mar 2015 • Yarin Gal, Richard Turner
We model the covariance function with a finite Fourier series approximation and treat it as a random variable.
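The entry above treats the Fourier frequencies of the covariance as random variables to be integrated over. As a simpler, hedged sketch of the underlying approximation, the snippet below uses Monte Carlo random Fourier features for an RBF covariance; the function name, feature count, and lengthscale are illustrative choices, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_features(X, m, lengthscale=1.0, rng=rng):
    """Random Fourier features approximating an RBF kernel.

    Frequencies are drawn from the kernel's spectral density
    (a Gaussian with scale 1/lengthscale), plus uniform phases.
    """
    d = X.shape[1]
    W = rng.normal(scale=1.0 / lengthscale, size=(d, m))
    b = rng.uniform(0.0, 2.0 * np.pi, size=m)
    return np.sqrt(2.0 / m) * np.cos(X @ W + b)

X = rng.normal(size=(5, 1))
Phi = rff_features(X, m=20000)
K_approx = Phi @ Phi.T                   # finite Fourier feature estimate
K_exact = np.exp(-0.5 * (X - X.T) ** 2)  # exact RBF kernel, lengthscale 1
err = np.max(np.abs(K_approx - K_exact))  # shrinks as O(1/sqrt(m))
```

Here the frequencies are simply sampled once; placing a distribution over them and inferring it, as the entry describes, turns the kernel itself into a random variable.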
no code implementations • NeurIPS 2011 • Richard Turner, Maneesh Sahani
A number of recent scientific and engineering problems require signals to be decomposed into a product of a slowly varying positive envelope and a quickly varying carrier whose instantaneous frequency also varies slowly over time.
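The decomposition described above is classically approached with the Hilbert transform, whereas the entry's paper infers the envelope probabilistically. Below is a minimal numpy-only sketch of the classical Hilbert-envelope baseline, with an illustrative synthetic signal (the sample rate and frequencies are arbitrary choices of mine).

```python
import numpy as np

def hilbert_envelope(x):
    """Envelope of a real signal via the analytic signal (FFT method).

    Zeroing negative frequencies and doubling positive ones yields the
    analytic signal; its magnitude is the instantaneous envelope.
    """
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0                # keep the Nyquist bin for even n
    return np.abs(np.fft.ifft(X * h))

# Synthetic AM signal: slow positive envelope times a fast carrier.
t = np.linspace(0.0, 1.0, 4000, endpoint=False)
env = 1.0 + 0.5 * np.sin(2 * np.pi * 2 * t)   # slowly varying envelope
carrier = np.cos(2 * np.pi * 100 * t)         # quickly varying carrier
est = hilbert_envelope(env * carrier)         # recovers env
```

This baseline works well when the envelope's bandwidth is far below the carrier frequency; the probabilistic formulation in the entry above is aimed at the harder cases where that separation is ambiguous.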
no code implementations • NeurIPS 2009 • Jörg Lücke, Richard Turner, Maneesh Sahani, Marc Henniges
We show that the object parameters can be learnt from an unlabelled set of images in which objects occlude one another.
no code implementations • NeurIPS 2007 • Pietro Berkes, Richard Turner, Maneesh Sahani
Computational models of visual cortex, and in particular those based on sparse coding, have enjoyed much recent attention.
no code implementations • NeurIPS 2007 • Richard Turner, Maneesh Sahani
Natural sounds are structured on many time-scales.