no code implementations • ICML 2020 • Rob Brekelmans, Vaden Masrani, Frank Wood, Greg Ver Steeg, Aram Galstyan
While the Evidence Lower Bound (ELBO) has become a ubiquitous objective for variational inference, the recently proposed Thermodynamic Variational Objective (TVO) leverages thermodynamic integration to provide a tighter and more general family of bounds.
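For context, a minimal statement of the TVO bound (our notation, following the original TVO formulation; a sketch, not this entry's extensions):

```latex
% Thermodynamic integration: log evidence as a 1D integral along the
% geometric path pi_beta(z|x) \propto q_phi(z|x)^{1-beta} p_theta(x,z)^{beta}:
\log p_\theta(x) = \int_0^1 \mathbb{E}_{\pi_\beta}\!\left[\log \frac{p_\theta(x,z)}{q_\phi(z \mid x)}\right] d\beta
% The TVO is the left Riemann sum over a schedule
% 0 = \beta_0 < \beta_1 < \dots < \beta_K = 1; the integrand is
% nondecreasing in beta, so the left sum is a lower bound, and
% K = 1 recovers the standard ELBO:
\mathrm{TVO}(\theta,\phi;x) = \sum_{k=0}^{K-1} (\beta_{k+1}-\beta_k)\,
  \mathbb{E}_{\pi_{\beta_k}}\!\left[\log \frac{p_\theta(x,z)}{q_\phi(z \mid x)}\right]
```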
no code implementations • 7 May 2024 • Jonathan Wilder Lavington, Ke Zhang, Vasileios Lioutas, Matthew Niedoba, Yunpeng Liu, Dylan Green, Saeid Naderiparizi, Xiaoxuan Liang, Setareh Dabiri, Adam Ścibior, Berend Zwartsenberg, Frank Wood
Moreover, because of the high variability among the problems posed by different autonomous systems, these simulators need to be easy to use and easy to modify.
no code implementations • 30 Apr 2024 • Dylan Green, William Harvey, Saeid Naderiparizi, Matthew Niedoba, Yunpeng Liu, Xiaoxuan Liang, Jonathan Lavington, Ke Zhang, Vasileios Lioutas, Setareh Dabiri, Adam Scibior, Berend Zwartsenberg, Frank Wood
Current state-of-the-art methods for video inpainting typically rely on optical flow or attention-based approaches to inpaint masked regions by propagating visual information across frames.
1 code implementation • 15 Apr 2024 • Manuel Gloeckler, Michael Deistler, Christian Weilbach, Frank Wood, Jakob H. Macke
Amortized Bayesian inference trains neural networks to solve stochastic inference problems using model simulations, thereby making it possible to rapidly perform Bayesian inference for any newly observed data.
no code implementations • 28 Feb 2024 • Laura Manduchi, Kushagra Pandey, Robert Bamler, Ryan Cotterell, Sina Däubener, Sophie Fellenz, Asja Fischer, Thomas Gärtner, Matthias Kirchler, Marius Kloft, Yingzhen Li, Christoph Lippert, Gerard de Melo, Eric Nalisnick, Björn Ommer, Rajesh Ranganath, Maja Rudolph, Karen Ullrich, Guy Van Den Broeck, Julia E Vogt, Yixin Wang, Florian Wenzel, Frank Wood, Stephan Mandt, Vincent Fortuin
The field of deep generative modeling has grown rapidly and consistently over the years.
no code implementations • 14 Feb 2024 • Jason Yoo, Yunpeng Liu, Frank Wood, Geoff Pleiss
Our solution, Layerwise Proximal Replay (LPR), balances learning from new and replay data while only allowing for gradual changes in the hidden activation of past data.
no code implementations • 12 Feb 2024 • Matthew Niedoba, Dylan Green, Saeid Naderiparizi, Vasileios Lioutas, Jonathan Wilder Lavington, Xiaoxuan Liang, Yunpeng Liu, Ke Zhang, Setareh Dabiri, Adam Ścibior, Berend Zwartsenberg, Frank Wood
Score function estimation is the cornerstone of both training and sampling from diffusion generative models.
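As background, the standard denoising-score-matching objective used to train diffusion models (textbook form in our notation; a sketch of the general setup, not this paper's estimator):

```latex
% Gaussian forward noising x_t = alpha_t x_0 + sigma_t eps, eps ~ N(0, I).
% The network s_theta regresses onto the conditional score, which is
% available in closed form:
\mathcal{L}(\theta) = \mathbb{E}_{t,\, x_0,\, \epsilon}\!\left[\lambda(t)\,
  \big\| s_\theta(x_t, t) - \nabla_{x_t} \log p_t(x_t \mid x_0) \big\|^2\right],
\qquad \nabla_{x_t} \log p_t(x_t \mid x_0) = -\frac{\epsilon}{\sigma_t}
```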
no code implementations • 31 Jul 2023 • Saeid Naderiparizi, Xiaoxuan Liang, Berend Zwartsenberg, Frank Wood
The maximum likelihood principle advocates parameter estimation via optimization of the data likelihood function.
1 code implementation • 24 May 2023 • Setareh Dabiri, Vasileios Lioutas, Berend Zwartsenberg, Yunpeng Liu, Matthew Niedoba, Xiaoxuan Liang, Dylan Green, Justice Sefas, Jonathan Wilder Lavington, Frank Wood, Adam Scibior
When training object detection models on synthetic data, it is important to make the distribution of synthetic data as close as possible to the distribution of real data.
no code implementations • 19 May 2023 • Yunpeng Liu, Vasileios Lioutas, Jonathan Wilder Lavington, Matthew Niedoba, Justice Sefas, Setareh Dabiri, Dylan Green, Xiaoxuan Liang, Berend Zwartsenberg, Adam Ścibior, Frank Wood
The development of algorithms that learn multi-agent behavioral models using human demonstrations has led to increasingly realistic simulations in the field of autonomous driving.
1 code implementation • 28 Mar 2023 • William Harvey, Frank Wood
Recent progress with conditional image diffusion models has been stunning, and this holds true whether we are speaking about models conditioned on a text description, a scene layout, or a sketch.
no code implementations • 21 Oct 2022 • Andreas Munk, Alexander Mead, Frank Wood
We consider the problem of performing Bayesian inference in probabilistic models where observations are accompanied by uncertainty, referred to as "uncertain evidence."
1 code implementation • 20 Oct 2022 • Christian Weilbach, William Harvey, Frank Wood
We introduce a framework for automatically defining and learning deep generative models with problem-specific structure.
no code implementations • 9 Aug 2022 • Yunpeng Liu, Jonathan Wilder Lavington, Adam Scibior, Frank Wood
We develop a generic mechanism for generating vehicle-type specific sequences of waypoints from a probabilistic foundation model of driving behavior.
no code implementations • 17 Jun 2022 • Berend Zwartsenberg, Adam Ścibior, Matthew Niedoba, Vasileios Lioutas, Yunpeng Liu, Justice Sefas, Setareh Dabiri, Jonathan Wilder Lavington, Trevor Campbell, Frank Wood
We present a novel, conditional generative probabilistic model of set-valued data with a tractable log density.
no code implementations • 30 May 2022 • Vasileios Lioutas, Jonathan Wilder Lavington, Justice Sefas, Matthew Niedoba, Yunpeng Liu, Berend Zwartsenberg, Setareh Dabiri, Frank Wood, Adam Scibior
We introduce CriticSMC, a new algorithm for planning as inference built from a composition of sequential Monte Carlo with learned Soft-Q function heuristic factors.
1 code implementation • 23 May 2022 • William Harvey, Saeid Naderiparizi, Vaden Masrani, Christian Weilbach, Frank Wood
We present a framework for video modeling based on denoising diffusion probabilistic models that produces long-duration video completions in a variety of realistic environments.
1 code implementation • 20 May 2022 • Jason Yoo, Frank Wood
Associative memory plays an important role in human intelligence and its mechanisms have been linked to attention in machine learning.
2 code implementations • 17 Feb 2022 • Atılım Güneş Baydin, Barak A. Pearlmutter, Don Syme, Frank Wood, Philip Torr
Using backpropagation to compute gradients of objective functions for optimization has remained a mainstay of machine learning.
no code implementations • 6 Feb 2022 • Michael Teng, Michiel Van de Panne, Frank Wood
Distributional reinforcement learning (RL) aims to learn a value-network that predicts the full distribution of the returns for a given state, often modeled via a quantile-based critic.
2 code implementations • 13 Jan 2022 • Peyman Bateni, Jarred Barber, Raghav Goyal, Vaden Masrani, Jan-Willem van de Meent, Leonid Sigal, Frank Wood
The first method, Simple CNAPS, employs a hierarchically regularized Mahalanobis-distance-based classifier combined with a state-of-the-art neural adaptive feature extractor to achieve strong performance on the Meta-Dataset, mini-ImageNet, and tiered-ImageNet benchmarks.
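A minimal sketch of the Mahalanobis-distance classification step (ours; the `eps * I` regularizer is a stand-in for the paper's hierarchical covariance regularization, and the adapted feature extractor is assumed given):

```python
import numpy as np

def mahalanobis_classify(query_feats, support_feats, support_labels, eps=1.0):
    """Assign each query embedding to the class with the smallest
    Mahalanobis distance to that class's support mean.

    `eps * np.eye(d)` keeps each covariance invertible even in 1-shot
    episodes; Simple CNAPS instead blends class-level and task-level
    covariance estimates hierarchically.
    """
    classes = np.unique(support_labels)
    dists = np.zeros((len(query_feats), len(classes)))
    for j, c in enumerate(classes):
        feats_c = support_feats[support_labels == c]
        mu = feats_c.mean(axis=0)
        cov = np.cov(feats_c, rowvar=False, bias=True) + eps * np.eye(feats_c.shape[1])
        diff = query_feats - mu
        # Squared Mahalanobis distance for all queries at once.
        dists[:, j] = np.einsum("nd,de,ne->n", diff, np.linalg.inv(cov), diff)
    return classes[np.argmin(dists, axis=1)]
```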
1 code implementation • 1 Jul 2021 • Vaden Masrani, Rob Brekelmans, Thang Bui, Frank Nielsen, Aram Galstyan, Greg Ver Steeg, Frank Wood
Many common machine learning methods involve the geometric annealing path, a sequence of intermediate densities between two distributions of interest constructed using the geometric average.
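The geometric path in symbols (standard definition; the paper's contribution generalizes this average, which we do not reproduce here):

```latex
% Geometric annealing path between base pi_0 and target pi_1,
% i.e. the normalized geometric average with mixing parameter beta:
\pi_\beta(z) = \frac{\pi_0(z)^{1-\beta}\, \pi_1(z)^{\beta}}
                    {\int \pi_0(z')^{1-\beta}\, \pi_1(z')^{\beta}\, dz'}
```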
no code implementations • 18 Jun 2021 • Adam Ścibior, Frank Wood
Particle filters are not compatible with automatic differentiation due to the presence of discrete resampling steps.
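A toy illustration (ours, not from the paper) of the problem: the ancestor indices drawn during multinomial resampling are integers, so autodiff yields no gradient with respect to the weights.

```python
import torch

def multinomial_resample(particles, log_weights):
    """One resampling step of a standard particle filter."""
    probs = torch.softmax(log_weights, dim=0)
    # Integer ancestor indices: sampling is a discrete, non-differentiable op.
    idx = torch.multinomial(probs, num_samples=len(particles), replacement=True)
    return particles[idx]

particles = torch.randn(100, requires_grad=True)
log_w = torch.randn(100, requires_grad=True)
multinomial_resample(particles, log_w).sum().backward()
print(particles.grad is None)  # False: gradients flow through the gather
print(log_w.grad is None)      # True: no gradient path through resampling
```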
no code implementations • 22 Apr 2021 • Adam Scibior, Vasileios Lioutas, Daniele Reda, Peyman Bateni, Frank Wood
We develop a deep generative model built on a fully differentiable simulator for multi-agent trajectory prediction.
1 code implementation • ICLR 2022 • William Harvey, Saeid Naderiparizi, Frank Wood
We present a conditional variational auto-encoder (VAE) which, to avoid the substantial cost of training from scratch, uses an architecture and training objective capable of leveraging a foundation model in the form of a pretrained unconditional VAE.
no code implementations • 1 Jan 2021 • William Harvey, Michael Teng, Frank Wood
We introduce methodology from the BOED literature to approximate this optimal behaviour, and use it to generate `near-optimal' sequences of attention locations.
1 code implementation • 31 Dec 2020 • Andrew Warrington, J. Wilder Lavington, Adam Ścibior, Mark Schmidt, Frank Wood
Policies for partially observed Markov decision processes can be efficiently learned by imitating policies for the corresponding fully observed Markov decision processes.
2 code implementations • NeurIPS Workshop DL-IG 2020 • Rob Brekelmans, Vaden Masrani, Thang Bui, Frank Wood, Aram Galstyan, Greg Ver Steeg, Frank Nielsen
Annealed importance sampling (AIS) is the gold standard for estimating partition functions or marginal likelihoods, corresponding to importance sampling over a path of distributions between a tractable base and an unnormalized target.
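For reference, the AIS estimator in its standard form (our notation):

```latex
% Unnormalized densities gamma_0, ..., gamma_T interpolate from a
% tractable base to the target; z_t is drawn from an MCMC kernel that
% leaves pi_t \propto gamma_t invariant. One run accumulates
\log w = \sum_{t=1}^{T} \big[ \log \gamma_t(z_{t-1}) - \log \gamma_{t-1}(z_{t-1}) \big]
% and N independent runs give the unbiased estimate
\frac{Z_T}{Z_0} \approx \frac{1}{N} \sum_{i=1}^{N} w^{(i)}
```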
no code implementations • 10 Dec 2020 • Jason Yoo, Tony Joseph, Dylan Yung, S. Ali Nasseri, Frank Wood
There are currently many barriers that prevent non-experts from exploiting machine learning solutions, ranging from a lack of intuition about statistical learning techniques to the trickiness of hyperparameter tuning.
1 code implementation • NeurIPS 2020 • Vu Nguyen, Vaden Masrani, Rob Brekelmans, Michael A. Osborne, Frank Wood
Achieving the full promise of the Thermodynamic Variational Objective (TVO), a recently proposed variational lower bound on the log evidence involving a one-dimensional Riemann integral approximation, requires choosing a "schedule" of sorted discretization points.
no code implementations • 8 Oct 2020 • Saeid Naderiparizi, Kenny Chiu, Benjamin Bloem-Reddy, Frank Wood
We intend this work as a counterpoint to a recent trend in the literature that stresses achieving good samples when the amount of conditioning data is large.
no code implementations • 3 Oct 2020 • Andreas Munk, William Harvey, Frank Wood
Some of the most popular methods for improving the stability and performance of GANs involve constraining or regularizing the discriminator.
2 code implementations • 28 Sep 2020 • Peyman Bateni, Jarred Barber, Jan-Willem van de Meent, Frank Wood
We propose a transductive meta-learning method that uses unlabelled instances to improve few-shot image classification performance.
1 code implementation • 1 Jul 2020 • Rob Brekelmans, Vaden Masrani, Frank Wood, Greg Ver Steeg, Aram Galstyan
We propose to choose intermediate distributions using equal spacing in the moment parameters of our exponential family, which matches grid search performance and allows the schedule to adaptively update over the course of training.
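In symbols (our paraphrase of the schedule described here):

```latex
% Along the geometric path, log pi_beta is linear in beta, so the path
% forms a one-dimensional exponential family whose moment parameter is
\eta(\beta) = \mathbb{E}_{\pi_\beta}\!\left[\log \frac{p_\theta(x,z)}{q_\phi(z \mid x)}\right]
% The schedule chooses beta_1 < ... < beta_{K-1} so that the values
% eta(beta_k) are equally spaced between eta(0) and eta(1).
```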
no code implementations • 30 Jun 2020 • Michael Teng, Tuan Anh Le, Adam Scibior, Frank Wood
We introduce a novel objective for training deep generative time-series models with discrete latent variables for which supervision is only sparsely available.
2 code implementations • 17 Jun 2020 • Peyman Bateni, Jarred Barber, Jan-Willem van de Meent, Frank Wood
We develop a transductive meta-learning method that uses unlabelled instances to improve few-shot image classification performance.
Ranked #1 on Few-Shot Image Classification on Tiered ImageNet 10-way (1-shot) (using extra training data)
1 code implementation • 30 Mar 2020 • Frank Wood, Andrew Warrington, Saeid Naderiparizi, Christian Weilbach, Vaden Masrani, William Harvey, Adam Scibior, Boyan Beronov, John Grefenstette, Duncan Campbell, Ali Nasseri
In this work we demonstrate how to automate parts of the infectious disease-control policy-making process via performing inference in existing epidemiological models.
1 code implementation • 28 Mar 2020 • Andrew Warrington, Saeid Naderiparizi, Frank Wood
Deterministic models are approximations of reality that are easy to interpret and often easier to build than stochastic alternatives.
2 code implementations • CVPR 2020 • Peyman Bateni, Raghav Goyal, Vaden Masrani, Frank Wood, Leonid Sigal
Few-shot learning is a fundamental task in computer vision that carries the promise of alleviating the need for exhaustively labeled data.
Ranked #2 on Few-Shot Image Classification on Mini-Imagenet 10-way (5-shot) (using extra training data)
no code implementations • 25 Oct 2019 • Andreas Munk, Berend Zwartsenberg, Adam Ścibior, Atılım Güneş Baydin, Andrew Stewart, Goran Fernlund, Anoush Poursartip, Frank Wood
Our surrogates target stochastic simulators where the number of random variables itself can be stochastic and potentially unbounded.
no code implementations • 25 Oct 2019 • William Harvey, Andreas Munk, Atılım Güneş Baydin, Alexander Bergholm, Frank Wood
We present a new approach to automatic amortized inference in universal probabilistic programs which improves performance compared to current methods.
1 code implementation • 20 Oct 2019 • Saeid Naderiparizi, Adam Ścibior, Andreas Munk, Mehrdad Ghadiri, Atılım Güneş Baydin, Bradley Gram-Hansen, Christian Schroeder de Witt, Robert Zinkov, Philip H. S. Torr, Tom Rainforth, Yee Whye Teh, Frank Wood
Naive approaches to amortized inference in probabilistic programs with unbounded loops can produce estimators with infinite variance.
no code implementations • AABI Symposium 2019 • Christian Weilbach, Boyan Beronov, William Harvey, Frank Wood
We introduce a more efficient neural architecture for amortized inference, which combines continuous and conditional normalizing flows using a principled choice of structure.
no code implementations • AABI Symposium 2019 • Andrew Warrington, Saeid Naderiparizi, Frank Wood
Deterministic models are approximations of reality that are often easier to build and interpret than stochastic alternatives.
no code implementations • AABI Symposium 2019 • Bradley Gram-Hansen, Christian Schroeder de Witt, Robert Zinkov, Saeid Naderiparizi, Adam Scibior, Andreas Munk, Frank Wood, Mehrdad Ghadiri, Philip Torr, Yee Whye Teh, Atilim Gunes Baydin, Tom Rainforth
We introduce two approaches for conducting efficient Bayesian inference in stochastic simulators containing nested stochastic sub-procedures, i.e., internal procedures for which the density cannot be calculated directly, such as rejection sampling loops.
no code implementations • 20 Sep 2019 • Renhao Wang, Adam Scibior, Frank Wood
On top of that, we extend our model with an additional latent variable and augment the dataset to train a controller that is robust to unsafe commands, such as asking it to turn into a wall.
no code implementations • NeurIPS Workshop Neuro_AI 2019 • Andrew Warrington, Arthur Spencer, Frank Wood
We develop a stochastic whole-brain and body simulator of the nematode roundworm Caenorhabditis elegans (C. elegans) and show that it is sufficiently regularizing to allow imputation of latent membrane potentials from partial calcium fluorescence imaging observations.
1 code implementation • 18 Jul 2019 • Adam Goliński, Frank Wood, Tom Rainforth
At runtime, samples are produced separately from each amortized proposal before being combined into an overall estimate of the expectation.
3 code implementations • 8 Jul 2019 • Atılım Güneş Baydin, Lei Shao, Wahid Bhimji, Lukas Heinrich, Lawrence Meadows, Jialin Liu, Andreas Munk, Saeid Naderiparizi, Bradley Gram-Hansen, Gilles Louppe, Mingfei Ma, Xiaohui Zhao, Philip Torr, Victor Lee, Kyle Cranmer, Prabhat, Frank Wood
Probabilistic programming languages (PPLs) are receiving widespread attention for performing Bayesian inference in complex generative models.
1 code implementation • NeurIPS 2019 • Vaden Masrani, Tuan Anh Le, Frank Wood
We introduce the thermodynamic variational objective (TVO) for learning in both continuous and discrete deep generative models.
no code implementations • 13 Jun 2019 • William Harvey, Michael Teng, Frank Wood
We introduce methodology from the BOED literature to approximate this optimal behaviour, and use it to generate `near-optimal' sequences of attention locations.
no code implementations • ICLR 2019 • Tuan Anh Le, Adam R. Kosiorek, N. Siddharth, Yee Whye Teh, Frank Wood
Discrete latent-variable models, while applicable in a variety of settings, can often be difficult to learn.
no code implementations • 12 Mar 2019 • Michael Teng, Tuan Anh Le, Adam Scibior, Frank Wood
We apply recent advances in deep generative modeling to the task of imitation learning from biological agents.
1 code implementation • 6 Mar 2019 • Yuan Zhou, Bradley J. Gram-Hansen, Tobias Kohn, Tom Rainforth, Hongseok Yang, Frank Wood
We develop a new Low-level, First-order Probabilistic Programming Language (LF-PPL) suited for models containing a mix of continuous, discrete, and/or piecewise-continuous variables.
no code implementations • NeurIPS 2018 • Michael Teng, Frank Wood
We introduce Bayesian distributed stochastic gradient descent (BDSGD), a high-throughput algorithm for training deep neural networks on parallel clusters.
3 code implementations • 27 Sep 2018 • Jan-Willem van de Meent, Brooks Paige, Hongseok Yang, Frank Wood
We start with a discussion of model-based reasoning and explain why conditioning is a foundational computation central to the fields of probabilistic machine learning and artificial intelligence.
3 code implementations • NeurIPS 2019 • Atılım Güneş Baydin, Lukas Heinrich, Wahid Bhimji, Lei Shao, Saeid Naderiparizi, Andreas Munk, Jialin Liu, Bradley Gram-Hansen, Gilles Louppe, Lawrence Meadows, Philip Torr, Victor Lee, Prabhat, Kyle Cranmer, Frank Wood
We present a novel probabilistic programming framework that couples directly to existing large-scale simulators through a cross-platform probabilistic execution protocol, which allows general-purpose inference engines to record and control random number draws within simulators in a language-agnostic way.
no code implementations • 25 Jun 2018 • Tom Rainforth, Yuan Zhou, Xiaoyu Lu, Yee Whye Teh, Frank Wood, Hongseok Yang, Jan-Willem van de Meent
We introduce inference trees (ITs), a new class of inference methods that build on ideas from Monte Carlo tree search to perform adaptive sampling in a manner that balances exploration with exploitation, ensures consistency, and alleviates pathologies in existing adaptive methods.
1 code implementation • ICML 2018 • Maximilian Igl, Luisa Zintgraf, Tuan Anh Le, Frank Wood, Shimon Whiteson
Many real-world sequential decision making problems are partially observable by nature, and the environment model is typically unknown.
1 code implementation • ICLR 2019 • Tuan Anh Le, Adam R. Kosiorek, N. Siddharth, Yee Whye Teh, Frank Wood
Stochastic control-flow models (SCFMs) are a class of generative models that involve branching on choices from discrete random variables.
1 code implementation • 7 Apr 2018 • Bradley Gram-Hansen, Yuan Zhou, Tobias Kohn, Tom Rainforth, Hongseok Yang, Frank Wood
Hamiltonian Monte Carlo (HMC) is arguably the dominant statistical inference algorithm used in most popular "first-order differentiable" Probabilistic Programming Languages (PPLs).
no code implementations • 12 Mar 2018 • Michael Teng, Frank Wood
We introduce a new, high-throughput, synchronous, distributed, data-parallel, stochastic-gradient-descent learning algorithm.
3 code implementations • ICML 2018 • Tom Rainforth, Adam R. Kosiorek, Tuan Anh Le, Chris J. Maddison, Maximilian Igl, Frank Wood, Yee Whye Teh
We provide theoretical and empirical evidence that using tighter evidence lower bounds (ELBOs) can be detrimental to the process of learning an inference network by reducing the signal-to-noise ratio of the gradient estimator.
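The tighter bounds in question are the K-sample importance-weighted bounds; for reference (standard form):

```latex
% Importance-weighted bound, monotonically tighter in K (K = 1 is the ELBO):
\mathcal{L}_K = \mathbb{E}_{z_{1:K} \sim q_\phi}\!\left[\log \frac{1}{K}
  \sum_{k=1}^{K} \frac{p_\theta(x, z_k)}{q_\phi(z_k \mid x)}\right] \le \log p_\theta(x)
% The paper's result: the signal-to-noise ratio of the inference-network
% (phi) gradient estimator decays as O(1/sqrt(K)), while the generative
% (theta) gradient's SNR grows as O(sqrt(K)).
```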
no code implementations • ICLR 2018 • Robert Cornish, Hongseok Yang, Frank Wood
We consider the question of how to assess generative adversarial networks, in particular with respect to whether or not they generalise beyond memorising the training data.
no code implementations • 21 Dec 2017 • Mario Lezcano Casado, Atilim Gunes Baydin, David Martinez Rubio, Tuan Anh Le, Frank Wood, Lukas Heinrich, Gilles Louppe, Kyle Cranmer, Karen Ng, Wahid Bhimji, Prabhat
We consider the problem of Bayesian inference in the family of probabilistic models implicitly defined by stochastic generative models of data.
no code implementations • NeurIPS 2018 • Stefan Webb, Adam Golinski, Robert Zinkov, N. Siddharth, Tom Rainforth, Yee Whye Teh, Frank Wood
Inference amortization methods share information across multiple posterior-inference problems, allowing each to be carried out more efficiently.
1 code implementation • 31 Oct 2017 • Andrew Warrington, Frank Wood
The original implementation makes use of a patch-based approach.
no code implementations • ICML 2018 • Tom Rainforth, Robert Cornish, Hongseok Yang, Andrew Warrington, Frank Wood
Many problems in machine learning and statistics involve nested expectations and thus do not permit conventional Monte Carlo (MC) estimation.
2 code implementations • NeurIPS 2016 • Tom Rainforth, Tuan Anh Le, Jan-Willem van de Meent, Michael A. Osborne, Frank Wood
We present the first general purpose framework for marginal maximum a posteriori estimation of probabilistic program variables.
1 code implementation • NeurIPS 2017 • N. Siddharth, Brooks Paige, Jan-Willem van de Meent, Alban Desmaison, Noah D. Goodman, Pushmeet Kohli, Frank Wood, Philip H. S. Torr
We propose to learn such representations using model architectures that generalise from standard VAEs, employing a general graphical model structure in the encoder and decoder.
1 code implementation • ICLR 2018 • Tuan Anh Le, Maximilian Igl, Tom Rainforth, Tom Jin, Frank Wood
We build on auto-encoding sequential Monte Carlo (AESMC): a method for model and proposal learning based on maximizing the lower bound to the log marginal likelihood in a broad family of structured probabilistic models.
3 code implementations • ICLR 2018 • Atilim Gunes Baydin, Robert Cornish, David Martinez Rubio, Mark Schmidt, Frank Wood
We introduce a general method for improving the convergence rate of gradient-based optimizers that is easy to implement and works well in practice.
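The method described appears to be hypergradient descent, which updates the learning rate using the gradient of the objective with respect to the learning rate itself; a minimal SGD variant as a sketch (ours, under that reading):

```python
import numpy as np

def sgd_hd(grad_fn, theta, alpha=0.01, beta=1e-4, steps=100):
    """SGD with a hypergradient learning-rate update.

    Since theta_t = theta_{t-1} - alpha * g_{t-1}, the derivative of the
    loss at theta_t with respect to alpha is -g_t . g_{t-1}, so alpha can
    itself be adjusted by gradient descent with hyper-step-size beta.
    """
    g_prev = np.zeros_like(theta)
    for _ in range(steps):
        g = grad_fn(theta)
        alpha = alpha + beta * float(g @ g_prev)  # hypergradient update
        theta = theta - alpha * g
        g_prev = g
    return theta, alpha

# usage: minimize f(theta) = ||theta||^2 / 2, whose gradient is theta
theta, alpha = sgd_hd(lambda th: th, theta=np.ones(5))
```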
no code implementations • 2 Mar 2017 • Tuan Anh Le, Atilim Gunes Baydin, Robert Zinkov, Frank Wood
We draw a formal connection between using synthetic training data to optimize neural network parameters and approximate, Bayesian, model-based reasoning.
no code implementations • 3 Dec 2016 • Tom Rainforth, Robert Cornish, Hongseok Yang, Frank Wood
In this paper, we analyse the behaviour of nested Monte Carlo (NMC) schemes, for which classical convergence proofs are insufficient.
no code implementations • 22 Nov 2016 • N. Siddharth, Brooks Paige, Alban Desmaison, Jan-Willem van de Meent, Frank Wood, Noah D. Goodman, Pushmeet Kohli, Philip H. S. Torr
We develop a framework for incorporating structured graphical models in the \emph{encoders} of variational autoencoders (VAEs) that allows us to induce interpretable representations through approximate variational inference.
no code implementations • 21 Nov 2016 • David Janz, Brooks Paige, Tom Rainforth, Jan-Willem van de Meent, Frank Wood
Existing methods for structure discovery in time series data construct interpretable, compositional kernels for Gaussian process regression models.
4 code implementations • 31 Oct 2016 • Tuan Anh Le, Atilim Gunes Baydin, Frank Wood
We introduce a method for using deep neural networks to amortize the cost of inference in models from the family induced by universal probabilistic programming languages, establishing a framework that combines the strengths of probabilistic programming and deep learning methods.
no code implementations • 14 Jun 2016 • Mike Wu, Yura Perov, Frank Wood, Hongseok Yang
We demonstrate this by developing a native Excel implementation of both a particle Markov chain Monte Carlo variant and black-box variational inference for spreadsheet probabilistic programming.
1 code implementation • 22 Feb 2016 • Brooks Paige, Frank Wood
We introduce a new approach for amortizing inference in directed graphical models by learning heuristic approximations to stochastic inverses, designed specifically for use as proposal distributions in sequential Monte Carlo methods.
1 code implementation • 16 Feb 2016 • Tom Rainforth, Christian A. Naesseth, Fredrik Lindsten, Brooks Paige, Jan-Willem van de Meent, Arnaud Doucet, Frank Wood
We introduce interacting particle Markov chain Monte Carlo (iPMCMC), a PMCMC method based on an interacting pool of standard and conditional sequential Monte Carlo samplers.
no code implementations • 19 Jan 2016 • Sam Staton, Hongseok Yang, Chris Heunen, Ohad Kammar, Frank Wood
We study the semantic foundation of expressive probabilistic programming languages that support higher-order functions, continuous distributions, and soft constraints (such as Anglican, Church, and Venture).
no code implementations • 14 Dec 2015 • Yura N. Perov, Tuan Anh Le, Frank Wood
Most Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) algorithms in existing probabilistic programming systems suboptimally use only model priors as proposal distributions.
3 code implementations • 20 Jul 2015 • Tom Rainforth, Frank Wood
We introduce canonical correlation forests (CCFs), a new decision tree ensemble method for classification and regression.
1 code implementation • 16 Jul 2015 • Jan-Willem van de Meent, Brooks Paige, David Tolpin, Frank Wood
In this work, we explore how probabilistic programs can be used to represent policies in sequential decision problems.
no code implementations • 3 Jul 2015 • Frank Wood, Jan Willem van de Meent, Vikash Mansinghka
We introduce and demonstrate a new approach to inference in expressive probabilistic programming languages based on particle Markov chain Monte Carlo.
no code implementations • 26 Apr 2015 • David Tolpin, Frank Wood
We introduce an approximate search algorithm for fast maximum a posteriori probability estimation in probabilistic programs, which we call Bayesian ascent Monte Carlo (BaMC).
no code implementations • 25 Feb 2015 • David Tolpin, Brooks Paige, Jan Willem van de Meent, Frank Wood
We introduce a new approach to solving path-finding problems under uncertainty by representing them as probabilistic models and applying domain-independent inference algorithms to the models.
no code implementations • 27 Jan 2015 • Jan-Willem van de Meent, Hongseok Yang, Vikash Mansinghka, Frank Wood
Particle Markov chain Monte Carlo techniques rank among current state-of-the-art methods for probabilistic program inference.
1 code implementation • 22 Jan 2015 • David Tolpin, Jan Willem van de Meent, Brooks Paige, Frank Wood
We introduce an adaptive output-sensitive Metropolis-Hastings algorithm for probabilistic models expressed as programs, Adaptive Lightweight Metropolis-Hastings (AdLMH).
no code implementations • NeurIPS 2014 • Brooks Paige, Frank Wood, Arnaud Doucet, Yee Whye Teh
We introduce a new sequential Monte Carlo algorithm we call the particle cascade.
no code implementations • 30 Jun 2014 • Jonathan H. Huggins, Frank Wood
This paper reviews recent advances in Bayesian nonparametric techniques for constructing and performing inference in infinite hidden Markov models.
no code implementations • 3 Mar 2014 • Brooks Paige, Frank Wood
Forward inference techniques such as sequential Monte Carlo and particle Markov chain Monte Carlo for probabilistic programming can be implemented in any programming language by creative use of standardized operating system functionality including processes, forking, mutexes, and shared memory.
no code implementations • 28 Jan 2014 • Jan-Willem van de Meent, Brooks Paige, Frank Wood
In this paper we demonstrate that tempering Markov chain Monte Carlo samplers for Bayesian models by recursively subsampling observations without replacement can improve the performance of baseline samplers in terms of effective sample size per computation.
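In symbols, the subsampling-as-tempering idea as we read it (our paraphrase, not the paper's notation):

```latex
% A posterior restricted to a subsample S_k of the n observations,
% with |S_k| = n_k < n, plays the role of a tempered target: fewer
% likelihood terms give a flatter density, roughly analogous to raising
% the full likelihood to a power beta_k \approx n_k / n:
\pi_k(\theta) \;\propto\; p(\theta) \prod_{i \in S_k} p(x_i \mid \theta)
```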
no code implementations • 15 May 2013 • Jan-Willem van de Meent, Jonathan E. Bronson, Frank Wood, Ruben L. Gonzalez Jr., Chris H. Wiggins
We address the problem of analyzing sets of noisy time-varying signals that all report on the same process but confound straightforward analyses due to complex inter-signal heterogeneities and measurement artifacts.
no code implementations • 9 May 2013 • John Zech, Frank Wood
We propose an original model for inferring team strengths using a Markov Random Field, which can be used to generate historical estimates of the offensive and defensive strengths of a team over time.
no code implementations • NeurIPS 2011 • Adler J. Perotte, Frank Wood, Noemie Elhadad, Nicholas Bartlett
We introduce hierarchically supervised latent Dirichlet allocation (HSLDA), a model for hierarchically and multiply labeled bag-of-word data.
no code implementations • NeurIPS 2010 • David Pfau, Nicholas Bartlett, Frank Wood
We suggest that our method for averaging over PDFAs is a novel approach to predictive distribution smoothing.
no code implementations • NeurIPS 2008 • Pietro Berkes, Frank Wood, Jonathan W. Pillow
The coding of information by neural populations depends critically on the statistical dependencies between neuronal responses.
no code implementations • NeurIPS 2008 • Jan Gasthaus, Frank Wood, Dilan Gorur, Yee W. Teh
In this paper we propose a new incremental spike sorting model that automatically eliminates refractory period violations, accounts for action potential waveform drift, and can handle "appearance" and "disappearance" of neurons.