Search Results for author: Bob Carpenter

Found 9 papers, 5 papers with code

GIST: Gibbs self-tuning for locally adaptive Hamiltonian Monte Carlo

1 code implementation • 23 Apr 2024 • Nawaf Bou-Rabee, Bob Carpenter, Milo Marsden

We present a novel and flexible framework for localized tuning of Hamiltonian Monte Carlo samplers, in which the algorithm's tuning parameters are sampled conditional on the position and momentum at each step.

Position
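The sketch below is not the GIST algorithm itself — it is plain HMC on a standard-normal target with the number of leapfrog steps drawn at random each iteration. GIST generalizes this kind of randomized tuning by conditioning the draw on the current position and momentum and adjusting the acceptance probability accordingly; all function names here are illustrative.

```python
import numpy as np

def hmc_step(q, grad_logp, logp, eps, n_leapfrog, rng):
    """One HMC transition with a fresh momentum draw and Metropolis accept."""
    p = rng.standard_normal(q.shape)
    q_new, p_new = q.copy(), p.copy()
    for _ in range(n_leapfrog):
        p_new += 0.5 * eps * grad_logp(q_new)
        q_new += eps * p_new
        p_new += 0.5 * eps * grad_logp(q_new)
    # Accept based on the change in the Hamiltonian H(q, p) = -log p(q) + |p|^2 / 2.
    h_old = -logp(q) + 0.5 * p @ p
    h_new = -logp(q_new) + 0.5 * p_new @ p_new
    return q_new if rng.random() < np.exp(min(0.0, h_old - h_new)) else q

def sample(n_draws, eps, rng):
    logp = lambda q: -0.5 * q @ q  # standard normal target (unnormalized)
    grad = lambda q: -q
    q, draws = np.zeros(2), []
    for _ in range(n_draws):
        # Uniform jitter of the path length; GIST instead draws this
        # conditionally on (q, p) and corrects the acceptance ratio.
        n_leapfrog = int(rng.integers(1, 20))
        q = hmc_step(q, grad, logp, eps, n_leapfrog, rng)
        draws.append(q.copy())
    return np.array(draws)
```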

Ensemble reweighting using Cryo-EM particles

no code implementations • 10 Dec 2022 • Wai Shing Tang, David Silva-Sánchez, Julian Giraldo-Barreto, Bob Carpenter, Sonya Hanson, Alex H. Barnett, Erik H. Thiede, Pilar Cossio

Cryo-electron microscopy (cryo-EM) has recently become a premier method for obtaining high-resolution structures of biological macromolecules.

Delayed rejection Hamiltonian Monte Carlo for sampling multiscale distributions

no code implementations • 1 Oct 2021 • Chirag Modi, Alex Barnett, Bob Carpenter

The efficiency of Hamiltonian Monte Carlo (HMC) can suffer when sampling a distribution with a wide range of length scales, because the small step sizes needed for stability in high-curvature regions are inefficient elsewhere.
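As a rough illustration of the delayed-rejection idea — shown here for simple random-walk Metropolis rather than HMC — a rejected bold proposal can be followed by a timid one, with the Tierney–Mira correction factor preserving detailed balance. The names and step sizes are illustrative, not the paper's implementation:

```python
import numpy as np

def delayed_rejection_step(x, log_pi, s1, s2, rng):
    """One delayed-rejection random-walk Metropolis step (1-D).

    Stage 1 tries a bold Gaussian proposal with scale s1; if rejected,
    stage 2 tries a timid proposal with scale s2 < s1, using the
    Tierney-Mira second-stage acceptance probability.
    """
    # Stage 1: bold proposal.
    y1 = x + s1 * rng.standard_normal()
    a1 = min(1.0, np.exp(log_pi(y1) - log_pi(x)))
    if rng.random() < a1:
        return y1
    # Stage 2: timid proposal, corrected for the stage-1 rejection.
    y2 = x + s2 * rng.standard_normal()
    a1_rev = min(1.0, np.exp(log_pi(y1) - log_pi(y2)))
    if a1_rev >= 1.0:
        return x  # correction factor (1 - a1_rev) is zero
    log_q_fwd = -0.5 * ((y1 - x) / s1) ** 2   # q1(y1 | x)
    log_q_rev = -0.5 * ((y1 - y2) / s1) ** 2  # q1(y1 | y2)
    log_a2 = (log_pi(y2) - log_pi(x)
              + log_q_rev - log_q_fwd
              + np.log(1.0 - a1_rev) - np.log(1.0 - a1))
    return y2 if rng.random() < np.exp(min(0.0, log_a2)) else x
```

The payoff is the same as in the HMC setting: the chain keeps the large, efficient moves where they are accepted, and falls back to small, stable moves only where needed.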

Pathfinder: Parallel quasi-Newton variational inference

5 code implementations • 9 Aug 2021 • Lu Zhang, Bob Carpenter, Andrew Gelman, Aki Vehtari

Pathfinder returns draws from the approximation with the lowest estimated Kullback-Leibler (KL) divergence to the true posterior.

Pathfinder Variational Inference
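The selection step can be sketched as a Monte Carlo KL estimate: for each candidate normal approximation q, estimate E_q[log q − log p] by sampling from q, and keep the candidate with the smallest estimate. This toy version assumes one-dimensional Gaussian candidates, and `kl_estimate` / `pick_best` are illustrative names, not the paper's API:

```python
import numpy as np

def kl_estimate(mu, sigma, log_p, rng, n=2000):
    """Monte Carlo estimate of KL(q || p) for q = N(mu, sigma^2),
    up to the constant normalizer of p (shared across candidates)."""
    z = mu + sigma * rng.standard_normal(n)
    log_q = (-0.5 * ((z - mu) / sigma) ** 2
             - np.log(sigma) - 0.5 * np.log(2 * np.pi))
    return np.mean(log_q - log_p(z))

def pick_best(candidates, log_p, rng):
    """Return the (mu, sigma) pair with the lowest estimated KL divergence."""
    return min(candidates, key=lambda c: kl_estimate(c[0], c[1], log_p, rng))
```

Because the unknown normalizer of p is the same constant for every candidate, it drops out of the comparison.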

Rank-normalization, folding, and localization: An improved $\widehat{R}$ for assessing convergence of MCMC

2 code implementations • 19 Mar 2019 • Aki Vehtari, Andrew Gelman, Daniel Simpson, Bob Carpenter, Paul-Christian Bürkner

In this paper we show that the convergence diagnostic $\widehat{R}$ of Gelman and Rubin (1992) has serious flaws.

Computation • Methodology
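A simplified version of the proposed diagnostic — rank-normalize the pooled draws into normal scores, then compute split-$\widehat{R}$ — might look like the following sketch (not the reference implementations in packages such as `posterior` or ArviZ):

```python
import numpy as np
from statistics import NormalDist

def split_rhat(chains):
    """Split-R-hat for a (num_chains, num_draws) array of draws."""
    m, n = chains.shape
    half = n // 2
    sub = chains[:, :2 * half].reshape(2 * m, half)  # split each chain in half
    w = sub.var(axis=1, ddof=1).mean()               # mean within-chain variance
    b = half * sub.mean(axis=1).var(ddof=1)          # between-chain variance
    var_plus = (half - 1) / half * w + b / half
    return float(np.sqrt(var_plus / w))

def rank_normalized_rhat(chains):
    """Rank-normalize pooled draws, then apply split-R-hat."""
    flat = chains.ravel()
    ranks = flat.argsort().argsort() + 1         # ties broken arbitrarily
    u = (ranks - 0.375) / (flat.size + 0.25)     # Blom offsets
    z = np.array([NormalDist().inv_cdf(p) for p in u])
    return split_rhat(z.reshape(chains.shape))
```

Rank normalization makes the diagnostic insensitive to heavy tails, and splitting each chain catches within-chain trends that the original Gelman–Rubin statistic misses.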

Comparing Bayesian Models of Annotation

no code implementations • TACL 2018 • Silviu Paun, Bob Carpenter, Jon Chamberlain, Dirk Hovy, Udo Kruschwitz, Massimo Poesio

We evaluate these models along four aspects: comparison to gold labels, predictive accuracy for new annotations, annotator characterization, and item difficulty, using four datasets with varying degrees of noise in the form of random (spammy) annotators.

Model Selection

The Stan Math Library: Reverse-Mode Automatic Differentiation in C++

1 code implementation • 23 Sep 2015 • Bob Carpenter, Matthew D. Hoffman, Marcus Brubaker, Daniel Lee, Peter Li, Michael Betancourt

As computational challenges in optimization and statistical inference grow ever harder, algorithms that use derivatives are becoming increasingly important.

Mathematical Software G.1.0; G.1.3; G.1.4; F.2.1
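Stan Math implements reverse-mode automatic differentiation in C++ with a custom arena allocator and operator overloading on a `var` type; as a language-agnostic illustration of the same idea, a toy graph-based reverse-mode sketch (not Stan Math's actual API) is:

```python
class Var:
    """Minimal reverse-mode autodiff node: a value plus an adjoint."""

    def __init__(self, value):
        self.value = value
        self.adj = 0.0
        self.parents = ()  # tuples of (parent_node, local_partial)

    def __add__(self, other):
        out = Var(self.value + other.value)
        out.parents = ((self, 1.0), (other, 1.0))
        return out

    def __mul__(self, other):
        out = Var(self.value * other.value)
        out.parents = ((self, other.value), (other, self.value))
        return out

    def backward(self):
        # Topologically order the expression graph, then sweep
        # adjoints from the output back to the inputs (chain rule).
        order, seen = [], set()
        def visit(node):
            if id(node) not in seen:
                seen.add(id(node))
                for parent, _ in node.parents:
                    visit(parent)
                order.append(node)
        visit(self)
        self.adj = 1.0
        for node in reversed(order):
            for parent, partial in node.parents:
                parent.adj += node.adj * partial
```

For example, with `z = x * y + x` the backward sweep yields `x.adj = y + 1` and `y.adj = x` — one reverse pass computes the gradient with respect to every input, which is what makes reverse mode the right tool for the many-parameters, one-objective problems in statistical inference.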

The Benefits of a Model of Annotation

no code implementations • TACL 2014 • Rebecca J. Passonneau, Bob Carpenter

Standard agreement measures for interannotator reliability are neither necessary nor sufficient to ensure a high quality corpus.

Epidemiology
