1 code implementation • 15 Apr 2024 • Manuel Gloeckler, Michael Deistler, Christian Weilbach, Frank Wood, Jakob H. Macke
Amortized Bayesian inference trains neural networks to solve stochastic inference problems using model simulations, thereby making it possible to rapidly perform Bayesian inference for any newly observed data.
1 code implementation • 19 Feb 2024 • Jonas Beck, Nathanael Bosch, Michael Deistler, Kyra L. Kadhim, Jakob H. Macke, Philipp Hennig, Philipp Berens
Ordinary differential equations (ODEs) are widely used to describe dynamical systems in science, but identifying parameters that explain experimental measurements is challenging.
1 code implementation • 12 Feb 2024 • Julius Vetter, Guy Moss, Cornelius Schröder, Richard Gao, Jakob H. Macke
Scientific modeling applications often require estimating a distribution of parameters consistent with a dataset of observations, an inference task also known as source distribution estimation.
1 code implementation • 5 Dec 2023 • Mila Gorecki, Jakob H. Macke, Michael Deistler
Simulation-based inference (SBI) provides a powerful framework for inferring posterior distributions of stochastic simulators in a wide range of domains.
2 code implementations • 3 Dec 2023 • Guy Moss, Vjeran Višnjević, Olaf Eisen, Falk M. Oraschewski, Cornelius Schröder, Jakob H. Macke, Reinhard Drews
The geometry of ice shelves, and hence their buttressing strength, is determined by ice flow as well as by the local surface accumulation and basal melt rates, governed by atmospheric and oceanic conditions.
1 code implementation • NeurIPS 2023 • Maximilian Dax, Jonas Wildberger, Simon Buchholz, Stephen R. Green, Jakob H. Macke, Bernhard Schölkopf
Neural posterior estimation methods based on discrete normalizing flows have become established tools for simulation-based inference (SBI), but scaling them to high-dimensional problems can be challenging.
no code implementations • NeurIPS 2023 • Richard Gao, Michael Deistler, Jakob H. Macke
Generalized Bayesian Inference (GBI) aims to make inference robust for (misspecified) simulator models, replacing the likelihood function with a cost function that evaluates the goodness of parameters relative to data.
no code implementations • 24 May 2023 • Cornelius Schröder, Jakob H. Macke
Many scientific models are composed of multiple discrete components, and scientists often make heuristic decisions about which components to include.
1 code implementation • 24 May 2023 • Manuel Glöckler, Michael Deistler, Jakob H. Macke
Bayesian inference usually requires running potentially costly inference procedures separately for every new observation.
no code implementations • 9 Jan 2023 • Jaivardhan Kapoor, Jakob H. Macke, Christian F. Baumgartner
Generative modeling of 3D brain MRIs presents difficulties in achieving high visual fidelity while ensuring sufficient coverage of the data distribution.
no code implementations • 16 Nov 2022 • Jonas Wildberger, Maximilian Dax, Stephen R. Green, Jonathan Gair, Michael Pürrer, Jakob H. Macke, Alessandra Buonanno, Bernhard Schölkopf
Deep learning techniques for gravitational-wave parameter estimation have emerged as a fast alternative to standard samplers, producing results of comparable accuracy.
1 code implementation • 11 Oct 2022 • Maximilian Dax, Stephen R. Green, Jonathan Gair, Michael Pürrer, Jonas Wildberger, Jakob H. Macke, Alessandra Buonanno, Bernhard Schölkopf
The method achieves a median sample efficiency of $\approx 10\%$ (two orders of magnitude better than standard samplers), as well as a ten-fold reduction in the statistical uncertainty of the log evidence.
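The sample efficiency quoted here is the effective-sample-size fraction of importance sampling. A minimal, self-contained sketch (a generic diagnostic, not the paper's pipeline) of how that quantity is computed from importance weights:

```python
import math
import random

def sample_efficiency(weights):
    """Effective-sample-size fraction: ESS / n = (sum w)^2 / (n * sum w^2)."""
    s1 = sum(weights)
    s2 = sum(w * w for w in weights)
    return s1 * s1 / (len(weights) * s2)

# Uniform weights: every draw counts fully, so the efficiency is 1.0.
assert abs(sample_efficiency([1.0] * 100) - 1.0) < 1e-12

# Heavily skewed weights (a poor proposal): only a small fraction of
# draws are effective, so the efficiency drops well below 1.
random.seed(0)
skewed = [math.exp(random.gauss(0.0, 2.0)) for _ in range(1000)]
print(f"efficiency with skewed weights = {sample_efficiency(skewed):.1%}")
```

A sample efficiency of 10% means 1000 weighted draws carry roughly the information of 100 independent posterior samples.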
1 code implementation • ICLR 2022 • Poornima Ramesh, Jan-Matthis Lueckmann, Jan Boelts, Álvaro Tejero-Cantero, David S. Greenberg, Pedro J. Gonçalves, Jakob H. Macke
We study the relationship between SBI and GANs, and introduce GATSBI, an adversarial approach to SBI.
1 code implementation • ICLR 2022 • Manuel Glöckler, Michael Deistler, Jakob H. Macke
We present Sequential Neural Variational Inference (SNVI), an approach to perform Bayesian inference in models with intractable likelihoods.
1 code implementation • ICLR 2022 • Maximilian Dax, Stephen R. Green, Jonathan Gair, Michael Deistler, Bernhard Schölkopf, Jakob H. Macke
We here describe an alternative method to incorporate equivariances under joint transformations of parameters and data.
1 code implementation • 23 Jun 2021 • Maximilian Dax, Stephen R. Green, Jonathan Gair, Jakob H. Macke, Alessandra Buonanno, Bernhard Schölkopf
We demonstrate unprecedented accuracy for rapid gravitational-wave parameter estimation with deep learning.
2 code implementations • 12 Jan 2021 • Jan-Matthis Lueckmann, Jan Boelts, David S. Greenberg, Pedro J. Gonçalves, Jakob H. Macke
We set out to fill this gap: we provide a benchmark with inference tasks and suitable performance metrics, with an initial selection of algorithms that includes recent neural-network-based approaches as well as classical Approximate Bayesian Computation methods.
no code implementations • 17 Jul 2020 • Alvaro Tejero-Cantero, Jan Boelts, Michael Deistler, Jan-Matthis Lueckmann, Conor Durkan, Pedro J. Gonçalves, David S. Greenberg, Jakob H. Macke
$\texttt{sbi}$ facilitates inference on black-box simulators for practising scientists and engineers by providing a unified interface to state-of-the-art algorithms together with documentation and tutorials.
2 code implementations • 3 Oct 2019 • Alexandre René, André Longtin, Jakob H. Macke
We derive the likelihood of both single-neuron and connectivity parameters given this activity, which can then be used to either optimize parameters by gradient ascent on the log-likelihood, or to perform Bayesian inference using Markov Chain Monte Carlo (MCMC) sampling.
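Gradient ascent on a log-likelihood, as described above, can be illustrated with a toy example. This sketch (a hypothetical single-parameter stand-in, not the paper's mesoscale model) fits a neuron's Poisson firing rate to observed spike counts:

```python
import math
import random

def poisson_loglik(theta, counts):
    """Log-likelihood of spike counts under a Poisson rate lam = exp(theta),
    dropping the constant log(k!) terms."""
    lam = math.exp(theta)
    return sum(k * theta - lam for k in counts)

def fit_rate(counts, lr=0.05, steps=500):
    """Gradient ascent on the log-likelihood.
    d/dtheta log L = sum(k) - n * exp(theta)."""
    theta = 0.0
    n = len(counts)
    for _ in range(steps):
        grad = sum(counts) - n * math.exp(theta)
        theta += lr * grad / n
    return math.exp(theta)

# Simulate ~Poisson(4) spike counts, then recover the rate by gradient ascent.
random.seed(1)
counts = [sum(random.random() < 0.004 for _ in range(1000)) for _ in range(200)]
lam_hat = fit_rate(counts)

# For a Poisson model the maximum-likelihood rate is the sample mean.
assert abs(lam_hat - sum(counts) / len(counts)) < 1e-6
```

The same log-likelihood could instead be plugged into an MCMC sampler to obtain a full Bayesian posterior rather than a point estimate.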
1 code implementation • 27 Jun 2019 • Artur Speiser, Lucas-Raphael Müller, Ulf Matti, Christopher J. Obara, Wesley R. Legant, Jonas Ries, Jakob H. Macke, Srinivas C. Turaga
We present a novel localization algorithm based on deep learning which significantly improves upon the state of the art.
1 code implementation • NeurIPS 2019 • Alessio Ansuini, Alessandro Laio, Jakob H. Macke, Davide Zoccolan
We find that, in a trained network, the ID is orders of magnitude smaller than the number of units in each layer.
2 code implementations • 17 May 2019 • David S. Greenberg, Marcel Nonnenmacher, Jakob H. Macke
How can one perform Bayesian inference on stochastic simulators with intractable likelihoods?
no code implementations • 31 Oct 2018 • David G. T. Barrett, Ari S. Morcos, Jakob H. Macke
We explore opportunities for synergy between the two fields, such as the use of DNNs as in-silico model systems for neuroscience, and how this synergy can lead to new hypotheses about the operating principles of biological neural networks.
2 code implementations • 23 May 2018 • Jan-Matthis Lueckmann, Giacomo Bassetto, Theofanis Karaletsos, Jakob H. Macke
Approximate Bayesian Computation (ABC) provides methods for Bayesian inference in simulation-based stochastic models which do not permit tractable likelihoods.
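The classical ABC baseline that such methods build on is rejection sampling: draw parameters from the prior, simulate, and keep draws whose simulated summary statistic lands close to the observed one. A minimal sketch with a toy Gaussian simulator (illustrative only, not the paper's method):

```python
import random
import statistics

def simulator(theta, n=50, rng=random):
    """Toy stochastic simulator: n Gaussian draws with unknown mean theta."""
    return [rng.gauss(theta, 1.0) for _ in range(n)]

def rejection_abc(x_obs, n_accept=200, eps=0.2):
    """Classical rejection ABC: accept prior draws whose simulated summary
    statistic (here, the sample mean) falls within eps of the observed one."""
    s_obs = statistics.mean(x_obs)
    accepted = []
    while len(accepted) < n_accept:
        theta = random.uniform(-5.0, 5.0)           # draw from a uniform prior
        s_sim = statistics.mean(simulator(theta))   # summarize the simulation
        if abs(s_sim - s_obs) < eps:
            accepted.append(theta)
    return accepted

random.seed(0)
x_obs = simulator(2.0)              # "observed" data with true mean 2.0
posterior = rejection_abc(x_obs)    # approximate posterior samples
print(f"posterior mean ~ {statistics.mean(posterior):.2f}")
```

The accepted draws approximate the posterior; shrinking `eps` tightens the approximation at the cost of more rejected simulations, which is precisely the inefficiency that learned, likelihood-free methods aim to avoid.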
no code implementations • NeurIPS 2017 • Marcel Nonnenmacher, Srinivas C. Turaga, Jakob H. Macke
Current approaches for dimensionality reduction on neural data are limited to single-population recordings and cannot identify dynamics embedded across multiple measurements.
no code implementations • NeurIPS 2017 • Artur Speiser, Jinyao Yan, Evan Archer, Lars Buesing, Srinivas C. Turaga, Jakob H. Macke
Calcium imaging permits optical measurement of neural activity.
1 code implementation • NeurIPS 2017 • Jan-Matthis Lueckmann, Pedro J. Gonçalves, Giacomo Bassetto, Kaan Öcal, Marcel Nonnenmacher, Jakob H. Macke
Our approach builds on recent advances in ABC by learning a neural network which maps features of the observed data to the posterior distribution over parameters.
1 code implementation • 29 Feb 2016 • Marcel Nonnenmacher, Christian Behrens, Philipp Berens, Matthias Bethge, Jakob H. Macke
Support for this notion has come from a series of studies which identified statistical signatures of criticality in the ensemble activity of retinal ganglion cells.
no code implementations • NeurIPS 2015 • Mijung Park, Gergo Bohner, Jakob H. Macke
Neural population activity often exhibits rich variability.
no code implementations • NeurIPS 2014 • Patrick Putzky, Florian Franzen, Giacomo Bassetto, Jakob H. Macke
Here, we present a statistical model for extracting hierarchically organised neural population states from multi-channel recordings of neural spiking activity.
no code implementations • NeurIPS 2014 • Evan W. Archer, Urs Koster, Jonathan W. Pillow, Jakob H. Macke
Moreover, because the nonlinear stimulus inputs are mixed by the ongoing dynamics, the model can account for a relatively large number of idiosyncratic receptive field shapes with a small number of nonlinear inputs to a low-dimensional latent dynamical model.
no code implementations • 12 Oct 2014 • Mijung Park, Jakob H. Macke
Here, we introduce a hierarchical statistical model of neural population activity which models both neural population dynamics as well as inter-trial modulations in firing rates.
no code implementations • NeurIPS 2013 • Srini Turaga, Lars Buesing, Adam M. Packer, Henry Dalgleish, Noah Pettit, Michael Hausser, Jakob H. Macke
Simultaneous recordings of the activity of large neural populations are extremely valuable as they can be used to infer the dynamics and interactions of neurons in a local circuit, shedding light on the computations performed.
no code implementations • NeurIPS 2012 • Lars Buesing, Jakob H. Macke, Maneesh Sahani
Here, we show how spectral learning methods for linear systems with Gaussian observations (usually called subspace identification in this context) can be extended to estimate the parameters of dynamical system models observed through non-Gaussian noise models.
no code implementations • NeurIPS 2011 • Jakob H. Macke, Lars Buesing, John P. Cunningham, Byron M. Yu, Krishna V. Shenoy, Maneesh Sahani
Neurons in the neocortex code and compute as part of a locally interconnected population.
no code implementations • NeurIPS 2011 • Jakob H. Macke, Iain Murray, Peter E. Latham
However, maximum entropy models fit to small data sets can be subject to sampling bias; i.e., the true entropy of the data can be severely underestimated.
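The small-sample bias mentioned here can be demonstrated with the plug-in (maximum-likelihood) entropy estimator on a toy distribution; this sketch is a generic illustration of the downward bias, not the paper's maximum entropy analysis:

```python
import math
import random
from collections import Counter

def plugin_entropy(samples):
    """Plug-in (maximum-likelihood) entropy estimate in bits."""
    counts = Counter(samples)
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# True distribution: uniform over 16 symbols -> exactly 4 bits of entropy.
symbols = list(range(16))
true_entropy = math.log2(len(symbols))

random.seed(0)
small = [random.choice(symbols) for _ in range(20)]     # tiny sample
large = [random.choice(symbols) for _ in range(20000)]  # large sample

# The small-sample estimate is biased downward: unseen symbols get
# probability zero, so the estimated entropy falls short of the truth.
assert plugin_entropy(small) < true_entropy
assert abs(plugin_entropy(large) - true_entropy) < 0.05
```

With only 20 samples, many of the 16 symbols are unseen or miscounted, so the estimate is systematically too low; the bias shrinks as the sample grows.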
no code implementations • NeurIPS 2009 • Sebastian Gerwinn, Leonard White, Matthias Kaschube, Matthias Bethge, Jakob H. Macke
Imaging techniques such as optical imaging of intrinsic signals, 2-photon calcium imaging and voltage sensitive dye imaging can be used to measure the functional organization of visual cortex across different spatial scales.
no code implementations • NeurIPS 2007 • Guenther Zeck, Matthias Bethge, Jakob H. Macke
Can we find a concise description for the processing of a whole population of neurons analogous to the receptive field for single neurons?