1 code implementation • 7 Feb 2024 • Agustinus Kristiadi, Felix Strieth-Kalthoff, Marta Skreta, Pascal Poupart, Alán Aspuru-Guzik, Geoff Pleiss
Bayesian optimization (BO) is an essential part of such workflows, enabling scientists to leverage prior domain knowledge for efficient exploration of a large molecular space.
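As a minimal sketch of the BO loop such workflows rely on: given a surrogate model's posterior mean and standard deviation over a pool of discrete candidates (e.g. molecules), an acquisition function such as expected improvement (EI) scores each candidate, and the maximizer is queried next. The numbers below are hypothetical, not from the paper.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_f, xi=0.01):
    """EI for maximization: E[max(f - best_f - xi, 0)] under N(mu, sigma^2)."""
    sigma = np.maximum(sigma, 1e-12)  # guard against zero predictive std
    z = (mu - best_f - xi) / sigma
    return (mu - best_f - xi) * norm.cdf(z) + sigma * norm.pdf(z)

# Hypothetical surrogate posterior over 5 discrete candidates.
mu = np.array([0.2, 0.5, 0.1, 0.45, 0.3])
sigma = np.array([0.05, 0.30, 0.01, 0.45, 0.20])
best_f = 0.4  # best objective value observed so far

ei = expected_improvement(mu, sigma, best_f)
next_candidate = int(np.argmax(ei))  # candidate to evaluate next
```

Note how EI trades off exploitation (high `mu`) against exploration (high `sigma`): a candidate with a mediocre mean but large uncertainty can outscore one with a better mean and near-zero uncertainty.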
no code implementations • 1 Feb 2024 • Theodore Papamarkou, Maria Skoularidou, Konstantina Palla, Laurence Aitchison, Julyan Arbel, David Dunson, Maurizio Filippone, Vincent Fortuin, Philipp Hennig, Jose Miguel Hernandez Lobato, Aliaksandr Hubin, Alexander Immer, Theofanis Karaletsos, Mohammad Emtiyaz Khan, Agustinus Kristiadi, Yingzhen Li, Stephan Mandt, Christopher Nemeth, Michael A. Osborne, Tim G. J. Rudner, David Rügamer, Yee Whye Teh, Max Welling, Andrew Gordon Wilson, Ruqi Zhang
In the current landscape of deep learning research, there is a predominant emphasis on achieving high predictive accuracy in supervised tasks involving large image and language datasets.
1 code implementation • 9 Dec 2023 • Wu Lin, Felix Dangel, Runa Eschenhagen, Kirill Neklyudov, Agustinus Kristiadi, Richard E. Turner, Alireza Makhzani
Second-order methods for deep learning -- such as KFAC -- can be useful for neural net training.
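To illustrate the idea behind KFAC (not the paper's own code): a linear layer's Fisher/GGN block is approximated as a Kronecker product of two small factors, one built from layer inputs and one from output gradients, so that preconditioning never materializes the full curvature matrix. All dimensions and damping values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_out = 64, 5, 3

# Per-example layer inputs a and pre-activation output gradients g.
a = rng.standard_normal((n, d_in))
g = rng.standard_normal((n, d_out))

# Kronecker factors of the layer's curvature: F ~= A_fac (x) G_fac.
A_fac = a.T @ a / n + 1e-3 * np.eye(d_in)   # input second moment (damped)
G_fac = g.T @ g / n + 1e-3 * np.eye(d_out)  # gradient second moment (damped)

dW = g.T @ a / n  # average weight gradient, shape (d_out, d_in)

# Preconditioning with the Kronecker inverse only needs the small factors:
# (A (x) G)^{-1} vec(dW)  corresponds to  G^{-1} dW A^{-1}.
step = np.linalg.solve(G_fac, dW) @ np.linalg.inv(A_fac)
```

The payoff is in the shapes: inverting a `d_in x d_in` and a `d_out x d_out` matrix replaces inverting the full `(d_in*d_out) x (d_in*d_out)` curvature block.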
1 code implementation • 7 Nov 2023 • Ahmad Rashid, Serena Hacker, Guojun Zhang, Agustinus Kristiadi, Pascal Poupart
For instance, ReLU networks - a popular class of neural network architectures - have been shown to almost always yield high confidence predictions when the test data are far away from the training set, even when they are trained with OOD data.
no code implementations • 29 Sep 2023 • Jonathan Wenger, Felix Dangel, Agustinus Kristiadi
Our empirical results demonstrate that this is not the case in optimization, uncertainty quantification or continual learning.
1 code implementation • 17 Apr 2023 • Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Vincent Fortuin
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
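The LLA idea can be sketched in a few lines: linearize the network in its parameters around the MAP estimate, so that a Gaussian posterior over weights induces a Gaussian predictive with mean f(x, theta_MAP) and variance J Sigma J^T, where J is the parameter Jacobian at x. The toy model, MAP estimate, and covariance below are hypothetical stand-ins.

```python
import numpy as np

def f(theta, x):
    """Toy scalar 'network': f(x) = theta[1] * tanh(theta[0] * x)."""
    return theta[1] * np.tanh(theta[0] * x)

theta_map = np.array([0.8, 1.5])      # hypothetical MAP estimate
Sigma = np.array([[0.04, 0.00],       # hypothetical Laplace posterior
                  [0.00, 0.09]])      # covariance over theta

def lla_predictive(x, eps=1e-6):
    # Jacobian of f w.r.t. theta at theta_map, via central differences.
    J = np.array([(f(theta_map + eps * e, x) - f(theta_map - eps * e, x)) / (2 * eps)
                  for e in np.eye(2)])
    mean = f(theta_map, x)            # linearization leaves the mean at the MAP output
    var = J @ Sigma @ J               # predictive variance J Sigma J^T (scalar output)
    return mean, var

mean, var = lla_predictive(1.0)
```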
1 code implementation • 20 May 2022 • Agustinus Kristiadi, Runa Eschenhagen, Philipp Hennig
We show that the resulting posterior approximation is competitive with even the gold-standard full-batch Hamiltonian Monte Carlo.
1 code implementation • 7 Mar 2022 • Luca Rendsburg, Agustinus Kristiadi, Philipp Hennig, Ulrike Von Luxburg
By reframing the problem in terms of incompatible conditional distributions, we arrive at a natural solution: the Gibbs prior.
no code implementations • 5 Nov 2021 • Runa Eschenhagen, Erik Daxberger, Philipp Hennig, Agustinus Kristiadi
Deep neural networks are prone to overconfident predictions on outliers.
3 code implementations • NeurIPS 2021 • Erik Daxberger, Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Matthias Bauer, Philipp Hennig
Bayesian formulations of deep learning have been shown to have compelling theoretical properties and offer practical functional benefits, such as improved predictive uncertainty quantification and model selection.
1 code implementation • 18 Jun 2021 • Agustinus Kristiadi, Matthias Hein, Philipp Hennig
Despite their compelling theoretical properties, Bayesian neural networks (BNNs) tend to perform worse than frequentist methods in classification-based uncertainty quantification (UQ) tasks such as out-of-distribution (OOD) detection.
no code implementations • NeurIPS 2021 • Agustinus Kristiadi, Matthias Hein, Philipp Hennig
We extend finite ReLU BNNs with infinite ReLU features via the GP and show that the resulting model is asymptotically maximally uncertain far away from the data while the BNNs' predictive power is unaffected near the data.
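For context, a GP over infinitely many ReLU features corresponds to the order-1 arc-cosine kernel of Cho & Saul (2009); the sketch below is that standard kernel, not the paper's exact construction. Its prior variance k(x, x) = ||x||^2 grows with distance from the origin, which is what makes the combined model maximally uncertain far from the data.

```python
import numpy as np

def arccos_kernel_1(x, y):
    """Order-1 arc-cosine kernel: the covariance of a one-layer network
    with infinitely many ReLU features (Cho & Saul, 2009)."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    cos = np.clip(x @ y / (nx * ny), -1.0, 1.0)
    theta = np.arccos(cos)
    return nx * ny * (np.sin(theta) + (np.pi - theta) * cos) / np.pi

x = np.array([1.0, 0.0])
k_xx = arccos_kernel_1(x, x)  # equals ||x||^2, so variance grows far from the origin
```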
1 code implementation • 6 Oct 2020 • Agustinus Kristiadi, Matthias Hein, Philipp Hennig
Laplace approximations are classic, computationally lightweight means for constructing Bayesian neural networks (BNNs).
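The classic recipe is easy to show on a model small enough to do by hand: find the MAP estimate, then take the posterior to be a Gaussian centered there with covariance equal to the inverse Hessian of the negative log-posterior. A minimal sketch for L2-regularized logistic regression, with an illustrative synthetic dataset and prior precision:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 2))
y = (X @ np.array([2.0, -1.0]) > 0).astype(float)  # synthetic labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# MAP weights of logistic regression with a Gaussian prior of precision tau,
# found here by plain gradient descent.
tau = 1.0
w = np.zeros(2)
for _ in range(500):
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) + tau * w
    w -= 0.1 * grad / len(X)

# Laplace approximation: posterior ~ N(w_MAP, H^{-1}), where H is the
# Hessian of the negative log-posterior at the MAP.
p = sigmoid(X @ w)
H = X.T @ (X * (p * (1 - p))[:, None]) + tau * np.eye(2)
Sigma = np.linalg.inv(H)
```

The "computationally lightweight" part is that everything is post-hoc: the Gaussian is fitted around an already-trained point estimate, with no change to training.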
no code implementations • 28 Sep 2020 • Agustinus Kristiadi, Matthias Hein, Philipp Hennig
However, far away from the training data, even Bayesian neural networks (BNNs) can still underestimate uncertainty and thus be overconfident.
1 code implementation • 2 Mar 2020 • Marius Hobbhahn, Agustinus Kristiadi, Philipp Hennig
We reconsider old work (Laplace Bridge) to construct a Dirichlet approximation of this softmax output distribution, which yields an analytic map between Gaussian distributions in logit space and Dirichlet distributions (the conjugate prior to the Categorical distribution) in the output space.
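The map has a closed form of roughly the shape below (up to the paper's exact conventions, so treat this as an illustrative sketch with made-up logit statistics): given a Gaussian over K logits with mean mu and diagonal variance sigma2, each Dirichlet concentration follows from one vectorized expression, with no sampling.

```python
import numpy as np

def laplace_bridge(mu, sigma2):
    """Map a Gaussian N(mu, diag(sigma2)) over logits to Dirichlet
    concentration parameters alpha via one analytic formula."""
    K = len(mu)
    return (1.0 / sigma2) * (1.0 - 2.0 / K
                             + np.exp(mu) * np.sum(np.exp(-mu)) / K**2)

mu = np.array([2.0, 0.0, -1.0])      # hypothetical logit means
sigma2 = np.array([0.5, 0.5, 0.5])   # hypothetical logit variances

alpha = laplace_bridge(mu, sigma2)
mean_probs = alpha / alpha.sum()     # Dirichlet mean: a softmax-like output
```

Because the Dirichlet is the conjugate prior to the Categorical, quantities like the predictive mean above come for free once `alpha` is known, whereas pushing the Gaussian through the softmax by Monte Carlo would need many samples.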
1 code implementation • ICML 2020 • Agustinus Kristiadi, Matthias Hein, Philipp Hennig
These theoretical results validate the usage of last-layer Bayesian approximation and motivate a range of fidelity-cost trade-offs.
no code implementations • ICLR 2019 • Agustinus Kristiadi, Asja Fischer
Despite the huge success of deep neural networks (NNs), finding good mechanisms for quantifying their prediction uncertainty is still an open problem.
no code implementations • 4 Feb 2019 • Agustinus Kristiadi, Sina Däubener, Asja Fischer
Despite the huge success of deep neural networks (NNs), finding good mechanisms for quantifying their prediction uncertainty is still an open problem.
1 code implementation • CoNLL 2018 • Debanjan Chaudhuri, Agustinus Kristiadi, Jens Lehmann, Asja Fischer
Building systems that can communicate with humans is a core problem in Artificial Intelligence.
1 code implementation • 3 Feb 2018 • Agustinus Kristiadi, Mohammad Asif Khan, Denis Lukovnikov, Jens Lehmann, Asja Fischer
Most existing work on embedding-based (or latent-feature-based) knowledge graph analysis focuses on the relations between entities.