no code implementations • 15 Nov 2023 • Jonas Vestergaard Jensen, Mikkel Jordahn, Michael Riis Andersen
In this work, we address the problem of automatically assessing early-stage writing and constructing feedback for it using machine learning.
no code implementations • 8 Apr 2023 • Maxim Khomiakov, Michael Riis Andersen, Jes Frellsen
In geospatial planning, it is often essential to represent objects in a vectorized format, as this format easily translates to downstream tasks such as web development, graphics, or design.
no code implementations • 20 Mar 2023 • Maxim Khomiakov, Alejandro Valverde Mahou, Alba Reinders Sánchez, Jes Frellsen, Michael Riis Andersen
We present a novel pipeline for learning the conditional distribution of a building roof mesh given pixels from an aerial image, under the assumption that roof geometry follows a set of regular patterns.
no code implementations • 14 Jan 2023 • Jonathan Foldager, Mikkel Jordahn, Lars Kai Hansen, Michael Riis Andersen
In this work, we provide an extensive study of the relationship between the BO performance (regret) and uncertainty calibration for popular surrogate models and compare them across both synthetic and real-world experiments.
no code implementations • 2 Dec 2022 • Maxim Khomiakov, Julius Holbech Radzikowski, Carl Anton Schmidt, Mathias Bonde Sørensen, Mads Andersen, Michael Riis Andersen, Jes Frellsen
The body of research on classification of solar panel arrays from aerial imagery is increasing, yet there are still not many public benchmark datasets.
1 code implementation • 29 Mar 2022 • Manushi Welandawe, Michael Riis Andersen, Aki Vehtari, Jonathan H. Huggins
RABVI adaptively decreases the learning rate by detecting convergence of the fixed-learning-rate iterates, then estimates the symmetrized Kullback–Leibler (KL) divergence between the current variational approximation and the optimal one.
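The convergence quantity mentioned here, the symmetrized KL divergence, has a closed form for Gaussian approximations. The sketch below illustrates it for diagonal Gaussians; it is an illustrative computation with made-up example values, not the RABVI implementation.

```python
import numpy as np

def kl_gaussian(mu0, var0, mu1, var1):
    """KL(N(mu0, var0) || N(mu1, var1)) for diagonal Gaussians."""
    return 0.5 * np.sum(
        np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0
    )

def symmetrized_kl(mu0, var0, mu1, var1):
    """Symmetrized KL: KL(p || q) + KL(q || p)."""
    return (kl_gaussian(mu0, var0, mu1, var1)
            + kl_gaussian(mu1, var1, mu0, var0))

# Two hypothetical diagonal Gaussian variational approximations
mu_a, var_a = np.array([0.0, 0.0]), np.array([1.0, 1.0])
mu_b, var_b = np.array([0.5, -0.5]), np.array([2.0, 0.5])
skl = symmetrized_kl(mu_a, var_a, mu_b, var_b)
```

The symmetrized form is zero only when the two approximations coincide, which is what makes it usable as a convergence diagnostic.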
no code implementations • ICML Workshop INNF 2021 • Akash Kumar Dhaka, Alejandro Catalina, Manushi Welandawe, Michael Riis Andersen, Jonathan H. Huggins, Aki Vehtari
Current black-box variational inference (BBVI) methods require the user to make numerous design choices, such as the selection of the variational objective and the approximating family, yet there is little principled guidance on how to do so.
no code implementations • NeurIPS 2021 • Akash Kumar Dhaka, Alejandro Catalina, Manushi Welandawe, Michael Riis Andersen, Jonathan H. Huggins, Aki Vehtari
Our framework and supporting experiments help to distinguish between the behavior of BBVI methods for approximating low-dimensional versus moderate-to-high-dimensional posteriors.
no code implementations • NeurIPS 2020 • Akash Kumar Dhaka, Alejandro Catalina, Michael Riis Andersen, Måns Magnusson, Jonathan H. Huggins, Aki Vehtari
We consider the problem of fitting variational posterior approximations using stochastic optimization methods.
1 code implementation • ICML 2020 • William J. Wilkinson, Paul E. Chang, Michael Riis Andersen, Arno Solin
EP offers benefits over the traditional methods through its introduction of the so-called cavity distribution. We combine these benefits with the computational efficiency of linearisation, and provide an extensive empirical analysis demonstrating the efficacy of various algorithms under this unifying framework.
no code implementations • 25 Mar 2020 • Eero Siivola, Akash Kumar Dhaka, Michael Riis Andersen, Javier Gonzalez, Pablo Garcia Moreno, Aki Vehtari
This direction has been mainly driven by the use of BO in machine learning hyper-parameter configuration problems.
1 code implementation • 17 Oct 2019 • Topi Paananen, Michael Riis Andersen, Aki Vehtari
For nonlinear supervised learning models, assessing the importance of predictor variables or their interactions is not straightforward because importance can vary across the domain of the variables.
no code implementations • AABI Symposium 2019 • William J. Wilkinson, Paul E. Chang, Michael Riis Andersen, Arno Solin
The extended Kalman filter (EKF) is a classical signal processing algorithm which performs efficient approximate Bayesian inference in non-conjugate models by linearising the local measurement function, avoiding the need to compute intractable integrals when calculating the posterior.
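The linearised measurement update that the EKF performs can be written in a few lines. The following is a generic, self-contained sketch (function and variable names are my own, not from the paper); with a linear measurement function it reduces to the exact Kalman update.

```python
import numpy as np

def ekf_update(m, P, y, h, H_jac, R):
    """One EKF measurement update: linearise h at the prior mean m.

    m, P  : prior mean and covariance
    y     : observation
    h     : (possibly nonlinear) measurement function
    H_jac : Jacobian of h, evaluated at a point
    R     : observation-noise covariance
    """
    H = H_jac(m)                      # local linearisation of h
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    m_new = m + K @ (y - h(m))        # posterior mean
    P_new = P - K @ S @ K.T           # posterior covariance
    return m_new, P_new

# Usage with a linear measurement model, where the EKF is exact
H = np.array([[1.0, 0.0]])
m1, P1 = ekf_update(np.zeros(2), np.eye(2), np.array([1.0]),
                    h=lambda x: H @ x, H_jac=lambda x: H,
                    R=np.array([[1.0]]))
```

Because the linearisation replaces the intractable integral with a Gaussian moment computation, each update costs only a few matrix products.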
no code implementations • 24 Apr 2019 • Måns Magnusson, Michael Riis Andersen, Johan Jonasson, Aki Vehtari
Model inference, such as model comparison, model checking, and model selection, is an important part of model development.
1 code implementation • 31 Jan 2019 • William J. Wilkinson, Michael Riis Andersen, Joshua D. Reiss, Dan Stowell, Arno Solin
A typical audio signal processing pipeline includes multiple disjoint analysis stages, including calculation of a time-frequency representation followed by spectrogram-based feature analysis.
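The first of those disjoint stages, computing a time-frequency representation, can be sketched as a windowed short-time Fourier transform. This is a minimal NumPy illustration of the conventional pipeline being contrasted, not the probabilistic models the paper proposes; frame length and hop size are assumed values.

```python
import numpy as np

def spectrogram(x, frame_len=256, hop=128):
    """Magnitude spectrogram via a Hann-windowed short-time FFT."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # One row per frame, one column per non-negative frequency bin
    return np.abs(np.fft.rfft(frames, axis=1))

fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)   # one second of a 440 Hz tone
S = spectrogram(x)                 # shape: (n_frames, frame_len // 2 + 1)
```

Downstream feature analysis would then operate on `S` with no uncertainty carried over from this stage, which is the disconnect the probabilistic approach addresses.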
1 code implementation • 6 Nov 2018 • William J. Wilkinson, Michael Riis Andersen, Joshua D. Reiss, Dan Stowell, Arno Solin
In audio signal processing, probabilistic time-frequency models have many benefits over their non-probabilistic counterparts.
2 code implementations • 21 Dec 2017 • Topi Paananen, Juho Piironen, Michael Riis Andersen, Aki Vehtari
Variable selection for Gaussian process models is often done using automatic relevance determination, which uses the inverse length-scale parameter of each input variable as a proxy for variable relevance.
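The ARD relevance proxy described here is easy to illustrate: with a squared-exponential kernel carrying one length-scale per input dimension, short length-scales flag relevant inputs. The length-scale values below are hypothetical stand-ins for fitted hyperparameters, not results from the paper.

```python
import numpy as np

def ard_se_kernel(X1, X2, lengthscales, variance=1.0):
    """Squared-exponential kernel with per-dimension (ARD) length-scales."""
    d = (X1[:, None, :] - X2[None, :, :]) / lengthscales
    return variance * np.exp(-0.5 * np.sum(d ** 2, axis=-1))

X = np.random.default_rng(0).normal(size=(5, 3))
lengthscales = np.array([0.3, 10.0, 1.5])   # hypothetical fitted values
K = ard_se_kernel(X, X, lengthscales)

# ARD proxy: inverse length-scale as variable relevance
relevance = 1.0 / lengthscales
ranking = np.argsort(-relevance)            # most relevant first -> [0, 2, 1]
```

A very long length-scale (here, dimension 1) means the kernel barely varies along that input, so ARD treats it as irrelevant.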
no code implementations • 17 Apr 2017 • Rasmus S. Andersen, Anders U. Eliasen, Nicolai Pedersen, Michael Riis Andersen, Sofie Therese Hansen, Lars Kai Hansen
In this work, we explore the generality of the Edelman et al. hypothesis by considering the decoding of face recognition.
Neurons and Cognition
1 code implementation • 4 Apr 2017 • Eero Siivola, Aki Vehtari, Jarno Vanhatalo, Javier González, Michael Riis Andersen
Bayesian optimization (BO) is a global optimization strategy designed to find the minimum of an expensive black-box function, typically defined on a compact subset of $\mathbb{R}^d$, by using a Gaussian process (GP) as a surrogate model for the objective.
no code implementations • 15 Sep 2015 • Michael Riis Andersen, Aki Vehtari, Ole Winther, Lars Kai Hansen
In this work, we address the problem of solving a series of underdetermined linear inverse problems subject to a sparsity constraint.
no code implementations • 19 Aug 2015 • Michael Riis Andersen, Ole Winther, Lars Kai Hansen
We are interested in solving the multiple measurement vector (MMV) problem for instances where the underlying sparsity pattern exhibits spatio-temporal structure, motivated by the electroencephalogram (EEG) source localization problem.