no code implementations • 1 Dec 2023 • Petru Tighineanu, Lukas Grossberger, Paul Baireuther, Kathrin Skubch, Stefan Falkner, Julia Vinogradska, Felix Berkenkamp
Meta-learning is a powerful approach that exploits historical data to quickly solve new tasks from the same distribution.
no code implementations • 7 Jul 2023 • Jiarong Pan, Stefan Falkner, Felix Berkenkamp, Joaquin Vanschoren
Bayesian optimization (BO) is a popular method to optimize costly black-box functions.
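The BO loop itself is generic and independent of this paper's contribution: fit a probabilistic surrogate to past evaluations, then choose the next query point with an acquisition function. Below is a minimal illustrative sketch, assuming a numpy Gaussian-process surrogate with an RBF kernel and a lower-confidence-bound acquisition on a made-up 1-D objective (all names and the toy function are hypothetical, not from the paper):

```python
import numpy as np

def rbf(a, b, ls=0.3):
    # squared-exponential kernel between two 1-D point sets
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-6):
    # standard GP regression posterior (zero prior mean)
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_query)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)  # k(x,x) = 1 for this kernel
    return mu, np.maximum(var, 1e-12)

def objective(x):
    # toy "black-box" to minimize; minimum near x ~ 1.5
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(0)
x_obs = rng.uniform(0, 2, size=3)
y_obs = objective(x_obs)
grid = np.linspace(0, 2, 200)

for _ in range(10):
    mu, var = gp_posterior(x_obs, y_obs, grid)
    lcb = mu - 2.0 * np.sqrt(var)       # lower confidence bound (minimization)
    x_next = grid[np.argmin(lcb)]       # most promising point under the surrogate
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))

best = x_obs[np.argmin(y_obs)]
```

The lower-confidence-bound rule trades off exploiting low posterior mean against exploring high posterior variance; any other acquisition (e.g. expected improvement) slots into the same loop.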
1 code implementation • NeurIPS 2021 • Amrutha Saseendran, Kathrin Skubch, Stefan Falkner, Margret Keuper
In this paper, we propose a simple, end-to-end trainable deterministic autoencoding framework that efficiently shapes the latent space of the model during training and utilizes the capacity of expressive multi-modal latent distributions.

no code implementations • 1 Jan 2021 • Felix Berkenkamp, Anna Eivazi, Lukas Grossberger, Kathrin Skubch, Jonathan Spitz, Christian Daniel, Stefan Falkner
Transfer and meta-learning algorithms leverage evaluations on related tasks in order to significantly speed up learning or optimization on a new problem.
4 code implementations • 8 Jul 2020 • Matthias Feurer, Katharina Eggensperger, Stefan Falkner, Marius Lindauer, Frank Hutter
Automated Machine Learning (AutoML) supports practitioners and researchers with the tedious task of designing machine learning pipelines and has recently achieved substantial success.
1 code implementation • 10 Oct 2019 • Matilde Gargiani, Aaron Klein, Stefan Falkner, Frank Hutter
We propose probabilistic models that can extrapolate learning curves of iterative machine learning algorithms, such as stochastic gradient descent for training deep networks, based on training data with variable-length learning curves.
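A common family of extrapolation models (used here purely as an illustration, not the paper's probabilistic model) is the power law y(t) = a − b·t^(−c): fitting it to a truncated learning curve lets one predict late-stage performance. A minimal sketch, grid-searching the exponent and solving the linear coefficients by least squares on a synthetic curve:

```python
import numpy as np

def fit_power_law(t, y):
    """Fit y ~ a - b * t**(-c): grid-search c, solve a and b linearly."""
    best = None
    for c in np.arange(0.05, 2.0, 0.05):
        X = np.stack([np.ones_like(t), -t ** (-c)], axis=1)
        coef = np.linalg.lstsq(X, y, rcond=None)[0]
        err = np.sum((X @ coef - y) ** 2)
        if best is None or err < best[0]:
            best = (err, coef[0], coef[1], c)
    _, a, b, c = best
    return a, b, c

# synthetic learning curve: validation accuracy saturating towards 0.9
t = np.arange(1, 21, dtype=float)   # first 20 epochs observed
y = 0.9 - 0.6 * t ** (-0.5)

a, b, c = fit_power_law(t, y)
pred_100 = a - b * 100 ** (-c)      # extrapolated accuracy at epoch 100
```

A probabilistic treatment, as in the paper, would additionally place a distribution over the curve parameters to quantify extrapolation uncertainty and handle curves of different lengths.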
2 code implementations • ICLR 2020 • Michael Volpp, Lukas P. Fröhlich, Kirsten Fischer, Andreas Doerr, Stefan Falkner, Frank Hutter, Christian Daniel
Transferring knowledge across tasks to improve data-efficiency is one of the open key challenges in the field of global black-box optimization.
5 code implementations • ICLR 2019 • Frederic Runge, Danny Stoll, Stefan Falkner, Frank Hutter
Designing RNA molecules has recently garnered interest in medicine, synthetic biology, biotechnology, and bioinformatics, since many functional RNA molecules have been shown to be involved in regulatory processes for transcription, epigenetics, and translation.
3 code implementations • 18 Jul 2018 • Arber Zela, Aaron Klein, Stefan Falkner, Frank Hutter
While existing work on neural architecture search (NAS) tunes hyperparameters in a separate post-processing step, we demonstrate that architectural choices and other hyperparameter settings interact in a way that can render this separation suboptimal.
4 code implementations • ICML 2018 • Stefan Falkner, Aaron Klein, Frank Hutter
Modern deep learning methods are very sensitive to many hyperparameters, and, due to the long training times of state-of-the-art models, vanilla Bayesian hyperparameter optimization is typically computationally infeasible.
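Multi-fidelity methods like BOHB address this by building on Hyperband's successive halving: evaluate many configurations on a small budget and repeatedly keep the best 1/eta fraction at eta times the budget. A toy sketch of the successive-halving subroutine (not the authors' implementation; the learning-rate search space and `evaluate` function are made up):

```python
import random

def successive_halving(configs, evaluate, min_budget=1, eta=3, rounds=3):
    """Evaluate all configs on a small budget, keep the top 1/eta each round."""
    budget = min_budget
    survivors = list(configs)
    for _ in range(rounds):
        scores = [(evaluate(c, budget), c) for c in survivors]
        scores.sort(key=lambda s: s[0])          # lower loss is better
        k = max(1, len(survivors) // eta)
        survivors = [c for _, c in scores[:k]]   # promote the best fraction
        budget *= eta                            # give survivors more budget
    return survivors[0]

# toy setting: a config is a learning rate; loss shrinks with budget,
# and the best learning rate is 0.1
def evaluate(lr, budget):
    return abs(lr - 0.1) + 1.0 / budget

random.seed(0)
configs = [10 ** random.uniform(-4, 0) for _ in range(27)]
best_lr = successive_halving(configs, evaluate)
```

BOHB's key addition is replacing the random sampling of `configs` with a model-based (TPE-style) proposal fitted to completed evaluations, so that later brackets sample from promising regions.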
1 code implementation • NIPS 2017 • Aaron Klein, Stefan Falkner, Numair Mansur, Frank Hutter
Bayesian optimization is a powerful approach for the global derivative-free optimization of non-convex expensive functions.
no code implementations • 2 Dec 2016 • Jost Tobias Springenberg, Aaron Klein, Stefan Falkner, Frank Hutter
We consider parallel asynchronous Markov Chain Monte Carlo (MCMC) sampling for problems where we can leverage (stochastic) gradients to define continuous dynamics which explore the target distribution.
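The simplest member of this family of gradient-based samplers is stochastic gradient Langevin dynamics, where each update is a (stochastic) gradient step on the log-density plus injected Gaussian noise. A single-chain toy sketch targeting a standard normal (illustrative only, not the paper's parallel asynchronous scheme):

```python
import numpy as np

def grad_log_density(x):
    # gradient of log N(0, 1): d/dx [-x**2 / 2] = -x
    return -x

rng = np.random.default_rng(1)
x = 3.0          # start far from the mode
step = 0.05
samples = []
for i in range(20000):
    # Langevin update: half a gradient step plus scaled Gaussian noise
    x = x + 0.5 * step * grad_log_density(x) + np.sqrt(step) * rng.normal()
    if i > 2000:                     # discard burn-in
        samples.append(x)
samples = np.array(samples)
```

After burn-in, the empirical mean and standard deviation approach 0 and 1 (up to discretization bias from the finite step size); the paper's setting runs many such chains asynchronously in parallel with stochastic gradient estimates.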
1 code implementation • NeurIPS 2016 • Jost Tobias Springenberg, Aaron Klein, Stefan Falkner, Frank Hutter
Bayesian optimization is a prominent method for optimizing expensive-to-evaluate black-box functions and is widely applied to tuning the hyperparameters of machine learning algorithms.
1 code implementation • 23 May 2016 • Aaron Klein, Stefan Falkner, Simon Bartels, Philipp Hennig, Frank Hutter
Bayesian optimization has become a successful tool for hyperparameter optimization of machine learning algorithms, such as support vector machines or deep neural networks.