1 code implementation • 15 Apr 2024 • Tidiane Camaret Ndir, André Biedenkapp, Noor Awad
In this work, we address the challenge of zero-shot generalization (ZSG) in Reinforcement Learning (RL), where agents must adapt to entirely novel environments without additional training.
no code implementations • 8 May 2023 • Noor Awad, Ayushi Sharma, Philipp Müller, Janek Thomas, Frank Hutter
Hyperparameter optimization (HPO) is a powerful technique for automating the tuning of machine learning (ML) models.
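To make concrete what HPO automates, here is a minimal, hedged sketch of a search loop over a hyperparameter space; the search space, the random-search strategy, and the synthetic evaluate() objective are illustrative placeholders, not the method of any paper listed here.

import math
import random

# Illustrative search space: a log-uniform learning-rate range and an integer layer count.
search_space = {
    "learning_rate": (1e-5, 1e-1),
    "num_layers": (1, 8),
}

def sample_config():
    # Draw one random configuration from the space (random search).
    lo, hi = search_space["learning_rate"]
    lr = 10 ** random.uniform(math.log10(lo), math.log10(hi))
    layers = random.randint(*search_space["num_layers"])
    return {"learning_rate": lr, "num_layers": layers}

def evaluate(config):
    # Stand-in objective: in practice this would train the ML model with `config`
    # and return its validation score. A synthetic score keeps the sketch runnable.
    return -abs(math.log10(config["learning_rate"]) + 3) - 0.1 * abs(config["num_layers"] - 4)

best_config, best_score = None, float("-inf")
for _ in range(50):  # fixed evaluation budget
    config = sample_config()
    score = evaluate(config)
    if score > best_score:
        best_config, best_score = config, score

In practice, dedicated HPO methods replace the random sampler above with model-based or evolutionary search and exploit cheap low-fidelity evaluations, which is the direction the papers listed here pursue.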
no code implementations • 15 Mar 2023 • Hilde Weerts, Florian Pfisterer, Matthias Feurer, Katharina Eggensperger, Edward Bergman, Noor Awad, Joaquin Vanschoren, Mykola Pechenizkiy, Bernd Bischl, Frank Hutter
The field of automated machine learning (AutoML) introduces techniques that automate parts of the development of machine learning (ML) systems, accelerating the process and reducing barriers for novices.
1 code implementation • 13 Dec 2022 • Shuhei Watanabe, Noor Awad, Masaki Onishi, Frank Hutter
Hyperparameter optimization (HPO) is a vital step in improving performance in deep learning (DL).
1 code implementation • 27 May 2022 • Steven Adriaensen, André Biedenkapp, Gresa Shala, Noor Awad, Theresa Eimer, Marius Lindauer, Frank Hutter
The performance of an algorithm often critically depends on its parameter configuration.
2 code implementations • 14 Sep 2021 • Katharina Eggensperger, Philipp Müller, Neeratyoy Mallik, Matthias Feurer, René Sass, Aaron Klein, Noor Awad, Marius Lindauer, Frank Hutter
To achieve peak predictive performance, hyperparameter optimization (HPO) is a crucial component of machine learning and its applications.
2 code implementations • 20 May 2021 • Noor Awad, Neeratyoy Mallik, Frank Hutter
Modern machine learning algorithms crucially rely on several design decisions to achieve strong performance, making the problem of Hyperparameter Optimization (HPO) more important than ever.
1 code implementation • 15 Dec 2020 • Noor Awad, Gresa Shala, Difan Deng, Neeratyoy Mallik, Matthias Feurer, Katharina Eggensperger, André Biedenkapp, Diederick Vermetten, Hao Wang, Carola Doerr, Marius Lindauer, Frank Hutter
In this short note, we describe our submission to the NeurIPS 2020 BBO challenge.
1 code implementation • 11 Dec 2020 • Noor Awad, Neeratyoy Mallik, Frank Hutter
Neural architecture search (NAS) methods rely on a search strategy for deciding which architectures to evaluate next and a performance estimation strategy for assessing their performance (e.g., using full evaluations, multi-fidelity evaluations, or the one-shot model).
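A minimal sketch of these two components, assuming a toy operation set, a random-search strategy, and a synthetic low-fidelity proxy score; none of the names below come from the paper itself.

import random

OPS = ["conv3x3", "conv5x5", "max_pool", "skip"]  # toy operation choices per cell

def search_strategy(history):
    # Search strategy: decide which architecture to evaluate next.
    # Here: random search that ignores the history of past evaluations.
    return [random.choice(OPS) for _ in range(4)]

def performance_estimation(architecture, epochs=5):
    # Performance estimation strategy: score a candidate cheaply, e.g. by a
    # short low-fidelity training run instead of a full evaluation.
    # A synthetic score keeps the sketch self-contained and runnable.
    return random.random() - 0.05 * architecture.count("max_pool")

history = []
for _ in range(20):  # evaluation budget
    arch = search_strategy(history)
    score = performance_estimation(arch, epochs=5)
    history.append((arch, score))

best_arch, best_score = max(history, key=lambda t: t[1])

Swapping in a smarter search strategy (e.g. evolutionary or model-based) or a different performance estimation strategy (full training, multi-fidelity schedules, or a one-shot supernet) changes only the two functions above, which is the separation the sentence describes.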
no code implementations • 28 Oct 2019 • Jörg K. H. Franke, Gregor Köhler, Noor Awad, Frank Hutter
Current Deep Reinforcement Learning algorithms still heavily rely on handcrafted neural network architectures.