no code implementations • 6 Feb 2024 • Guri Zabërgja, Arlind Kadra, Josif Grabocka
In this paper, we introduce a large-scale empirical study that compares neural networks against gradient-boosted decision trees on tabular data, and also compares transformer-based architectures against traditional multi-layer perceptrons (MLPs) with residual connections.
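The residual MLP baseline mentioned above can be illustrated with a minimal sketch. This is a generic residual block (input plus a small two-layer MLP applied to it), not the study's actual architecture; all names and dimensions here are illustrative assumptions.

```python
import numpy as np

def residual_mlp_block(x, w1, b1, w2, b2):
    """One residual block: x + MLP(x). The skip connection requires the
    block to preserve the input dimensionality."""
    h = np.maximum(0.0, x @ w1 + b1)   # ReLU hidden layer
    return x + (h @ w2 + b2)           # add the input back (residual/skip path)

# Illustrative shapes: a batch of 2 tabular rows with 4 features.
rng = np.random.default_rng(0)
d, hidden = 4, 8
x = rng.normal(size=(2, d))
w1, b1 = rng.normal(size=(d, hidden)) * 0.1, np.zeros(hidden)
w2, b2 = rng.normal(size=(hidden, d)) * 0.1, np.zeros(d)
out = residual_mlp_block(x, w1, b1, w2, b2)
print(out.shape)  # (2, 4): output shape matches the input shape
```

Because the block maps inputs to outputs of the same width, such blocks can be stacked to build deeper tabular networks.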
1 code implementation • 6 Jun 2023 • Sebastian Pineda Arango, Fabio Ferreira, Arlind Kadra, Frank Hutter, Josif Grabocka
With the ever-increasing number of pretrained models, machine learning practitioners are continuously faced with the questions of which pretrained model to use and how to fine-tune it for a new dataset.
1 code implementation • 22 May 2023 • Arlind Kadra, Sebastian Pineda Arango, Josif Grabocka
Through extensive experiments, we demonstrate that our explainable deep networks are as accurate as state-of-the-art classifiers on tabular data.
1 code implementation • 20 Feb 2022 • Martin Wistuba, Arlind Kadra, Josif Grabocka
Multi-fidelity (gray-box) hyperparameter optimization (HPO) techniques have recently emerged as a promising direction for tuning deep learning methods.
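A standard multi-fidelity scheme can be sketched with successive halving: evaluate many configurations at a cheap budget, keep the best fraction, and re-evaluate the survivors at a larger budget. This is a textbook illustration of the multi-fidelity idea, not the specific method proposed in the paper; the toy objective and all parameter names are assumptions.

```python
import random

def successive_halving(configs, evaluate, min_budget=1, eta=2, rounds=3):
    """Successive halving: repeatedly drop the worst-scoring fraction of
    configurations while increasing the evaluation budget by a factor eta."""
    budget = min_budget
    survivors = list(configs)
    for _ in range(rounds):
        scored = [(evaluate(c, budget), c) for c in survivors]
        scored.sort(key=lambda sc: sc[0])               # lower loss is better
        survivors = [c for _, c in scored[: max(1, len(scored) // eta)]]
        budget *= eta                                    # spend more on fewer configs
    return survivors[0]

# Toy objective: validation loss that approaches |lr - 0.1| as budget grows,
# mimicking how low-fidelity evaluations are noisier than full ones.
random.seed(0)
def evaluate(lr, budget):
    return abs(lr - 0.1) + random.gauss(0, 0.01) / budget

configs = [0.001, 0.01, 0.1, 0.5, 1.0]
best = successive_halving(configs, evaluate)
print(best)  # 0.1, the configuration closest to the toy optimum
```

The gray-box aspect is that the optimizer exploits partial (low-budget) training signals instead of treating each evaluation as an opaque, full-cost black box.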
1 code implementation • NeurIPS 2021 • Arlind Kadra, Marius Lindauer, Frank Hutter, Josif Grabocka
Tabular datasets are the last "unconquered castle" for deep learning, with traditional ML methods like Gradient-Boosted Decision Trees still performing strongly even against recent specialized neural architectures.
no code implementations • 1 Jan 2021 • Arlind Kadra, Marius Lindauer, Frank Hutter, Josif Grabocka
The regularization of prediction models is arguably the most crucial ingredient that allows Machine Learning solutions to generalize well on unseen data.
1 code implementation • 6 Nov 2019 • Matthias Feurer, Jan N. van Rijn, Arlind Kadra, Pieter Gijsbers, Neeratyoy Mallik, Sahithya Ravi, Andreas Müller, Joaquin Vanschoren, Frank Hutter
It also provides functionality to conduct machine learning experiments, upload the results to OpenML, and reproduce results that are stored on OpenML.