no code implementations • 15 Jun 2022 • Adrian El Baz, Ihsan Ullah, Edesio Alcobaça, André C. P. L. F. Carvalho, Hong Chen, Fabio Ferreira, Henry Gouk, Chaoyu Guan, Isabelle Guyon, Timothy Hospedales, Shell Hu, Mike Huisman, Frank Hutter, Zhengying Liu, Felix Mohr, Ekrem Öztürk, Jan N. van Rijn, Haozhe Sun, Xin Wang, Wenwu Zhu
Although deep neural networks can achieve superhuman performance on various tasks, they are notorious for requiring large amounts of data and computing resources, which restricts their success to domains where such resources are available.
no code implementations • 31 Jul 2020 • Rafael Gomes Mantovani, André Luis Debiaso Rossi, Edesio Alcobaça, Jadson Castro Gertrudes, Sylvio Barbon Junior, André Carlos Ponce de Leon Ferreira de Carvalho
Our approach is grounded in a small set of optimized hyperparameter settings that achieve better predictive performance than the default settings provided by popular tools.
no code implementations • 15 Oct 2019 • Gean Trindade Pereira, Moisés dos Santos, Edesio Alcobaça, Rafael Mantovani, André Carvalho
Thus, we train a neural network on meta-datasets related to algorithm recommendation and then, using transfer learning, reuse the knowledge learned by the network on other, similar datasets from the same domain to verify how transferable the acquired meta-knowledge is.
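The transfer idea described above can be sketched in miniature: a tiny perceptron (a stand-in for the paper's neural meta-model) is trained on one toy "meta-dataset", and its learned weights are reused to initialize training on a similar one. All data, names, and numbers here are illustrative assumptions, not the paper's actual setup.

```python
# Hedged sketch: reuse weights learned on a source task to initialize a
# related target task, mimicking transfer of meta-knowledge between
# similar datasets from the same domain. Data below is made up.

def train_perceptron(data, w, epochs=10, lr=0.1):
    """Classic perceptron updates; returns final weights and total mistakes."""
    mistakes = 0
    for _ in range(epochs):
        for x, y in data:  # labels y are in {-1, +1}
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1
            if pred != y:
                mistakes += 1
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
    return w, mistakes

# Source task: label is +1 when the first feature dominates (leading 1 is a bias input).
task_a = [((1, 2.0, 0.5), 1), ((1, 0.3, 1.5), -1),
          ((1, 1.8, 0.2), 1), ((1, 0.1, 0.9), -1)]
# Related target task from the "same domain": slightly shifted points.
task_b = [((1, 2.2, 0.6), 1), ((1, 0.4, 1.6), -1),
          ((1, 1.9, 0.4), 1), ((1, 0.2, 1.1), -1)]

w_a, _ = train_perceptron(task_a, [0.0, 0.0, 0.0])

# Transfer: start task B from the source-task weights instead of from scratch.
_, mistakes_transfer = train_perceptron(task_b, list(w_a))
_, mistakes_scratch = train_perceptron(task_b, [0.0, 0.0, 0.0])
```

On this toy pair of tasks, the transferred initialization makes no mistakes on the target task, while training from scratch must first make several; the gap is the (tiny) analogue of the reused meta-knowledge.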
1 code implementation • 4 Jun 2019 • Rafael Gomes Mantovani, André Luis Debiaso Rossi, Edesio Alcobaça, Joaquin Vanschoren, André Carlos Ponce de Leon Ferreira de Carvalho
For many machine learning algorithms, predictive performance is critically affected by the hyperparameter values used to train them.
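The sensitivity described above can be seen even in a toy setting. Below is a minimal, stdlib-only sketch (all data and parameter choices are illustrative assumptions): a hand-rolled k-nearest-neighbours classifier whose accuracy changes with the hyperparameter k, searched over a small grid.

```python
# Hedged sketch: a tiny grid search over the k-NN hyperparameter k,
# showing that the obvious default (k=1) is not always the best choice.
from collections import Counter

def knn_predict(train_X, train_y, x, k):
    """Predict the label of x as the majority label of its k nearest neighbours."""
    order = sorted(range(len(train_X)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train_X[i], x)))
    top = [train_y[i] for i in order[:k]]
    return Counter(top).most_common(1)[0][0]

def accuracy(train_X, train_y, test_X, test_y, k):
    hits = sum(knn_predict(train_X, train_y, x, k) == y
               for x, y in zip(test_X, test_y))
    return hits / len(test_y)

# Toy two-cluster data with one mislabelled outlier near the first cluster.
train_X = [(0, 0), (0, 1), (1, 0), (0.6, 0.6), (5, 5), (5, 6), (6, 5)]
train_y = [0, 0, 0, 1, 1, 1, 1]
test_X = [(0.5, 0.5), (5.5, 5.5)]
test_y = [0, 1]

# Grid search over the hyperparameter k.
scores = {k: accuracy(train_X, train_y, test_X, test_y, k) for k in (1, 3, 5)}
best_k = max(scores, key=scores.get)
```

With k=1 the outlier dominates the first test point and accuracy drops to 0.5, while k=3 smooths it out and classifies both points correctly, illustrating why tuned hyperparameter values can beat defaults.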