no code implementations • 20 Dec 2023 • Eleonora Poeta, Gabriele Ciravegna, Eliana Pastor, Tania Cerquitelli, Elena Baralis
The field of explainable artificial intelligence emerged in response to the growing need for more transparent and reliable models.
no code implementations • 23 Aug 2023 • Pietro Barbiero, Francesco Giannini, Gabriele Ciravegna, Michelangelo Diligenti, Giuseppe Marra
The design of interpretable deep learning models working in relational domains poses an open challenge: interpretable deep learning methods, such as Concept-Based Models (CBMs), are not designed to solve relational problems, while relational models are not as interpretable as CBMs.
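The Concept-Based Models mentioned above predict human-interpretable concepts first and derive the task label only from those concepts. A minimal, purely illustrative sketch of that two-stage structure (all names and thresholds here are invented for the example, not taken from the paper):

```python
# Minimal sketch of a Concept-Based Model (CBM): the input is first
# mapped to human-interpretable concepts, and the task prediction is
# made *only* from those concepts, so every decision can be read back
# as "label because concept A and concept B". Toy detectors, not the
# paper's architecture.

def predict_concepts(x):
    # Toy concept detectors: each returns 0/1 "is the concept present?"
    return {
        "has_wings": int(x["wing_span"] > 0.0),
        "has_beak": int(x["beak_len"] > 0.0),
    }

def predict_task(concepts):
    # Task head defined over concepts only (here: a simple rule).
    return "bird" if concepts["has_wings"] and concepts["has_beak"] else "not_bird"

def explain(x):
    c = predict_concepts(x)
    active = [name for name, v in c.items() if v]
    return predict_task(c), active

label, reasons = explain({"wing_span": 0.3, "beak_len": 0.1})
print(label, reasons)  # bird ['has_wings', 'has_beak']
```

Because the task head never sees the raw input, the list of active concepts is a faithful explanation of the prediction, which is the property relational models typically lack.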
1 code implementation • 27 Apr 2023 • Pietro Barbiero, Gabriele Ciravegna, Francesco Giannini, Mateo Espinosa Zarlenga, Lucie Charlotte Magister, Alberto Tonda, Pietro Lio', Frederic Precioso, Mateja Jamnik, Giuseppe Marra
Deep learning methods are highly accurate, yet their opaque decision process prevents them from earning full human trust.
1 code implementation • 4 Nov 2022 • Rishabh Jain, Gabriele Ciravegna, Pietro Barbiero, Francesco Giannini, Davide Buffelli, Pietro Lio
Recently, Logic Explained Networks (LENs) have been proposed as explainable-by-design neural models providing logic explanations for their predictions.
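The kind of output a Logic Explained Network produces — a boolean formula over concepts that mirrors the model's predictions — can be illustrated with a brute-force extraction over a toy model (LENs learn such formulas from data; this sketch only shows the explanation format, not the paper's method):

```python
from itertools import product

# Brute-force sketch of a logic explanation: enumerate the truth table
# of a model defined over binary concepts and collect the positive rows
# as a DNF formula. A LEN learns compact formulas instead of
# enumerating, but the resulting artefact has this shape.

def model(c):
    # Toy "network": fires iff (c0 AND NOT c1) OR c2.
    return (c[0] and not c[1]) or c[2]

def extract_dnf(model, names):
    terms = []
    for bits in product([0, 1], repeat=len(names)):
        if model(bits):
            lits = [n if b else f"~{n}" for n, b in zip(names, bits)]
            terms.append(" & ".join(lits))
    return " | ".join(f"({t})" for t in terms)

print(extract_dnf(model, ["c0", "c1", "c2"]))
```

Each disjunct is one input configuration the model accepts, so the formula is a complete, human-readable surrogate of the toy model's behaviour.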
1 code implementation • 19 Sep 2022 • Mateo Espinosa Zarlenga, Pietro Barbiero, Gabriele Ciravegna, Giuseppe Marra, Francesco Giannini, Michelangelo Diligenti, Zohreh Shams, Frederic Precioso, Stefano Melacci, Adrian Weller, Pietro Lio, Mateja Jamnik
Deploying AI-powered systems requires trustworthy models supporting effective human interactions, going beyond raw prediction accuracy.
no code implementations • 27 Jul 2022 • Lucie Charlotte Magister, Pietro Barbiero, Dmitry Kazhdan, Federico Siciliano, Gabriele Ciravegna, Fabrizio Silvestri, Mateja Jamnik, Pietro Lio
The opaque reasoning of Graph Neural Networks induces a lack of human trust.
1 code implementation • 15 Oct 2021 • Gabriele Ciravegna, Frédéric Precioso, Alessandro Betti, Kevin Mottin, Marco Gori
The deployment of Deep Learning (DL) models is still precluded in contexts where the amount of supervised data is limited.
no code implementations • 21 Sep 2021 • Matteo Tiezzi, Gabriele Ciravegna, Marco Gori
Graph Drawing techniques have been developed over the years to produce aesthetically pleasing node-link layouts.
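Aesthetic criteria in graph drawing are often cast as differentiable losses that gradient-based methods can minimise. A self-contained sketch of that idea (a spring-like stress on a toy triangle graph — illustrative, not the paper's model):

```python
import math, random

# Sketch of gradient-based graph layout: place nodes in 2D and minimise
# a "stress" loss that pulls adjacent nodes toward unit distance. The
# graph, learning rate, and loss are toy choices for illustration.

edges = [(0, 1), (1, 2), (2, 0)]             # a triangle
random.seed(0)
pos = [[random.random(), random.random()] for _ in range(3)]

def stress(pos):
    return sum((math.dist(pos[i], pos[j]) - 1.0) ** 2 for i, j in edges)

lr = 0.05
for _ in range(500):                         # plain gradient descent
    for i, j in edges:
        d = math.dist(pos[i], pos[j])
        if d == 0:
            continue
        g = 2.0 * (d - 1.0) / d              # d(stress)/d(pos), per edge
        for k in range(2):
            diff = pos[i][k] - pos[j][k]
            pos[i][k] -= lr * g * diff
            pos[j][k] += lr * g * diff

print(round(stress(pos), 4))  # near 0: all edges close to unit length
```

The same loss-driven view is what allows neural models to learn layouts: any network that outputs coordinates can be trained end-to-end against such a criterion.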
1 code implementation • 11 Aug 2021 • Gabriele Ciravegna, Pietro Barbiero, Francesco Giannini, Marco Gori, Pietro Lió, Marco Maggini, Stefano Melacci
The language used to communicate the explanations must be formal enough to be implementable in a machine and friendly enough to be understandable by a wide audience.
3 code implementations • 12 Jun 2021 • Pietro Barbiero, Gabriele Ciravegna, Francesco Giannini, Pietro Lió, Marco Gori, Stefano Melacci
Explainable artificial intelligence has rapidly emerged since lawmakers have started requiring interpretable models for safety-critical domains.
Ranked #1 on Image Classification on CUB
1 code implementation • 25 May 2021 • Pietro Barbiero, Gabriele Ciravegna, Dobrik Georgiev, Francesco Giannini
"PyTorch, Explain!"
no code implementations • 6 Sep 2020 • Giansalvo Cirrincione, Pietro Barbiero, Gabriele Ciravegna, Vincenzo Randazzo
The former is just an adaptation of a standard competitive layer for deep clustering, while the latter is trained on the transposed matrix.
1 code implementation • 21 Aug 2020 • Pietro Barbiero, Gabriele Ciravegna, Vincenzo Randazzo, Giansalvo Cirrincione
This work presents a comprehensive theory that bridges competitive learning with gradient-based learning, allowing extremely powerful deep neural networks for feature extraction and projection to be combined with the flexibility and expressiveness of competitive learning.
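The core of this bridge can be seen in a classic identity: the winner-take-all prototype update of vanilla competitive learning is exactly one SGD step on the quantisation loss 0.5·||x − w_winner||². A toy sketch (clusters, prototypes, and learning rate are all illustrative):

```python
import random

# Sketch of competitive learning as gradient descent: the winner
# prototype moves toward the sample by lr * (x - w), which is the SGD
# step on 0.5 * ||x - w||^2. This is what lets a competitive layer sit
# on top of a deep feature extractor and train end-to-end.

random.seed(1)
data = ([(random.gauss(0, .1), random.gauss(0, .1)) for _ in range(100)] +
        [(random.gauss(3, .1), random.gauss(3, .1)) for _ in range(100)])
protos = [[0.5, 0.5], [2.5, 2.5]]            # two prototypes
lr = 0.1

def sqdist(p, x):
    return (p[0] - x[0]) ** 2 + (p[1] - x[1]) ** 2

for _ in range(20):                          # a few epochs
    random.shuffle(data)
    for x in data:
        w = min(protos, key=lambda p: sqdist(p, x))   # competition
        w[0] += lr * (x[0] - w[0])                    # SGD step on
        w[1] += lr * (x[1] - w[1])                    # 0.5*||x - w||^2

print([[round(c, 1) for c in p] for p in protos])  # near the two centres
```

After training, each prototype settles near one cluster centre, recovering the standard competitive-learning behaviour from a purely gradient-based update.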
no code implementations • 6 Jun 2020 • Stefano Melacci, Gabriele Ciravegna, Angelo Sotgiu, Ambra Demontis, Battista Biggio, Marco Gori, Fabio Roli
Adversarial attacks on machine learning-based classifiers, along with defense mechanisms, have been widely studied in the context of single-label classification problems.
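The single-label setting the abstract contrasts with can be illustrated by the simplest gradient-sign attack on a linear classifier (an FGSM-style sketch with invented weights and inputs, not the paper's multi-label construction):

```python
import math

# Minimal sketch of a single-label adversarial attack: perturb the
# input by epsilon * sign(gradient of the loss w.r.t. the input) to
# flip a linear classifier's decision. Model and numbers are toy.

w, b = [2.0, -1.0], 0.0                      # toy linear classifier

def score(x):                                # predict positive iff > 0
    return w[0] * x[0] + w[1] * x[1] + b

x = [0.3, 0.2]                               # correctly classified positive
# For the true label +1, the gradient of the logistic loss w.r.t. x is
# proportional to -w, so the attack steps along -sign(w).
eps = 0.3
x_adv = [xi - eps * math.copysign(1.0, wi) for xi, wi in zip(x, w)]

print(score(x) > 0, score(x_adv) > 0)  # True False
```

In multi-label classification the same perturbation must trade off many output scores at once, which is what changes both the attack and the defence analysis.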
1 code implementation • Neural Networks 2020 • Giansalvo Cirrincione, Gabriele Ciravegna, Pietro Barbiero, Vincenzo Randazzo, Eros Pasero
Furthermore, an important and promising application of GH-EXIN to two-way hierarchical clustering, for the analysis of gene expression data in the study of colorectal cancer, is described.