no code implementations • 27 Oct 2022 • Federico Cabitza, Matteo Cameli, Andrea Campagner, Chiara Natali, Luca Ronzio
The shift from symbolic AI systems to black-box, sub-symbolic, and statistical ones has motivated a rapid increase in interest in explainable AI (XAI), i.e., approaches that make black-box AI systems explainable to human decision makers, with the aim of making these systems more acceptable and more usable as tools and supports.
no code implementations • 10 Oct 2022 • Andrea Campagner, Lorenzo Famiglini, Anna Carobene, Federico Cabitza
In medical settings, Individual Variation (IV) refers to variation that is due not to population differences or errors, but rather to within-subject variation, that is, the intrinsic and characteristic patterns of variation pertaining to a given instance or to the measurement process.
no code implementations • 20 Jun 2022 • Andrea Campagner, Davide Ciucci, Thierry Denœux
The development of external evaluation criteria for soft clustering (SC) has received limited attention: existing methods do not provide a general approach for extending comparison measures to SC, and are unable to account for the uncertainty represented in the results of SC algorithms.
no code implementations • 9 Sep 2021 • Valerio Basile, Federico Cabitza, Andrea Campagner, Michael Fell
Most Artificial Intelligence applications are based on supervised machine learning (ML), which is ultimately grounded in manually annotated data.
no code implementations • 21 Oct 2019 • Federico Cabitza, Andrea Campagner
With the increasing availability of AI-based decision support systems, there is a growing need for their certification by both AI manufacturers and notified bodies, as well as for the pragmatic (real-world) validation of these systems.