no code implementations • 6 Feb 2023 • Rachael Harkness, Alejandro F Frangi, Kieran Zucker, Nishant Ravikumar
We generate visual examples showing that our explainability method, applied to the trained DirVAE, highlights regions of CXR images that are clinically relevant to the class(es) of interest, and can also identify cases where classification relies on spurious feature correlations.
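The abstract does not specify how the saliency maps are produced, so as a minimal sketch of the general idea, here is occlusion-sensitivity attribution: slide a masking patch over the image and record how much the target-class score drops. The `occlusion_saliency` function, the toy classifier, and all parameter names are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def occlusion_saliency(model, image, patch=8, stride=8, target=0):
    """Occlusion-sensitivity map (illustrative only; the paper's DirVAE
    explainability method may differ). Regions whose masking causes a
    large score drop are deemed important for the target class."""
    base = model(image)[target]          # unoccluded class score
    h, w = image.shape
    sal = np.zeros_like(image, dtype=float)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 0.0   # mask one patch
            # attribute the score drop to the masked region
            sal[y:y + patch, x:x + patch] = base - model(occluded)[target]
    return sal

# Toy "classifier": score is the mean intensity of a fixed 8x8 region,
# standing in for a real CXR model.
def toy_model(img):
    return np.array([img[8:16, 8:16].mean()])

img = np.zeros((32, 32))
img[8:16, 8:16] = 1.0                    # the "clinically relevant" region
sal = occlusion_saliency(toy_model, img)  # peaks exactly over that region
```

A spurious-correlation case would show up here as high saliency over a region a clinician would consider irrelevant (e.g. a scanner marker rather than lung tissue).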
1 code implementation • 14 Sep 2021 • Rachael Harkness, Geoff Hall, Alejandro F Frangi, Nishant Ravikumar, Kieran Zucker
Model performance when training and testing on open-source data has been exceptional, surpassing the capabilities of AI-based pneumonia detection reported prior to the COVID-19 outbreak.