Search Results for author: Gene Pennello

Found 4 papers, 2 papers with code

Designing monitoring strategies for deployed machine learning algorithms: navigating performativity through a causal lens

no code implementations · 20 Nov 2023 · Jean Feng, Adarsh Subbaswamy, Alexej Gossmann, Harvineet Singh, Berkman Sahiner, Mi-Ok Kim, Gene Pennello, Nicholas Petrick, Romain Pirracchio, Fan Xia

When an ML algorithm interacts with its environment, the algorithm can affect the data-generating mechanism and be a major source of bias when evaluating its standalone performance, an issue known as performativity.

Causal Inference · Ethics

Is this model reliable for everyone? Testing for strong calibration

1 code implementation · 28 Jul 2023 · Jean Feng, Alexej Gossmann, Romain Pirracchio, Nicholas Petrick, Gene Pennello, Berkman Sahiner

In a well-calibrated risk prediction model, the average predicted probability is close to the true event rate for any given subgroup.
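The definition above can be checked directly: for each subgroup, compare the mean predicted probability against the observed event rate. A minimal sketch (illustration only; the function name and toy data are hypothetical, and the paper's actual test for strong calibration is more involved than this gap check):

```python
import numpy as np

def subgroup_calibration_gap(y_true, y_prob, groups):
    """Per-subgroup gap between mean predicted probability and observed event rate.

    A gap near zero suggests the model is calibrated within that subgroup.
    """
    gaps = {}
    for g in np.unique(groups):
        mask = groups == g
        gaps[g] = float(np.mean(y_prob[mask]) - np.mean(y_true[mask]))
    return gaps

# Toy data (hypothetical): subgroup "a" is well calibrated,
# subgroup "b" has systematically over-predicted risk.
y_true = np.array([1, 0, 1, 0, 0, 0, 0, 0])
y_prob = np.array([0.5, 0.5, 0.5, 0.5, 0.9, 0.9, 0.9, 0.9])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(subgroup_calibration_gap(y_true, y_prob, groups))
```

Here subgroup "a" has a gap of about 0 (mean prediction 0.5 vs. event rate 0.5), while subgroup "b" has a gap of about 0.9, exactly the kind of subgroup miscalibration the paper's test is designed to detect.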

Fairness

Monitoring machine learning (ML)-based risk prediction algorithms in the presence of confounding medical interventions

1 code implementation · 17 Nov 2022 · Jean Feng, Alexej Gossmann, Gene Pennello, Nicholas Petrick, Berkman Sahiner, Romain Pirracchio

Performance monitoring of machine learning (ML)-based risk prediction models in healthcare is complicated by the issue of confounding medical interventions (CMI): when an algorithm predicts a patient to be at high risk for an adverse event, clinicians are more likely to administer prophylactic treatment and alter the very target that the algorithm aims to predict.

Bayesian Inference · Selection bias · +1

Sequential algorithmic modification with test data reuse

no code implementations · 21 Mar 2022 · Jean Feng, Gene Pennello, Nicholas Petrick, Berkman Sahiner, Romain Pirracchio, Alexej Gossmann

Each modification to a deployed algorithm introduces a risk of deteriorating performance and must be validated on a test dataset.
