no code implementations • 28 May 2024 • Christoph Kern, Michael Kim, Angela Zhou
We show improvements in bias and mean squared error in simulations with increasingly large covariate shift, and in a semi-synthetic case study pairing a large observational study with a smaller, parallel randomized controlled trial.
no code implementations • 6 May 2024 • Chenyu Gao, Shunxing Bao, Michael Kim, Nancy Newlin, Praitayini Kanakaraj, Tianyuan Yao, Gaurav Rudravaram, Yuankai Huo, Daniel Moyer, Kurt Schilling, Walter Kukull, Arthur Toga, Derek Archer, Timothy Hohman, Bennett Landman, Zhiyuan Li
We hypothesize that the imputed image with complete FOV can improve the whole-brain tractography for corrupted data with incomplete FOV.
no code implementations • 30 Mar 2021 • Michael Kim
We present a novel method of stacking decision trees by projecting into an ordered, time-split out-of-fold (OOF) one-nearest-neighbor (1NN) space.
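The abstract gives only a high-level description of the stacking scheme; below is a minimal, hedged sketch of the general idea, using regression stumps as stand-in base learners, expanding time-ordered folds to produce out-of-fold (OOF) predictions, and a 1NN lookup in the OOF-prediction space as the meta-learner. All function names are illustrative, and the paper's exact construction may differ.

```python
import numpy as np

def fit_stump(X, y, feat):
    """Depth-1 regression tree: pick the threshold on one feature minimizing SSE."""
    vals = np.unique(X[:, feat])
    best_sse, best = np.inf, (vals[0], y.mean(), y.mean())
    for t in vals[:-1]:  # splitting at the max would leave the right side empty
        left, right = y[X[:, feat] <= t], y[X[:, feat] > t]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_sse, best = sse, (t, left.mean(), right.mean())
    return best

def stump_predict(X, feat, t, left_val, right_val):
    return np.where(X[:, feat] <= t, left_val, right_val)

def oof_predictions(X, y, feats, n_splits=5):
    """Ordered time splits: each fold's predictions come from a model
    trained only on strictly earlier data (the first fold has no history)."""
    n = len(X)
    oof = np.full((n, len(feats)), np.nan)
    bounds = np.linspace(0, n, n_splits + 1).astype(int)
    for k in range(1, n_splits):
        tr, te = slice(0, bounds[k]), slice(bounds[k], bounds[k + 1])
        for j, f in enumerate(feats):
            t, lv, rv = fit_stump(X[tr], y[tr], f)
            oof[te, j] = stump_predict(X[te], f, t, lv, rv)
    return oof

def one_nn_predict(oof_train, y_train, oof_query):
    """Meta-learner: return the target of the nearest training point in OOF space."""
    mask = ~np.isnan(oof_train).any(axis=1)
    Z, yz = oof_train[mask], y_train[mask]
    d = ((oof_query[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return yz[d.argmin(axis=1)]

# Illustrative demo on synthetic time-ordered data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)
oof = oof_predictions(X, y, feats=[0, 1], n_splits=5)
valid = ~np.isnan(oof).any(axis=1)
preds = one_nn_predict(oof, y, oof[valid])
```

Ordering the folds in time keeps the OOF predictions free of look-ahead leakage, which is the usual motivation for time-split stacking.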
no code implementations • ICML 2018 • Ursula Hebert-Johnson, Michael Kim, Omer Reingold, Guy Rothblum
We develop and study multicalibration as a new measure of fairness in machine learning that aims to mitigate inadvertent or malicious discrimination introduced at training time (even from ground-truth data).
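Informally, multicalibration asks that a predictor be calibrated not just overall but simultaneously within every subgroup in a (possibly large) collection. A minimal auditing sketch, assuming a simple binned formulation (the function name, thresholds, and demo data are illustrative, not the paper's algorithm):

```python
import numpy as np

def multicalibration_violations(p, y, groups, n_bins=10, alpha=0.05, min_count=20):
    """Audit predictions p against labels y: in every (group, prediction-bin)
    cell with enough samples, the mean label should match the mean prediction
    to within alpha. Returns the cells that violate this."""
    bins = np.clip((p * n_bins).astype(int), 0, n_bins - 1)
    violations = []
    for name, mask in groups.items():
        for b in range(n_bins):
            cell = mask & (bins == b)
            if cell.sum() < min_count:
                continue  # too few samples to estimate the cell reliably
            gap = abs(y[cell].mean() - p[cell].mean())
            if gap > alpha:
                violations.append((name, b, gap))
    return violations

# Demo: predictions calibrated on the complement, but biased low on a subgroup.
rng = np.random.default_rng(1)
n = 5000
g = rng.random(n) < 0.3                       # subgroup membership
p = rng.random(n)                             # reported predictions
truth = np.where(g, np.clip(p + 0.3, 0.0, 1.0), p)
y = (rng.random(n) < truth).astype(float)     # labels drawn from the true rates
groups = {"all": np.ones(n, dtype=bool), "subgroup": g}
viol = multicalibration_violations(p, y, groups, n_bins=10, alpha=0.05)
```

In this demo the audit flags the subgroup's bins, since its true label rate exceeds the reported predictions by 0.3.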
no code implementations • 25 Mar 2016 • Joseph Dulny III, Michael Kim
Through these experiments, we provide unique insights into the state of quantum ML via boosting and into the use of quantum annealing (QA) hardware, which are valuable to institutions interested in applying QA to problems in ML and beyond.