1 code implementation • 15 Nov 2022 • Naofumi Hama, Masayoshi Mase, Art B. Owen
Here we present some model-free methods that do not require access to the prediction function.
no code implementations • 31 May 2022 • Masayoshi Mase, Art B. Owen, Benjamin B. Seiler
The most popular methods for measuring importance of the variables in a black box prediction algorithm make use of synthetic inputs that combine predictor variables from multiple subjects.
1 code implementation • 25 May 2022 • Naofumi Hama, Masayoshi Mase, Art B. Owen
We find an expression for the expected value of the AUC under a random ordering of inputs to $f$ and propose an alternative area above a straight line for the regression setting.
Additive models Explainable Artificial Intelligence (XAI)
no code implementations • 27 Jul 2021 • Christopher R. Hoyt, Art B. Owen
We use graphical methods to probe neural nets that classify images.
1 code implementation • 17 May 2021 • Benjamin B. Seiler, Masayoshi Mase, Art B. Owen
We use Shapley value to combine all of the reductions in log cardinality due to revealing a variable after some subset of the other variables has been revealed.
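The log-cardinality construction can be sketched with exact Shapley values on a toy dataset. Everything below — the exact-equality matching rule, the brute-force pass over all subsets, and the tiny example — is an illustrative assumption, not the paper's implementation:

```python
from itertools import combinations
from math import log2, factorial

def cohort_size(data, target, S):
    # rows that agree with the target on every revealed variable in S
    return sum(all(row[j] == target[j] for j in S) for row in data)

def shapley_log_reduction(data, target):
    """Shapley attribution, over variables, of the total reduction in
    log2 cardinality obtained by revealing the target's values."""
    d, n = len(target), len(data)
    def v(S):  # reduction in log cardinality after revealing the set S
        return log2(n) - log2(cohort_size(data, target, S))
    phi = [0.0] * d
    for j in range(d):
        others = [k for k in range(d) if k != j]
        for r in range(d):
            for S in combinations(others, r):
                w = factorial(r) * factorial(d - 1 - r) / factorial(d)
                phi[j] += w * (v(S + (j,)) - v(S))
    return phi  # sums to the full reduction v(all variables)
```

On the four-row example `[(0,0),(0,1),(1,0),(1,1)]` with target `(0,0)`, each variable contributes one bit, and the attributions sum to the full reduction log2(4) - log2(1) = 2 bits.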
1 code implementation • 15 May 2021 • Masayoshi Mase, Art B. Owen, Benjamin B. Seiler
Cohort Shapley value is a model-free method of variable importance grounded in game theory that does not use any unobserved and potentially impossible feature combinations.
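A minimal sketch of the cohort Shapley idea, assuming exact-match similarity and a brute-force pass over all 2^d variable subsets (feasible only for small d). The value of a subset S is the mean observed outcome over the cohort of subjects similar to the target on the variables in S, so no synthetic feature combination is ever formed:

```python
from itertools import combinations
from math import factorial

def cohort_shapley(X, y, target, similar):
    """Cohort Shapley for one target subject: v(S) is the mean outcome
    over subjects similar to the target on every variable in S, so only
    observed data points are ever used."""
    d = len(X[0])
    def v(S):
        idx = [i for i, row in enumerate(X)
               if all(similar(row[j], X[target][j], j) for j in S)]
        return sum(y[i] for i in idx) / len(idx)
    phi = [0.0] * d
    for j in range(d):
        others = [k for k in range(d) if k != j]
        for r in range(d):
            for S in combinations(others, r):
                w = factorial(r) * factorial(d - 1 - r) / factorial(d)
                phi[j] += w * (v(S + (j,)) - v(S))
    return phi  # sums to v(all variables) minus the grand mean v(empty set)
```

The `similar(a, b, j)` callback is a hypothetical hook standing in for whatever per-variable similarity rule is chosen; exact equality is the simplest choice.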
no code implementations • 7 Apr 2021 • Sifan Liu, Art B. Owen
Many machine learning problems optimize an objective that must be measured with noise.
no code implementations • 2 Jul 2020 • Christopher Hoyt, Art B. Owen
For an additive function, the radial and winding stairs designs are most efficient.
2 code implementations • 1 Nov 2019 • Masayoshi Mase, Art B. Owen, Benjamin Seiler
Instead of changing the value of a predictor, we include or exclude subjects similar to the target subject on that predictor to form a similarity cohort.
1 code implementation • 11 Nov 2017 • Edgar Dobriban, Art B. Owen
This paper presents a deterministic version of PA (DPA) that is faster and more reproducible than PA. We show that DPA selects large factors and does not select small ones, just as [Dobriban, 2017] shows for PA.
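For context, the randomized procedure that DPA derandomizes can be sketched as classic permutation-based parallel analysis. This is a generic illustration — the permutation scheme, the 0.95 quantile, and the use of singular values are assumptions here, not the DPA algorithm itself:

```python
import numpy as np

def parallel_analysis(X, n_perm=20, quantile=0.95, rng=None):
    """Classic randomized parallel analysis: keep factors whose singular
    values exceed the chosen quantile of the singular values obtained
    after independently permuting each column of X."""
    rng = np.random.default_rng(rng)
    sv = np.linalg.svd(X - X.mean(0), compute_uv=False)
    null = np.empty((n_perm, len(sv)))
    for b in range(n_perm):
        # permuting each column breaks cross-column structure
        Xp = np.column_stack([rng.permutation(col) for col in X.T])
        null[b] = np.linalg.svd(Xp - Xp.mean(0), compute_uv=False)
    thresh = np.quantile(null, quantile, axis=0)
    return int(np.sum(sv > thresh))  # number of selected factors
```

The repeated SVDs of permuted data are exactly the randomness and cost that a deterministic variant avoids.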
Methodology
1 code implementation • 11 Oct 2016 • Jingshu Wang, Lin Gui, Weijie J. Su, Chiara Sabatti, Art B. Owen
Replicability is a fundamental quality of scientific discoveries: we are interested in those signals that are detectable in different laboratories, in different study populations, and across time.
Methodology
1 code implementation • 31 Jan 2016 • Katelyn Gao, Art B. Owen
Large crossed data sets, described by generalized linear mixed models, have become increasingly common and provide challenges for statistical analysis.
Methodology Computation
no code implementations • 27 Oct 2015 • Art B. Owen
That observation is often taken to mean that thinning MCMC output cannot improve statistical efficiency.
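The trade-off can be illustrated with a toy autocorrelated chain; the AR(1) surrogate and the choice of thinning factor k = 10 are assumptions for illustration only. Thinning by k shortens the chain but also weakens its autocorrelation (roughly rho becomes rho**k), which is why thinning can still pay off when each retained draw must be fed into an expensive function:

```python
import numpy as np

def ar1_chain(n, rho, rng):
    """Stationary AR(1) chain, a stand-in for autocorrelated MCMC output."""
    x = np.empty(n)
    x[0] = rng.standard_normal()
    for t in range(1, n):
        x[t] = rho * x[t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()
    return x

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation."""
    x = x - x.mean()
    return (x[:-1] @ x[1:]) / (x @ x)

rng = np.random.default_rng(0)
chain = ar1_chain(100_000, rho=0.9, rng=rng)
thinned = chain[::10]  # keep every 10th draw
# thinning by k turns lag-1 autocorrelation rho into roughly rho**k:
# here about 0.9 for the full chain versus about 0.9**10 ~ 0.35 thinned
```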
no code implementations • 17 Aug 2015 • Jingshu Wang, Qingyuan Zhao, Trevor Hastie, Art B. Owen
In some of these studies, the multiple testing procedure can be severely biased by latent confounding factors such as batch effects and unmeasured covariates that correlate with both primary variable(s) of interest (e.g., treatment variable, phenotype) and the outcome.
Methodology Statistics Theory 62H25, 62J15
4 code implementations • 12 Apr 2015 • Edgar Dobriban, Kristen Fortney, Stuart K. Kim, Art B. Owen
For a Gaussian prior on effect sizes, we show that finding the optimal weights is a non-convex problem.
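The weights being optimized enter a standard weighted multiple-testing rule. As a grounding example (the rule itself, not the paper's non-convex optimization), here is the weighted Bonferroni procedure, in which weights averaging to at most one preserve family-wise error control:

```python
import numpy as np

def weighted_bonferroni(pvals, weights, alpha=0.05):
    """Weighted Bonferroni: reject H_i when p_i <= w_i * alpha / n.
    FWER control holds whenever the weights average to at most 1."""
    p = np.asarray(pvals, float)
    w = np.asarray(weights, float)
    n = len(p)
    assert w.mean() <= 1 + 1e-12, "weights must average to at most 1"
    return p <= w * alpha / n  # boolean rejection decisions
```

Up-weighting a hypothesis with strong prior evidence raises its threshold at the expense of the others; choosing those weights well is where the optimization problem arises.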
Methodology