Search Results for author: Michael Jong Kim

Found 4 papers, 0 papers with code

Thompson Sampling for Parameterized Markov Decision Processes with Uninformative Actions

no code implementations • 13 May 2023 • Michael Gimelfarb, Michael Jong Kim

We study parameterized MDPs (PMDPs) in which the key parameters of interest are unknown and must be learned using Bayesian inference.

Bayesian Inference • Thompson Sampling
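The entry above concerns Thompson sampling, which maintains a Bayesian posterior over the unknown parameters and acts greedily with respect to a random posterior sample. A minimal sketch for the Bernoulli-bandit special case (not the paper's PMDP algorithm; the arm probabilities and the Beta(1, 1) priors are illustrative assumptions):

```python
import random

def thompson_sampling(arms, pulls, seed=0):
    """Generic Thompson sampling for Bernoulli bandits with Beta(1, 1) priors.

    `arms` maps arm names to true success probabilities (unknown to the
    learner). Each arm's posterior is Beta(successes + 1, failures + 1);
    at every step we draw one sample per posterior and pull the arm with
    the largest draw.
    """
    rng = random.Random(seed)
    counts = {a: [0, 0] for a in arms}  # [successes, failures] per arm
    total_reward = 0
    for _ in range(pulls):
        # One posterior sample per arm; exploration comes from posterior spread.
        draws = {a: rng.betavariate(s + 1, f + 1) for a, (s, f) in counts.items()}
        arm = max(draws, key=draws.get)
        reward = 1 if rng.random() < arms[arm] else 0
        counts[arm][0 if reward else 1] += 1
        total_reward += reward
    return total_reward, counts

# Illustrative arm probabilities; the better arm should dominate the pull counts.
total, counts = thompson_sampling({"a": 0.2, "b": 0.8}, pulls=500)
```

As the posteriors concentrate, the sampled draws for the inferior arm rarely exceed those of the superior one, so exploration tapers off automatically.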

A data-driven approach to beating SAA out-of-sample

no code implementations • 26 May 2021 • Jun-Ya Gotoh, Michael Jong Kim, Andrew E. B. Lim

While solutions of Distributionally Robust Optimization (DRO) problems can sometimes have a higher out-of-sample expected reward than the Sample Average Approximation (SAA), there is no guarantee that they do.
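To make the SAA-versus-DRO comparison concrete, the sketch below contrasts a plain SAA rule (maximize the empirical mean reward) with a variance-penalized objective, which the authors' related work motivates as a first-order proxy for DRO with a small uncertainty set. The decisions, reward function, and penalty level `delta` are illustrative assumptions, not the paper's setup:

```python
def saa_decision(samples, decisions, reward):
    """Sample Average Approximation: maximize the empirical mean reward."""
    n = len(samples)
    return max(decisions, key=lambda x: sum(reward(x, s) for s in samples) / n)

def dro_proxy_decision(samples, decisions, reward, delta):
    """Variance-penalized proxy for small-uncertainty-set DRO:
    maximize empirical mean minus delta times empirical variance."""
    n = len(samples)
    def obj(x):
        m = sum(reward(x, s) for s in samples) / n
        v = sum((reward(x, s) - m) ** 2 for s in samples) / n
        return m - delta * v
    return max(decisions, key=obj)

# Hypothetical setup: a "safe" action paying 1.0 and a "risky" action paying
# the sampled outcome (empirical mean 1.1, empirical variance 1.21).
def reward(x, s):
    return 1.0 if x == "safe" else s

samples = [0.0, 2.2] * 10
pick_saa = saa_decision(samples, ["safe", "risky"], reward)
pick_dro = dro_proxy_decision(samples, ["safe", "risky"], reward, delta=0.2)
```

Here SAA favors the higher-mean risky action, while the penalized objective trades a little mean for much less variance and picks the safe one; whether that helps out-of-sample is exactly the question the paper studies.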

Worst-case sensitivity

no code implementations • 21 Oct 2020 • Jun-Ya Gotoh, Michael Jong Kim, Andrew E. B. Lim

We introduce the notion of Worst-Case Sensitivity, defined as the worst-case rate of increase in the expected cost of a Distributionally Robust Optimization (DRO) model when the size of the uncertainty set vanishes.
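For a KL-divergence uncertainty set, the worst-case distribution for an expected-cost objective is an exponential tilt of the nominal, and as the radius $\delta$ vanishes the worst-case mean exceeds the nominal mean by roughly $\sqrt{2\delta}$ times the cost's standard deviation. The sketch below checks that rate numerically; the discrete distribution and radius are illustrative assumptions, and this is a generic KL-DRO computation rather than the paper's general sensitivity formulas:

```python
import math

def tilted(p, c, theta):
    # Exponential tilt Q_theta(i) proportional to p_i * exp(theta * c_i):
    # the worst-case distribution over a KL ball for a linear objective.
    w = [pi * math.exp(theta * ci) for pi, ci in zip(p, c)]
    z = sum(w)
    return [wi / z for wi in w]

def kl(q, p):
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

def worst_case_mean(p, c, delta):
    """sup { E_Q[c] : KL(Q||P) <= delta } via bisection on the tilt parameter."""
    lo, hi = 0.0, 1.0
    while kl(tilted(p, c, hi), p) < delta:
        hi *= 2.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if kl(tilted(p, c, mid), p) < delta:
            lo = mid
        else:
            hi = mid
    q = tilted(p, c, 0.5 * (lo + hi))
    return sum(qi * ci for qi, ci in zip(q, c))

# Illustrative nominal model: uniform cost on {0, 1, 2, 3}.
p = [0.25, 0.25, 0.25, 0.25]
c = [0.0, 1.0, 2.0, 3.0]
mean = sum(pi * ci for pi, ci in zip(p, c))
std = math.sqrt(sum(pi * (ci - mean) ** 2 for pi, ci in zip(p, c)))
delta = 1e-6
# For small delta, this ratio should approach the standard deviation of the cost.
rate = (worst_case_mean(p, c, delta) - mean) / math.sqrt(2 * delta)
```

The convergence of `rate` to `std` as `delta` shrinks is the kind of vanishing-radius limit that worst-case sensitivity formalizes.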

Calibration of Distributionally Robust Empirical Optimization Models

no code implementations • 17 Nov 2017 • Jun-Ya Gotoh, Michael Jong Kim, Andrew E. B. Lim

Building on the intuition that robust optimization reduces the sensitivity of the expected reward to errors in the model by controlling the spread of the reward distribution, we show that the first-order benefit of a "little bit of robustness" (i.e., $\delta$ small and positive) is a significant reduction in the variance of the out-of-sample reward, while the corresponding impact on the mean is almost an order of magnitude smaller.
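The mean-variance tradeoff described above can be illustrated on a one-dimensional portfolio choice, where a small variance penalty pulls the decision off the SAA optimum and cuts reward variance by far more than it costs in mean. The two-point return sample, risk-free rate, grid, and penalty level are illustrative assumptions:

```python
def mean_variance_decision(samples, rf, delta, grid=101):
    """Maximize empirical mean reward minus delta * empirical variance over
    a mix weight w in [0, 1], where reward(w, s) = w * s + (1 - w) * rf."""
    n = len(samples)
    best_w, best_obj = 0.0, float("-inf")
    for k in range(grid):
        w = k / (grid - 1)
        rewards = [w * s + (1 - w) * rf for s in samples]
        m = sum(rewards) / n
        v = sum((r - m) ** 2 for r in rewards) / n
        obj = m - delta * v
        if obj > best_obj:
            best_w, best_obj = w, obj
    return best_w

# Two-point risky return with mean 0.6 and variance 1.0; risk-free rate 0.5.
samples, rf = [-0.4, 1.6], 0.5
w_saa = mean_variance_decision(samples, rf, delta=0.0)   # no robustness
w_rob = mean_variance_decision(samples, rf, delta=0.06)  # a little robustness
# Shrinking w from w_saa to w_rob cuts the reward variance (proportional to
# w**2) by roughly 30%, while the mean reward falls by only about 3%.
```

The asymmetry between the variance reduction and the mean loss in this toy example mirrors, qualitatively, the first-order effect the abstract describes.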
