no code implementations • 24 May 2024 • Ramya Ramalingam, Sangdon Park, Osbert Bastani
Conformal prediction has emerged as a promising strategy for quantifying uncertainty in machine learning by modifying models to predict sets of labels instead of individual labels; it guarantees that the prediction set contains the true label with high probability.
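As an illustration of the basic mechanism (not this paper's specific construction), here is a minimal split conformal sketch in Python; the nonconformity score and function name below are assumptions chosen for demonstration:

```python
import numpy as np

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Minimal split conformal prediction: returns label sets whose marginal
    coverage is at least 1 - alpha under exchangeability."""
    n = len(cal_labels)
    # Nonconformity score: one minus the probability assigned to the true label.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    # A label enters the prediction set if its score does not exceed the threshold.
    return [np.where(1.0 - p <= q)[0] for p in test_probs]
```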
no code implementations • 28 Mar 2024 • Hyejin Park, Jeongyeon Hwang, Sunung Mun, Sangdon Park, Jungseul Ok
In response to the emerging threat, we propose median batch normalization (MedBN), leveraging the robustness of the median for statistics estimation within the batch normalization layer during test-time inference.
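A rough sketch of the core idea, replacing the per-channel batch mean with the batch median when normalizing at test time; how the scale statistic and running statistics are handled follows the paper, not this sketch:

```python
import torch

def median_batch_norm(x, weight, bias, eps=1e-5):
    """Normalize a [N, C, H, W] batch using the per-channel batch median as the
    location statistic; the median is far less sensitive than the mean to a few
    malicious samples injected into the test batch."""
    n, c, h, w = x.shape
    flat = x.permute(1, 0, 2, 3).reshape(c, -1)              # [C, N*H*W]
    med = flat.median(dim=1).values.view(1, c, 1, 1)         # robust location
    var = flat.var(dim=1, unbiased=False).view(1, c, 1, 1)   # scale (illustrative choice)
    x_hat = (x - med) / torch.sqrt(var + eps)
    return weight.view(1, c, 1, 1) * x_hat + bias.view(1, c, 1, 1)
```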
1 code implementation • 19 Oct 2023 • Wenwen Si, Sangdon Park, Insup Lee, Edgar Dobriban, Osbert Bastani
We propose a novel algorithm for constructing prediction sets with PAC guarantees in the label shift setting.
no code implementations • 18 Jul 2023 • Sangdon Park, Taesoo Kim
Learning and quantifying the uncertainty of models are crucial tasks for enhancing their trustworthiness.
1 code implementation • 7 Jul 2023 • Shuo Li, Sangdon Park, Insup Lee, Osbert Bastani
To address this challenge, we propose Trustworthy Retrieval Augmented Question Answering, or $\textit{TRAQ}$, which provides the first end-to-end statistical correctness guarantee for RAG.
1 code implementation • CVPR 2023 • Wenwen Si, Shuo Li, Sangdon Park, Insup Lee, Osbert Bastani
Experiments demonstrate the efficacy of the partial-covering patch in solving the complex bounding box problem.
1 code implementation • 17 Nov 2022 • Sangdon Park, Osbert Bastani, Taesoo Kim
To address the oracle problem, we propose an adaptive conformal consensus (ACon$^2$) algorithm that derives a consensus set of data from multiple oracle contracts via recent advances in online uncertainty quantification.
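A heavily simplified sketch of the consensus step only, assuming each data source already reports a conformal interval; the online adaptation of each source's interval, which is the heart of ACon$^2$, is omitted here:

```python
def consensus_set(intervals, k_min, grid):
    """Illustrative consensus: keep the candidate values (e.g., asset prices on a
    discretized grid) covered by at least k_min of the per-source intervals, so
    up to len(intervals) - k_min sources can misbehave without breaking it."""
    return [v for v in grid
            if sum(lo <= v <= hi for lo, hi in intervals) >= k_min]

# Toy usage: three sources, one of which reports a manipulated interval.
print(consensus_set([(98, 102), (99, 103), (150, 160)], k_min=2,
                    grid=range(90, 170)))
```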
no code implementations • 31 Oct 2022 • Sangdon Park, Xiang Cheng, Taesoo Kim
Memory-safety bugs introduce critical software-security issues.
1 code implementation • 24 Jul 2022 • Ramneet Kaur, Kaustubh Sridhar, Sangdon Park, Susmit Jha, Anirban Roy, Oleg Sokolsky, Insup Lee
Machine learning models are prone to making incorrect predictions on inputs that are far from the training distribution.
no code implementations • 6 Jul 2022 • Sangdon Park, Edgar Dobriban, Insup Lee, Osbert Bastani
Uncertainty quantification is a key component of machine learning models targeted at safety-critical systems, such as those in healthcare or autonomous vehicles.
no code implementations • 15 Apr 2022 • Shuo Li, Sangdon Park, Xiayan Ji, Insup Lee, Osbert Bastani
Accurately detecting and tracking multiple objects is important for safety-critical applications such as autonomous navigation.
no code implementations • 7 Jan 2022 • Ramneet Kaur, Susmit Jha, Anirban Roy, Sangdon Park, Edgar Dobriban, Oleg Sokolsky, Insup Lee
We propose a new method, iDECODe, which leverages in-distribution equivariance for conformal OOD detection.
no code implementations • 29 Sep 2021 • Sooyong Jang, Sangdon Park, Insup Lee, Osbert Bastani
This problem can naturally be solved using a two-sample test, i.e., testing whether the current test distribution of covariates equals the training distribution of covariates.
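One common instantiation (an assumption for illustration, not necessarily the paper's test statistic) is a per-covariate Kolmogorov-Smirnov two-sample test with a Bonferroni correction:

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_covariate_shift(train_x, test_x, alpha=0.05):
    """Flag a shift if any covariate's Bonferroni-corrected two-sample
    KS p-value is significant (train_x, test_x: [n, d] arrays)."""
    d = train_x.shape[1]
    pvals = np.array([ks_2samp(train_x[:, j], test_x[:, j]).pvalue
                      for j in range(d)])
    return bool((pvals < alpha / d).any()), pvals
```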
no code implementations • 13 Aug 2021 • Ramneet Kaur, Susmit Jha, Anirban Roy, Sangdon Park, Oleg Sokolsky, Insup Lee
We demonstrate the difference in the detection ability of these techniques and propose an ensemble approach that detects OODs as data points with high epistemic or aleatoric uncertainty.
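A common way to obtain both uncertainty types from a deep ensemble (a standard decomposition shown for illustration; the paper's exact detectors may differ):

```python
import numpy as np

def uncertainty_decomposition(ensemble_probs):
    """Decompose the predictive uncertainty of K ensemble softmax outputs
    (shape [K, num_classes]): aleatoric = expected member entropy,
    epistemic = mutual information = total entropy minus aleatoric."""
    mean_p = ensemble_probs.mean(axis=0)
    total = -(mean_p * np.log(mean_p + 1e-12)).sum()
    aleatoric = -(ensemble_probs * np.log(ensemble_probs + 1e-12)).sum(axis=1).mean()
    epistemic = total - aleatoric
    return aleatoric, epistemic  # flag OOD when either is unusually high
```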
1 code implementation • ICLR 2022 • Sangdon Park, Edgar Dobriban, Insup Lee, Osbert Bastani
Our approach focuses on the setting where there is a covariate shift from the source distribution (where we have labeled training examples) to the target distribution (for which we want to quantify uncertainty).
no code implementations • ICLR 2021 • Sangdon Park, Shuo Li, Insup Lee, Osbert Bastani
In our experiments, we demonstrate that our approach can be used to provide guarantees for state-of-the-art DNNs.
no code implementations • 29 Feb 2020 • Sangdon Park, Osbert Bastani, James Weimer, Insup Lee
Our algorithm uses importance weighting to correct for the shift from the training to the real-world distribution.
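A standard way to estimate such importance weights is classifier-based density-ratio estimation; this is an illustrative choice, not necessarily the paper's estimator:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_importance_weights(source_x, target_x):
    """Train a classifier to distinguish source (label 0) from target (label 1)
    samples, then convert its probabilities on the source pool into importance
    weights w(x) proportional to p_target(x) / p_source(x)."""
    x = np.vstack([source_x, target_x])
    z = np.concatenate([np.zeros(len(source_x)), np.ones(len(target_x))])
    clf = LogisticRegression(max_iter=1000).fit(x, z)
    p_target = clf.predict_proba(source_x)[:, 1]
    ratio = p_target / np.clip(1.0 - p_target, 1e-6, None)
    return ratio * (len(source_x) / len(target_x))  # correct for pool sizes
```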
1 code implementation • ICLR 2020 • Sangdon Park, Osbert Bastani, Nikolai Matni, Insup Lee
We propose an algorithm combining calibrated prediction and generalization bounds from learning theory to construct confidence sets for deep neural networks with PAC guarantees, i.e., the confidence set for a given input contains the true label with high probability.
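A simplified sketch of the flavor of this guarantee, choosing a probability threshold whose miscoverage is bounded by a Clopper-Pearson tail bound on held-out calibration data; the paper's construction additionally handles the search over thresholds via a generalization bound, which this sketch omits:

```python
import numpy as np
from scipy.stats import beta

def pac_threshold(cal_probs, cal_labels, eps=0.1, delta=1e-3):
    """Return the largest threshold tau such that a Clopper-Pearson upper bound
    on P(p_true(x) < tau), i.e., the miscoverage of the set {y : f(y|x) >= tau},
    stays below eps with confidence 1 - delta on the calibration data."""
    n = len(cal_labels)
    p_true = cal_probs[np.arange(n), cal_labels]
    best = 0.0
    for tau in np.sort(np.unique(p_true)):
        k = int((p_true < tau).sum())              # observed miscoverage count
        upper = beta.ppf(1 - delta, k + 1, n - k)  # binomial tail (Clopper-Pearson)
        if upper <= eps:
            best = tau                             # larger tau -> smaller sets
    return best
```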
no code implementations • 10 Aug 2017 • Sangdon Park, James Weimer, Insup Lee
Specifically, a generic metric is proposed that measures the resilience of classification algorithms to worst-case tampering of the training data.