no code implementations • 6 Apr 2024 • Pengyuan Lu, Lin Zhang, Mengyu Liu, Kaustubh Sridhar, Fanxin Kong, Oleg Sokolsky, Insup Lee
Cyber-physical systems (CPS) have experienced rapid growth in recent decades.
no code implementations • 4 Apr 2024 • Xinmeng Huang, Shuo Li, Mengxin Yu, Matteo Sesia, Hamed Hassani, Insup Lee, Osbert Bastani, Edgar Dobriban
Language Models (LMs) have shown promising performance in natural language generation.
no code implementations • 15 Nov 2023 • Yahan Yang, Soham Dan, Dan Roth, Insup Lee
We also conduct several ablation experiments to study the effect of language distances, language corpus size, and model size on calibration, and how multilingual models compare with their monolingual counterparts for diverse tasks and languages.
no code implementations • 13 Nov 2023 • Xi Zheng, Aloysius K. Mok, Ruzica Piskac, Yong Jae Lee, Bhaskar Krishnamachari, Dakai Zhu, Oleg Sokolsky, Insup Lee
The integration of machine learning (ML) into cyber-physical systems (CPS) offers significant benefits, including enhanced efficiency, predictive capabilities, real-time responsiveness, and the enabling of autonomous operations.
1 code implementation • 6 Nov 2023 • Pengyuan Lu, Matthew Cleaveland, Oleg Sokolsky, Insup Lee, Ivan Ruchkin
However, existing repair techniques do not preserve previously correct behaviors.
1 code implementation • 19 Oct 2023 • Wenwen Si, Sangdon Park, Insup Lee, Edgar Dobriban, Osbert Bastani
We propose a novel algorithm for constructing prediction sets with PAC guarantees in the label shift setting.
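The entry above concerns PAC prediction sets under label shift. As a generic illustration only (not the paper's algorithm), conformal calibration scores can be reweighted by assumed class-probability ratios; the function name and weighting scheme below are illustrative assumptions:

```python
import numpy as np

def weighted_conformal_threshold(cal_scores, cal_labels, class_weights, alpha=0.1):
    """Illustrative sketch: a (1 - alpha) threshold on nonconformity scores,
    with each calibration point reweighted by w(y) ~ p_target(y) / p_source(y)."""
    w = np.asarray(class_weights, float)[cal_labels]
    w = w / w.sum()
    order = np.argsort(cal_scores)
    cum = np.cumsum(w[order])
    idx = np.searchsorted(cum, 1 - alpha)
    return float(np.asarray(cal_scores)[order][min(idx, len(cal_scores) - 1)])
```

With uniform class weights this reduces to the ordinary split-conformal quantile; the prediction set for a new input is then every label whose score falls below the threshold.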
no code implementations • 9 Oct 2023 • Kaustubh Sridhar, Souradeep Dutta, Dinesh Jayaraman, James Weimer, Insup Lee
Imitation learning considerably simplifies policy synthesis compared to alternative approaches by exploiting access to expert demonstrations.
1 code implementation • 4 Oct 2023 • Pengyuan Lu, Michele Caprio, Eric Eaton, Insup Lee
Upon a new task, IBCL (1) updates a knowledge base in the form of a convex hull of model parameter distributions and (2) obtains particular models to address task trade-off preferences with zero-shot.
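A highly simplified toy sketch of the zero-shot preference-addressing step described above, assuming the knowledge base is summarized by per-task posterior means; the simple convex-combination form and function names are illustrative assumptions, not IBCL's actual construction:

```python
import numpy as np

def preference_model(task_posterior_means, preference):
    """Toy sketch: pick a model from the convex hull of per-task posterior
    means, weighted by a user preference over tasks (illustrative only)."""
    pref = np.asarray(preference, float)
    pref = pref / pref.sum()          # normalize preference weights
    return pref @ np.asarray(task_posterior_means, float)
```

The point of the convex-hull view is that any normalized preference vector selects a model without further training, i.e., zero-shot.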
no code implementations • 1 Sep 2023 • Sydney Pugh, Ivan Ruchkin, Insup Lee, James Weimer
However, ensuring the robustness of these models is vital for building trustworthy AI systems.
no code implementations • 28 Aug 2023 • Souradeep Dutta, Michele Caprio, Vivian Lin, Matthew Cleaveland, Kuk Jin Jang, Ivan Ruchkin, Oleg Sokolsky, Insup Lee
A particularly challenging problem in AI safety is providing guarantees on the behavior of high-dimensional autonomous systems.
no code implementations • 13 Jul 2023 • Michele Caprio, Yusuf Sale, Eyke Hüllermeier, Insup Lee
In their seminal 1990 paper, Wasserman and Kadane establish an upper bound for the Bayes' posterior probability of a measurable set $A$, when the prior lies in a class of probability measures $\mathcal{P}$ and the likelihood is precise.
1 code implementation • 7 Jul 2023 • Shuo Li, Sangdon Park, Insup Lee, Osbert Bastani
To address this challenge, we propose Trustworthy Retrieval Augmented Question Answering (TRAQ), which provides the first end-to-end statistical correctness guarantee for RAG.
no code implementations • 27 Apr 2023 • Ramneet Kaur, Yiannis Kantaros, Wenwen Si, James Weimer, Insup Lee
Nevertheless, DNN models have proven to be vulnerable to adversarial digital and physical attacks.
no code implementations • 25 Apr 2023 • Mengyu Liu, Pengyuan Lu, Xin Chen, Fanxin Kong, Oleg Sokolsky, Insup Lee
We propose a model-free reinforcement learning solution, namely the ASAP-Phi framework, to encourage an agent to fulfill a formal specification ASAP.
no code implementations • 6 Apr 2023 • Pengyuan Lu, Ivan Ruchkin, Matthew Cleaveland, Oleg Sokolsky, Insup Lee
However, given the high diversity and complexity of LECs, it is challenging to encode domain knowledge (e.g., the CPS dynamics) in a scalable actual causality model that could generate useful repair suggestions.
1 code implementation • 3 Apr 2023 • Matthew Cleaveland, Insup Lee, George J. Pappas, Lars Lindemann
In fact, to obtain prediction regions over $T$ time steps with confidence $1-\delta$, previous works require that each individual prediction region be valid with confidence $1-\delta/T$.
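The conservatism of that union-bound (Bonferroni) baseline is easy to see numerically. The snippet below is a generic illustration, not the paper's method: it compares the half-width of a Gaussian interval at level $1-\delta$ with one at the per-step level $1-\delta/T$:

```python
from statistics import NormalDist

T, delta = 25, 0.05
z_single = NormalDist().inv_cdf(1 - delta / 2)       # one-step 95% interval
z_union = NormalDist().inv_cdf(1 - delta / (2 * T))  # Bonferroni per-step level
inflation = z_union / z_single                       # grows with T
```

For $T = 25$ the per-step intervals are already more than 50% wider, which is the inefficiency the paper targets.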
no code implementations • 21 Feb 2023 • Ramneet Kaur, Xiayan Ji, Souradeep Dutta, Michele Caprio, Yahan Yang, Elena Bernardis, Oleg Sokolsky, Insup Lee
This can render the current OOD detectors impermeable to inputs lying outside the training distribution but with the same semantic information (e.g., training class labels).
1 code implementation • 20 Feb 2023 • Vivian Lin, Kuk Jin Jang, Souradeep Dutta, Michele Caprio, Oleg Sokolsky, Insup Lee
To aid in our estimates of Wasserstein distance, we employ dimensionality reduction through orthonormal projection.
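A minimal sketch of that combination, assuming a single random orthonormal projection direction and equal-size samples; this is an illustrative stand-in, not the paper's code:

```python
import numpy as np

def w1_1d(x, y):
    """Empirical 1-D Wasserstein-1 distance between equal-size samples."""
    return float(np.abs(np.sort(x) - np.sort(y)).mean())

def projected_w1(X, Y, seed=0):
    """Project both samples onto one random unit-norm direction, then
    compare the 1-D projections."""
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(X.shape[1])
    q /= np.linalg.norm(q)            # unit-norm projection direction
    return w1_1d(X @ q, Y @ q)
```

In one dimension the empirical Wasserstein-1 distance reduces to comparing sorted samples, which is what makes the projection step attractive computationally.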
no code implementations • 19 Feb 2023 • Michele Caprio, Souradeep Dutta, Kuk Jin Jang, Vivian Lin, Radoslav Ivanov, Oleg Sokolsky, Insup Lee
We show that CBDL is better at quantifying and disentangling different types of uncertainties than single BNNs, ensembles of BNNs, and Bayesian Model Averaging.
1 code implementation • CVPR 2023 • Wenwen Si, Shuo Li, Sangdon Park, Insup Lee, Osbert Bastani
Experiments demonstrate the efficacy of the partial-covering patch in solving the complex bounding box problem.
no code implementations • 20 Dec 2022 • Yahan Yang, Soham Dan, Dan Roth, Insup Lee
Recently it has been shown that state-of-the-art NLP models are vulnerable to adversarial attacks, where the predictions of a model can be drastically altered by slight modifications to the input (such as synonym substitutions).
1 code implementation • 2 Dec 2022 • Kaustubh Sridhar, Souradeep Dutta, James Weimer, Insup Lee
Next, using these memories we partition the state space into disjoint subsets and compute bounds that should be respected by the neural network in each subset.
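A rough sketch of the partition-and-bound idea described above, for scalar states and outputs; the nearest-memory assignment and min/max bounds here are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def per_cell_output_bounds(memories, states, outputs):
    """Illustrative sketch: assign each state to its nearest stored memory,
    then record the min/max output observed in each resulting cell."""
    dists = np.abs(states[:, None] - memories[None, :])
    cell = np.argmin(dists, axis=1)   # nearest-memory partition of the states
    return {int(i): (float(outputs[cell == i].min()),
                     float(outputs[cell == i].max()))
            for i in np.unique(cell)}
```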
1 code implementation • 24 Jul 2022 • Ramneet Kaur, Kaustubh Sridhar, Sangdon Park, Susmit Jha, Anirban Roy, Oleg Sokolsky, Insup Lee
Machine learning models are prone to making incorrect predictions on inputs that are far from the training distribution.
no code implementations • 6 Jul 2022 • Sangdon Park, Edgar Dobriban, Insup Lee, Osbert Bastani
Uncertainty quantification is a key component of machine learning models targeted at safety-critical systems such as in healthcare or autonomous vehicles.
1 code implementation • 13 Jun 2022 • Kaustubh Sridhar, Souradeep Dutta, Ramneet Kaur, James Weimer, Oleg Sokolsky, Insup Lee
The algorithm design of AT and its variants focuses on training models at a specified perturbation strength $\epsilon$, using only the feedback from the performance of that $\epsilon$-robust model to improve the algorithm.
no code implementations • 10 Jun 2022 • Souradeep Dutta, Yahan Yang, Elena Bernardis, Edgar Dobriban, Insup Lee
We propose a new method for classification which can improve robustness to distribution shifts, by combining expert knowledge about the "high-level" structure of the data with standard classifiers.
no code implementations • 22 May 2022 • Shuo Li, Xiayan Ji, Edgar Dobriban, Oleg Sokolsky, Insup Lee
Anomaly detection is essential for preventing hazardous outcomes for safety-critical applications like autonomous driving.
no code implementations • 15 Apr 2022 • Shuo Li, Sangdon Park, Xiayan Ji, Insup Lee, Osbert Bastani
Accurately detecting and tracking multiple objects is important for safety-critical applications such as autonomous navigation.
1 code implementation • 25 Feb 2022 • Souradeep Dutta, Kaustubh Sridhar, Osbert Bastani, Edgar Dobriban, James Weimer, Insup Lee, Julia Parish-Morris
We formulate expert intervention as allowing the agent to execute option templates before learning an implementation.
no code implementations • 7 Jan 2022 • Ramneet Kaur, Susmit Jha, Anirban Roy, Sangdon Park, Edgar Dobriban, Oleg Sokolsky, Insup Lee
We propose the new method iDECODe, leveraging in-distribution equivariance for conformal OOD detection.
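The detector itself is more involved, but the two ingredients the sentence names can be sketched generically; the spread-based equivariance score and function names below are assumptions, not iDECODe's exact definitions:

```python
import numpy as np

def equivariance_score(predict, x, transforms):
    """Spread of predictions across transformed copies of x; in-distribution
    inputs should yield consistent (low-spread) predictions."""
    preds = np.array([predict(t(x)) for t in transforms])
    return float(preds.std())

def conformal_pvalue(score, cal_scores):
    """Smoothed conformal p-value: how extreme is this score relative to
    scores computed on held-out in-distribution data?"""
    cal = np.asarray(cal_scores)
    return (np.sum(cal >= score) + 1) / (len(cal) + 1)
```

A small p-value then flags the input as OOD at the corresponding significance level.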
1 code implementation • 3 Nov 2021 • Ivan Ruchkin, Matthew Cleaveland, Radoslav Ivanov, Pengyuan Lu, Taylor Carpenter, Oleg Sokolsky, Insup Lee
To predict safety violations in a verified system, we propose a three-step confidence composition (CoCo) framework for monitoring verification assumptions.
no code implementations • 29 Sep 2021 • Sooyong Jang, Sangdon Park, Insup Lee, Osbert Bastani
This problem can naturally be solved using a two-sample test, i.e., testing whether the current test distribution of covariates equals the training distribution of covariates.
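A minimal version of such a check, as a generic permutation test on a 1-D summary statistic (not the paper's detector):

```python
import numpy as np

def permutation_test(x, y, n_perm=500, seed=0):
    """Permutation two-sample test on the difference of means: small p-values
    suggest the two samples come from different distributions."""
    rng = np.random.default_rng(seed)
    observed = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        stat = abs(pooled[:len(x)].mean() - pooled[len(x):].mean())
        hits += stat >= observed
    return (hits + 1) / (n_perm + 1)
```

In practice the covariates are high-dimensional, so a real detector would test a learned or projected statistic rather than the raw mean.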
no code implementations • 29 Sep 2021 • Pengyuan Lu, Seungwon Lee, Amanda Watson, David Kent, Insup Lee, Eric Eaton, James Weimer
This tool achieves similar performance, in terms of per-task accuracy and resistance to catastrophic forgetting, as compared to fully labeled data.
no code implementations • 13 Aug 2021 • Ramneet Kaur, Susmit Jha, Anirban Roy, Sangdon Park, Oleg Sokolsky, Insup Lee
We demonstrate the difference in the detection ability of these techniques and propose an ensemble approach for detection of OODs as datapoints with high uncertainty (epistemic or aleatoric).
1 code implementation • ICLR 2022 • Sangdon Park, Edgar Dobriban, Insup Lee, Osbert Bastani
Our approach focuses on the setting where there is a covariate shift from the source distribution (where we have labeled training examples) to the target distribution (for which we want to quantify uncertainty).
1 code implementation • 3 Jun 2021 • Kaustubh Sridhar, Oleg Sokolsky, Insup Lee, James Weimer
Improving adversarial robustness of neural networks remains a major challenge.
no code implementations • 30 Apr 2021 • Taylor J. Carpenter, Radoslav Ivanov, Insup Lee, James Weimer
This paper presents ModelGuard, a sampling-based approach to runtime model validation for Lipschitz-continuous models.
no code implementations • 23 Mar 2021 • Ramneet Kaur, Susmit Jha, Anirban Roy, Oleg Sokolsky, Insup Lee
Deep neural networks (DNNs) are known to produce incorrect predictions with very high confidence on out-of-distribution (OOD) inputs.
no code implementations • 25 Feb 2021 • Sooyong Jang, Radoslav Ivanov, Insup Lee, James Weimer
As machine learning techniques become widely adopted in new domains, especially in safety-critical systems such as autonomous vehicles, it is crucial to provide accurate output uncertainty estimation.
no code implementations • 9 Nov 2020 • Sooyong Jang, Insup Lee, James Weimer
Providing reliable model uncertainty estimates is imperative to enabling robust decision making by autonomous agents and humans alike.
no code implementations • ICLR 2021 • Sangdon Park, Shuo Li, Insup Lee, Osbert Bastani
In our experiments, we demonstrate that our approach can be used to provide guarantees for state-of-the-art DNNs.
1 code implementation • 12 Oct 2020 • Min Du, Nesime Tatbul, Brian Rivers, Akhilesh Kumar Gupta, Lucas Hu, Wei Wang, Ryan Marcus, Shengtian Zhou, Insup Lee, Justin Gottschlich
Class distribution skews in imbalanced datasets may lead to models with prediction bias towards majority classes, making fair assessment of classifiers a challenging task.
no code implementations • 29 Feb 2020 • Sangdon Park, Osbert Bastani, James Weimer, Insup Lee
Our algorithm uses importance weighting to correct for the shift from the training to the real-world distribution.
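Importance weighting in its simplest form rescales each training-sample loss by a density ratio; a generic sketch (the self-normalized estimator here is a standard choice, assumed rather than taken from the paper):

```python
import numpy as np

def importance_weighted_risk(losses, density_ratios):
    """Self-normalized estimate of the target-distribution risk, where
    density_ratios[i] approximates p_target(x_i) / p_train(x_i)."""
    w = np.asarray(density_ratios, float)
    return float((w * np.asarray(losses, float)).sum() / w.sum())
```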
no code implementations • 23 Feb 2020 • Yiannis Kantaros, Taylor Carpenter, Kaustubh Sridhar, Yahan Yang, Insup Lee, James Weimer
To highlight this, we demonstrate the efficiency of the proposed detector on ImageNet, a task that is computationally challenging for the majority of relevant defenses, and on physically attacked traffic signs that may be encountered in real-time autonomy applications.
1 code implementation • ICLR 2020 • Sangdon Park, Osbert Bastani, Nikolai Matni, Insup Lee
We propose an algorithm combining calibrated prediction and generalization bounds from learning theory to construct confidence sets for deep neural networks with PAC guarantees, i.e., the confidence set for a given input contains the true label with high probability.
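One way to realize this kind of construction, sketched under assumptions: threshold calibrated probabilities so that held-out miscoverage plus a Hoeffding slack stays below the target error. The threshold rule and names are illustrative, not the paper's exact algorithm:

```python
import math
import numpy as np

def pac_threshold(true_label_probs, eps=0.1, delta=0.05):
    """Illustrative sketch: the largest threshold t such that held-out
    miscoverage plus a Hoeffding correction is at most eps; the set
    {y : p(y|x) >= t} then covers the true label with high probability."""
    p = np.asarray(true_label_probs, float)
    slack = math.sqrt(math.log(1 / delta) / (2 * len(p)))
    for t in np.sort(p)[::-1]:        # scan thresholds from largest down
        if np.mean(p < t) + slack <= eps:
            return float(t)
    return 0.0
```

Since miscoverage is nondecreasing in the threshold, the first candidate that passes while scanning downward is the largest valid one.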
1 code implementation • 11 Sep 2019 • Mohammadhosein Hasanbeig, Yiannis Kantaros, Alessandro Abate, Daniel Kroening, George J. Pappas, Insup Lee
Reinforcement Learning (RL) has emerged as an efficient method of choice for solving complex sequential decision making problems in automatic control, computer science, economics, and biology.
1 code implementation • 5 Nov 2018 • Radoslav Ivanov, James Weimer, Rajeev Alur, George J. Pappas, Insup Lee
This paper presents Verisig, a hybrid system approach to verifying safety properties of closed-loop systems using neural networks as controllers.
no code implementations • 10 Aug 2017 • Sangdon Park, James Weimer, Insup Lee
Specifically, a generic metric is proposed that is tailored to measure resilience of classification algorithms with respect to worst-case tampering of the training data.