no code implementations • 29 Feb 2024 • Chester Holtz, Yucheng Wang, Chung-Kuan Cheng, Bill Lin
Namely, we show that when a small number of cells (e.g., 1%-5% of cells) have their positions shifted such that a measure of global congestion is guaranteed to remain unaffected, the predicted congestion can nevertheless change dramatically (e.g., 1% of the design adversarially shifted by 0.001% of the layout space results in a predicted decrease in congestion of up to 90%, while no change in congestion is implied by the perturbation).
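To make this kind of sensitivity probe concrete, here is a minimal NumPy sketch. The `predict_congestion` function is a hypothetical stand-in (a fixed random linear map over a binned cell-density image), not the paper's trained model; the perturbation magnitudes follow the numbers quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for a trained ML congestion predictor (NOT the
# paper's model): a fixed random linear map over a binned density image.
W = rng.standard_normal((32, 32))

def predict_congestion(positions: np.ndarray) -> float:
    """Map (n_cells, 2) normalized positions to a scalar congestion score."""
    density, _, _ = np.histogram2d(positions[:, 0], positions[:, 1],
                                   bins=32, range=[[0, 1], [0, 1]])
    return float((W * density).sum())

positions = rng.random((100_000, 2))  # normalized layout coordinates

# Probe: shift 1% of cells by ~0.001% of the layout extent -- a
# displacement far too small to change true routed congestion.
idx = rng.choice(len(positions), size=len(positions) // 100, replace=False)
perturbed = positions.copy()
perturbed[idx] += 1e-5 * rng.standard_normal((len(idx), 2))

print("prediction shift:",
      predict_congestion(perturbed) - predict_congestion(positions))
```

An adversary would choose the shift directions to maximize the prediction change rather than sampling them at random, but the measurement itself looks like this.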
no code implementations • 31 Jul 2023 • Chester Holtz, PengWen Chen, Alexander Cloninger, Chung-Kuan Cheng, Gal Mishne
Motivated by the need to address the degeneracy of canonical Laplace learning algorithms at low label rates, we propose to reformulate graph-based semi-supervised learning as a nonconvex generalization of a Trust-Region Subproblem (TRS).
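For context, a sketch of the optimization problems involved, in assumed notation (L the graph Laplacian, u the label function being solved for, y the observed labels, r a trust-region radius); the paper's exact formulation may differ:

```latex
% Canonical Laplace learning: harmonic extension of the observed labels
\min_{u \in \mathbb{R}^{n}} \; u^{\top} L u
\quad \text{s.t.} \quad u_i = y_i \ \text{for labeled nodes } i

% A TRS-style generalization: an added norm constraint excludes the
% near-constant minimizers that cause degeneracy at low label rates
\min_{u \in \mathbb{R}^{n}} \; u^{\top} L u
\quad \text{s.t.} \quad u_i = y_i \ \text{for labeled nodes } i,
\qquad \|u\| = r
```

The norm constraint is what makes the problem a (nonconvex) trust-region-type subproblem rather than a convex quadratic program.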
no code implementations • 20 Oct 2022 • Chester Holtz, Tsui-Wei Weng, Gal Mishne
There has been great interest in enhancing the robustness of neural network classifiers against adversarial perturbations through adversarial training, while balancing the trade-off between robust accuracy and standard accuracy.
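For background, the trade-off being balanced is often written as a TRADES-style objective; below is a minimal PyTorch sketch of that standard formulation from the adversarial-training literature (not necessarily this paper's method):

```python
import torch
import torch.nn.functional as F

def trades_loss(model, x, y, beta=6.0, eps=8/255, alpha=2/255, steps=10):
    """Clean cross-entropy plus a beta-weighted robustness term (TRADES)."""
    logits = model(x)
    clean_loss = F.cross_entropy(logits, y)

    # Inner maximization: search the eps-ball around x for the point whose
    # prediction diverges most (in KL) from the clean prediction.
    x_adv = (x + 0.001 * torch.randn_like(x)).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        kl = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                      F.softmax(logits.detach(), dim=1),
                      reduction="batchmean")
        grad, = torch.autograd.grad(kl, x_adv)
        x_adv = (x_adv.detach() + alpha * grad.sign()).clamp(x - eps, x + eps)

    robust_term = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                           F.softmax(logits, dim=1), reduction="batchmean")
    # beta trades standard accuracy (first term) against robust accuracy.
    return clean_loss + beta * robust_term
```

Larger `beta` buys robust accuracy at the cost of standard accuracy; that knob is precisely the trade-off the abstract refers to.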
no code implementations • 4 Oct 2022 • Chester Holtz, Gal Mishne, Alexander Cloninger
Probabilistic generative models provide a flexible and systematic framework for learning the underlying geometry of data.
no code implementations • 23 Jan 2021 • Changhao Shi, Chester Holtz, Gal Mishne
To the best of our knowledge, our paper is the first to generalize the idea of using self-supervised signals to perform online test-time purification.
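To make the idea concrete, a minimal sketch of self-supervised test-time purification: gradient descent on a self-supervised loss over the input itself, before classification. The `self_supervised_loss` here is a generic stand-in (e.g., a reconstruction or rotation-prediction loss); the paper's specific signals may differ.

```python
import torch

def purify(x, self_supervised_loss, steps=5, lr=0.1):
    """Descend a self-supervised loss over the *input* so a (possibly
    adversarial) test point moves back toward the data manifold before
    classification. No labels are used -- it runs online at test time."""
    x_pur = x.clone().detach().requires_grad_(True)
    opt = torch.optim.SGD([x_pur], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = self_supervised_loss(x_pur)
        loss.backward()
        opt.step()
    return x_pur.detach()

# Usage: logits = classifier(purify(x_test, ssl_loss))
```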
no code implementations • 1 Jan 2021 • Chester Holtz, Changhao Shi, Gal Mishne
Recent work has demonstrated that neural networks are vulnerable to small, adversarial perturbations of their input.
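The canonical illustration of this vulnerability is the fast gradient sign method (FGSM); a minimal PyTorch version of that standard attack (background, not this paper's contribution):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8/255):
    """One-step attack: nudge x by eps in the sign of the loss gradient,
    often enough to flip an otherwise-correct prediction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```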
no code implementations • ICLR 2021 • Changhao Shi, Chester Holtz, Gal Mishne
Deep neural networks are known to be vulnerable to adversarial examples, where a perturbation in the input space leads to an amplified shift in the latent network representation.
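One way to quantify that amplification is the ratio of latent shift to input perturbation; a minimal sketch, assuming a hypothetical `encoder` that returns the network's latent representation:

```python
import torch

def latent_amplification(encoder, x, delta):
    """Ratio of latent-space shift to input-space perturbation, per sample:
    values much larger than 1 mean the network amplifies small changes."""
    with torch.no_grad():
        z, z_pert = encoder(x), encoder(x + delta)
    return (z_pert - z).flatten(1).norm(dim=1) / delta.flatten(1).norm(dim=1)
```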