1 code implementation • 4 Dec 2023 • Qingsong Yao, Zecheng He, Yuexiang Li, Yi Lin, Kai Ma, Yefeng Zheng, S. Kevin Zhou
Interestingly, this vulnerability is a double-edged sword: it can also be exploited to hide AEs.
2 code implementations • CVPR 2023 • Junjiao Tian, Xiaoliang Dai, Chih-Yao Ma, Zecheng He, Yen-Cheng Liu, Zsolt Kira
To solve this problem, we propose the Trainable Projected Gradient Method (TPGM), which automatically learns the projection constraint imposed on each layer, providing fine-grained fine-tuning regularization.
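The core operation behind a projected-gradient fine-tuning method like TPGM is projecting updated weights back into a per-layer constraint set around the pre-trained weights. The sketch below is a minimal illustration under an assumed L2-ball constraint; the function name and radius values are hypothetical (in TPGM the per-layer radii are themselves learned, not hand-set).

```python
import numpy as np

def project_to_constraint(w, w0, eps):
    """Project fine-tuned weights w back into an L2 ball of radius eps
    around the pre-trained weights w0 (one constraint per layer).
    Illustrative sketch only; eps would be learned in TPGM."""
    delta = w - w0
    norm = np.linalg.norm(delta)
    if norm > eps:
        delta = delta * (eps / norm)  # rescale the update onto the ball surface
    return w0 + delta

# Toy per-layer example: w drifted too far from w0 during an unconstrained step.
w0 = np.array([1.0, 2.0, 3.0])       # pre-trained weights for one layer
w = np.array([1.5, 2.5, 4.0])        # weights after an unconstrained update
w_proj = project_to_constraint(w, w0, eps=0.5)
```

Projecting after each optimizer step keeps every layer within its learned distance of the pre-trained model, which is how such methods regularize fine-tuning without freezing layers outright.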
no code implementations • 22 Nov 2021 • Qingsong Yao, Zecheng He, S. Kevin Zhou
To the best of our knowledge, Medical Aegis is the first defense in the literature that successfully withstands strong adaptive adversarial-example attacks on medical images.
1 code implementation • 20 Aug 2021 • Zecheng He, Ruby B. Lee
Once an anomaly is detected, to reduce alert fatigue, CloudShield automatically distinguishes between benign programs, known attacks, and zero-day attacks, by examining the prediction error distributions.
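The triage step described above — classifying a detected anomaly by comparing its prediction-error distribution against stored references — can be sketched as follows. This is a toy stand-in, not CloudShield's actual algorithm: the z-score test, threshold, and all names here are assumptions for illustration.

```python
import numpy as np

def classify_by_error_distribution(errors, benign_ref, attack_refs, threshold=3.0):
    """Toy sketch: match the observed prediction errors against reference
    error distributions (benign programs, known attacks) via a z-score on
    the mean; anything matching neither is flagged as a zero-day."""
    def matches(ref):
        mu, sigma = np.mean(ref), np.std(ref) + 1e-9
        return abs(np.mean(errors) - mu) / sigma < threshold
    if matches(benign_ref):
        return "benign"
    for name, ref in attack_refs.items():
        if matches(ref):
            return name
    return "zero-day"

# Hypothetical reference distributions collected during profiling.
benign_ref = np.array([0.10, 0.12, 0.09, 0.11])
attack_refs = {"known-attack-A": np.array([1.00, 1.10, 0.90, 1.05])}
```

Only anomalies that match no stored distribution escalate to an analyst, which is how this kind of triage reduces alert fatigue.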
no code implementations • 11 Mar 2021 • Guangyuan Hu, Zecheng He, Ruby B. Lee
Impostors are attackers who take over a smartphone and gain access to the legitimate user's confidential and private information.
no code implementations • 22 Dec 2020 • Zecheng He, Srinivas Sunkara, Xiaoxue Zang, Ying Xu, Lijuan Liu, Nevan Wichers, Gabriel Schubiner, Ruby Lee, Jindong Chen, Blaise Agüera y Arcas
Our methodology is designed to leverage visual, linguistic and domain-specific features in user interaction traces to pre-train generic feature representations of UIs and their components.
1 code implementation • 17 Dec 2020 • Qingsong Yao, Zecheng He, Yi Lin, Kai Ma, Yefeng Zheng, S. Kevin Zhou
Deep neural networks (DNNs) for medical images are extremely vulnerable to adversarial examples (AEs), which poses security concerns for clinical decision-making.
1 code implementation • 10 Jul 2020 • Qingsong Yao, Zecheng He, Hu Han, S. Kevin Zhou
A comprehensive evaluation on a public dataset for cephalometric landmark detection demonstrates that the adversarial examples generated by ATI-FGSM compromise the CNN-based network more effectively and efficiently than the original Iterative FGSM attack.
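The Iterative FGSM baseline mentioned above repeatedly steps in the sign of the loss gradient and clips the perturbation to an L-infinity ball. The sketch below shows only that baseline with an assumed toy gradient function; ATI-FGSM's adaptive targeting on top of it is not reproduced here.

```python
import numpy as np

def iterative_fgsm(x, grad_fn, eps=0.03, alpha=0.01, steps=10):
    """Baseline Iterative FGSM: take sign-gradient steps of size alpha and
    clip the accumulated perturbation to an L-inf ball of radius eps."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))   # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)          # stay within eps
    return x_adv

# Toy stand-in for a model's loss gradient: a fixed weight vector.
w = np.array([1.0, -2.0, 0.5])
x = np.zeros(3)
x_adv = iterative_fgsm(x, lambda z: w)
```

With alpha * steps exceeding eps, the perturbation saturates the constraint in each coordinate, which is the typical regime for this attack.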
no code implementations • CVPR 2019 • Zecheng He, Tianwei Zhang, Ruby Lee
Numerous cloud-based services are provided to help customers develop and deploy deep learning applications.
no code implementations • 9 Aug 2018 • Zecheng He, Tianwei Zhang, Ruby B. Lee
Even small weight changes can be clearly reflected in the model outputs and observed by the customer.
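The idea that a customer can observe small weight tampering through outputs alone can be illustrated with a toy linear classifier and a probe input chosen near the decision boundary, where tiny weight changes flip the prediction. This is a hypothetical illustration of the general mechanism, not the paper's actual verification scheme.

```python
import numpy as np

def logit(w, x):
    """Toy linear model; the probe input below is crafted so its logit is
    barely positive, amplifying any small change to the weights."""
    return x @ w

w = np.array([1.0, -1.0])             # deployed model weights
x_probe = np.array([1.0, 0.999])      # logit = 0.001, just above the boundary

pred_before = logit(w, x_probe) > 0           # original prediction
w_tampered = w + np.array([0.0, -0.004])      # tiny weight perturbation
pred_after = logit(w_tampered, x_probe) > 0   # prediction flips
```

A customer holding a few such sensitive probe inputs can detect integrity violations from black-box query access, without ever seeing the weights.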
no code implementations • 5 Jul 2018 • Tianwei Zhang, Zecheng He, Ruby B. Lee
While outsourcing model training and serving tasks to the cloud is now prevalent, it is important to protect the privacy of sensitive samples in the training dataset and prevent information leakage to untrusted third parties.
no code implementations • 18 Jun 2018 • Zecheng He, Aswin Raghavan, Guangyuan Hu, Sek Chai, Ruby Lee
Specifically, we first train a temporal deep learning model, using only normal HPC readings from legitimate processes that run daily in these power-grid systems, to characterize the normal behavior of the power-grid controller.
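The detection pattern described above — fit a temporal predictor on normal readings only, then flag inputs whose prediction error is large — can be sketched with a simple autoregressive model standing in for the paper's deep temporal model. All names and the AR choice here are illustrative assumptions.

```python
import numpy as np

def fit_ar_model(series, order=3):
    """Fit a simple autoregressive predictor to normal readings via least
    squares -- a stand-in for the temporal deep model trained on normal
    HPC (hardware performance counter) traces."""
    X = np.array([series[i:i + order] for i in range(len(series) - order)])
    y = series[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def anomaly_scores(series, coef):
    """Per-step prediction error; large errors indicate behavior the
    model never saw during (normal-only) training."""
    order = len(coef)
    preds = np.array([series[i:i + order] @ coef
                      for i in range(len(series) - order)])
    return np.abs(preds - series[order:])

# Train on a perfectly regular "normal" trace, then score a trace with a spike.
coef = fit_ar_model(np.arange(30.0))
clean_scores = anomaly_scores(np.arange(15.0), coef)
spiked = np.arange(15.0)
spiked[10] += 50.0                     # injected anomalous reading
spiked_scores = anomaly_scores(spiked, coef)
```

Because the model is trained only on normal behavior, no attack labels are needed: anything it cannot predict well is flagged, which is what makes this approach suitable for zero-day detection in controllers.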