no code implementations • 17 Feb 2024 • Wendi Cui, Jiaxin Zhang, Zhuohang Li, Hao Sun, Damien Lopez, Kamalika Das, Bradley Malin, Sricharan Kumar
Crafting an ideal prompt for Large Language Models (LLMs) is a challenging task that demands significant resources and expert human input.
no code implementations • 14 Feb 2024 • Andrew Lowy, Zhuohang Li, Jing Liu, Toshiaki Koike-Akino, Kieran Parsons, Ye Wang
In practical applications, such a worst-case guarantee may be overkill: realistic attackers may lack exact knowledge of (nearly all of) the private data, and our data set may be easier to defend, in some sense, than the worst-case data set.
no code implementations • 6 Jan 2024 • Yang Sui, Zhuohang Li, Ding Ding, Xiang Pan, Xiaozhong Xu, Shan Liu, Zhenzhong Chen
Adversarial attacks can readily disrupt image classification systems, revealing the vulnerability of DNN-based recognition tasks.
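As a general illustration of the kind of attack referred to here (not this paper's specific method), the classic Fast Gradient Sign Method perturbs an input by a small step in the direction of the loss gradient's sign. The toy logistic-regression model below is a hypothetical stand-in for a real classifier:

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    # Fast Gradient Sign Method: step along the sign of the loss
    # gradient, bounded by epsilon in the L-infinity norm.
    return x + epsilon * np.sign(grad)

def input_gradient(w, x, y):
    # Gradient of the cross-entropy loss w.r.t. the input for a
    # toy logistic-regression classifier: (sigmoid(w @ x) - y) * w.
    p = 1.0 / (1.0 + np.exp(-w @ x))
    return (p - y) * w

w = np.array([1.0, -2.0, 0.5])   # fixed model weights (illustrative)
x = np.array([0.2, 0.1, -0.3])   # clean input
g = input_gradient(w, x, y=1.0)
x_adv = fgsm_perturb(x, g, epsilon=0.1)
```

Each coordinate of `x_adv` differs from `x` by at most `epsilon`, which is why such perturbations can stay visually unnoticeable while still flipping a model's prediction.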
1 code implementation • 4 Jan 2024 • Wendi Cui, Jiaxin Zhang, Zhuohang Li, Damien Lopez, Kamalika Das, Bradley Malin, Sricharan Kumar
Evaluating the quality and variability of text generated by Large Language Models (LLMs) poses a significant, yet unresolved research challenge.
1 code implementation • 3 Nov 2023 • Jiaxin Zhang, Zhuohang Li, Kamalika Das, Bradley A. Malin, Sricharan Kumar
Hallucination detection is a critical step toward understanding the trustworthiness of modern language models (LMs).
no code implementations • 21 Aug 2023 • Zhuohang Li, Chao Yan, Xinmeng Zhang, Gharib Gharibi, Zhijun Yin, Xiaoqian Jiang, Bradley A. Malin
Deep learning continues to rapidly evolve and is now demonstrating remarkable potential for numerous medical prediction tasks.
no code implementations • 1 Jun 2023 • Yang Sui, Zhuohang Li, Ding Ding, Xiang Pan, Xiaozhong Xu, Shan Liu, Zhenzhong Chen
Learned Image Compression (LIC) has recently become a popular technique for image transmission due to its notable performance.
no code implementations • 11 Apr 2023 • Yue Cui, Syed Irfan Ali Meerza, Zhuohang Li, Luyang Liu, Jiaxin Zhang, Jian Liu
In this paper, we seek to reconcile utility and privacy in FL by proposing a user-configurable privacy defense, RecUP-FL, that can better focus on the user-specified sensitive attributes while obtaining significant improvements in utility over traditional defenses.
no code implementations • 21 Feb 2023 • Zhuohang Li, Jiaxin Zhang, Jian Liu
Distributed machine learning paradigms, such as federated learning, have been recently adopted in many privacy-critical applications for speech analysis.
1 code implementation • 22 Aug 2022 • Huy Phan, Cong Shi, Yi Xie, Tianfang Zhang, Zhuohang Li, Tianming Zhao, Jian Liu, Yan Wang, Yingying Chen, Bo Yuan
Recently, backdoor attacks have become an emerging threat to the security of deep neural network (DNN) models.
1 code implementation • CVPR 2022 • Zhuohang Li, Jiaxin Zhang, Luyang Liu, Jian Liu
The Federated Learning (FL) framework brings privacy benefits to distributed learning systems by allowing multiple clients to participate in a learning task under the coordination of a central server without exchanging their private data.
no code implementations • 3 Jul 2021 • Zhuohang Li, Luyang Liu, Jiaxin Zhang, Jian Liu
Federated Learning (FL) enables multiple distributed clients (e.g., mobile devices) to collaboratively train a centralized model while keeping the training data local to each client.
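The core aggregation step of standard federated learning (FedAvg, not specific to this paper) can be sketched as a dataset-size-weighted average of client model parameters; only the parameters, never the raw data, reach the server:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    # FedAvg aggregation: weighted average of client model parameters,
    # proportional to each client's local dataset size. Raw training
    # data never leaves the clients; only parameters are shared.
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two hypothetical clients with model parameters and local dataset sizes.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
sizes = [10, 30]
global_w = fedavg(clients, sizes)  # client 2 contributes 3x the weight
```

Note that the shared parameter updates themselves can still leak information about local data, which is the attack surface that gradient-inversion work in this listing targets.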
no code implementations • 26 Apr 2020 • Yi Xie, Zhuohang Li, Cong Shi, Jian Liu, Yingying Chen, Bo Yuan
These idealized assumptions, however, make existing audio adversarial attacks largely impossible to launch in a timely fashion in practice (e.g., playing unnoticeable adversarial perturbations along with the user's streaming input).
no code implementations • 4 Mar 2020 • Yi Xie, Cong Shi, Zhuohang Li, Jian Liu, Yingying Chen, Bo Yuan
As the popularity of voice user interfaces (VUIs) has exploded in recent years, speaker recognition systems have emerged as an important means of identifying speakers in many security-critical applications and services.