2 code implementations • 3 Mar 2024 • Shangquan Sun, Wenqi Ren, Jingzhi Li, Rui Wang, Xiaochun Cao
Knowledge distillation involves transferring soft labels from a teacher to a student using a shared temperature-based softmax function.
Ranked #1 on Knowledge Distillation on CIFAR-100
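The shared temperature-based softmax referred to above is the classic Hinton-style distillation setup. A minimal pure-Python sketch of that baseline loss (the standard formulation, not this paper's contribution) looks like:

```python
import math

def softmax(logits, T):
    # Temperature-scaled softmax: a higher T yields a softer distribution.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions.

    Both teacher and student share the same temperature T; the loss is
    scaled by T^2 to keep gradient magnitudes comparable across T.
    """
    p = softmax(teacher_logits, T)  # soft teacher labels
    q = softmax(student_logits, T)  # soft student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return kl * T * T
```

When student and teacher logits match, the loss is zero; any mismatch in the softened distributions yields a positive penalty.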
1 code implementation • 14 Feb 2024 • Ruoyu Chen, Hua Zhang, Siyuan Liang, Jingzhi Li, Xiaochun Cao
For incorrectly predicted samples, our method achieves gains of 81.0% and 18.4% over the HSIC-Attribution algorithm in average highest confidence and Insertion score, respectively.
no code implementations • IEEE Transactions on Information Forensics and Security 2023 • Jingzhi Li, Hua Zhang, Siyuan Liang, Pengwen Dai, Xiaochun Cao
Within this module, we introduce a pixel importance estimation model based on the Shapley value to obtain a pixel-level attribution map; each pixel's attribution is then aggregated into semantic facial parts, which quantify the importance of the different facial regions.
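The Shapley-value idea behind this attribution map can be illustrated with a tiny exact computation. The `value_fn` and the pixel-to-part mapping below are hypothetical placeholders, not the paper's actual estimation model:

```python
import itertools

def shapley_values(players, value_fn):
    """Exact Shapley values: average each player's marginal contribution
    over all orderings of the player set (feasible only for tiny sets)."""
    perms = list(itertools.permutations(players))
    phi = {p: 0.0 for p in players}
    for order in perms:
        coalition = set()
        for p in order:
            before = value_fn(frozenset(coalition))
            coalition.add(p)
            # Marginal contribution of p given the players already added.
            phi[p] += value_fn(frozenset(coalition)) - before
    return {p: v / len(perms) for p, v in phi.items()}

def aggregate_by_part(pixel_phi, part_of):
    """Sum pixel-level attributions into semantic parts (e.g. eyes, nose)."""
    parts = {}
    for px, v in pixel_phi.items():
        parts[part_of[px]] = parts.get(part_of[px], 0.0) + v
    return parts
```

With a value function equal to coalition size, every player's Shapley value is exactly 1, which is a handy sanity check for the implementation.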
no code implementations • CVPR 2023 • Jingzhi Li, Zidong Guo, Hui Li, Seungju Han, Ji-won Baek, Min Yang, Ran Yang, Sungjoo Suh
By constraining the teacher's search space with reverse distillation, we narrow the intrinsic gap and unleash the potential of feature-only distillation.
2 code implementations • 28 Oct 2022 • Binyi Su, Hua Zhang, Jingzhi Li, Zhong Zhou
In this paper, we address generalized few-shot open-set object detection (G-FOOD), which aims to avoid detecting unknown classes as known classes with high confidence while maintaining few-shot detection performance.
1 code implementation • 20 Sep 2022 • Jiawei Liang, Siyuan Liang, Aishan Liu, Ke Ma, Jingzhi Li, Xiaochun Cao
Specifically, we propose a sample-specific data augmentation to transfer the teacher model's ability in capturing distinct frequency components and suggest an adversarial feature augmentation to extract the teacher model's perceptions of non-robust features in the data.
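An augmentation built on distinct frequency components, in the spirit of the abstract above, can be sketched with a low-pass FFT filter. The filter design and the `keep_ratio` parameter here are illustrative assumptions, not the authors' sample-specific method:

```python
import numpy as np

def frequency_augment(image, keep_ratio=0.5):
    """Illustrative frequency-based augmentation: keep only the low-frequency
    components of a 2-D image, discarding high-frequency detail.

    `keep_ratio` controls the half-width of the retained band around the
    spectrum's center (the DC component is always kept).
    """
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    ch, cw = h // 2, w // 2
    rh, rw = int(ch * keep_ratio), int(cw * keep_ratio)
    mask = np.zeros_like(f)
    mask[ch - rh:ch + rh + 1, cw - rw:cw + rw + 1] = 1  # low-pass window
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
```

A constant image passes through unchanged (all of its energy sits at the DC component, which the mask retains), while textured inputs lose their fine detail.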
no code implementations • 16 Sep 2022 • Siyuan Liang, Longkang Li, Yanbo Fan, Xiaojun Jia, Jingzhi Li, Baoyuan Wu, Xiaochun Cao
Recent studies have shown that detectors based on deep models are vulnerable to adversarial examples, even in the black-box scenario where the attacker cannot access the model information.
1 code implementation • ACM Transactions on Multimedia Computing, Communications and Applications 2022 • Ruoyu Chen, Jingzhi Li, Hua Zhang, Changchong Sheng, Li Liu, Xiaochun Cao
Unlike existing models, the interpretation method we propose in this paper explains image similarity models through salience maps and attribute words.
no code implementations • 6 Mar 2020 • Jiwei Jia, Jian Ding, Siyu Liu, Guidong Liao, Jingzhi Li, Ben Duan, Guoqing Wang, Ran Zhang
Home quarantine is the most important measure for preventing the spread of COVID-19.