no code implementations • 26 Jan 2024 • Nuoyan Zhou, Dawei Zhou, Decheng Liu, Xinbo Gao, Nannan Wang
Deep neural networks are vulnerable to adversarial samples.
1 code implementation • 11 Jan 2024 • Chunlei Peng, Boyu Wang, Decheng Liu, Nannan Wang, Ruimin Hu, Xinbo Gao
To address this, we mask the clothing and color information in the personal attribute descriptions extracted by an attribute detection model.
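The masking step described above can be sketched minimally as follows. This is a hypothetical illustration, not the paper's implementation: the attribute names, the set of clothing/color keys, and the `[MASK]` token are all assumptions.

```python
# Hypothetical sketch of masking clothing/color attributes so that
# identity-relevant cues dominate matching. Key names and the mask
# token are illustrative, not the paper's actual vocabulary.

CLOTHING_COLOR_KEYS = {"upper_color", "lower_color", "clothing_type"}
MASK_TOKEN = "[MASK]"

def mask_clothing_attributes(attributes: dict) -> dict:
    """Replace clothing- and color-related attribute values with a mask token."""
    return {
        k: (MASK_TOKEN if k in CLOTHING_COLOR_KEYS else v)
        for k, v in attributes.items()
    }

# Example output of a hypothetical attribute detection model.
detected = {"gender": "male", "hair": "short",
            "upper_color": "red", "clothing_type": "jacket"}
masked = mask_clothing_attributes(detected)
```

After masking, only identity-stable attributes (e.g. gender, hair) remain informative, which is the intended effect for cloth-changing scenarios.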
1 code implementation • 18 Dec 2023 • Decheng Liu, Xijun Wang, Chunlei Peng, Nannan Wang, Ruimin Hu, Xinbo Gao
Adversarial attacks involve adding perturbations to the source image to cause misclassification by the target model, which demonstrates the potential of attacking face recognition models.
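The perturbation idea described above can be sketched with a minimal FGSM-style step. This is a generic illustration, not the paper's attack: the gradient here is supplied directly rather than computed from a face recognition model.

```python
import numpy as np

def fgsm_perturb(x: np.ndarray, grad: np.ndarray, eps: float) -> np.ndarray:
    """Add an eps-bounded perturbation along the sign of the loss gradient,
    then clip back into the valid pixel range [0, 1]."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# Toy source "image" and a toy loss gradient w.r.t. it (assumed values).
x = np.array([0.2, 0.5, 0.8])
grad = np.array([1.0, -2.0, 0.5])
x_adv = fgsm_perturb(x, grad, eps=0.1)
```

Each pixel moves by at most `eps`, so the adversarial image stays visually close to the source while shifting the target model's prediction.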
1 code implementation • 16 Dec 2023 • Decheng Liu, Xu Luo, Chunlei Peng, Nannan Wang, Ruimin Hu, Xinbo Gao
In this paper, we propose a novel Symmetrical Bidirectional Knowledge Alignment for zero-shot sketch-based image retrieval (SBKA).
2 code implementations • 7 Dec 2023 • Chunlei Peng, Huiqing Guo, Decheng Liu, Nannan Wang, Ruimin Hu, Xinbo Gao
Considering the complex quality distribution of both real and fake faces, we propose DeepFidelity, a novel Deepfake detection framework that adaptively distinguishes real and fake faces of varying image quality by mining the perceptual forgery fidelity of face images.
no code implementations • 13 Nov 2023 • Qinlin He, Chunlei Peng, Decheng Liu, Nannan Wang, Xinbo Gao
DeepFake detection is pivotal in personal privacy and public safety.
no code implementations • 5 Nov 2023 • Decheng Liu, Jiahao Yu, Ruimin Hu, Wenbin Feng
Building on the proposed identity model, we introduce a trustworthy identity tracing framework (TITF) with multi-attribute synergistic identification to determine the identity of unknown objects; it optimizes the core identification set and provides an interpretable identity tracing process.
1 code implementation • 5 Oct 2023 • Nuoyan Zhou, Nannan Wang, Decheng Liu, Dawei Zhou, Xinbo Gao
Deep neural networks are vulnerable to adversarial noise.
Ranked #1 on Adversarial Attack on CIFAR-10 (Attack: AutoAttack metric)
no code implementations • 14 Sep 2023 • Liangchen Liu, Nannan Wang, Dawei Zhou, Xinbo Gao, Decheng Liu, Xi Yang, Tongliang Liu
This paper targets a novel trade-off problem in generalizable prompt learning for vision-language models (VLMs), i.e., improving the performance on unseen classes while maintaining the performance on seen classes.
1 code implementation • 21 Jul 2023 • Decheng Liu, Tao Chen, Chunlei Peng, Nannan Wang, Ruimin Hu, Xinbo Gao
With the rapid development of deep image generation technology, visual forgery detection plays an increasingly important role in social and economic security.
no code implementations • 6 Jul 2023 • Ruiyang Xia, Decheng Liu, Jie Li, Lin Yuan, Nannan Wang, Xinbo Gao
Advanced manipulation techniques have provided criminals with opportunities to incite social panic or reap illicit profits through the generation of deceptive media, such as forged face images.
1 code implementation • 30 Dec 2022 • Decheng Liu, Zeyang Zheng, Chunlei Peng, Yukai Wang, Nannan Wang, Xinbo Gao
Face forgery detection plays an important role in personal privacy and social security.
1 code implementation • 18 Oct 2022 • Decheng Liu, Zhan Dang, Chunlei Peng, Yu Zheng, Shuang Li, Nannan Wang, Xinbo Gao
Experiments conducted on publicly available face forgery detection datasets demonstrate the superior performance of the proposed FedForgery.
1 code implementation • 12 Jul 2022 • Decheng Liu, Weijie He, Chunlei Peng, Nannan Wang, Jie Li, Xinbo Gao
A multi-branch transformer is employed to explore the inter-correlations between different attributes within similar semantic regions for attribute feature learning.
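The per-region branching described above can be sketched conceptually. This is not the paper's architecture: the region names, feature sizes, and random placeholder projections are all assumptions, and the shared attention over the resulting tokens is omitted.

```python
import numpy as np

# Conceptual sketch (assumed, not the paper's design): attributes in the
# same semantic region share a projection "branch", producing one token
# per region that a downstream transformer could attend over.

rng = np.random.default_rng(0)
REGIONS = {"head": 4, "upper_body": 4, "lower_body": 4}  # assumed feature dims
D_MODEL = 8

# One projection matrix per region (random placeholders standing in for
# learned weights).
branches = {r: rng.standard_normal((d, D_MODEL)) for r, d in REGIONS.items()}

def encode(features: dict) -> np.ndarray:
    """Project each region's features through its branch and stack the
    resulting tokens into a (num_regions, D_MODEL) array."""
    tokens = [features[r] @ branches[r] for r in REGIONS]
    return np.stack(tokens)

feats = {r: rng.standard_normal(d) for r, d in REGIONS.items()}
tokens = encode(feats)
```

Grouping attributes by semantic region before a shared encoder lets related attributes (e.g. those on the upper body) interact within one token while still being jointly modeled downstream.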
no code implementations • 5 Jul 2022 • Yukai Wang, Chunlei Peng, Decheng Liu, Nannan Wang, Xinbo Gao
In recent years, with the rapid development of face editing and generation, a growing number of fake videos have circulated on social media, raising serious public concern.