no code implementations • 17 Aug 2023 • Jinyin Chen, Jie Ge, Shilian Zheng, Linhui Ye, Haibin Zheng, Weiguo Shen, Keqiang Yue, Xiaoniu Yang
It can also be found that the DeepReceiver is vulnerable to adversarial perturbations even with very low power and limited PAPR.
no code implementations • 18 Jul 2023 • Haibin Zheng, Jinyin Chen, Haibo Jin
Therefore, it is crucial to identify the misbehavior of DNN-based software and improve DNNs' quality.
no code implementations • 25 Mar 2023 • Ruoxi Chen, Haibo Jin, Jinyin Chen, Haibin Zheng
To address these issues, we introduce the concept of the local gradient, and reveal that adversarial examples have a much larger local gradient bound than benign ones.
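The excerpt above only sketches the idea of a "local gradient" bound. A minimal toy illustration, under the assumption that the local gradient of a point means the largest gradient norm observed in a small neighborhood (the function, points, and radius below are all hypothetical, not the paper's actual method):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_gradient_norm(f, x, radius=0.1, n_samples=64, seed=0):
    """Estimate the largest gradient norm of f over a small ball around x
    by sampling perturbed points and using central finite differences."""
    rng = np.random.default_rng(seed)
    eps = 1e-4
    best = 0.0
    for _ in range(n_samples):
        p = x + rng.uniform(-radius, radius, size=x.shape)
        g = np.zeros_like(p)
        for i in range(len(p)):
            d = np.zeros_like(p)
            d[i] = eps
            g[i] = (f(p + d) - f(p - d)) / (2 * eps)
        best = max(best, np.linalg.norm(g))
    return best

# Toy classifier: a logistic model whose gradient peaks at the boundary w.x = 0.
w = np.array([3.0, -2.0])
f = lambda x: sigmoid(w @ x)

benign = np.array([2.0, -2.0])     # far from the decision boundary
adv_like = np.array([0.02, 0.01])  # pushed close to the boundary
print(local_gradient_norm(f, benign), local_gradient_norm(f, adv_like))
```

For this toy model the near-boundary point has a far larger local gradient bound than the far-away one, which mirrors the detection signal the excerpt describes.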
1 code implementation • 22 Mar 2023 • Jinyin Chen, Haibin Zheng, Tao Liu, Rongchang Li, Yao Cheng, Xuhong Zhang, Shouling Ji
With the development of deep learning processors and accelerators, deep learning models have been widely deployed on edge devices as part of the Internet of Things.
no code implementations • 18 Mar 2023 • Jinyin Chen, Mingjun Li, Haibin Zheng
For the first time, we formalize the problem of copyright protection for FL and propose FedRight, which protects model copyright via model fingerprints, i.e., extracting model features by generating adversarial examples as fingerprints.
1 code implementation • 25 Oct 2022 • Haibin Zheng, Haiyang Xiong, Jinyin Chen, Haonan Ma, Guohan Huang
Most of the proposed studies launch the backdoor attack using a trigger that is either a randomly generated subgraph (e.g., an Erdős–Rényi backdoor), for a lower computational burden, or a gradient-based generated subgraph (e.g., the graph trojaning attack), for a more effective attack.
1 code implementation • 14 Aug 2022 • Haibin Zheng, Haiyang Xiong, Haonan Ma, Guohan Huang, Jinyin Chen
Consequently, the link prediction model trained on the backdoored dataset will predict any link carrying the trigger as the target state.
no code implementations • 17 Jun 2022 • Jinyin Chen, Chengyu Jia, Haibin Zheng, Ruoxi Chen, Chenbo Fu
The proliferation of fake news and its serious negative social influence have made fake news detection methods necessary tools for web managers.
1 code implementation • 11 Jun 2022 • Jinyin Chen, Mingjun Li, Tao Liu, Haibin Zheng, Yao Cheng, Changting Lin
To address these challenges, we reconsider the defense from a novel perspective: model weight evolving frequency. Empirically, we gain a novel insight that during FL training, the model weight evolving frequency of free-riders differs significantly from that of benign clients.
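The excerpt does not define how "weight evolving frequency" is measured; one plausible toy reading, used purely for illustration, is the fraction of rounds in which a weight's update flips sign. All data below is simulated and hypothetical:

```python
import numpy as np

def evolving_frequency(weight_history):
    """Fraction of round-to-round weight updates that flip sign,
    averaged over all weights. weight_history: (rounds, dim) array."""
    deltas = np.diff(weight_history, axis=0)       # per-round updates
    flips = np.diff(np.sign(deltas), axis=0) != 0  # sign changes between rounds
    return flips.mean()

rng = np.random.default_rng(0)
rounds, dim = 30, 50

# Benign client: weights drift steadily toward a target, so update signs stabilize.
target = rng.normal(size=dim)
benign = np.cumsum(0.1 * (target + 0.05 * rng.normal(size=(rounds, dim))), axis=0)

# Free-rider: fabricates fresh random weights each round instead of training.
free_rider = rng.normal(size=(rounds, dim))

print(evolving_frequency(benign), evolving_frequency(free_rider))
```

In this simulation the free-rider's updates flip sign far more often than the benign client's, matching the intuition of the excerpt: clients that actually train evolve their weights in a consistent direction.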
1 code implementation • 5 Apr 2022 • Jinyin Chen, Shulong Hu, Haibin Zheng, Changyou Xing, Guomin Zhang
Addressing these challenges, we introduce, for the first time, expert knowledge to guide the agent toward better decisions in RL-based PT, and propose GAIL-PT, a generic intelligent Penetration Testing framework based on Generative Adversarial Imitation Learning. It tackles both the high labor cost of involving security experts and the high-dimensional discrete action space.
1 code implementation • 12 Feb 2022 • Haibo Jin, Ruoxi Chen, Haibin Zheng, Jinyin Chen, Yao Cheng, Yue Yu, Xianglong Liu
By maximizing the number of neurons excited by various wrong model behaviors, DeepSensor can generate testing examples that effectively trigger more errors caused by adversarial inputs, polluted data, and incomplete training.
1 code implementation • 25 Dec 2021 • Haibin Zheng, Zhiqing Chen, Tianyu Du, Xuhong Zhang, Yao Cheng, Shouling Ji, Jingyi Wang, Yue Yu, Jinyin Chen
To overcome the challenges, we propose NeuronFair, a new DNN fairness testing framework that differs from previous work in several key aspects: (1) interpretable - it quantitatively interprets DNNs' fairness violations for the biased decision; (2) effective - it uses the interpretation results to guide the generation of more diverse instances in less time; (3) generic - it can handle both structured and unstructured data.
no code implementations • 24 Dec 2021 • Ruoxi Chen, Haibo Jin, Jinyin Chen, Haibin Zheng, Yue Yu, Shouling Ji
From the perspective of image feature space, some of them cannot achieve satisfactory results due to feature shift.
1 code implementation • 13 Oct 2021 • Jinyin Chen, Guohan Huang, Haibin Zheng, Shanqing Yu, Wenrong Jiang, Chen Cui
This is the first study of adversarial attacks on GVFL.
no code implementations • 8 Oct 2021 • Jinyin Chen, Haiyang Xiong, Haibin Zheng, Jian Zhang, Guodong Jiang, Yi Liu
Backdoor attacks induce DLP methods to make wrong predictions via malicious training data, i.e., by generating a subgraph sequence as the trigger and embedding it into the training data.
1 code implementation • 14 May 2021 • Jinyin Chen, Ruoxi Chen, Haibin Zheng, Zhaoyan Ming, Wenrong Jiang, Chen Cui
Motivated by the observation that adversarial examples arise from the non-robust features models learn from the original dataset, we propose the concepts of salient features (SF) and trivial features (TF).
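The excerpt does not say how SF and TF are separated; a hypothetical toy illustration, assuming features are ranked by attribution magnitude for a linear score and the top fraction is labeled salient (this partition rule is an assumption, not the paper's definition):

```python
import numpy as np

def partition_features(w, x, top_frac=0.4):
    """Toy partition: rank features by the magnitude of their contribution
    to a linear score w @ x; the top fraction is salient (SF), the rest
    trivial (TF). Returns (salient_indices, trivial_indices)."""
    attribution = np.abs(w * x)                # per-feature contribution
    k = max(1, int(top_frac * len(x)))
    order = np.argsort(attribution)[::-1]      # indices, most salient first
    return set(order[:k]), set(order[k:])

# Hypothetical weights and input: features 0 and 2 dominate the score.
w = np.array([5.0, 0.1, -4.0, 0.2, 0.05])
x = np.ones(5)
sf, tf = partition_features(w, x)
print(sf, tf)
```

Under this toy rule, the two high-magnitude features land in SF and the near-zero ones in TF, illustrating the salient/trivial split at the level of a single linear model.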
no code implementations • 6 Jan 2021 • Jinyin Chen, Longyuan Zhang, Haibin Zheng, Xueke Wang, Zhaoyan Ming
As existing studies mainly focus on attack success rate with patch-based samples, defense algorithms can easily detect these poisoning samples.
no code implementations • 18 Dec 2020 • Jinyin Chen, Zhen Wang, Haibin Zheng, Jun Xiao, Zhaoyan Ming
This work proposes a generic evaluation metric ROBY, a novel attack-independent robustness measure based on the model's decision boundaries.
no code implementations • 26 Feb 2020 • Jinyin Chen, Yixian Chen, Haibin Zheng, Shijing Shen, Shanqing Yu, Dan Zhang, Qi Xuan
Gradient-based adversarial attack methods can adequately find the perturbations, i.e., the combinations of rewired links, that reduce the effectiveness of deep-learning-based graph embedding algorithms, but they also easily fall into local optima.
Social and Information Networks
no code implementations • 1 May 2019 • Jinyin Chen, Mengmeng Su, Shijing Shen, Hui Xiong, Haibin Zheng
In this paper, comprehensive evaluation metrics are proposed for different adversarial attack methods.
no code implementations • 12 Apr 2019 • Jinyin Chen, Yangyang Wu, Lu Fan, Xiang Lin, Haibin Zheng, Shanqing Yu, Qi Xuan
In particular, we use a bipartite network to construct the user-item network, and represent the interactions among users (or items) by the corresponding one-mode projection network.
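The excerpt above describes building a user-item bipartite network and taking its one-mode projection. A minimal self-contained sketch of that construction, with entirely hypothetical users and items, assuming the common convention that two users are linked in the projection iff they interacted with at least one common item:

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical user-item interactions (the bipartite network).
interactions = {
    "u1": {"i1", "i2"},
    "u2": {"i2", "i3"},
    "u3": {"i4"},
}

def one_mode_projection(bipartite):
    """Project the user side of a bipartite network: two users are linked
    iff they share at least one item; the edge weight is the number of
    co-interacted items."""
    proj = defaultdict(int)
    for a, b in combinations(sorted(bipartite), 2):
        shared = len(bipartite[a] & bipartite[b])
        if shared:
            proj[(a, b)] = shared
    return dict(proj)

print(one_mode_projection(interactions))
# u1 and u2 share item i2, so they are linked; u3 shares nothing.
```

The item-side projection is symmetric: swap the roles by inverting the interaction dictionary.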
2 code implementations • 2020 • Jinyin Chen, Xuanheng Xu, Yangyang Wu, Haibin Zheng
To the best of our knowledge, this is the first time a GCN-embedded LSTM has been put forward for link prediction in dynamic networks.
Social and Information Networks Physics and Society
no code implementations • 1 Dec 2018 • Jinyin Chen, Haibin Zheng, Hui Xiong, Mengmeng Su
Inspired by the correlation between adversarial perturbations and object contours, slighter perturbations are produced by focusing on object-contour features; these are more imperceptible and harder to defend against, especially for network add-on defense methods that trade off perturbation filtering against contour-feature loss.
no code implementations • 2 Oct 2018 • Jinyin Chen, Ziqiang Shi, Yangyang Wu, Xuanheng Xu, Haibin Zheng
Deep neural networks have shown remarkable performance in solving computer vision and graph-related tasks, such as node classification and link prediction.
Physics and Society Social and Information Networks
no code implementations • 8 Sep 2018 • Jinyin Chen, Yangyang Wu, Xuanheng Xu, Yixian Chen, Haibin Zheng, Qi Xuan
Network embedding maps a network into a low-dimensional Euclidean space, and thus facilitates many network analysis tasks, such as node classification, link prediction, and community detection, by utilizing machine learning methods.
Physics and Society Social and Information Networks