no code implementations • 18 Mar 2024 • Baoyu Jing, Dawei Zhou, Kan Ren, Carl Yang
Based on the results of the frontdoor adjustment, we introduce a novel Causality-Aware SPatiotEmpoRal graph neural network (CASPER), which contains a Spatiotemporal Causal Attention (SCA) module and a Prompt-Based Decoder (PBD).
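For context, the frontdoor adjustment referenced here is the standard identity from causal inference: when a mediator $M$ fully transmits the effect of $X$ on $Y$, the interventional distribution can be computed from observational quantities alone. (This is the textbook formula; how CASPER instantiates it is described in the paper itself.)

```latex
P\bigl(Y \mid do(X{=}x)\bigr)
  \;=\; \sum_{m} P(M{=}m \mid X{=}x)
        \sum_{x'} P\bigl(Y \mid M{=}m,\, X{=}x'\bigr)\, P(X{=}x')
```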
no code implementations • 26 Jan 2024 • Nuoyan Zhou, Dawei Zhou, Decheng Liu, Xinbo Gao, Nannan Wang
Deep neural networks are vulnerable to adversarial samples.
1 code implementation • 5 Oct 2023 • Nuoyan Zhou, Nannan Wang, Decheng Liu, Dawei Zhou, Xinbo Gao
Deep neural networks are vulnerable to adversarial noise.
Ranked #1 on Adversarial Attack on CIFAR-10 (Attack: AutoAttack metric)
no code implementations • 14 Sep 2023 • Liangchen Liu, Nannan Wang, Dawei Zhou, Xinbo Gao, Decheng Liu, Xi Yang, Tongliang Liu
This paper targets a novel trade-off problem in generalizable prompt learning for vision-language models (VLMs), i.e., improving the performance on unseen classes while maintaining the performance on seen classes.
1 code implementation • 19 Jul 2023 • Longfeng Wu, Bowen Lei, Dongkuan Xu, Dawei Zhou
In particular, to quantify the uncertainties in RCA, we develop a node-level uncertainty quantification algorithm to model the overlapping support regions with high uncertainty; to handle the rarity of minority classes when measuring miscalibration, we generalize the distribution-based calibration metric to the instance level and propose the first individual calibration measurement on graphs, named Expected Individual Calibration Error (EICE).
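To make the distinction concrete, here is a minimal sketch contrasting the standard binned Expected Calibration Error with a per-instance analogue. This is an illustrative assumption about what an instance-level generalization could look like, not the paper's actual EICE definition (which operates on graphs).

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    """Standard binned ECE: weighted average over bins of the
    absolute gap between mean accuracy and mean confidence."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

def individual_calibration_error(conf, correct):
    """Hypothetical instance-level analogue: the per-sample gap
    between predicted confidence and actual correctness, so that
    rare minority-class nodes are not averaged away inside bins."""
    return np.abs(correct.astype(float) - conf)
```

The motivation in the abstract is visible here: binning averages errors within a bin, so a handful of badly calibrated minority-class nodes can vanish inside a well-calibrated majority bin, whereas the instance-level score preserves them.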
1 code implementation • 17 Jul 2023 • Haohui Wang, Weijie Guan, Jianpeng Chen, Zi Wang, Dawei Zhou
To achieve this, we develop the most comprehensive (to the best of our knowledge) long-tailed learning benchmark named HeroLT, which integrates 13 state-of-the-art algorithms and 6 evaluation metrics on 14 real-world benchmark datasets across 4 tasks from 3 domains.
no code implementations • 25 Jun 2023 • Shuaicheng Zhang, Haohui Wang, Si Zhang, Dawei Zhou
While graph heterophily has been extensively studied in recent years, a fundamental research question largely remains nascent: How and to what extent will graph heterophily affect the prediction performance of graph neural networks (GNNs)?
no code implementations • 17 May 2023 • Haohui Wang, Baoyu Jing, Kaize Ding, Yada Zhu, Liqing Zhang, Dawei Zhou
However, there is limited literature that provides a theoretical tool to characterize the behaviors of long-tail categories on graphs and understand the generalization performance in real scenarios.
1 code implementation • 1 May 2023 • Yue Wu, Shuaicheng Zhang, Wenchao Yu, Yanchi Liu, Quanquan Gu, Dawei Zhou, Haifeng Chen, Wei Cheng
The recent trend towards Personalized Federated Learning (PFL) has garnered significant attention as it allows for the training of models that are tailored to each client while maintaining data privacy.
no code implementations • 1 May 2023 • Haohui Wang, Yuzhen Mao, Jianhui Sun, Si Zhang, Yonghui Fan, Dawei Zhou
Transferring knowledge across graphs plays a pivotal role in many high-stakes domains, ranging from transportation networks to e-commerce networks, from neuroscience to finance.
no code implementations • 30 Mar 2023 • Lecheng Zheng, Dawei Zhou, Hanghang Tong, Jiejun Xu, Yada Zhu, Jingrui He
In addition, we propose a generic context sampling strategy for graph generative models, which is proven to be capable of fairly capturing the contextual information of each group with a high probability.
1 code implementation • 9 Dec 2022 • Longfeng Wu, Yao Zhou, Dawei Zhou
Finally, we further propose a hybrid network that is jointly optimized for learning a more generic product representation.
1 code implementation • 9 Dec 2022 • Yuzhen Mao, Jianhui Sun, Dawei Zhou
Given a resource-rich source graph and a resource-scarce target graph, how can we effectively transfer knowledge across graphs and ensure a good generalization performance?
1 code implementation • 19 Oct 2022 • Linfeng Liu, Xu Han, Dawei Zhou, Li-Ping Liu
In this work, we convert graph pruning to a problem of node relabeling and then relax it to a differentiable problem.
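A common way to make such a discrete keep/drop decision differentiable is to relax binary node labels into soft scores and reweight edges accordingly. The sketch below is an illustrative relaxation under that assumption; the paper's exact parameterization and relaxation may differ.

```python
import numpy as np

def soft_prune(adj, logits):
    """Relax binary node keep/drop labels to soft scores in (0, 1)
    via a sigmoid, then scale each edge weight by the product of its
    endpoints' keep-probabilities. As logits -> +/- infinity, this
    recovers hard pruning; in between, it is differentiable in the
    logits and can be trained with gradient descent."""
    keep = 1.0 / (1.0 + np.exp(-logits))   # soft node relabeling
    return adj * np.outer(keep, keep)      # softly pruned adjacency
```

In a training loop, the `logits` would be learnable parameters optimized jointly with the downstream GNN objective, with the hard pruned graph recovered by thresholding at the end.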
no code implementations • 4 Oct 2022 • Chaojian Yu, Dawei Zhou, Li Shen, Jun Yu, Bo Han, Mingming Gong, Nannan Wang, Tongliang Liu
First, applying a pre-specified perturbation budget to networks of various model capacities yields divergent degrees of disparity between natural and robust accuracies, which deviates from the desideratum of a robust network.
1 code implementation • ICCV 2023 • Zhigang Su, Dawei Zhou, Nannan Wang, Decheng Liu, Zhen Wang, Xinbo Gao
Growing leakage and misuse of visual information raise security and privacy concerns, which has promoted the development of information protection techniques.
1 code implementation • 21 Aug 2022 • Dawei Zhou, Lecheng Zheng, Dongqi Fu, Jiawei Han, Jingrui He
To comprehend heterogeneous graph signals at different granularities, we propose a curriculum learning paradigm that automatically re-weights graph signals in order to ensure good generalization in the target domain.
1 code implementation • 25 Jul 2022 • Dawei Zhou, Nannan Wang, Xinbo Gao, Bo Han, Xiaoyu Wang, Yibing Zhan, Tongliang Liu
To alleviate this negative effect, in this paper we investigate the dependence between the outputs of the target model and input adversarial samples from the perspective of information theory, and propose an adversarial defense method.
no code implementations • 29 Sep 2021 • Dawei Zhou, Nannan Wang, Bo Han, Tongliang Liu
Deep neural networks have been demonstrated to be vulnerable to adversarial noise, prompting the development of defenses against adversarial attacks.
1 code implementation • 21 Sep 2021 • Dawei Zhou, Nannan Wang, Bo Han, Tongliang Liu
Deep neural networks have been demonstrated to be vulnerable to adversarial noise, prompting the development of defenses against adversarial attacks.
no code implementations • 10 Jun 2021 • Dawei Zhou, Nannan Wang, Xinbo Gao, Bo Han, Jun Yu, Xiaoyu Wang, Tongliang Liu
However, pre-processing methods may suffer from the robustness degradation effect, in which the defense reduces rather than improves the adversarial robustness of a target model in a white-box setting.
no code implementations • 9 Jun 2021 • Dawei Zhou, Tongliang Liu, Bo Han, Nannan Wang, Chunlei Peng, Xinbo Gao
However, given the continuously evolving attacks, models trained on seen types of adversarial examples generally cannot generalize well to unseen types of adversarial examples.
no code implementations • ICCV 2021 • Dawei Zhou, Nannan Wang, Chunlei Peng, Xinbo Gao, Xiaoyu Wang, Jun Yu, Tongliang Liu
Then, we train a denoising model to minimize the distances between the adversarial examples and the natural examples in the class activation feature space.
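The objective described here can be sketched as a simple distance loss in a feature space. The snippet below is a minimal illustration of that idea using a mean-squared distance; the paper's actual feature extractor (class activation features) and choice of distance may differ.

```python
import numpy as np

def feature_space_loss(feat_denoised, feat_natural):
    """Illustrative training objective for the denoiser: mean squared
    distance between features of denoised adversarial examples and
    features of the corresponding natural examples. Minimizing this
    pulls denoised inputs toward their clean counterparts in the
    feature space rather than in raw pixel space."""
    return float(np.mean((feat_denoised - feat_natural) ** 2))
```

In practice this loss would be computed on features extracted from the target classifier and backpropagated into the denoising model's parameters.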
no code implementations • 1 Jan 2021 • Dawei Zhou, Tongliang Liu, Bo Han, Nannan Wang, Xinbo Gao
Motivated by this observation, we propose a defense framework ADD-Defense, which extracts the invariant information called \textit{perturbation-invariant representation} (PIR) to defend against widespread adversarial examples.