no code implementations • 2 Jun 2023 • Leijie Wu, Song Guo, Junxiao Wang, Zicong Hong, Jie Zhang, Jingren Zhou
As Federated Learning (FL) has gained increasing attention, it has become widely acknowledged that applying stochastic gradient descent (SGD) directly to the overall framework when learning over a sequence of tasks results in the phenomenon known as "catastrophic forgetting".
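A toy illustration of this effect (not the paper's method; the tasks and data below are hypothetical): a single linear weight is fit with SGD on one task, then on a conflicting task, after which performance on the first task collapses.

```python
import numpy as np

def sgd_fit(w, xs, ys, lr=0.1, epochs=100):
    """Plain per-sample SGD on squared error (w*x - y)^2."""
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

def loss(w, xs, ys):
    return float(np.mean((w * xs - ys) ** 2))

xs = np.array([1.0, 2.0, 3.0])
task_a = xs           # task A: fit y = +x  (optimal w = +1)
task_b = -xs          # task B: fit y = -x  (optimal w = -1)

w = sgd_fit(0.0, xs, task_a)
loss_a_before = loss(w, xs, task_a)   # near zero after learning task A

w = sgd_fit(w, xs, task_b)            # continue SGD on task B only
loss_a_after = loss(w, xs, task_a)    # task A is "forgotten"
```

Because the two tasks pull the weight in opposite directions, plain SGD on the second task overwrites what was learned on the first, which is exactly the failure mode continual-learning methods try to prevent.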
no code implementations • 14 Mar 2023 • Yunfeng Fan, Wenchao Xu, Haozhao Wang, Jiaqi Zhu, Junxiao Wang, Song Guo
Unfortunately, OCI learning can suffer from catastrophic forgetting (CF), as the decision boundaries for old classes become inaccurate when perturbed by new ones.
no code implementations • 15 Nov 2022 • Jinyu Chen, Wenchao Xu, Song Guo, Junxiao Wang, Jie Zhang, Haozhao Wang
Federated Learning (FL) is an emerging paradigm that enables distributed users to collaboratively and iteratively train machine learning models without sharing their private data.
no code implementations • CVPR 2023 • Yunfeng Fan, Wenchao Xu, Haozhao Wang, Junxiao Wang, Song Guo
Multimodal learning (MML) aims to jointly exploit the common priors of different modalities to compensate for their inherent limitations.
no code implementations • 13 Nov 2022 • Leijie Wu, Song Guo, Yaohong Ding, Junxiao Wang, Wenchao Xu, Richard Yida Xu, Jie Zhang
In contrast, visual data exhibits a fundamentally different structure: its basic unit, the pixel, is a natural low-level representation with significant redundancy in its neighbourhood, which poses obvious challenges to the interpretability of the MSA mechanism in ViT.
no code implementations • 24 Aug 2022 • Tao Guo, Song Guo, Junxiao Wang, Wenchao Xu
Quick global aggregation of effective distributed parameters is crucial to federated learning (FL), which requires adequate bandwidth for parameter communication and sufficient user data for local training.
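As background, the global aggregation step in FL is commonly done FedAvg-style (a standard baseline, not necessarily the scheme proposed in this paper): the server averages client parameter vectors weighted by each client's local sample count.

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Weighted average of client parameter vectors by local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    weights = sizes / sizes.sum()              # per-client mixing weights
    stacked = np.stack(client_params)          # shape: (num_clients, dim)
    return (weights[:, None] * stacked).sum(axis=0)

# Two hypothetical clients; the second holds 3x more data.
params = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
global_params = fedavg(params, client_sizes=[1, 3])
print(global_params)   # [2.5 3.5]
```

Each aggregation round thus costs one full parameter upload per client, which is why communication bandwidth is a central constraint in FL.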
no code implementations • 15 Jun 2022 • Rui Zhang, Song Guo, Junxiao Wang, Xin Xie, DaCheng Tao
In particular, we identify several critical ingredients of iteration-based attacks, including data initialization, model training, and gradient matching.
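One known analytic special case of the gradient-matching ingredient (a sketch under simplifying assumptions, not the paper's attack): for a fully connected layer out = W @ x + b with any scalar loss L, the gradients factor as dL/dW = (dL/dout) xᵀ and dL/db = dL/dout, so a private input x can be recovered in closed form from shared gradients.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
b = rng.normal(size=4)
x_private = np.array([0.5, -1.0, 2.0])   # hypothetical client input

# Simulate the client's shared gradients for the loss L = sum(out),
# for which dL/dout is simply a vector of ones.
dL_dout = np.ones(4)
g_W = np.outer(dL_dout, x_private)       # dL/dW = (dL/dout) x^T
g_b = dL_dout                            # dL/db = dL/dout

# Server-side reconstruction: divide any row of g_W by the matching
# entry of g_b (any index i with g_b[i] != 0 works).
i = 0
x_recovered = g_W[i] / g_b[i]
print(np.allclose(x_recovered, x_private))   # True
```

Iteration-based attacks generalize this idea to deeper models by optimizing dummy inputs so their gradients match the observed ones, which is where initialization and the matching objective become critical.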
no code implementations • 15 May 2022 • Hai Yang, Yibin Liu, Junxiao Wang, Jun Yang
In this paper, an improved multi-step finite control set model predictive current control (FCS-MPCC) strategy with speed-loop disturbance compensation is proposed for permanent magnet synchronous machine (PMSM) drive systems.
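For readers unfamiliar with FCS-MPC, a minimal single-step sketch of the baseline idea (illustrative only; the paper proposes an improved multi-step variant with disturbance compensation, and all numbers here are hypothetical): a discrete-time RL load model predicts the next current for each admissible inverter voltage vector, and the vector minimizing the squared current-tracking error is applied.

```python
import numpy as np

R, L, Ts = 0.5, 1e-3, 1e-4    # resistance (ohm), inductance (H), sample time (s)
Vdc = 100.0                   # DC-link voltage (V)

# Finite control set: the 7 distinct voltage vectors of a two-level
# inverter in the alpha-beta frame (one zero vector, six active vectors).
angles = np.arange(6) * np.pi / 3
vectors = [np.zeros(2)] + [2/3 * Vdc * np.array([np.cos(a), np.sin(a)])
                           for a in angles]

def predict(i_now, v):
    """Forward-Euler discretization of the RL model L di/dt = v - R i."""
    return i_now + (Ts / L) * (v - R * i_now)

def fcs_mpc_step(i_now, i_ref):
    """Enumerate the finite set and return the index of the best vector."""
    costs = [np.sum((i_ref - predict(i_now, v)) ** 2) for v in vectors]
    return int(np.argmin(costs))

i_now = np.array([0.0, 0.0])
i_ref = np.array([5.0, 0.0])
best = fcs_mpc_step(i_now, i_ref)   # selects the active vector along +alpha
```

The multi-step variants studied in the paper extend this enumeration over a horizon of several sampling periods, trading computation for better tracking and switching behavior.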
no code implementations • 27 Feb 2022 • Tao Guo, Song Guo, Jiewei Zhang, Wenchao Xu, Junxiao Wang
Existing studies of machine unlearning mainly focus on sample-wise unlearning, such that a learned model will not expose a user's privacy at the sample level.
no code implementations • 22 Oct 2021 • Junxiao Wang, Song Guo, Xin Xie, Heng Qi
Evaluated on the CIFAR10 dataset, our method accelerates unlearning by 8.9x for the ResNet model and 7.9x for the VGG model with no degradation in accuracy, compared to retraining from scratch.