no code implementations • 22 Apr 2024 • Jingwen Ye, Ruonan Yu, Songhua Liu, Xinchao Wang
To investigate the impact of changes in training data on a pre-trained model, a common approach is leave-one-out retraining.
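A minimal sketch of generic leave-one-out retraining, for illustration only: it is not the paper's method, and the model choice and the `loo_influence` helper are assumptions. The influence of one training point is estimated by retraining without it and comparing test losses.

```python
# Sketch of leave-one-out retraining for estimating a training point's influence.
# Generic illustration; model choice and helper name are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def loo_influence(X, y, x_test, y_test, idx):
    """Influence of training point `idx`, measured by retraining with it left out."""
    full = LogisticRegression(max_iter=1000).fit(X, y)
    mask = np.arange(len(X)) != idx
    loo = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])
    # Negative log-likelihood of the held-out test point under each model.
    loss_full = -np.log(full.predict_proba([x_test])[0, y_test])
    loss_loo = -np.log(loo.predict_proba([x_test])[0, y_test])
    return loss_loo - loss_full  # > 0: removing the point hurts the test point

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
print(loo_influence(X[:-1], y[:-1], X[-1], y[-1], idx=0))
```

Exact leave-one-out retraining requires one full retraining per training example, which is what motivates cheaper approximations of training-data influence.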
no code implementations • 20 Dec 2023 • Jingwen Ye, Ruonan Yu, Songhua Liu, Xinchao Wang
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
1 code implementation • 17 Jan 2023 • Ruonan Yu, Songhua Liu, Xinchao Wang
The recent success of deep learning is largely attributed to the sheer amount of data used to train deep neural networks. Despite this unprecedented success, the massive data significantly increases the burden of storage and transmission and makes the model training process cumbersome.
1 code implementation • 27 Jul 2022 • Donglin Xie, Ruonan Yu, Gongfan Fang, Jie Song, Zunlei Feng, Xinchao Wang, Li Sun, Mingli Song
The goal of FedSA is to train a student model for a new task with the help of several decentralized teachers, whose pre-training tasks and data are different from, and agnostic to, those of the student.
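A minimal sketch of the multi-teacher setting described above, assuming the teachers share an output space: the student matches the averaged soft predictions of several frozen teachers. This is a generic multi-teacher distillation step, not the FedSA selective-aggregation algorithm, and all names here are assumptions.

```python
# Generic multi-teacher distillation step; not the FedSA algorithm itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

def distill_step(student, teachers, x, optimizer, temperature=2.0):
    """One step: match the student's outputs to the frozen teachers' averaged soft predictions."""
    with torch.no_grad():
        teacher_probs = torch.stack(
            [F.softmax(t(x) / temperature, dim=-1) for t in teachers]
        ).mean(dim=0)
    student_log_probs = F.log_softmax(student(x) / temperature, dim=-1)
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random data and tiny linear models.
teachers = [nn.Linear(16, 10).eval() for _ in range(3)]
student = nn.Linear(16, 10)
opt = torch.optim.SGD(student.parameters(), lr=0.1)
print(distill_step(student, teachers, torch.randn(8, 16), opt))
```

Plain output averaging assumes the teachers predict over the same label set; handling teachers with differing, agnostic pre-training tasks is precisely what the paper addresses beyond this sketch.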