no code implementations • 7 May 2024 • Yijiang Pang, Shuyang Yu, Bao Hoang, Jiayu Zhou
To tackle this challenge, in this paper, we propose a novel parameter-free optimizer, AdamG (Adam with the golden step size), designed to automatically adapt to diverse optimization problems without manual tuning.
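For context on what "parameter-free" removes: a standard Adam update depends on a hand-tuned learning rate. The sketch below shows that baseline update for a single scalar parameter; AdamG's "golden step size" derivation is not reproduced here (the snippet does not describe it), so `lr` remains an explicit argument.

```python
import math

def adam_step(param, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One standard Adam update for a single scalar parameter.

    AdamG replaces the hand-tuned `lr` with an automatically derived
    "golden" step size; this is only the baseline Adam update for context.
    """
    m = b1 * m + (1 - b1) * grad           # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2      # second-moment estimate
    m_hat = m / (1 - b1 ** t)              # bias correction for m
    v_hat = v / (1 - b2 ** t)              # bias correction for v
    param = param - lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

# Minimize f(x) = x^2 from x = 1.0 with plain Adam.
x, m, v = 1.0, 0.0, 0.0
for t in range(1, 201):
    x, m, v = adam_step(x, 2 * x, m, v, t, lr=0.05)
```

The per-step displacement of Adam is roughly bounded by `lr` regardless of gradient scale, which is exactly why the choice of `lr` matters and why removing it is nontrivial.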
1 code implementation • 4 Sep 2023 • Shuyang Yu, Junyuan Hong, Haobo Zhang, Haotao Wang, Zhangyang Wang, Jiayu Zhou
Training a high-performance deep neural network requires large amounts of data and computational resources.
1 code implementation • 4 Jun 2023 • Junyuan Hong, Yi Zeng, Shuyang Yu, Lingjuan Lyu, Ruoxi Jia, Jiayu Zhou
Data-free knowledge distillation (KD) transfers knowledge from a pre-trained model (the teacher) to a smaller model (the student) without access to the teacher's original training data.
Backdoor Defense for Data-Free Distillation with Poisoned Teachers
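The paper's defense itself is not reproducible from this snippet, but the core distillation objective it builds on is standard: the student matches the teacher's temperature-softened output distribution via a KL divergence. A minimal sketch in plain Python (the logit values are made up for illustration):

```python
import math

def softmax(logits, temperature=1.0):
    """Numerically stable temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    The T^2 factor keeps gradient magnitudes comparable across temperatures,
    following Hinton et al.'s original distillation formulation.
    """
    p = softmax(teacher_logits, temperature)  # soft teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return temperature ** 2 * kl

# Identical logits give zero loss; disagreeing logits give a positive loss.
loss_same = distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])
loss_diff = distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0])
```

In the data-free setting the inputs producing these logits come from a generator or synthesized data rather than the original training set, which is what opens the door to the poisoned-teacher backdoor risk the paper addresses.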
1 code implementation • ICLR 2023 • Shuyang Yu, Junyuan Hong, Haotao Wang, Zhangyang Wang, Jiayu Zhou
We propose to take advantage of such heterogeneity and turn the curse into a blessing that facilitates OoD detection in FL.
1 code implementation • the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining 2021 • Junyuan Hong, Zhuangdi Zhu, Shuyang Yu, Zhangyang Wang, Hiroko Dodge, Jiayu Zhou
While adversarial learning is commonly used in centralized learning for mitigating bias, there are significant barriers when extending it to the federated framework.