1 code implementation • 13 Jun 2023 • Yijun Wan, Melih Barsbey, Abdellatif Zaidi, Umut Şimşekli
Neural network compression has become an increasingly important subject, not only for its practical relevance but also for its theoretical implications, as there is an explicit connection between compressibility and generalization error.
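As a rough illustration of what compressibility means operationally (a sketch under my own assumptions, not the paper's method): magnitude pruning keeps only the largest weights, and a heavy-tailed weight vector loses little when most of its entries are zeroed.

```python
import numpy as np

def prune_by_magnitude(w, keep_ratio):
    """Zero out all but the largest-magnitude entries of w (illustrative helper)."""
    k = max(1, int(keep_ratio * w.size))
    thresh = np.sort(np.abs(w), axis=None)[-k]
    return np.where(np.abs(w) >= thresh, w, 0.0)

rng = np.random.default_rng(0)
# Heavy-tailed weights are highly compressible: a few large entries
# dominate the norm, so zeroing 90% of them changes the vector little.
w = rng.standard_t(df=1.5, size=10_000)
w_pruned = prune_by_magnitude(w, keep_ratio=0.1)
rel_err = np.linalg.norm(w - w_pruned) / np.linalg.norm(w)
print(f"relative pruning error keeping 10% of weights: {rel_err:.3f}")
```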
no code implementations • 9 Jun 2023 • Milad Sefidgaran, Romain Chor, Abdellatif Zaidi, Yijun Wan
Moreover, when specialized to the case $R=1$ (sometimes referred to as "one-shot" FL or distributed learning), our bounds suggest that the generalization error of the FL setting decreases faster than that of centralized learning by a factor of $\mathcal{O}(\sqrt{\log(K)/K})$, thereby generalizing recent findings in this direction to arbitrary loss functions and algorithms.
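For intuition about the stated rate, here is a quick numerical check of the factor $\sqrt{\log(K)/K}$; reading $K$ as the number of participating clients is an assumption, since the excerpt does not define it.

```python
import math

# Illustrative check of the improvement factor sqrt(log(K)/K) from the
# bound quoted above, for a growing number of clients K (assumed meaning).
for K in (2, 10, 100, 1000):
    factor = math.sqrt(math.log(K) / K)
    print(f"K={K:>4d}  sqrt(log(K)/K) = {factor:.4f}")
```

The factor decays toward zero as $K$ grows, matching the claim that the one-shot FL bound tightens relative to centralized learning as more clients participate.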
1 code implementation • 23 May 2022 • Soon Hoe Lim, Yijun Wan, Umut Şimşekli
Recent studies have shown that gradient descent (GD) can achieve improved generalization when its dynamics exhibit chaotic behavior.
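A minimal sketch (my own illustration, not the paper's experiment) of the kind of chaotic GD dynamics meant here: with a large step size on a one-dimensional double-well loss, two trajectories started from nearly identical points separate rapidly, the hallmark of chaos.

```python
import numpy as np

def gd(x0, lr, steps, grad):
    """Plain gradient descent, recording the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] - lr * grad(xs[-1]))
    return np.array(xs)

# Double-well loss f(x) = (x^2 - 1)^2 / 4 with gradient x^3 - x.
grad = lambda x: x**3 - x

# With this large step size the iterates stay bounded but never settle.
traj_a = gd(x0=0.500000, lr=1.8, steps=50, grad=grad)
traj_b = gd(x0=0.500001, lr=1.8, steps=50, grad=grad)  # perturbed start

# Sensitive dependence on initial conditions: a 1e-6 gap grows to order 1.
gap = np.abs(traj_a - traj_b)
print(f"gap at start: {gap[0]:.1e}, after 20 steps: {gap[20]:.1e}, after 50: {gap[-1]:.1e}")
```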