1 code implementation • NeurIPS 2020 • Md Aamir Raihan, Tor M. Aamodt
For ResNet-50 on ImageNet, SWAT reduces total floating-point operations (FLOPs) during training by 80%, resulting in a 3.3$\times$ training speedup when run on a simulated sparse learning accelerator representative of emerging platforms, while incurring only a 1.63% reduction in validation accuracy.
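SWAT (Sparse Weight Activation Training) obtains its FLOP reduction by keeping only the largest-magnitude weights and activations during training. A minimal sketch of that top-K magnitude sparsification step, in NumPy (the function name and implementation are illustrative, not the authors' code):

```python
import numpy as np

def topk_sparsify(x, keep_frac=0.2):
    """Zero out all but the top keep_frac fraction of entries by magnitude.

    Illustrative sketch of magnitude-based sparsification as used in
    sparse-training methods like SWAT; not the paper's implementation.
    """
    flat = np.abs(x).ravel()
    k = max(1, int(keep_frac * flat.size))
    # k-th largest magnitude becomes the keep threshold
    thresh = np.partition(flat, -k)[-k]
    mask = np.abs(x) >= thresh
    return x * mask

# Example: keep the top 20% of a 10-element tensor by magnitude
x = np.arange(10, dtype=np.float64)
y = topk_sparsify(x, keep_frac=0.2)
```

In the actual method this style of thresholding is applied to both the forward and backward passes so that most multiply-accumulates involve zeros and can be skipped by a sparse accelerator.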
13 code implementations • 19 Nov 2018 • Md Aamir Raihan, Negar Goli, Tor Aamodt
The efficacy of deep learning has resulted in it becoming one of the most important applications run in data centers today.
Mathematical Software • Hardware Architecture