no code implementations • 30 Jan 2024 • Sangwoo Hwang, Jaeha Kung
In this work, we propose single-spike phase coding, an encoding scheme that minimizes the number of spikes needed to transfer data between SNN layers.
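The idea can be sketched as follows: each activation is encoded as exactly one spike, whose phase (time slot within a coding window) carries the value, so inter-layer traffic is one spike per neuron. This is a minimal illustrative sketch under assumed names and parameters (`encode_phase`, `decode_phase`, `num_phases`), not the paper's actual scheme.

```python
import numpy as np

def encode_phase(activations, num_phases=8):
    """Map each activation in [0, 1) to a single spike time slot (phase)."""
    a = np.clip(activations, 0.0, 1.0 - 1e-9)
    # Larger activation -> earlier spike: phase 0 is the most significant slot.
    return (num_phases - 1 - np.floor(a * num_phases)).astype(int)

def decode_phase(spike_times, num_phases=8):
    """Recover an approximate activation from the spike's phase."""
    return (num_phases - 1 - spike_times + 0.5) / num_phases

acts = np.array([0.05, 0.4, 0.9])
spikes = encode_phase(acts)   # one spike time per neuron
recon = decode_phase(spikes)  # reconstruction error bounded by 1/(2*num_phases)
```

With 8 phases, each neuron fires exactly once per window and the decoded value is within 1/16 of the original, illustrating the spike-count/precision trade-off such a scheme navigates.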
no code implementations • 4 Nov 2022 • Seock-Hwan Noh, JunSang Park, Dahoon Park, Jahyun Koo, Jeik Choi, Jaeha Kung
Thus, in this work, we conduct a detailed analysis of the batch normalization layer to efficiently reduce its runtime overhead.
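To see where that overhead comes from, a plain NumPy sketch of the batch normalization forward pass (assumed helper name `batch_norm_forward`; not the paper's optimized implementation) makes the cost explicit: the per-batch mean and variance reductions must sweep the activations before normalization can even begin.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    # The per-batch statistics are the runtime bottleneck: two full
    # reductions over the batch dimension precede the normalization itself.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 4))
y = batch_norm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
```

The output has (approximately) zero mean and unit variance per feature; any scheme that reduces BN runtime must preserve exactly this normalization behavior.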
no code implementations • 13 Mar 2022 • Seock-Hwan Noh, Jahyun Koo, SeungHyun Lee, Jongse Park, Jaeha Kung
While several prior works have proposed such multi-precision support for DNN accelerators, they focus only on inference; moreover, their core utilization is suboptimal at a fixed precision and for specific layer types when training is considered.
1 code implementation • 1 Nov 2021 • Dahoon Park, Kon-Woo Kwon, Sunghoon Im, Jaeha Kung
Many prior works on adversarial weight attacks require not only the weight parameters but also the training or test dataset to search for the vulnerable bits to attack.
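Why individual bits are "vulnerable" at all is easy to demonstrate: flipping one bit of a weight's IEEE-754 float32 encoding can change its magnitude by tens of orders of magnitude. The sketch below (assumed helper name `flip_bit`; a generic illustration, not the paper's attack) shows the effect on an exponent bit.

```python
import struct

def flip_bit(weight, bit):
    """Flip one bit of a float32 weight's IEEE-754 encoding."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", weight))
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return flipped

# Flipping the exponent's MSB (bit 30) of a small positive weight
# explodes its magnitude, which is why exponent bits tend to be the
# most damaging targets for a bit-flip attack.
w = 0.1
w_attacked = flip_bit(w, 30)   # magnitude jumps to ~1e37
```

The flip is an XOR, so it is self-inverse: applying it twice restores the original float32 value. Attacks of this kind search over (weight, bit) pairs for the flip that degrades accuracy the most, which is where the dataset dependence criticized above comes in.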