no code implementations • 25 Jan 2022 • Syed Asad Alam, Andrew Anderson, Barbara Barabasz, David Gregg
The choice of points impacts the numeric accuracy of the algorithm, but the optimal set of points for small convolutions remains unknown.
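The role of the points can be illustrated with the smallest common case, F(2,3). The transform matrices below are the widely used ones derived from the interpolation points {0, 1, -1} (plus the point at infinity); a different point selection yields different matrices and hence different floating-point error. This is a generic sketch, not code from the paper:

```python
import numpy as np

# Winograd F(2,3): two outputs of a 1D 3-tap convolution from a 4-element tile.
# These matrices correspond to the standard points {0, 1, -1} plus infinity;
# other point choices give different matrices with different numeric accuracy.
Bt = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=np.float64)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]], dtype=np.float64)
At = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=np.float64)

def winograd_f23(d, g):
    """Compute y[i] = sum_k d[i+k] * g[k] for i = 0, 1 via Winograd."""
    return At @ ((G @ g) * (Bt @ d))

rng = np.random.default_rng(0)
d = rng.standard_normal(4)   # input tile
g = rng.standard_normal(3)   # 3-tap filter
y = winograd_f23(d, g)
y_direct = np.array([d[0:3] @ g, d[1:4] @ g])  # direct convolution reference
```

The Winograd result matches direct convolution exactly in exact arithmetic; the point choice only affects how much floating-point rounding error the transforms introduce.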
no code implementations • 23 Apr 2020 • Barbara Barabasz
We show that the $8$-bit quantized network can be trained to nearly the same accuracy (up to 0.5% loss) for the tested network (Resnet18) and dataset (CIFAR10) as quantized direct convolution, with only a few additional operations in the pre/post transformations.
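A minimal sketch of the idea of quantized Winograd convolution (not the paper's training scheme): the transformed filter and input tiles are quantized to int8, the elementwise products are taken in integer arithmetic, and the result is dequantized before the output transform. The F(2,3) matrices and the `quantize_int8` helper here are illustrative assumptions:

```python
import numpy as np

# Winograd F(2,3) transforms (standard points {0, 1, -1} plus infinity).
Bt = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=np.float64)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]], dtype=np.float64)
At = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=np.float64)

def quantize_int8(x):
    """Symmetric per-tensor int8 quantization; returns (int values, scale)."""
    scale = np.max(np.abs(x)) / 127.0
    return np.round(x / scale).astype(np.int32), scale

rng = np.random.default_rng(1)
d = rng.uniform(-1, 1, 4)   # input tile
g = rng.uniform(-1, 1, 3)   # filter

qU, sU = quantize_int8(G @ g)    # quantized transformed filter
qV, sV = quantize_int8(Bt @ d)   # quantized transformed input
m = (qU * qV).astype(np.float64) * (sU * sV)  # int32 products, dequantized
y_q = At @ m                     # float output transform

y_direct = np.array([d[0:3] @ g, d[1:4] @ g])  # unquantized reference
```

The quantization step is the only change relative to the float pipeline, which is why only a few additional operations appear in the pre/post transformations.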
no code implementations • 13 May 2019 • Barbara Barabasz, David Gregg
Winograd convolution is widely used in deep neural networks (DNNs).
no code implementations • 29 Mar 2018 • Barbara Barabasz, Andrew Anderson, Kirk M. Soodhalter, David Gregg
We propose several methods for reducing FP error.
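The proposed methods themselves are not detailed in this snippet. As an illustration of the floating-point error being targeted (a generic measurement, not one of the paper's techniques), one can compare a float32 Winograd F(2,3) output against a float64 direct-convolution reference over many random tiles:

```python
import numpy as np

# Winograd F(2,3) transforms (standard points {0, 1, -1} plus infinity).
Bt = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=np.float64)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]], dtype=np.float64)
At = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=np.float64)

def winograd_f23_fp32(d, g):
    """Winograd F(2,3) carried out entirely in float32."""
    Bt32, G32, At32 = (m.astype(np.float32) for m in (Bt, G, At))
    return At32 @ ((G32 @ g.astype(np.float32)) * (Bt32 @ d.astype(np.float32)))

rng = np.random.default_rng(2)
max_err = 0.0
for _ in range(1000):
    d = rng.standard_normal(4)
    g = rng.standard_normal(3)
    ref = np.array([d[0:3] @ g, d[1:4] @ g])   # float64 direct reference
    y32 = winograd_f23_fp32(d, g)
    max_err = max(max_err, float(np.max(np.abs(y32 - ref))))
```

For larger Winograd tile sizes the transform matrices grow and this measured error grows with them, which is what motivates error-reduction methods.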