Deep PeNSieve: A deep learning framework based on the posit number system

1 Jul 2020 · Raul Murillo, Alberto A. Del Barrio, Guillermo Botella

The Posit Number System (PNS) was introduced by John L. Gustafson in 2017. The interesting properties of this novel format can be exploited in the context of deep neural networks. In this paper, we propose Deep PeNSieve, a framework for performing both training and inference of deep neural networks entirely with the PNS. Furthermore, an 8-bit posit quantization approach using fused operations is introduced. Compared with state-of-the-art posit frameworks, the proposal is able to train more complex networks than feedforward ones, achieving accuracies similar to those of the floating-point format. The case of CIFAR-10 is especially remarkable, as 16-bit posits even obtain a 4% higher top-1 accuracy on that dataset. Overall, the results show that the proposed quantization approach preserves model accuracy in the same manner as common quantization techniques.
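
To illustrate the number format the framework builds on, the sketch below decodes an n-bit posit bit pattern into a real value following Gustafson's definition (sign, regime, exponent, and fraction fields, with useed = 2^(2^es)). This is a minimal, standalone Python example for posit(8, 0)-style values; the function name and defaults are hypothetical and it is not taken from the Deep PeNSieve codebase.

```python
def decode_posit(bits: int, n: int = 8, es: int = 0) -> float:
    """Decode an n-bit posit with `es` exponent bits into a float.

    Illustrative decoder only (hypothetical helper, not Deep PeNSieve code).
    A posit encodes sign * useed**k * 2**e * (1 + f), with useed = 2**(2**es).
    """
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0                       # unique encoding of zero
    if bits == 1 << (n - 1):
        return float("nan")              # Not-a-Real (NaR)

    sign = -1.0 if bits >> (n - 1) else 1.0
    if sign < 0.0:
        bits = (-bits) & mask            # negative posits are 2's complements

    s = format(bits, f"0{n}b")[1:]       # bit string without the sign bit
    run = len(s) - len(s.lstrip(s[0]))   # length of the regime run
    k = run - 1 if s[0] == "1" else -run

    rest = s[run + 1:]                   # bits after the regime terminator
    exp_bits, frac_bits = rest[:es], rest[es:]
    e = int(exp_bits, 2) << (es - len(exp_bits)) if exp_bits else 0
    f = int(frac_bits, 2) / (1 << len(frac_bits)) if frac_bits else 0.0

    useed = 2 ** (2 ** es)
    return sign * (useed ** k) * (2.0 ** e) * (1.0 + f)
```

For instance, `decode_posit(0b01010000, n=8, es=0)` returns 1.5, while the two's-complement pattern `0b11000000` decodes to -1.0. The tapered precision visible here (more fraction bits near 1.0, fewer at the extremes) is what makes low-bit posits attractive for storing network weights and activations.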
