TY - GEN
T1 - VLSI implementation of deep neural networks using integral stochastic computing
AU - Ardakani, Arash
AU - Leduc-Primeau, François
AU - Onizawa, Naoya
AU - Hanyu, Takahiro
AU - Gross, Warren J.
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2016/10/17
Y1 - 2016/10/17
N2 - The hardware implementation of deep neural networks (DNNs) has recently received tremendous attention, since many applications require high-speed operations. However, numerous processing elements and complex interconnections are usually required, leading to large area occupation and high power consumption. Stochastic computing has shown promising results for area-efficient hardware implementations, even though existing stochastic algorithms require long streams that incur long latency. In this paper, we propose an integer form of stochastic computation and introduce some elementary circuits. We then propose an efficient implementation of a DNN based on integral stochastic computing. The proposed architecture uses integer stochastic streams and a modified Finite State Machine-based tanh function to improve the performance and reduce the latency compared to existing stochastic architectures for DNNs. Simulation results show negligible performance loss of the proposed integer stochastic DNNs for different network sizes compared to their floating-point versions.
AB - The hardware implementation of deep neural networks (DNNs) has recently received tremendous attention, since many applications require high-speed operations. However, numerous processing elements and complex interconnections are usually required, leading to large area occupation and high power consumption. Stochastic computing has shown promising results for area-efficient hardware implementations, even though existing stochastic algorithms require long streams that incur long latency. In this paper, we propose an integer form of stochastic computation and introduce some elementary circuits. We then propose an efficient implementation of a DNN based on integral stochastic computing. The proposed architecture uses integer stochastic streams and a modified Finite State Machine-based tanh function to improve the performance and reduce the latency compared to existing stochastic architectures for DNNs. Simulation results show negligible performance loss of the proposed integer stochastic DNNs for different network sizes compared to their floating-point versions.
UR - http://www.scopus.com/inward/record.url?scp=84994444302&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84994444302&partnerID=8YFLogxK
U2 - 10.1109/ISTC.2016.7593108
DO - 10.1109/ISTC.2016.7593108
M3 - Conference contribution
AN - SCOPUS:84994444302
T3 - International Symposium on Turbo Codes and Iterative Information Processing, ISTC
SP - 216
EP - 220
BT - 2016 9th International Symposium on Turbo Codes and Iterative Information Processing
PB - IEEE Computer Society
T2 - 9th International Symposium on Turbo Codes and Iterative Information Processing, ISTC 2016
Y2 - 5 September 2016 through 9 September 2016
ER -