This paper introduces a fast hardware-based learning algorithm for perceptrons using CMOS invertible logic. CMOS invertible logic is designed on top of underlying Boltzmann machines that probabilistically realize both forward and backward operations using stochastic computing. This bidirectional-computing capability makes it possible to obtain the perceptron weights directly, without computing the loss function used in traditional learning algorithms. As a result, the proposed invertible-learning algorithm can process training data in parallel, as opposed to the sequential learning process of the traditional algorithm. For performance evaluation, a 25-input binarized perceptron is trained on a simplified Modified National Institute of Standards and Technology (MNIST) dataset. The learning speed of the proposed method, estimated in a 65-nm CMOS technology, is around 5,600x faster than that of the traditional perceptron learning algorithm, while maintaining a similar accuracy of 98%.
Number of pages: 18
Journal: Journal of Applied Logics
Publication status: Published - January 2020