Error measures of the back-propagation learning algorithm

Sumiyoshi Fujiki, Mitsuyuki Nakao, Nahomi M. Fujiki

Research output: Contribution to journal › Article › peer-review

5 Citations (Scopus)

Abstract

If the conventional quadratic error measure is used, the error back-propagation algorithm often becomes trapped in metastable states and learning suffers serious inefficiency. If the Kullback-Leibler divergence is used instead, it can be shown numerically that the most typical metastable states are removed and that learning efficiency improves significantly as the number of hidden neurons is increased. This means that the Kullback-Leibler divergence is a superior error measure for the error back-propagation algorithm, one that scales with the redundancy of the hidden neurons. This scalability is a great advantage in applications, since we can simply provide a network that is large enough for a given problem without needing to know its optimal size in advance.
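The abstract contrasts the two error measures without stating their formulas. Below is a minimal sketch of the standard forms for a network with sigmoid output units and targets in [0, 1]; the paper's exact definitions are not reproduced in the abstract, so the specific expressions and the gradient comparison in the comments are assumptions based on the usual formulations.

```python
import numpy as np

def quadratic_error(y, t):
    """Conventional quadratic (sum-of-squares) error measure."""
    return 0.5 * np.sum((t - y) ** 2)

def kl_error(y, t, eps=1e-12):
    """Kullback-Leibler-type error for sigmoid outputs y in (0, 1) and
    targets t in [0, 1] (standard form; the paper's exact definition
    may differ)."""
    y = np.clip(y, eps, 1 - eps)
    t = np.clip(t, eps, 1 - eps)
    return np.sum(t * np.log(t / y) + (1 - t) * np.log((1 - t) / (1 - y)))

# Gradient w.r.t. the pre-activation a of a sigmoid output y = sigmoid(a):
#   quadratic:  dE/da = (y - t) * y * (1 - y)  -> vanishes when y saturates
#   KL:         dE/da = (y - t)                -> does not vanish
# This difference is the usual explanation for why a KL-type measure helps
# back-propagation escape flat (metastable) regions of the error surface.
```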

Original language: English
Pages (from-to): 1091-1095
Number of pages: 5
Journal: Journal of the Korean Physical Society
Volume: 40
Issue number: 6
Publication status: Published - 2002 Jun

Keywords

  • Error measure
  • Learning algorithm
