Abstract
If the conventional quadratic error measure is used, the learning process of the error back-propagation algorithm often becomes trapped in metastable states, which makes learning seriously inefficient. If the Kullback-Leibler divergence is used instead, it can be shown numerically that the most typical metastable states are removed and that the learning efficiency improves significantly as the number of hidden neurons is increased. This means that the Kullback-Leibler divergence is a superior error measure for the error back-propagation algorithm, one that scales with the redundancy of the hidden neurons. This scalability is a great advantage in applications, since a sufficiently large network can simply be provided for a given problem without knowing the optimal network size in advance.
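For reference, the two error measures being compared typically take the following forms for output activations o_k and targets t_k over training patterns p; these are the standard textbook definitions, assumed here as a sketch rather than taken verbatim from the paper:

```latex
% Quadratic (sum-of-squares) error over training patterns p and output units k
E_{\mathrm{quad}} = \frac{1}{2} \sum_{p}\sum_{k} \bigl( t_{k}^{p} - o_{k}^{p} \bigr)^{2}

% Kullback-Leibler divergence between target and output activations,
% written for outputs bounded in (0,1), e.g., sigmoid units
E_{\mathrm{KL}} = \sum_{p}\sum_{k} \Bigl[ t_{k}^{p} \ln\frac{t_{k}^{p}}{o_{k}^{p}}
      + \bigl(1 - t_{k}^{p}\bigr) \ln\frac{1 - t_{k}^{p}}{1 - o_{k}^{p}} \Bigr]
```

A standard observation (not stated in the abstract itself) is that, with sigmoid output units, the derivative of E_KL with respect to an output unit's net input reduces to o_k - t_k, so the o_k(1 - o_k) factor that flattens the quadratic-error gradient near saturated outputs disappears; this is consistent with the reported suppression of plateau-like metastable states.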
| Original language | English |
| --- | --- |
| Pages (from-to) | 1091-1095 |
| Number of pages | 5 |
| Journal | Journal of the Korean Physical Society |
| Volume | 40 |
| Issue number | 6 |
| Publication status | Published - 2002 Jun |
Keywords
- Error measure
- Learning algorithm