Complexity control method for recurrent neural networks

Research output: Contribution to journal › Conference article › peer-review

Abstract

This paper demonstrates that the Lyapunov exponents of recurrent neural networks can be controlled by our proposed methods. One of the control methods minimizes the squared error e_λ = (λ − λ_obj)²/2 by a gradient method, where λ is the largest Lyapunov exponent of the network and λ_obj is the desired exponent. The exponent λ, which expresses the dynamical complexity of the network, is calculated by observing the state transitions over a long period. This method is, however, computationally expensive for large-scale recurrent networks, and the control is unstable for networks with chaotic dynamics, since the gradient correction through time diverges due to the chaotic instability. We therefore also propose an approximation method that reduces the computational cost and realizes a 'stable' control for chaotic networks. The new method is based on a stochastic relation that allows the correction through time to be calculated without evolving the network. Simulation results show that the approximation method can control the exponent of recurrent networks with chaotic dynamics, subject to a restriction.
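For a rough sense of the setup, the sketch below estimates the largest Lyapunov exponent of a simple discrete-time tanh network x_{t+1} = tanh(g·W·x_t) by tangent-vector renormalization, and then drives it toward a target λ_obj by descending e_λ = (λ − λ_obj)²/2 with a finite-difference gradient on a scalar gain g. This is only a minimal stand-in under assumed conventions, not the paper's algorithm: the network form, the gain parameter g, and the function names (largest_lyapunov, control_exponent) are all assumptions made for illustration.

```python
import numpy as np

def largest_lyapunov(W, g, x0, n_steps=5000, n_transient=500):
    """Estimate the largest Lyapunov exponent of the map
    x_{t+1} = tanh(g * W @ x_t) by evolving a single tangent
    vector and renormalizing it at every step (Benettin-style)."""
    x = x0.copy()
    v = np.random.default_rng(0).standard_normal(x.size)
    v /= np.linalg.norm(v)
    log_sum = 0.0
    for t in range(n_transient + n_steps):
        x = np.tanh(g * (W @ x))
        # Apply the Jacobian at the new state: diag(1 - x^2) @ (g * W)
        v = (1.0 - x**2) * (g * (W @ v))
        norm = np.linalg.norm(v)
        v /= norm
        if t >= n_transient:           # discard the transient
            log_sum += np.log(norm)
    return log_sum / n_steps

def control_exponent(W, g, lam_obj, x0, lr=0.5, n_iters=20, eps=1e-2):
    """Nudge the scalar gain g so the largest exponent approaches
    lam_obj, descending e = (lambda - lam_obj)^2 / 2 with a
    finite-difference estimate of d(lambda)/dg."""
    for i in range(n_iters):
        lam = largest_lyapunov(W, g, x0)
        dlam = (largest_lyapunov(W, g + eps, x0) - lam) / eps
        g -= lr * (lam - lam_obj) * dlam   # de/dg = (lam - lam_obj) * dlam/dg
        print(f"iter {i:2d}  g = {g:.4f}  lambda = {lam:+.4f}")
    return g

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 50
    W = rng.standard_normal((n, n)) / np.sqrt(n)   # random recurrent weights
    x0 = 0.1 * rng.standard_normal(n)
    # Start in a chaotic regime (g > 1) and steer lambda toward 0.
    control_exponent(W, g=1.5, lam_obj=0.0, x0=x0)
```

Note that the finite-difference gradient here is exactly the kind of long-simulation estimate the paper calls computationally expensive; the paper's approximation method replaces it with a correction computed from a stochastic relation, without time evolution.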

Original language: English
Pages (from-to): I-484 - I-489
Journal: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics
Volume: 1
Publication status: Published - 1999
Event: 1999 IEEE International Conference on Systems, Man, and Cybernetics 'Human Communication and Cybernetics' - Tokyo, Japan
Duration: 1999 Oct 12 - 1999 Oct 15

