Abstract
This paper demonstrates that the Lyapunov exponents of recurrent neural networks can be controlled by our proposed methods. One of the control methods minimizes a squared error e_λ = (λ − λ_obj)²/2 by a gradient method, where λ is the largest Lyapunov exponent of the network and λ_obj is the desired exponent. λ, which characterizes the dynamical complexity, is calculated by observing the state transitions over a long period. This method is, however, computationally expensive for large-scale recurrent networks, and the control is unstable for recurrent networks with chaotic dynamics, since the gradient correction through time diverges due to the chaotic instability. We therefore also propose an approximation method that reduces the computational cost and realizes a 'stable' control for chaotic networks. The new method is based on a stochastic relation that allows us to calculate the correction without evolving the system through time. Simulation results show that the approximation method can control the exponent of recurrent networks with chaotic dynamics under a restriction.
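The paper's own gradient-through-time method is not reproduced here, but a minimal sketch can illustrate the control loop it describes: estimate the largest Lyapunov exponent λ of a recurrent network by the standard tangent-vector (Benettin-style) method, then descend the squared error e_λ = (λ − λ_obj)²/2. Everything in the sketch is an assumption for illustration: the network x_{t+1} = tanh(g·W·x_t), the scalar gain g as the controlled parameter, and a central finite difference standing in for dλ/dg (the paper instead differentiates λ through the state evolution).

```python
import numpy as np

def largest_lyapunov(W, g, x0, n_steps=4000, n_discard=400):
    """Benettin-style estimate of the largest Lyapunov exponent of the
    map x_{t+1} = tanh(g * W @ x_t): evolve a tangent vector with the
    Jacobian, renormalize each step, and average the log growth rate."""
    x = x0.copy()
    v = np.ones_like(x) / np.sqrt(x.size)
    log_growth = 0.0
    for t in range(n_steps):
        x = np.tanh(g * W @ x)
        J = (1.0 - x**2)[:, None] * (g * W)   # Jacobian of the update
        v = J @ v
        norm = np.linalg.norm(v)
        if t >= n_discard:                    # skip the initial transient
            log_growth += np.log(norm)
        v /= norm
    return log_growth / (n_steps - n_discard)

def control_exponent(W, lam_obj, g=1.5, lr=0.5, eps=1e-2, n_iter=30):
    """Gradient descent on e = (lam - lam_obj)^2 / 2 with respect to the
    gain g, using a central finite difference in place of dlam/dg."""
    x0 = np.random.default_rng(1).standard_normal(W.shape[0])
    for _ in range(n_iter):
        lam = largest_lyapunov(W, g, x0)
        dlam_dg = (largest_lyapunov(W, g + eps, x0)
                   - largest_lyapunov(W, g - eps, x0)) / (2 * eps)
        g -= lr * (lam - lam_obj) * dlam_dg   # chain rule: de/dg
    return g, largest_lyapunov(W, g, x0)

# Drive a random network toward the edge of chaos (lambda_obj = 0).
n = 50
W = np.random.default_rng(42).standard_normal((n, n)) / np.sqrt(n)
g, lam = control_exponent(W, lam_obj=0.0)
print(f"gain g = {g:.3f}, largest Lyapunov exponent = {lam:.4f}")
```

Note that each λ estimate requires a long simulated trajectory, and in the chaotic regime the exact gradient through time would diverge; these are precisely the cost and instability the paper's stochastic approximation method is designed to avoid.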
Original language | English |
---|---|
Pages (from-to) | I-484 - I-489 |
Journal | Proceedings of the IEEE International Conference on Systems, Man and Cybernetics |
Volume | 1 |
Publication status | Published - 1999 |
Event | 1999 IEEE International Conference on Systems, Man, and Cybernetics 'Human Communication and Cybernetics' - Tokyo, Japan. Duration: 1999 Oct 12 → 1999 Oct 15 |