This paper demonstrates that the largest Lyapunov exponent λ of recurrent neural networks can be controlled efficiently by a stochastic gradient method. The core of the proposed method is a novel stochastic approximate formulation of the Lyapunov exponent λ as a function of network parameters such as connection weights and thresholds of the neural activation functions. With a gradient method, direct minimization of the squared error (λ − λobj)², where λobj is the desired exponent value, requires collecting gradients through time by a recursive calculation from past to present values. This collection is computationally expensive and, because of chaotic instability, makes control of the exponent unstable for networks with chaotic dynamics. The stochastic formulation derived in this paper approximates the gradient collection without the recursive calculation. The approximation yields not only a faster gradient computation, requiring only O(N²) run time where the direct calculation needs O(N⁵T) for networks with N neurons evolved over T time steps, but also stable control of chaotic dynamics. Simulation studies further show that the approximation is robust with respect to network size and that the proposed method controls chaotic dynamics in recurrent neural networks effectively.
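The abstract's setting can be illustrated with a minimal sketch, assuming a simple discrete-time RNN x(t+1) = tanh(W x(t)). The sketch estimates the largest Lyapunov exponent by evolving and renormalizing a tangent vector along the trajectory, then minimizes (λ − λobj)² by gradient descent on a single global gain g scaling the weights. This is not the paper's stochastic approximation: the gradient here is a finite-difference estimate, and the gain parameterization, function names, and all constants are illustrative assumptions.

```python
import numpy as np

def lyapunov_exponent(W, T=500, rng=None):
    """Estimate the largest Lyapunov exponent of x(t+1) = tanh(W x(t))
    by propagating a tangent vector through the Jacobians and
    renormalizing it at every step (standard Benettin-style estimate)."""
    rng = np.random.default_rng(0) if rng is None else rng
    N = W.shape[0]
    x = 0.1 * rng.standard_normal(N)          # random initial state
    v = rng.standard_normal(N)                # tangent vector
    v /= np.linalg.norm(v)
    log_sum = 0.0
    for _ in range(T):
        x = np.tanh(W @ x)
        J = (1.0 - x**2)[:, None] * W         # Jacobian diag(tanh') W
        v = J @ v
        nv = np.linalg.norm(v)
        log_sum += np.log(nv)
        v /= nv                               # renormalize to avoid over/underflow
    return log_sum / T

def control_lyapunov(W, lam_obj, steps=50, lr=0.1, eps=1e-4):
    """Gradient descent on the squared error (lambda - lambda_obj)^2
    with respect to a global gain g on the weights, using a
    finite-difference gradient (a stand-in for the paper's method)."""
    g = 1.0
    for _ in range(steps):
        lam = lyapunov_exponent(g * W)
        dlam_dg = (lyapunov_exponent((g + eps) * W) - lam) / eps
        g -= lr * 2.0 * (lam - lam_obj) * dlam_dg
    return g, lyapunov_exponent(g * W)
```

A single gain is used here only to keep the sketch stable and short; the paper itself treats the full set of connection weights and thresholds as parameters.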
|Number of pages||6|
|Publication status||Published - 2001|
|Event||Joint 9th IFSA World Congress and 20th NAFIPS International Conference - Vancouver, BC, Canada|
|Duration||2001 Jul 25 → 2001 Jul 28|