Stochastic analysis of chaos dynamics in recurrent neural networks

Noriyasu Homma, Masao Sakai, Madan M. Gupta, Ken Ichi Abe

Research output: Contribution to conference › Paper › peer-review


This paper demonstrates that the largest Lyapunov exponent λ of recurrent neural networks can be controlled efficiently by a stochastic gradient method. The core of the proposed method is a novel stochastic approximate formulation of the Lyapunov exponent λ as a function of the network parameters, such as connection weights and thresholds of the neural activation functions. In a gradient method, a direct calculation to minimize the squared error (λ − λ_obj)², where λ_obj is the desired exponent value, requires a collection of gradients through time, which is obtained by a recursive calculation from past to present values. This collection is computationally expensive and, because of chaotic instability, leads to unstable control of the exponent for networks with chaotic dynamics. The stochastic formulation derived in this paper approximates the collection of gradients without the recursive calculation. This approximation realizes not only a faster calculation of the gradients (only O(N²) run time is required, whereas the direct calculation needs O(N⁵T) run time for a network with N neurons evolved over T steps) but also stable control for chaotic dynamics. Simulation studies also show that the approximation is robust with respect to the network size and that the proposed method can control the chaos dynamics in recurrent neural networks effectively.
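The abstract does not give the stochastic formulation itself, so the following is only a minimal illustrative sketch of the control problem it describes: estimate the largest Lyapunov exponent λ of a small recurrent network x(t+1) = tanh(W x(t)) by the standard tangent-vector method, then descend on the squared error (λ − λ_obj)². The finite-difference gradient used here is a hypothetical stand-in for the paper's stochastic approximate gradients; the network model, sizes, and step sizes are assumptions for the example.

```python
import numpy as np

def lyapunov_exponent(W, x0, T=200):
    """Largest Lyapunov exponent via the standard tangent-vector method:
    propagate a unit perturbation through the per-step Jacobians
    J(t) = diag(1 - x(t+1)^2) W and average the log growth rates.
    (Illustrative; not the paper's stochastic formulation.)"""
    x = x0.copy()
    v = np.ones_like(x) / np.sqrt(len(x))   # unit tangent vector
    s = 0.0
    for _ in range(T):
        x = np.tanh(W @ x)
        J = np.diag(1.0 - x**2) @ W          # Jacobian of tanh(W x)
        v = J @ v
        n = np.linalg.norm(v)
        s += np.log(n)
        v /= n
    return s / T

def gradient_step(W, x0, lam_obj, eps=1e-4, eta=0.1):
    """One descent step on E(W) = (lambda(W) - lambda_obj)^2, with
    finite-difference gradients standing in for the paper's method."""
    lam = lyapunov_exponent(W, x0)
    G = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            Wp = W.copy()
            Wp[i, j] += eps
            G[i, j] = (lyapunov_exponent(Wp, x0) - lam) / eps
    # Chain rule: dE/dW = 2 (lambda - lambda_obj) dlambda/dW
    return W - eta * 2.0 * (lam - lam_obj) * G, lam

rng = np.random.default_rng(0)
N = 4                                        # toy network size (assumed)
W = rng.normal(scale=1.5, size=(N, N))       # strong random weights
x0 = rng.normal(size=N)

lam0 = lyapunov_exponent(W, x0)
for _ in range(20):
    W, lam = gradient_step(W, x0, lam_obj=0.0)
print("initial lambda:", lam0, "after control:", lam)
```

Note the cost structure the abstract refers to: each finite-difference gradient here requires re-simulating the whole trajectory per weight, which is exactly the kind of expense the paper's recursion-free stochastic approximation is designed to avoid.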

Original language: English
Number of pages: 6
Publication status: Published - 2001
Event: Joint 9th IFSA World Congress and 20th NAFIPS International Conference - Vancouver, BC, Canada
Duration: 2001 Jul 25 - 2001 Jul 28


