“Structural Stability of Unsupervised Learning in Feedback Networks”
by Bart Kosko
April 1989
Structural stability is insensitivity to perturbations. Global stability, in contrast, is convergence to fixed points for all inputs and all parameters. Globally stable neural networks need not be structurally stable, need not be robust. Shaking can distort, destroy, or prevent equilibria. Then large-scale hardware implementation becomes dubious, and biological plausibility decreases. A large class of unsupervised nonlinear feedback neural networks, the adaptive bidirectional associative memory (ABAM) models, is proved structurally stable. This is achieved by extending the ABAM models to the random-process domain as systems of stochastic differential equations, and appending scaled Brownian diffusions. This much larger family of models, the random ABAM (RABAM) models, is then proved globally stable. Intuitively, RABAM equilibria are ABAM equilibria that randomly vibrate. The ABAM family of structurally stable models includes Hopfield circuits, Hodgkin-Huxley networks, competitive-learning networks, and ART-2 networks. All RABAM models permit Brownian annealing. The extent of RABAM system "vibration" is characterized by the RABAM Noise Suppression Theorem: the mean-squared activation and synaptic velocities, $E[\dot{x}_i^2]$, $E[\dot{y}_j^2]$, and $E[\dot{m}_{ij}^2]$, decrease exponentially quickly to their lower bounds, the respective temperature-scaled noise "variances" $T_i\sigma_i^2$, $T_j\sigma_j^2$, and $T_{ij}\sigma_{ij}^2$. This suggests that many feedback neural-network models are more biologically "realistic" than they are often criticized as being. For the many neuronal and synaptic parameters missing from such models are now included, but as net random unmodeled effects. They simply do not affect the structure of real-time global computations.
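The abstract leaves the RABAM equations implicit. As a rough sketch, assuming the signal-Hebbian member of the family with Cohen-Grossberg drift terms $a_i$, $b_i$ and bounded signal functions $S_i$ (conventions consistent with Kosko's ABAM work, though the exact equations and regularity conditions appear only in the paper body), the system in stochastic-differential form would read:

$$
\begin{aligned}
dx_i &= -a_i(x_i)\Big[b_i(x_i) - \sum_j S_j(y_j)\,m_{ij}\Big]\,dt + \sqrt{T_i}\,dB_i,\\
dy_j &= -a_j(y_j)\Big[b_j(y_j) - \sum_i S_i(x_i)\,m_{ij}\Big]\,dt + \sqrt{T_j}\,dB_j,\\
dm_{ij} &= \big[-m_{ij} + S_i(x_i)\,S_j(y_j)\big]\,dt + \sqrt{T_{ij}}\,dB_{ij},
\end{aligned}
$$

where the $B$ terms are independent Brownian motions whose diffusions have instantaneous "variances" $\sigma_i^2$, $\sigma_j^2$, $\sigma_{ij}^2$. Scaling by the temperatures $T_i$, $T_j$, $T_{ij}$ yields the noise floors $T_i\sigma_i^2$, etc., of the noise suppression theorem; annealing lowers the temperatures toward zero and shrinks those floors.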
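A minimal numerical sketch of the noise suppression behavior, assuming the simplest additive member of the ABAM family ($a_i = 1$, $b_i(x) = x$) with logistic signal functions, per-step Gaussian noise standing in for the scaled Brownian diffusions, and illustrative values of $T$ and $\sigma$; none of these choices come from the paper itself:

import numpy as np

rng = np.random.default_rng(0)

n, p = 8, 6            # neurons in the two fields F_X and F_Y
dt, steps = 0.01, 20000
T, sigma = 0.5, 0.1    # "temperature" and noise scale (illustrative)

def S(z):
    # bounded monotone signal function (logistic sigmoid)
    return 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=n)
y = rng.normal(size=p)
M = rng.normal(scale=0.1, size=(n, p))   # synaptic matrix m_ij

ms_xdot = []           # running estimate of E[xdot_i^2]
for t in range(steps):
    # zero-mean Gaussian noise with variance T * sigma^2 added to each
    # velocity, a discrete-time stand-in for the Brownian diffusions
    nx = rng.normal(scale=np.sqrt(T) * sigma, size=n)
    ny = rng.normal(scale=np.sqrt(T) * sigma, size=p)
    nM = rng.normal(scale=np.sqrt(T) * sigma, size=(n, p))

    xdot = -x + M @ S(y) + nx               # additive ABAM drift + noise
    ydot = -y + M.T @ S(x) + ny
    Mdot = -M + np.outer(S(x), S(y)) + nM   # signal-Hebbian learning law

    x += dt * xdot
    y += dt * ydot
    M += dt * Mdot
    ms_xdot.append(np.mean(xdot**2))

# After transients die out, the mean-squared activation velocity should
# hover just above the noise floor T * sigma**2 that the theorem names
# as its lower bound.
print(np.mean(ms_xdot[-1000:]), T * sigma**2)

As the deterministic drifts decay toward an ABAM equilibrium, the measured mean-squared velocity settles near $T\sigma^2 = 0.005$, illustrating the "vibrating equilibrium" picture: the noise is suppressed down to, but not below, the temperature-scaled variance.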