“Performance Optimization in Cellular Neural Networks and Associated VLSI Architectures”

by Sa Hyun Bang

August 1994

A systematic annealing method for finding optimal solutions in recurrent associative neural networks is presented. The Hopfield neural network and the cellular neural network are promising approaches to many scientific optimization problems because of their collective computational properties. However, like other engineering optimization methods, neural networks for optimization are subject to sub-optimal solutions due to the local-minimum problem. Various techniques and neural networks for global optimization have been suggested. Stochastic methods such as simulated annealing and the Boltzmann machine require a tremendous amount of computational resources when the algorithms are executed on digital computers.

The paralleled hardware annealing exploited in this dissertation is a highly efficient method of finding globally optimal solutions in recurrent associative neural networks. It is a paralleled, hardware-based realization of effective mean-field annealing that also achieves the effects of the matrix nonconvexity method. Its speed of convergence can be faster than that of the stochastic methods by several orders of magnitude, and it is suitable for typical analog very large-scale integration (VLSI) neuroprocessor implementations. The process of global optimization can be described by the eigenvalues of a time-varying dynamic system. The generalized energy function of the network, which serves as the cost function to be optimized, is first increased by reducing the voltage gain of the neurons. The hardware annealing then searches for the globally minimum energy state by continuously increasing the neuron gain. The proposed annealing technique is first demonstrated on a basic two-neuron network and then on a Hopfield analog-to-digital decision network, for which the desired optimal solutions are exactly known. Cellular neural networks are examined to assess the effectiveness of hardware annealing in overcoming energy barriers. In many applications other than optimization, hardware annealing also provides adequate stimulation to frozen neurons caused by ill-conditioned initial states. As a practical example of neural-based combinatorial optimization, the maximum-likelihood sequence estimation of digital data in communications is successfully investigated. In addition, efficient computing architectures for VLSI and detailed circuit designs are presented.
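The gain-annealing idea described above can be sketched in software: start the network at a low neuron voltage gain, where the energy surface is nearly convex, then ramp the gain up while the continuous-time Hopfield dynamics settle. The sketch below is a minimal illustration under assumed parameters (gain schedule, step size, two-neuron weight matrix); it is not the dissertation's analog-circuit realization, in which the gain ramp happens in hardware.

```python
import numpy as np

def hardware_annealing(W, b, g_min=0.1, g_max=50.0, steps=2000, dt=0.01, seed=0):
    """Illustrative gain-based annealing on a Hopfield-style network.

    The neuron gain g is swept from g_min to g_max (all values here are
    assumptions for demonstration) while the internal states u evolve
    under the standard continuous-time Hopfield dynamics.
    """
    rng = np.random.default_rng(seed)
    n = len(b)
    u = 0.01 * rng.standard_normal(n)         # small random internal states
    for g in np.linspace(g_min, g_max, steps):  # monotonically increasing gain
        v = np.tanh(g * u)                    # neuron outputs at voltage gain g
        u += dt * (-u + W @ v + b)            # Hopfield dynamics: du/dt = -u + Wv + b
    return np.sign(np.tanh(g_max * u))        # binarized final state

# Two-neuron example with symmetric excitatory coupling: the low-energy
# states are the aligned configurations (+1, +1) and (-1, -1).
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])
b = np.zeros(2)
state = hardware_annealing(W, b)
print(state)  # both neurons settle into the same aligned state
```

At low gain the anti-aligned mode of this two-neuron network decays while the aligned mode only becomes unstable as the gain crosses unity, so the slow gain ramp steers the state toward one of the aligned minima rather than trapping it behind an energy barrier.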