“Subtraction in Optical Neural Nets”

by Chein-Hsun Wang

December 1990

To fully use the advantages of optics in optical neural networks, an incoherent optical neuron (ION) model is proposed. The main purpose of this model is to provide for the requisite subtraction of signals without the phase sensitivity of a fully coherent system and without the encumbrance of photon-to-electron conversion and electronic subtraction. The ION model can subtract inhibitory from excitatory neuron inputs by using two device responses. Functionally it accommodates positive and negative weights, excitatory and inhibitory inputs, and nonnegative neuron outputs, and can be used in a variety of neural network models. An extension is given to include bipolar neuron outputs in the case of fully connected networks.
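The core operation of the ION model, subtracting inhibitory from excitatory inputs while keeping the neuron output nonnegative, can be sketched numerically. This is an illustrative sketch only: the function name, the rectifying form of the nonlinearity, and the threshold parameter `theta` are assumptions, not the thesis's optical device responses.

```python
import numpy as np

def ion_output(excitatory, inhibitory, theta=0.5):
    """Sketch of ION-style subtraction: inhibitory inputs are subtracted
    from excitatory inputs, then a threshold is applied so the neuron
    output stays nonnegative (as an incoherent optical signal must be)."""
    potential = np.asarray(excitatory, dtype=float) - np.asarray(inhibitory, dtype=float)
    # Nonnegative output: anything below the threshold theta maps to zero
    return np.maximum(potential - theta, 0.0)
```

When inhibition exceeds excitation the output clips at zero, which is what lets an incoherent (intensity-only) system represent the result of a subtraction.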

The features of the ION model include a bias that is essentially independent of input weights and signals, a dynamically and globally variable threshold, the capability of implementing a sigmoid or binary threshold function for different neuron models, cascadability, and ease of implementation. For example, this technique can in principle implement conventional inner-product neuron units and Grossberg's mass-action-law neuron units.
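A conventional inner-product neuron unit with a selectable sigmoid or binary threshold, as mentioned above, can be sketched as follows. The function name, signature, and parameter defaults are illustrative assumptions, not the thesis's notation.

```python
import numpy as np

def inner_product_neuron(x, w, bias=0.0, kind="sigmoid"):
    """Inner-product neuron unit: membrane potential u = w . x + bias,
    passed through either a sigmoid or a binary threshold nonlinearity."""
    u = np.dot(w, x) + bias
    if kind == "sigmoid":
        return 1.0 / (1.0 + np.exp(-u))
    return 1.0 if u > 0 else 0.0  # binary threshold
```

The same weighted sum feeds either nonlinearity, which is why one device response can serve different neuron models by changing only the threshold function.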

Some implementation considerations, such as the effect of nonlinearities in the device response, noise, and fan-in/fan-out capability, are discussed and simulated by computer. An experimental demonstration of optical excitation and inhibition on a 2-D array of neuron units using a single Hughes liquid crystal light valve (LCLV) is also reported.

The ION model, in conjunction with optical weighted interconnections, can be used to implement arbitrarily connected neural networks. We describe its use to implement a model of simple cells of the visual cortex. Such simple cells perform the operations of edge detection, orientation selection, and, in the case of moving objects, direction and speed selection. Experiments are described that utilize two Hughes liquid crystal light valves to perform the functions of input transduction and optical neuron unit implementation via ION. A multiplexed dichromated gelatin hologram serves as a holographic optical element that forms space-invariant (but otherwise arbitrary) point spread functions for the network interconnections. By changing the holographic interconnection pattern, different simple cells performing, for example, transient response, edge detection, orientation preference, and direction and speed preference can be implemented. Experimental results of these operations are presented.
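A space-invariant point spread function applied identically at every neuron is mathematically a 2-D convolution of the input with a fixed weight kernel. The sketch below illustrates this with a plain-Python convolution; the center-surround kernel shown is an illustrative stand-in for an edge-detecting interconnection pattern, not a kernel taken from the thesis.

```python
import numpy as np

def interconnect(image, psf):
    """Space-invariant interconnection: every neuron sees the same
    point-spread function (PSF) of weights, i.e. a 2-D convolution
    of the input image with the PSF (zero-padded at the borders)."""
    H, W = image.shape
    kh, kw = psf.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    flipped = psf[::-1, ::-1]  # flip kernel for true convolution
    out = np.zeros((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out

# Illustrative center-surround kernel: responds at edges, zero on
# uniform regions (weights sum to zero)
edge_psf = np.array([[-1, -1, -1],
                     [-1,  8, -1],
                     [-1, -1, -1]], dtype=float)
```

Because the kernel weights sum to zero, a uniform input region produces no response, while intensity discontinuities (edges) do, which is the behavior changing the holographic interconnection pattern selects for.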

We also present a learning algorithm, potential difference learning (PDL), based on the temporal difference of the neuron's membrane potential, for self-organizing neural networks. It is independent of the neuron nonlinearity, so it can be applied to analog or binary neurons. Two simulations for learning of weights are presented: a single-layer fully connected network and a 3-layer network with hidden units for a distributed semantic network. The results demonstrate that potential difference learning can be used with neural architectures for various applications. Finally, we describe an optical architecture for PDL and the integration of the ION model into its implementation.
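A weight update driven by the temporal difference of the membrane potential might be sketched as below. The Hebbian-style product of that difference with the input, and the learning rate `eta`, are assumptions for illustration; the thesis's exact PDL update rule may differ.

```python
import numpy as np

def pdl_update(w, x, u_now, u_prev, eta=0.1):
    """Potential-difference-learning-style update sketch: the weight
    change is proportional to the change in membrane potential
    (u_now - u_prev) times the input x. Note the rule never evaluates
    the neuron's output nonlinearity, so it applies unchanged to
    analog or binary neurons."""
    return np.asarray(w, dtype=float) + eta * (u_now - u_prev) * np.asarray(x, dtype=float)
```

Because only pre-nonlinearity potentials enter the update, the same rule works whether the neuron output is a sigmoid value or a hard binary decision.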