Highly interconnected networks of simple analog processors that mimic biological neural networks are known to excel at certain collective computational tasks. In realizing such artificial neural networks, a main difficulty is how to implement efficiently the large number of mutually weighted interconnections between the simplified artificial neurons. In this thesis, optical implementations of neural network models using holograms for three-dimensional interconnections in space have been studied.
First, the Hopfield model for two-dimensional associative memory, which requires $N^4$ parallel weighted interconnections, is implemented with an N x N hologram array. Though the model has limited storage capacity, the implemented system is very simple. Second, a quadratic associative memory model of neural networks, which requires $N^3$ parallel weighted interconnections, is implemented with an N x 1 hologram array and a Stanford matrix-vector multiplier. The storage capacity per neuron of this scheme is increased at the cost of slightly greater complexity. Third, to introduce programmability, which is essential for adaptive networks, into these interconnection schemes, a programmable higher-order interconnection method using holographic lenslet arrays and spatial light modulators is proposed. To demonstrate the feasibility of this interconnection scheme, a two-dimensional quadratic associative memory that requires $N^6$ parallel weighted interconnections is implemented with two N x N holographic lenslet arrays and two spatial light modulators. Fourth, adaptive learning networks using our programmable interconnection scheme and a photorefractive crystal are proposed. A basic experiment on dynamic Hopfield-like networks is performed as an example of learning networks.
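The Hopfield associative memory underlying these optical schemes can be summarized in a few lines of code. The following is a minimal electronic sketch, not the optical implementation itself: patterns are stored with the Hebbian outer-product rule, and recall iterates a thresholded matrix-vector product (the operation the hologram arrays compute in parallel). The function names and the synchronous update are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def train_hopfield(patterns):
    # Hebbian outer-product storage rule; each pattern is a +/-1 vector.
    P = np.array(patterns, dtype=float)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W

def recall(W, probe, steps=10):
    # Iterated thresholded matrix-vector product; in the optical system
    # the product W @ s is performed in parallel by the hologram array.
    s = np.array(probe, dtype=float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0  # break ties toward +1
    return s

pattern = [1, -1, 1, -1, 1]
W = train_hopfield([pattern])
corrupted = [1, -1, 1, -1, -1]  # last bit flipped
restored = recall(W, corrupted)
```

For a one-dimensional array of $N$ neurons the weight matrix $W$ has $N^2$ entries; for a two-dimensional $N \times N$ neuron plane the same construction yields the $N^4$ interconnections mentioned above, which is what motivates the N x N hologram array.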
Next, the concepts of neural net computations are applied to some engineering problems: matrix inversion and analog-to-digital conversion. It is shown that the neural net computations for these problems have potent...
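One common way to cast matrix inversion as a neural-net computation is as gradient dynamics that minimize an error energy; the sketch below uses discretized gradient descent on $\|AX - I\|_F^2$, so the network state $X$ relaxes toward $A^{-1}$. This is a generic formulation offered for illustration, not necessarily the specific network studied in the thesis; the step size `eta` must be small enough (below $2/\lambda_{\max}(A^{\top}A)$) for the dynamics to converge.

```python
import numpy as np

def neural_invert(A, eta=0.1, steps=2000):
    # Discrete-time gradient dynamics minimizing the energy ||A X - I||_F^2.
    # The update dX = -eta * A^T (A X - I) drives X toward A^{-1}.
    n = A.shape[0]
    X = np.zeros((n, n))
    I = np.eye(n)
    for _ in range(steps):
        X -= eta * A.T @ (A @ X - I)
    return X

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
X = neural_invert(A)  # X approximates the inverse of A
```

Each update is again a matrix-matrix (i.e. parallel matrix-vector) product, which is why such computations map naturally onto the optical interconnection schemes described above.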