#### TAG neural network model for large-scale optical implementation

TAG (Training by Adaptive Gain) is a new adaptive learning algorithm developed for the optical implementation of large-scale artificial neural networks. For a single-layer neural network with N input and M output neurons, TAG contains two different types of interconnections: MN global fixed interconnections and $\beta$(N + M) adaptive gain controls, where $\beta$ is the number of local interconnectivities. For the same number of input and output neurons, TAG requires far fewer adaptive elements than the perceptron, and thus makes large-scale implementation possible at some sacrifice in performance. The training algorithm is based on gradient descent with error back-propagation, and is easily extensible to multilayer and/or high-order architectures. Computer simulations demonstrate that TAG performs reasonably well compared to the perceptron. For large-scale electro-optic implementation, the fixed global interconnections may be realized by a multi-facet hologram, a volume hologram, or ground glass, while the adaptive gains may be realized by spatial light modulators (SLMs). Ground glass is more advantageous for random interconnections, offering much higher diffraction efficiency and interconnection density. Both the feed-forward signal and error back-propagation paths are implemented with a single piece of ground glass, and adaptive learning has been demonstrated for hetero-associative memories and classifiers. Optical implementation of a multi-layer bidirectional memory and an inverse-scattering application of the TAG model are also included as appendices.
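The core idea above — fixed random global interconnections combined with a much smaller set of trainable gains — can be sketched in a few lines of NumPy. This is a minimal illustration, not the thesis's implementation: it assumes the simplest case of $\beta = 1$ (one adaptive gain vector on the input side and one on the output side, N + M trainable parameters in total), a fixed random matrix standing in for the hologram or ground-glass interconnect, and gradient descent on a squared-error loss for a single hetero-associative pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 8, 4                        # input / output neurons
W = rng.standard_normal((M, N))    # MN fixed random interconnections (never trained)

# beta*(N + M) adaptive gains; here the simplified beta = 1 case:
g_in = np.ones(N)                  # N adaptive gains on the input side
g_out = np.ones(M)                 # M adaptive gains on the output side

def forward(x):
    # fixed global interconnect, scaled by the adaptive gains on each side
    return g_out * (W @ (g_in * x))

def train_step(x, t, lr=0.01):
    # gradient-descent update of the gains only; W stays fixed
    global g_in, g_out
    u = g_in * x
    v = W @ u
    y = g_out * v
    e = y - t
    # gradients of 0.5*||e||^2 with respect to the adaptive gains
    grad_out = e * v
    grad_in = (W.T @ (e * g_out)) * x
    g_out -= lr * grad_out
    g_in -= lr * grad_in
    return 0.5 * float(e @ e)

# toy hetero-association: learn to map a fixed pattern x to a target t
x = rng.standard_normal(N)
t = rng.standard_normal(M)
losses = [train_step(x, t) for _ in range(200)]
```

Even in this toy form the trade-off stated in the abstract is visible: only N + M = 12 parameters adapt instead of the MN = 32 weights a perceptron of the same size would train, at the cost of representational flexibility.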
Lee, Soo-Young (이수영), researcher
Description
Korea Advanced Institute of Science and Technology : Department of Electrical and Electronic Engineering
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
1994
Identifier
69066/325007 / 000895411
Language
eng
Description

Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology : Department of Electrical and Electronic Engineering, 1994.2, [iv, 116 p.]

URI
http://hdl.handle.net/10203/36222