TAG: A Neural Network Model for Large-Scale Optical Implementation

Cited 1 time in Web of Science; cited 0 times in Scopus.
Abstract
TAG (Training by Adaptive Gain) is a new adaptive learning algorithm developed for optical implementation of large-scale artificial neural networks. For fully interconnected single-layer neural networks with N input and M output neurons, TAG contains two different types of interconnections: MN global fixed interconnections and N + M adaptive gain controls. For two-dimensional input patterns, the former may be realized by multifacet holograms and the latter by spatial light modulators (SLMs). For the same number of input and output neurons, TAG requires far fewer adaptive elements than the perceptron, making large-scale optical implementation feasible at some sacrifice in performance. The training algorithm is based on gradient descent and error backpropagation, and extends readily to multilayer architectures. Computer simulations demonstrate reasonable performance of TAG compared to the perceptron. An electrooptical implementation of TAG is also proposed.
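The architecture described above can be sketched numerically. This is a hedged illustration, not the authors' implementation: the abstract does not give the exact update rules, so the gradients below follow from a standard squared-error / sigmoid derivation, with the MN interconnections held as a fixed random matrix and only the N + M gains trained.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 8, 2                        # input / output neurons
W = rng.standard_normal((M, N))    # MN fixed interconnections (e.g. a hologram)
a = np.ones(N)                     # N adaptive input gains
b = np.ones(M)                     # M adaptive output gains

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    u = W @ (a * x)                # fixed weights see gain-scaled inputs
    y = sigmoid(b * u)             # output gains scale each neuron's activation
    return u, y

# toy training set: random patterns with random binary targets (illustrative only)
X = rng.standard_normal((20, N))
T = rng.integers(0, 2, size=(20, M)).astype(float)

lr = 0.05
losses = []
for epoch in range(200):
    loss = 0.0
    for x, t in zip(X, T):
        u, y = forward(x)
        err = y - t
        loss += 0.5 * np.sum(err ** 2)
        delta = err * y * (1.0 - y)            # dE/d(pre-activation)
        grad_b = delta * u                     # output-gain gradient
        grad_a = (W.T @ (delta * b)) * x       # input-gain gradient
        b -= lr * grad_b
        a -= lr * grad_a
    losses.append(loss)
```

Note that only the N + M gain vectors change during training; the MN matrix W stays fixed, which is what makes the scheme attractive for optics, where fixed holographic interconnects are far easier to realize than large adaptive weight arrays.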
Publisher
MIT Press
Issue Date
1991
Language
English
Article Type
Article
Citation
NEURAL COMPUTATION, v.3, no.1, pp.135 - 143
ISSN
0899-7667
DOI
10.1162/neco.1991.3.1.135
URI
http://hdl.handle.net/10203/55759
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.