Parallel Operation of Self-Limited Analog Programming for Fast Array-Level Weight Programming and Update

Memristive neural networks perform vector–matrix multiplication efficiently, which makes them useful as accelerators for neuromorphic computing. To train the memristor cells in a memristive neural network, the analog conductance states of the memristors should be programmed in parallel; otherwise, the resulting long training time can limit the size of the neural network. Herein, a novel parallel programming method exploiting the self-limited analog switching behavior of the memristor is proposed. A Pt/Ti:NbOx/NbOx/TiN charge-trap memristor device is used to demonstrate the programming, and a convolutional neural network trained on the MNIST dataset is emulated based on the device characteristics. In the simulation, the proposed programming method reduces the programming time to as low as 1/130 of that of the sequential programming method. The simulation further suggests that the programming time required by the proposed method is not affected by array size, which makes it very promising for high-density neural networks.
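The efficiency claim in the abstract rests on how a memristor crossbar computes a vector–matrix multiplication in a single analog step: each cell contributes a current I = G·V (Ohm's law), and the column currents sum (Kirchhoff's current law). A minimal NumPy sketch of this principle, with illustrative conductance and voltage values not taken from the paper:

```python
import numpy as np

def crossbar_vmm(G, V):
    """Idealized crossbar vector-matrix multiplication.

    G: conductance matrix, shape (rows, cols), one memristor per cell.
    V: input voltage vector applied to the rows, shape (rows,).
    Returns the column output currents: I_j = sum_i V_i * G[i, j],
    i.e. every multiply-accumulate happens 'in parallel' in the array.
    """
    return V @ G

# Illustrative values (arbitrary units, not device data):
G = np.array([[1.0, 0.5],
              [0.2, 0.3],
              [0.4, 0.1]])   # 3 rows x 2 columns of conductances
V = np.array([1.0, 2.0, 3.0])  # row input voltages

I = crossbar_vmm(G, V)
print(I)  # column currents: [2.6, 1.4]
```

In hardware this computation takes one read cycle regardless of array size, which is why the write (programming) step, if done cell by cell, becomes the bottleneck the paper's parallel method addresses.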
Publisher
Wiley Online Library
Issue Date
2020-07
Language
English
Citation

Advanced Intelligent Systems, v.2, no.7, pp.2000014

ISSN
2640-4567
DOI
10.1002/aisy.202000014
URI
http://hdl.handle.net/10203/279530
Appears in Collection
MS-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
