Stochastic SOT device based SNN architecture for On-chip Unsupervised STDP Learning

Cited 11 times in Web of Science; cited 0 times in Scopus
DC Field | Value | Language
dc.contributor.author | Jang, Yunho | ko
dc.contributor.author | Kang, Gyuseong | ko
dc.contributor.author | Kim, Taehwan | ko
dc.contributor.author | Seo, Yeongkyo | ko
dc.contributor.author | Lee, Kyung-Jin | ko
dc.contributor.author | Park, Byong-Guk | ko
dc.contributor.author | Park, Jongsun | ko
dc.date.accessioned | 2022-08-28T00:00:17Z | -
dc.date.available | 2022-08-28T00:00:17Z | -
dc.date.created | 2021-11-17 | -
dc.date.issued | 2022-09 | -
dc.identifier.citation | IEEE TRANSACTIONS ON COMPUTERS, v.71, no.9, pp.2022 - 2035 | -
dc.identifier.issn | 0018-9340 | -
dc.identifier.uri | http://hdl.handle.net/10203/298154 | -
dc.description.abstract | Emerging-device-based spiking neural network (SNN) hardware design has been actively studied. In particular, energy- and area-efficient synapse crossbars have attracted strong interest, but the processing units that perform weight summation in the synapse crossbar remain the main bottleneck for energy- and area-efficient hardware design. In this paper, we propose an efficient SNN architecture with multi-bit synapses based on stochastic spin-orbit torque (SOT) devices. First, we present an SOT-device-based synapse array using a modified Gray code. The modified-Gray-code-based synapse needs only N devices to represent 2^N levels of synapse weight. An accumulative spike technique is also adopted in the proposed synapse array to improve ADC utilization and reduce the number of neuron updates. In addition, we propose hardware-friendly algorithmic techniques that improve classification accuracy as well as energy efficiency. Non-spike-depression-based stochastic spike-timing-dependent plasticity is used to reduce overlapping input representations and classification error. Early read termination is also employed to reduce energy consumption by turning off less-associated neurons. The proposed SNN processor has been implemented in a 65 nm CMOS process; it achieves 90% classification accuracy on the MNIST dataset while consuming 0.78 J/image (training) and 0.23 J/image (inference) of energy with an area of 1.12 mm². | -
dc.language | English | -
dc.publisher | IEEE COMPUTER SOC | -
dc.title | Stochastic SOT device based SNN architecture for On-chip Unsupervised STDP Learning | -
dc.type | Article | -
dc.identifier.wosid | 000838669200003 | -
dc.identifier.scopusid | 2-s2.0-85117788254 | -
dc.type.rims | ART | -
dc.citation.volume | 71 | -
dc.citation.issue | 9 | -
dc.citation.beginningpage | 2022 | -
dc.citation.endingpage | 2035 | -
dc.citation.publicationname | IEEE TRANSACTIONS ON COMPUTERS | -
dc.identifier.doi | 10.1109/tc.2021.3119180 | -
dc.contributor.localauthor | Lee, Kyung-Jin | -
dc.contributor.localauthor | Park, Byong-Guk | -
dc.contributor.nonIdAuthor | Jang, Yunho | -
dc.contributor.nonIdAuthor | Kang, Gyuseong | -
dc.contributor.nonIdAuthor | Kim, Taehwan | -
dc.contributor.nonIdAuthor | Seo, Yeongkyo | -
dc.contributor.nonIdAuthor | Park, Jongsun | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Synapses | -
dc.subject.keywordAuthor | Neurons | -
dc.subject.keywordAuthor | Computer architecture | -
dc.subject.keywordAuthor | Switches | -
dc.subject.keywordAuthor | Hardware | -
dc.subject.keywordAuthor | Magnetization | -
dc.subject.keywordAuthor | Magnetic tunneling | -
dc.subject.keywordAuthor | Spin-orbit torque device | -
dc.subject.keywordAuthor | spiking neural network | -
dc.subject.keywordAuthor | stochastic spike-timing-dependent plasticity | -
dc.subject.keywordAuthor | on-chip learning | -
dc.subject.keywordPlus | NONVOLATILE | -
dc.subject.keywordPlus | ENERGY | -
dc.subject.keywordPlus | MODEL | -
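The abstract's claim that a modified-Gray-code synapse needs only N devices for 2^N weight levels can be illustrated with a small sketch. The paper's "modified" Gray code variant is not specified in this record, so the example below uses the standard reflected-binary Gray code as an assumption; it shows that N binary devices yield 2^N distinct codes, with adjacent weight levels differing in exactly one device state.

```python
# Sketch only: standard reflected-binary Gray code (an assumption; the
# paper's modified variant is not described in this metadata record).

def gray_encode(n: int) -> int:
    """Map a weight-level index to its Gray code."""
    return n ^ (n >> 1)

def gray_decode(g: int) -> int:
    """Recover the weight-level index from a Gray code."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

N = 4  # number of binary SOT devices per synapse (illustrative)
levels = [gray_encode(i) for i in range(2 ** N)]

# All 2^N codes are distinct, so N devices suffice for 2^N levels,
# and adjacent levels differ in exactly one bit (one device switch).
assert len(set(levels)) == 2 ** N
for a, b in zip(levels, levels[1:]):
    assert bin(a ^ b).count("1") == 1
```

The single-bit-change property is what makes Gray coding attractive for device-based synapses: stepping the weight by one level requires switching only one device, which bounds the per-update write cost.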
Appears in Collection
PH-Journal Papers (Journal Papers); MS-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
This item is cited by other documents in WoS
