DNPU: An Energy-Efficient Deep-Learning Processor with Heterogeneous Multi-Core Architecture

Abstract
An energy-efficient deep-learning processor called DNPU is proposed for the embedded processing of convolutional neural networks (CNNs) and recurrent neural networks (RNNs) on mobile platforms. DNPU uses a heterogeneous multi-core architecture to maximize energy efficiency for both CNNs and RNNs: in each core, the memory architecture, data paths, and processing elements are optimized for the characteristics of the target network. In addition, a mixed workload division method is proposed to minimize off-chip memory access in CNNs, and a quantization table-based matrix multiplier is proposed to remove duplicated multiplications in RNNs.
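To illustrate the quantization table idea mentioned in the abstract, the following minimal Python sketch (not the DNPU hardware design, and with hypothetical sizes and function names) shows how quantizing weights to a small set of levels lets each input element be multiplied by each level only once; the cached products are then reused by table lookup, removing the duplicated multiplications from the matrix-vector product.

# Sketch of a quantization table-based matrix-vector multiply.
# Weights are stored as integer codes into a small table of levels,
# so multiplications are replaced by lookups plus accumulation.
import numpy as np

def qtable_matvec(weight_codes, levels, x):
    """Compute W @ x where W is given as indices into `levels`.

    weight_codes : (rows, cols) int array, entries in [0, len(levels))
    levels       : (n_levels,) dequantized weight values
    x            : (cols,) input vector
    """
    rows, cols = weight_codes.shape
    # Multiply every input element by every quantization level once:
    # cols * n_levels multiplications instead of rows * cols.
    table = np.outer(x, levels)                # shape (cols, n_levels)
    y = np.zeros(rows, dtype=table.dtype)
    for r in range(rows):
        # Remaining work per output is only table lookups and additions.
        y[r] = table[np.arange(cols), weight_codes[r]].sum()
    return y

# Example with 4-bit (16-level) weights.
rng = np.random.default_rng(0)
levels = np.linspace(-1.0, 1.0, 16)
codes = rng.integers(0, 16, size=(8, 64))
x = rng.standard_normal(64)
assert np.allclose(qtable_matvec(codes, levels, x), levels[codes] @ x)

The saving grows with the ratio of matrix size to the number of quantization levels, which is why the technique suits the large, heavily reused weight matrices of RNN layers.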
Publisher
IEEE COMPUTER SOC
Issue Date
2018-09
Language
English
Article Type
Article
Citation
IEEE MICRO, v.38, no.5, pp. 85-93
ISSN
0272-1732
DOI
10.1109/MM.2018.053631145
URI
http://hdl.handle.net/10203/246342
Appears in Collection
EE-Journal Papers (Journal Papers)