An Energy-efficient Deep Neural Network Training Processor with Bit-slice-level Reconfigurability and Sparsity Exploitation

This paper presents an energy-efficient deep neural network (DNN) training processor built on four key features: 1) Layer-wise Adaptive bit-Precision Scaling (LAPS), 2) an In-Out Slice Skipping (IOSS) core, 3) a double-buffered Reconfigurable Accumulation Network (RAN), and 4) a momentum-ADAM unified OPTimizer Core (OPTC). Thanks to its bit-slice-level scalability and zero-slice skipping, the processor achieves 5.9× higher energy efficiency than state-of-the-art on-chip-learning processors (OCLPs).
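The abstract's zero-slice skipping idea can be illustrated in a minimal sketch: operands are decomposed into low-precision bit slices, and any slice that is all-zero contributes nothing to the multiply-accumulate and can be skipped outright. The function names, slice width (4 bits), and operand width (8 bits) below are illustrative assumptions, not details of the IOSS core itself.

```python
def to_slices(x, slice_bits=4, n_slices=2):
    """Split a non-negative integer into little-endian bit slices."""
    mask = (1 << slice_bits) - 1
    return [(x >> (i * slice_bits)) & mask for i in range(n_slices)]

def mac_with_slice_skipping(weights, activations, slice_bits=4, n_slices=2):
    """Multiply-accumulate that skips zero-valued weight slices entirely."""
    acc = 0
    skipped = 0
    for w, a in zip(weights, activations):
        for i, ws in enumerate(to_slices(w, slice_bits, n_slices)):
            if ws == 0:          # zero slice: no multiply is issued
                skipped += 1
                continue
            acc += (ws << (i * slice_bits)) * a
    return acc, skipped

# 8-bit weights in which several 4-bit slices happen to be zero
ws = [0x0F, 0x30, 0x00, 0xA5]
xs = [2, 3, 7, 1]
result, skipped = mac_with_slice_skipping(ws, xs)
# The skipped multiplies change nothing in the result:
assert result == sum(w * a for w, a in zip(ws, xs))
```

In hardware the skipped slices translate to gated multipliers rather than a `continue` statement, but the arithmetic equivalence is the same: zero slices cost no work and lose no accuracy.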
Publisher
IEEE COMPUTER SOC
Issue Date
2021-04
Language
English
Citation

IEEE Symposium on Low-Power and High-Speed Chips (IEEE COOL CHIPS)

ISSN
2473-4683
DOI
10.1109/COOLCHIPS52128.2021.9410324
URI
http://hdl.handle.net/10203/288426
Appears in Collection
EE-Conference Papers (Conference Papers)