PNPU: An Energy-Efficient Deep-Neural-Network Learning Processor with Stochastic Coarse-Fine Level Weight Pruning and Adaptive Input/Output/Weight Zero Skipping

Recently, deep-neural-network (DNN) learning processors for edge devices have been proposed, but they cannot reduce the complexity of over-parameterized networks during training. Moreover, they cannot support energy-efficient zero skipping, because previous methods cannot be applied throughout backpropagation and the weight-gradient update. In this letter, an energy-efficient DNN learning processor, PNPU, is proposed with three key features: 1) stochastic coarse-fine-level weight pruning; 2) adaptive input/output/weight zero skipping; and 3) a weight pruning unit with a weight sparsity balancer. As a result, PNPU achieves 3.14-278.39 TFLOPS/W energy efficiency at 0.78 V and 50 MHz with FP8 precision and 0%-90% sparsity.
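The two ideas the abstract names, coarse-fine weight pruning and zero skipping, can be illustrated in software. The sketch below is a minimal illustration, not the paper's actual hardware method: the function names, the row-wise "coarse" granularity, and the magnitude-proportional keep probability for the "fine" stochastic step are all assumptions made for demonstration.

```python
import numpy as np

def stochastic_coarse_fine_prune(w, coarse_ratio=0.25, rng=None):
    """Illustrative two-level pruning (NOT the paper's exact scheme).
    Coarse: zero whole rows (e.g. output channels) with the smallest L1 norm.
    Fine:   stochastically zero individual weights, with small-magnitude
            weights being the most likely to be dropped."""
    rng = np.random.default_rng(rng)
    w = w.astype(np.float64).copy()
    # Coarse level: structured pruning of the weakest rows.
    n_coarse = int(coarse_ratio * w.shape[0])
    weak_rows = np.argsort(np.abs(w).sum(axis=1))[:n_coarse]
    w[weak_rows, :] = 0.0
    # Fine level: stochastic magnitude-based pruning of the survivors.
    mag = np.abs(w)
    if mag.max() > 0:
        keep_prob = mag / mag.max()        # small magnitude -> likely pruned
        w = np.where(rng.random(w.shape) < keep_prob, w, 0.0)
    return w

def zero_skip_dot(x, w):
    """MAC loop that skips any pair where the input or the weight is zero,
    mirroring the energy-saving idea behind hardware zero skipping."""
    acc, macs = 0.0, 0
    for xi, wi in zip(x, w):
        if xi == 0.0 or wi == 0.0:
            continue                       # skipped MAC = saved energy
        acc += xi * wi
        macs += 1
    return acc, macs

w = np.arange(1.0, 17.0).reshape(4, 4)
pruned = stochastic_coarse_fine_prune(w, coarse_ratio=0.25, rng=0)
sparsity = float((pruned == 0).mean())
```

In hardware, the pruning-induced sparsity is what makes the zero-skipping datapath pay off: every zeroed weight is a multiply-accumulate that never needs to fire.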
Publisher
Institute of Electrical and Electronics Engineers Inc.
Issue Date
2021
Language
English
Article Type
Article
Citation
IEEE Solid-State Circuits Letters, v.4, pp.22 - 25
ISSN
2573-9603
DOI
10.1109/LSSC.2020.3041497
URI
http://hdl.handle.net/10203/296839
Appears in Collection
EE-Journal Papers (Journal Papers)
