Reinforcement learning-based mixed-precision quantization for lightweight deep neural networks

Abstract
Network quantization has been widely studied as a way to compress deep neural networks for mobile devices. Conventional methods quantize the parameters of all layers at the same fixed precision, regardless of how many parameters each layer contains. However, quantizing the weights of layers with many parameters is more effective in reducing the model size. Accordingly, in this paper, we propose a novel mixed-precision quantization method based on reinforcement learning. Specifically, we use the number of parameters in each layer as a prior for our framework. Using the accuracy and the bit-width as a reward, the proposed framework determines the optimal quantization policy for each layer. By applying this policy sequentially, we achieve a weighted-average bit-width of 2.97 bits for the VGG-16 model on the CIFAR-10 dataset with no accuracy degradation compared with the full-precision baseline. We also show that our framework can find an optimal quantization policy for VGG-Net and ResNet that minimizes storage while preserving accuracy.
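The abstract describes the framework only at a high level. The sketch below illustrates the two quantities it names: uniform per-layer weight quantization at a chosen bit-width, and a reward that trades accuracy against the parameter-count-weighted average bit-width. All function names (quantize_weights, weighted_average_bitwidth, reward) and the exact reward form are assumptions for illustration, not the thesis implementation.

    import numpy as np

    def quantize_weights(w, bits):
        # Uniform symmetric quantize-dequantize of a weight tensor to `bits` bits.
        levels = max(2 ** (bits - 1) - 1, 1)   # e.g. 3 bits -> integer levels in [-3, 3]
        scale = np.abs(w).max() / levels
        return np.round(w / scale) * scale

    def weighted_average_bitwidth(bits_per_layer, params_per_layer):
        # Average bit-width weighted by each layer's parameter count --
        # the metric behind the reported 2.97-bit VGG-16 result.
        total = sum(params_per_layer)
        return sum(b * p for b, p in zip(bits_per_layer, params_per_layer)) / total

    def reward(accuracy, bits_per_layer, params_per_layer, lam=0.1):
        # Hypothetical reward shape: favor high accuracy, penalize average bit-width.
        return accuracy - lam * weighted_average_bitwidth(bits_per_layer, params_per_layer)

    # Example: illustrative per-layer parameter counts and an RL-chosen bit assignment.
    params = [1728, 36864, 73728, 147456]
    bits = [8, 4, 3, 2]
    print(weighted_average_bitwidth(bits, params))  # ~2.6 bits

In an RL loop of this kind, the agent would pick a bit-width layer by layer, quantize, evaluate accuracy, and receive such a reward, with the parameter counts serving as the prior the abstract mentions; the lambda weighting here is an assumed hyperparameter.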
Advisors
Kim, Changick (김창익)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2021
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology: School of Electrical Engineering, 2021.2, [iv, 40 p.]

Keywords

Deep neural network; Reinforcement learning; Model compression; Quantization; Embedded system

URI
http://hdl.handle.net/10203/295961
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=948978&flag=dissertation
Appears in Collection
EE-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
