Cluster-promoting quantization with bit-drop for minimizing network quantization loss

DC Field: Value
dc.contributor.advisor: Yang, Eunho
dc.contributor.advisor: 양은호 (Korean form of Yang, Eunho)
dc.contributor.author: Lee, Jung Hyun
dc.date.accessioned: 2022-04-13T05:40:05Z
dc.date.available: 2022-04-13T05:40:05Z
dc.date.issued: 2021
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=963745&flag=dissertation (en_US)
dc.identifier.uri: http://hdl.handle.net/10203/292500
dc.description: Master's thesis - Korea Advanced Institute of Science and Technology (KAIST), Graduate School of AI, 2021.8, [iii, 20 p.]
dc.description.abstract: Network quantization, which aims to reduce the bit-lengths of network weights and activations, has emerged as a way to deploy networks on resource-limited devices. Although recent studies have successfully discretized full-precision networks, they still incur large quantization errors after training, giving rise to a significant performance gap between a full-precision network and its quantized counterpart. In this work, we propose a novel quantization method for neural networks, Cluster-Promoting Quantization (CPQ), which finds the optimal quantization grids while naturally encouraging the underlying full-precision weights to gather cohesively around those grids during training. This property of CPQ stems from two main ingredients that enable differentiable quantization: i) a categorical distribution designed by a specific probabilistic parametrization in the forward pass, and ii) our proposed multi-class straight-through estimator (STE) in the backward pass. Since the second component, the multi-class STE, is intrinsically biased, we additionally propose a new bit-drop technique, DropBits, which revises standard dropout regularization to randomly drop bits instead of neurons. As a natural extension of DropBits, we further introduce a way of learning heterogeneous quantization levels, finding a proper bit-length for each layer by imposing an additional regularization on DropBits. We experimentally validate our method on various benchmark datasets and network architectures, and also support a new hypothesis for quantization: learning heterogeneous quantization levels outperforms training from scratch with the same but fixed quantization levels.
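
The abstract above describes two mechanisms at a high level: a differentiable quantizer that uses a straight-through estimator (STE) in the backward pass, and DropBits, which randomly drops bits the way dropout drops neurons. The following minimal PyTorch sketch illustrates only these general ideas; it is not the thesis code, and every name and hyperparameter in it (STEQuantize, dropbits_quantize, max_bits, drop_prob) is an illustrative assumption. It also uses an ordinary single-class STE over a simple uniform grid, not the paper's multi-class STE or learned cluster-promoting grids.

# Hypothetical sketch (not the thesis implementation): a uniform quantizer
# with an STE backward pass, plus a DropBits-style random reduction of the
# bit-width during training. All names and values are illustrative.
import torch


class STEQuantize(torch.autograd.Function):
    """Uniform quantization in the forward pass; identity gradient (STE) in backward."""

    @staticmethod
    def forward(ctx, w, num_bits):
        # Normalize weights to [0, 1], snap to a uniform grid with 2**num_bits
        # levels, then map back to the original range.
        levels = 2 ** num_bits - 1
        w_min, w_max = w.min(), w.max()
        scale = (w_max - w_min).clamp(min=1e-8)
        w_norm = (w - w_min) / scale
        w_q = torch.round(w_norm * levels) / levels
        return w_q * scale + w_min

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through: pass the gradient to w unchanged; no gradient for num_bits.
        return grad_output, None


def dropbits_quantize(w, max_bits=4, drop_prob=0.2, training=True):
    """With probability drop_prob, quantize with one bit fewer (a DropBits-like trick)."""
    num_bits = max_bits
    if training and torch.rand(()) < drop_prob:
        num_bits = max(1, max_bits - 1)
    return STEQuantize.apply(w, num_bits)


if __name__ == "__main__":
    w = torch.randn(8, requires_grad=True)
    w_q = dropbits_quantize(w, max_bits=4)
    loss = (w_q ** 2).sum()
    loss.backward()
    print("quantized weights:", w_q)
    print("gradient passed straight through:", w.grad)

In this sketch the rounding step is non-differentiable, so the backward pass simply forwards the incoming gradient to the full-precision weights; the occasional drop to a lower bit-width stands in, very loosely, for the bit-drop regularization the abstract calls DropBits.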
dc.language: eng
dc.publisher: Korea Advanced Institute of Science and Technology (KAIST)
dc.subject: Network Quantization; Cluster-Promoting Quantization; DropBits; Heterogeneous Quantization; New Hypothesis for Quantization
dc.subject (Korean): Network quantization; cluster-promoting quantization
dc.subject (Korean): DropBits; heterogeneous quantization; new hypothesis for quantization
dc.title: Cluster-promoting quantization with bit-drop for minimizing network quantization loss
dc.title.alternative (Korean): Cluster-promoting quantization and bit-drop for reducing network quantization loss
dc.type: Thesis (Master)
dc.identifier.CNRN: 325007
dc.description.department: Korea Advanced Institute of Science and Technology (KAIST), Graduate School of AI
dc.contributor.alternativeauthor: 이정현 (Korean form of Lee, Jung Hyun)
Appears in Collection: AI-Theses_Master (Master's theses)
Files in This Item: There are no files associated with this item.
