DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chang, Dong-Jin | ko |
dc.contributor.author | Nam, Byeong-Gyu | ko |
dc.contributor.author | Ryu, Seung-Tak | ko |
dc.date.accessioned | 2021-09-08T07:50:20Z | - |
dc.date.available | 2021-09-08T07:50:20Z | - |
dc.date.created | 2021-09-08 | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | IEEE ACCESS, v.9, pp.117554 - 117564 | - |
dc.identifier.issn | 2169-3536 | - |
dc.identifier.uri | http://hdl.handle.net/10203/287676 | - |
dc.description.abstract | This paper proposes design strategies for a low-cost quantized neural network. To prevent the classification accuracy from being degraded by quantization, a structure-design strategy that utilizes a large number of channels rather than deep layers is proposed. In addition, a squeeze-and-excitation (SE) layer is adopted to enhance the performance of the quantized network. Through a quantitative analysis and simulations of the quantized key convolution layers of ResNet and MobileNets, a low-cost layer-design strategy for use when building a neural network is proposed. With this strategy, a low-cost network referred to as MixedNet is constructed. A 4-bit quantized MixedNet example achieves a 60% reduction in on-chip memory size and a 53% reduction in memory accesses with negligible classification accuracy degradation compared with conventional networks, while showing classification accuracy of approximately 73% on Cifar-100 and 93% on Cifar-10. | - |
dc.language | English | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.title | MixedNet: Network Design Strategies for Cost-Effective Quantized CNNs | - |
dc.type | Article | - |
dc.identifier.wosid | 000690439700001 | - |
dc.identifier.scopusid | 2-s2.0-85113892311 | - |
dc.type.rims | ART | - |
dc.citation.volume | 9 | - |
dc.citation.beginningpage | 117554 | - |
dc.citation.endingpage | 117564 | - |
dc.citation.publicationname | IEEE ACCESS | - |
dc.identifier.doi | 10.1109/ACCESS.2021.3106658 | - |
dc.contributor.localauthor | Ryu, Seung-Tak | - |
dc.contributor.nonIdAuthor | Nam, Byeong-Gyu | - |
dc.description.isOpenAccess | Y | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | Quantization (signal) | - |
dc.subject.keywordAuthor | Convolution | - |
dc.subject.keywordAuthor | Network architecture | - |
dc.subject.keywordAuthor | Hardware | - |
dc.subject.keywordAuthor | Degradation | - |
dc.subject.keywordAuthor | Convolutional neural networks | - |
dc.subject.keywordAuthor | System-on-chip | - |
dc.subject.keywordAuthor | Convolutional neural network | - |
dc.subject.keywordAuthor | deep neural network | - |
dc.subject.keywordAuthor | memory access number | - |
dc.subject.keywordAuthor | memory cost | - |
dc.subject.keywordAuthor | on-chip memory size | - |
dc.subject.keywordAuthor | quantized neural networks | - |
dc.subject.keywordPlus | MEMORY | - |
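The abstract describes a 4-bit quantized network. The paper's exact quantization scheme is not given in this record, so the following is only a minimal illustrative sketch of one common approach (symmetric per-tensor 4-bit weight quantization in NumPy); function names and the rounding choice are assumptions, not the authors' method.

```python
import numpy as np

def quantize_4bit(w: np.ndarray):
    """Map float weights to signed 4-bit integers in [-8, 7].

    Illustrative symmetric per-tensor scheme; not necessarily the
    scheme used in the MixedNet paper.
    """
    scale = float(np.max(np.abs(w))) / 7.0  # 7 positive quantization levels
    if scale == 0.0:
        return np.zeros_like(w, dtype=np.int8), 1.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from 4-bit codes."""
    return q.astype(np.float32) * scale

# Example: quantize a small weight tensor and measure the error.
w = np.array([0.5, -0.2, 0.8, -0.8], dtype=np.float32)
q, s = quantize_4bit(w)        # q → [4, -2, 7, -7]
w_hat = dequantize(q, s)       # reconstruction error is at most scale/2
```

Storing `q` instead of `w` is what yields the on-chip memory savings the abstract quantifies: 4 bits per weight versus 32 bits, before accounting for the channel-width and SE-layer design choices the paper proposes.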