DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Kim, Lee-Sup | - |
dc.contributor.advisor | 김이섭 | - |
dc.contributor.author | Choi, Seungkyu | - |
dc.date.accessioned | 2019-09-04T02:40:32Z | - |
dc.date.available | 2019-09-04T02:40:32Z | - |
dc.date.issued | 2018 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=734207&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/266727 | - |
dc.description | Master's thesis - Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering, 2018.2, [iv, 36 p.] | - |
dc.description.abstract | Training convolutional neural networks on device has become essential, as it allows applications to adapt to each user's individual environment. Meanwhile, the weight update operation in the training process is the primary source of high energy consumption due to its substantial memory accesses. We propose a dedicated weight update architecture that minimizes data accesses with two key features: (1) a specialized local buffer that reduces DRAM accesses, and (2) a novel dataflow with a matching processing element array structure for weight gradient computation that optimizes the energy consumed by internal memories. Our scheme achieves a 14.3%-30.2% reduction in total energy by drastically reducing memory accesses. | - |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.subject | On-device; convolutional neural network training; memory access; dataflow | - |
dc.subject | 온-디바이스; 컨볼루셔널 신경망 학습; 메모리 접근; 데이터 흐름 | - |
dc.title | (A) memory optimized weight update architecture for on-device convolutional neural network training | - |
dc.title.alternative | 메모리 접근 최적화된 온-디바이스 CNN 학습 전용 웨이트 업데이트 아키텍처 | - |
dc.type | Thesis(Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | Korea Advanced Institute of Science and Technology (KAIST), School of Electrical Engineering | - |
dc.contributor.alternativeauthor | 최승규 | - |
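
The abstract above centers on the weight gradient computation of convolutional layers, whose memory traffic the proposed architecture targets. The following Python function is a minimal, illustrative sketch of that per-layer computation and of why it repeatedly touches both the input activations and the output gradients; the function name, tensor shapes, and loop order are assumptions for illustration only, not the thesis's dataflow or buffer design.

```python
import numpy as np

def conv_weight_gradient(activations, output_grads, kernel_size):
    """Compute dL/dW for one conv layer (stride 1, no padding).

    activations:  input feature map, shape (C_in, H, W)
    output_grads: gradient w.r.t. the layer output, shape (C_out, H_out, W_out)
    returns:      weight gradient, shape (C_out, C_in, K, K)
    """
    c_in, h, w = activations.shape
    c_out, h_out, w_out = output_grads.shape
    k = kernel_size
    dW = np.zeros((c_out, c_in, k, k))
    # Each weight gradient element is a correlation between a shifted window
    # of the input activations and the full output-gradient map, so every
    # activation and gradient value is re-read roughly K*K times; this reuse
    # is what a local buffer and a suitable dataflow can exploit to cut
    # DRAM and internal memory accesses.
    for co in range(c_out):
        for ci in range(c_in):
            for i in range(k):
                for j in range(k):
                    dW[co, ci, i, j] = np.sum(
                        activations[ci, i:i + h_out, j:j + w_out]
                        * output_grads[co]
                    )
    return dW

# Example usage: 3x3 kernel, 8 input channels, 16 output channels, 32x32 input.
acts = np.random.rand(8, 32, 32)
grads = np.random.rand(16, 30, 30)
dW = conv_weight_gradient(acts, grads, 3)   # shape (16, 8, 3, 3)
```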