A memory-optimized weight update architecture for on-device convolutional neural network training

Training convolutional neural networks on device has become essential because it allows applications to adapt to each user's individual environment. However, the weight update operation in the training process is the primary source of high energy consumption due to its substantial memory accesses. We propose a dedicated weight update architecture that minimizes data access with two key features: (1) a specialized local buffer for DRAM access reduction, and (2) a novel dataflow with a matching processing element array structure for weight gradient computation, which optimizes the energy consumed by internal memories. Our scheme achieves a 14.3%-30.2% reduction in total energy by drastically reducing memory accesses.
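For context, the sketch below is a minimal NumPy illustration (not taken from the thesis) of the convolutional weight-gradient computation the abstract refers to; the function name, tensor shapes, and the stride-1/no-padding setting are assumptions made here for clarity. It shows why this step is memory-intensive: every weight-gradient element accumulates products over an entire output-sized plane of activations and output gradients, which is the kind of traffic the proposed local buffer and dataflow aim to reduce.

```python
import numpy as np

def conv2d_weight_grad(x, dy, kh, kw):
    """Illustrative weight-gradient computation for one conv layer
    (stride 1, no padding); names and shapes are assumptions.

    x  : input activations, shape (C_in, H, W)
    dy : output-error gradients, shape (C_out, H-kh+1, W-kw+1)
    Returns dW with shape (C_out, C_in, kh, kw).

    Each dW element sums over a full output-sized plane of x*dy
    products, so a naive schedule re-reads x and dy many times --
    the memory traffic an on-chip buffer and dataflow can cut down.
    """
    c_in, H, W = x.shape
    c_out, Ho, Wo = dy.shape
    dW = np.zeros((c_out, c_in, kh, kw))
    for co in range(c_out):
        for ci in range(c_in):
            for i in range(kh):
                for j in range(kw):
                    # Correlate the shifted activation window with the
                    # output gradient plane for this kernel position.
                    dW[co, ci, i, j] = np.sum(x[ci, i:i + Ho, j:j + Wo] * dy[co])
    return dW
```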
Advisors
Kim, Lee-Sup (김이섭)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2018
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology: School of Electrical Engineering, 2018.2, [iv, 36 p.]

Keywords

On-device; convolutional neural network training; memory access; dataflow

URI
http://hdl.handle.net/10203/266727
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=734207&flag=dissertation
Appears in Collection
EE-Theses_Master (Master's theses)
Files in This Item
There are no files associated with this item.
