Controlled dropout : a dropout method for improving training speed on deep neural network

Abstract
Dropout is a technique widely used to prevent overfitting while training deep neural networks. However, applying dropout to a deep neural network typically increases training time. This paper proposes a different dropout approach, called controlled dropout, that improves training speed by dropping units in a column-wise or row-wise manner on the weight matrices. In controlled dropout, the network is trained using compressed matrices of smaller size, which yields a notable improvement in training speed. In experiments with feed-forward neural networks on the MNIST data set and convolutional neural networks on the CIFAR-10 and SVHN data sets, our proposed method achieves faster training than conventional dropout on both CPU and GPU, while exhibiting the same generalization performance. Moreover, the speed improvement grows as the number of fully-connected layers increases. Because neural network training is an iterative process of forward propagation and backpropagation, the speed-up from controlled dropout translates into a significantly reduced overall training time.
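To make the core idea concrete, the NumPy sketch below contrasts conventional inverted dropout, which masks activations after a full-size matrix multiplication, with a column-wise controlled-dropout forward pass that slices the weight matrix down to the surviving units before multiplying. This is an illustration of the idea only, not code from the thesis: the function names, the inverted-dropout scaling, and the single fully-connected layer are all assumptions, and only the forward pass is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def conventional_dropout(x, W, b, p=0.5):
    """Standard inverted dropout: the matrix multiplication runs at
    full size, then activations are zeroed by a random binary mask."""
    h = x @ W + b                               # full-size GEMM: (batch, n_units)
    mask = rng.random(W.shape[1]) >= p          # keep each unit with prob 1 - p
    return h * mask / (1.0 - p)

def controlled_dropout(x, W, b, p=0.5):
    """Column-wise controlled dropout (sketch): choose the surviving
    units first, then slice W and b so the multiplication itself runs
    on a smaller, compressed matrix."""
    n_units = W.shape[1]
    keep = rng.choice(n_units, size=int(n_units * (1.0 - p)), replace=False)
    h = x @ W[:, keep] + b[keep]                # compressed GEMM: fewer columns
    return h / (1.0 - p), keep                  # keep indices needed for backprop

# Quick shape check with arbitrary sizes (batch of MNIST-like inputs).
x = rng.standard_normal((64, 784))
W = rng.standard_normal((784, 1024))
b = np.zeros(1024)

full = conventional_dropout(x, W, b)            # shape (64, 1024), half zeroed
small, keep = controlled_dropout(x, W, b)       # shape (64, 512): smaller compute
print(full.shape, small.shape)
```

Because the compressed multiplication touches only a (1 - p) fraction of the columns, the corresponding backpropagation matrix products shrink by the same factor, which is consistent with the abstract's observation that the speed-up compounds over the iterative forward/backward training loop.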
Advisors
Choi, Ho-Jin
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2018
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology : School of Computing, 2018.2, [iv, 32 p.]

Keywords

Dropout; deep neural network; training speed

URI
http://hdl.handle.net/10203/267106
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=734101&flag=dissertation
Appears in Collection
CS-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
