Parallel computation techniques to accelerate training of deep neural networks

In recent years, deep neural networks (DNNs) have achieved remarkable success in image classification and natural language processing tasks, owing to the large number of parameters and the large volumes of input data used to train them. By stacking many layers of neurons, a DNN can represent a function appropriate for the given data. The downside is that DNNs with many parameters require long training times, even on GPUs. As a result, there is a growing need for fast training methods that do not rely solely on hardware performance. Parallel computation has emerged as the most promising approach to fast training, and distributed computing machines that utilize multiple GPUs have been developed. Despite their advantages, current parallel training methods such as data parallelism and model parallelism have limitations in effectively training large-scale DNNs and often lack mathematical justification. To address these issues, this thesis proposes a novel parallel computation technique for deep neural networks.
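For context, the data-parallel baseline mentioned in the abstract keeps a full replica of the model on each GPU, splits every mini-batch across the replicas, and averages the resulting gradients before the parameter update. The following is a minimal sketch of that baseline using PyTorch's nn.DataParallel wrapper; it is not the technique proposed in the thesis, and the toy network, batch size, and random data are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Small feed-forward classifier standing in for a larger DNN (illustrative only).
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

device = "cuda" if torch.cuda.is_available() else "cpu"

if torch.cuda.device_count() > 1:
    # Data parallelism: each GPU holds a full replica of the model, the input
    # batch is scattered across the GPUs, and the per-replica gradients are
    # gathered and averaged before the optimizer step.
    model = nn.DataParallel(model)

model = model.to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# One training step on a random batch (placeholder for real data).
x = torch.randn(64, 784, device=device)
y = torch.randint(0, 10, (64,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(f"loss = {loss.item():.4f}")
```

The thesis targets the limitations of this kind of scheme for large-scale DNNs, so the sketch is only a reference point for what "data parallelism" denotes above.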
Advisors
이창옥
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2023
Identifier
325007
Language
eng
Description

Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology (KAIST): Department of Mathematical Sciences, 2023.8, [vi, 68 p.]

Keywords

Deep neural network; Multi-GPU; Parallel computation

URI
http://hdl.handle.net/10203/320992
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1047419&flag=dissertation
Appears in Collection
MA-Theses_Ph.D. (Doctoral Theses)
Files in This Item
There are no files associated with this item.
