(An) empirical study on the unified deep learning interface with GPU power consumption model-based computing resource configuration scheme

DC Field: Value
dc.contributor.advisor: Youn, Chan Hyun
dc.contributor.advisor: 윤찬현
dc.contributor.author: Kim, Tae Woo
dc.date.accessioned: 2018-06-20T06:23:18Z
dc.date.available: 2018-06-20T06:23:18Z
dc.date.issued: 2017
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=718696&flag=dissertation
dc.identifier.uri: http://hdl.handle.net/10203/243383
dc.description: Thesis (Master's) - Korea Advanced Institute of Science and Technology: School of Electrical Engineering, 2017.8, [iv, 82 p.]
dc.description.abstract: With the development of computing technology and the emergence of new knowledge-information processing systems, the era of artificial intelligence is rapidly approaching, driven by technological advances in medical image processing, web services, voice and video processing, and natural language processing. It is therefore increasingly common to develop intelligent application services using deep learning frameworks such as Caffe or TensorFlow. Users execute applications comprising a deep neural network architecture, an input dataset, and an execution configuration by writing a model description for Caffe or TensorFlow. In general, the model description formats of these frameworks are mutually incompatible. Moreover, because the optimization methods (such as task scheduling and memory allocation) and the GPU-accelerated libraries (such as cuDNN and cuBLAS) used by each framework differ, a unified way of executing deep learning frameworks is required. The cost of computing resources for a deep learning task can also vary widely with the computing resource configuration: in general, execution time and power consumption differ depending on the computing environment, such as the network, and on the type of processing unit (CPU or GPU) on which an application is executed. Conventional frameworks, however, do not provide a resource scheduling algorithm that accounts for the execution time and power consumption of user-defined applications. Users must therefore manually select the framework and resource configuration for executing applications, which can lead to unpredictable cost increases due to various overheads in the computing process.
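The cost trade-off described above can be illustrated with a minimal sketch. All configuration names, times, power figures, and the deadline below are hypothetical and are not taken from the thesis; the point is only that the same job incurs different energy costs under different resource configurations, so a scheduler can pick the cheapest feasible one.

```python
# Hypothetical illustration: the same deep learning job can incur very
# different costs depending on the computing resource configuration.
# All times and power figures below are invented for demonstration only.

candidate_configs = [
    # (name, estimated execution time [s], average GPU/CPU power draw [W])
    ("1x K80 GPU",  5400.0, 150.0),
    ("2x K80 GPU",  3000.0, 290.0),
    ("CPU only",   21600.0,  95.0),
]

def energy_joules(exec_time_s, power_w):
    """Energy consumed = average power * execution time."""
    return exec_time_s * power_w

for name, t, p in candidate_configs:
    print(f"{name:12s} time={t:8.0f}s  energy={energy_joules(t, p)/1000:8.1f} kJ")

# Pick the configuration minimizing energy, subject to a deadline (a simple
# stand-in for a QoS constraint).
deadline_s = 6000.0
feasible = [(name, t, p) for name, t, p in candidate_configs if t <= deadline_s]
best = min(feasible, key=lambda c: energy_joules(c[1], c[2]))
print("selected:", best[0])  # the fastest config is not always the cheapest
```

Note that the two-GPU configuration finishes sooner but draws nearly twice the power, so under this toy model the single-GPU configuration wins on energy while still meeting the deadline.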
In this thesis, to solve these problems with the conventional execution methods of deep learning frameworks, we propose a unified deep learning structure and interface that can process deep learning applications in a unified way even though the model description formats of the frameworks differ. Through the proposed interface, user-defined applications are converted into a format that can be executed by an existing deep learning framework. In addition, the unified deep learning interface includes a novel method that statistically models the GPU power consumption pattern according to the deep learning model and the required quality of service (QoS), and automatically allocates computing resources under optimal conditions with respect to the execution time and GPU power consumption of user-defined applications. Finally, the proposed unified deep learning interface is evaluated experimentally by comparing its processing performance with that of the existing Caffe and TensorFlow frameworks in various computing environments. The results show that it is generally more efficient in terms of the cost required to process deep learning tasks, and that it makes the cost of application processing manageable.
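The idea of a single model description translated into framework-specific artifacts can be sketched as follows. The intermediate dictionary schema and the two emitter functions are invented for illustration; the thesis' actual interface format is not reproduced here. The sketch only generates framework-specific source text, so neither Caffe nor TensorFlow needs to be installed.

```python
# Hypothetical illustration of a unified model description converted into
# framework-specific artifacts: a Caffe-style prototxt fragment and
# TensorFlow 1.x-style layer code. Schema and emitters are invented.

unified_model = {
    "name": "mnist_mlp",
    "layers": [
        {"type": "dense", "units": 500, "activation": "relu"},
        {"type": "dense", "units": 10,  "activation": "softmax"},
    ],
}

def to_caffe_prototxt(model):
    """Emit a Caffe-style prototxt fragment (dense -> InnerProduct)."""
    out = [f'name: "{model["name"]}"']
    for i, layer in enumerate(model["layers"]):
        out.append(
            f'layer {{ name: "fc{i}" type: "InnerProduct" '
            f'inner_product_param {{ num_output: {layer["units"]} }} }}'
        )
    return "\n".join(out)

def to_tensorflow_code(model):
    """Emit TensorFlow 1.x-style layer calls as source text."""
    lines = ["net = inputs"]
    for layer in model["layers"]:
        lines.append(
            f'net = tf.layers.dense(net, {layer["units"]}, '
            f'activation=tf.nn.{layer["activation"]})'
        )
    return "\n".join(lines)

print(to_caffe_prototxt(unified_model))
print(to_tensorflow_code(unified_model))
```

With such a front end, the same user-defined network can be handed to whichever back-end framework the resource scheduler selects, which is the premise of the unified execution described above.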
dc.language: eng
dc.publisher: 한국과학기술원 (Korea Advanced Institute of Science and Technology)
dc.subject: unified deep learning interface; deep learning framework; resource scheduling; deep learning; GPU cluster
dc.subject: 일원화된 딥러닝 인터페이스; 딥러닝 프레임워크; 자원 스케줄링; 딥러닝; GPU 클러스터
dc.title: (An) empirical study on the unified deep learning interface with GPU power consumption model-based computing resource configuration scheme
dc.title.alternative: GPU 전력소비 모형 기반의 컴퓨팅 자원 구성 기법을 사용하는 일원화된 딥러닝 인터페이스의 실험적 연구
dc.type: Thesis (Master)
dc.identifier.CNRN: 325007
dc.description.department: 한국과학기술원 : 전기및전자공학부 (KAIST: School of Electrical Engineering)
dc.contributor.alternativeauthor: 김태우
Appears in Collection
EE-Theses_Master(석사논문)
Files in This Item
There are no files associated with this item.
