(A) hybrid deep learning scheduling scheme for accelerated AMI stream data processing in edge-cloud system

Recently, research based on deep neural networks has been increasing in data analytics platforms to address problems caused by the inaccurate predictions of conventional data analysis algorithms, which are used for everything from missing-data interpolation to artificial intelligence services. In particular, the data analysis platform used in the smart grid is important for AMI data management, providing data collection and power services. The computing platform that analyzes AMI data handles vast amounts of time-series data whose characteristics change in real time, and conventional offline deep learning methods require excessive computing resources in this setting. Therefore, strategies that gradually train a deep neural network through incremental learning have been proposed. However, incremental learning is effective only when data features do not change significantly, so learning performance must be improved in environments where the biased distribution fluctuates with different power consumption habits. In this thesis, we propose a hybrid deep learning scheduling algorithm to improve and accelerate learning performance in a multi-AMI environment in which biased data features vary dramatically. By analyzing the frequency distribution of cosine similarity, the model recognizes the biased data features of power consumption patterns. Using the zero-skewness property of a uniform distribution, the skewed data distribution is reduced and the diversity of the experience buffer is increased with an update strategy that maximizes the variance of stored patterns. When scheduling online and offline gradients of different computational complexity, the proposed model reduces processing time by selectively computing gradients according to the degree of data feature transition. To verify the performance of the proposed algorithm, we conducted three experiments comparing the proposed method with existing continual learning methods. When comparing the average prediction error over the distribution of the entire test data, the accuracy of the proposed scheme was higher than that of the incremental learning method and similar to that of traditional continual learning. A comparison of training time showed 43% to 47% faster processing than conventional continual learning methods with gradient regularization.
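The abstract outlines two mechanisms: detecting a shift in biased AMI consumption features from the frequency distribution of cosine similarities (using zero skewness of a uniform distribution as the unbiased reference), and scheduling either a lightweight online gradient step or a costly offline replay-based step depending on how strong that shift is. The following is a minimal sketch of those ideas, not the thesis implementation; the function names, the greedy variance-maximizing buffer update, and the threshold value are illustrative assumptions.

```python
# Sketch (assumed form, not the thesis code) of feature-shift detection,
# variance-maximizing experience-buffer update, and hybrid gradient scheduling.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def skewness(x):
    # Sample skewness; a uniform distribution has zero skewness,
    # which serves as the "unbiased" reference in the abstract.
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std() + 1e-12
    return float(np.mean(((x - m) / s) ** 3))

def feature_shift(window, buffer_patterns):
    # Frequency distribution of cosine similarity between the newest
    # consumption window and stored patterns; a strongly skewed
    # distribution signals that the incoming data feature has drifted.
    sims = [cosine_similarity(window, p) for p in buffer_patterns]
    return abs(skewness(sims))

def update_buffer(buffer_patterns, window, capacity=64):
    # Assumed greedy variance-maximizing update: keep the new window and
    # drop the most redundant stored pattern, so buffer diversity grows.
    buffer_patterns.append(window)
    if len(buffer_patterns) > capacity:
        redundancy = [np.mean([cosine_similarity(p, q)
                               for q in buffer_patterns if q is not p])
                      for p in buffer_patterns]
        buffer_patterns.pop(int(np.argmax(redundancy)))
    return buffer_patterns

def schedule_step(shift, threshold=0.5):
    # Hybrid scheduling rule: small shift -> cheap online gradient on the
    # new sample only; large shift -> offline gradient over the replay
    # buffer (higher computational complexity).
    return "online" if shift < threshold else "offline"
```

The design intent, as described in the abstract, is that the expensive offline (replay) gradient is computed only when the similarity distribution indicates a real transition in data features, which is where the reported 43% to 47% reduction in training time is claimed to come from.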
Advisors
Youn, Chan-Hyun (윤찬현)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2020
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology: School of Electrical Engineering, 2020.2, [v, 47 p.]

Keywords

Deep Learning Scheduling; Online Learning; Incremental Learning; Continual Learning; Lifelong Learning; AMI Data Processing

URI
http://hdl.handle.net/10203/284759
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=911377&flag=dissertation
Appears in Collection
EE-Theses_Master(석사논문)
Files in This Item
There are no files associated with this item.
