Incremental learning of expert networks with adaptive resonance theory competition = 적응 공명 경쟁을 통한 혼합 신경망 구조에 대한 점증식 학습 알고리즘

DC Field: Value
dc.contributor.advisor: Lee, Ju-Jang
dc.contributor.advisor: 이주장
dc.contributor.author: Kim, Cheol-Taek
dc.contributor.author: 김철택
dc.date.accessioned: 2011-12-14
dc.date.available: 2011-12-14
dc.date.issued: 2008
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=295403&flag=dissertation
dc.identifier.uri: http://hdl.handle.net/10203/35443
dc.description: Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology : Electrical and Electronics Engineering, 2008.2, [ix, 75 p.]
dc.description.abstract: In this dissertation, a framework for automatic model selection is proposed to find an appropriate compromise between over-smoothing and over-fitting in online function approximation. Most recent model selection methods build an experts network made up of local linear functions and corresponding basis functions. Unlike these methods, the proposed method builds an experts network consisting of local feed-forward neural networks (FNNs) and corresponding basis functions. This strategy improves the accuracy of local approximation through the capability of FNNs (lower bias compared to local linear models) and compensates for the resulting increase in variance by generating smaller experts and by a continuity-preservation function. The basic structure of the proposed framework is a mixture of experts whose experts are FNNs. The framework generates and adjusts a gating network so that the FNNs fit the local data falling in the receptive fields of the gating network. For this generation and adjustment, a type of adaptive resonance theory, Gaussian ART, is incorporated into the framework with modifications. Using the error of the FNNs to generate and adjust the receptive fields is a key idea of the framework. As a result, the proposed framework constructs a function approximator from data in an incremental way (a simplified sketch of this scheme is given after the record below). Because the framework rests on two different paradigms, its formulation is not straightforward and can be hard to follow; the actual implementation, however, is very easy, and its effectiveness is shown through empirical evaluations on numerical function approximation, time-series prediction, and nonlinear plant modeling.
dc.language: eng
dc.publisher: Korea Advanced Institute of Science and Technology (KAIST)
dc.subject: Incremental Learning
dc.subject: Mixture of Experts
dc.subject: Adaptive Resonance Theory
dc.subject: Function Approximation
dc.subject: Model Selection
dc.subject: 점증식 학습
dc.subject: 혼합 신경망
dc.subject: 적응공명이론
dc.subject: 함수근사화
dc.subject: 모델선택
dc.title: Incremental learning of expert networks with adaptive resonance theory competition
dc.title.alternative: 적응 공명 경쟁을 통한 혼합 신경망 구조에 대한 점증식 학습 알고리즘
dc.type: Thesis (Ph.D.)
dc.identifier.CNRN: 295403/325007
dc.description.department: Korea Advanced Institute of Science and Technology : Electrical and Electronics Engineering
dc.identifier.uid: 020025083
dc.contributor.localauthor: Lee, Ju-Jang
dc.contributor.localauthor: 이주장
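
The abstract above outlines the scheme: feed-forward-network experts, Gaussian receptive fields acting as a gating network, and an ART-style vigilance test that allocates a new expert when no existing receptive field accounts for the current sample. Below is a minimal Python sketch of that general idea. It is illustrative only: all names (Expert, ARTMixture, learn, and so on) are invented here, the experts are tiny SGD-trained networks, and the vigilance test uses only receptive-field activation, leaving out the dissertation's Gaussian ART machinery and its error-driven receptive-field adjustment.

# Minimal sketch of an ART-gated mixture of experts for 1-D regression.
# All names are illustrative assumptions, not taken from the dissertation.
import numpy as np

class Expert:
    """A tiny one-hidden-layer feed-forward network trained by plain SGD."""
    def __init__(self, hidden=8, lr=0.05, rng=None):
        rng = rng if rng is not None else np.random.default_rng(0)
        self.w1 = rng.normal(0.0, 0.5, hidden)  # input-to-hidden weights
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.5, hidden)  # hidden-to-output weights
        self.b2 = 0.0
        self.lr = lr

    def predict(self, x):
        h = np.tanh(self.w1 * x + self.b1)
        return self.w2 @ h + self.b2

    def update(self, x, y):
        h = np.tanh(self.w1 * x + self.b1)
        err = self.predict(x) - y                # dLoss/dOutput for 0.5*err^2
        grad_h = err * self.w2 * (1.0 - h ** 2)  # back-prop through tanh
        self.w2 -= self.lr * err * h
        self.b2 -= self.lr * err
        self.w1 -= self.lr * grad_h * x
        self.b1 -= self.lr * grad_h

class ARTMixture:
    """Experts gated by Gaussian receptive fields; an ART-style vigilance
    test spawns a new expert when no field covers the current input."""
    def __init__(self, vigilance=0.3, sigma=0.5):
        self.centers, self.experts = [], []
        self.vigilance, self.sigma = vigilance, sigma

    def _activations(self, x):
        c = np.array(self.centers)
        return np.exp(-0.5 * ((x - c) / self.sigma) ** 2)

    def predict(self, x):
        if not self.experts:
            return 0.0
        a = self._activations(x)
        w = a / (a.sum() + 1e-12)                # normalized gating weights
        return sum(wi * e.predict(x) for wi, e in zip(w, self.experts))

    def learn(self, x, y):
        if self.experts:
            a = self._activations(x)
            best = int(np.argmax(a))
            if a[best] >= self.vigilance:        # resonance: adapt the winner
                self.centers[best] += 0.1 * (x - self.centers[best])
                self.experts[best].update(x, y)
                return
        self.centers.append(float(x))            # mismatch: allocate new expert
        self.experts.append(Expert(rng=np.random.default_rng(len(self.experts))))
        self.experts[-1].update(x, y)

# Usage: incrementally approximate sin(x) from streaming samples.
rng = np.random.default_rng(42)
model = ARTMixture()
for _ in range(5000):
    x = rng.uniform(-3.0, 3.0)
    model.learn(x, np.sin(x))
print(f"{len(model.experts)} experts; f(1.0) ~ {model.predict(1.0):.3f} "
      f"(true {np.sin(1.0):.3f})")

Running this on streaming samples of sin(x) typically allocates a handful of experts across [-3, 3], one per region that the vigilance test carves out, while the normalized Gaussian activations blend neighboring experts into a continuous overall prediction.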
Appears in Collection
EE-Theses_Ph.D.(박사논문)
Files in This Item
There are no files associated with this item.
