Incremental learning of expert networks with adaptive resonance theory competition

In this dissertation, a framework for automatic model selection is proposed to find an appropriate compromise between over-smoothing and over-fitting in online function approximation. Most recent model selection methods build an expert network made up of local linear functions and corresponding basis functions. Unlike these methods, the proposed method builds an expert network consisting of local feed-forward neural networks (FNNs) and corresponding basis functions. This strategy improves the accuracy of local approximation through the capability of FNNs (lower bias compared to local linear models) and compensates for the increased variance by generating smaller experts and by a continuity-preservation function. The basic structure of the proposed framework is a mixture of experts whose experts are FNNs. The framework generates and adjusts a gating network so that the FNNs fit the local data covered by the receptive fields in the gating network. For this generation and adjustment, a type of adaptive resonance theory, Gaussian ART, is incorporated into the framework with modifications. Using the error of the FNNs to generate and adjust the receptive fields is a key idea of the framework. As a result, the proposed framework constructs a function approximator from data in an incremental way. Because the framework combines two different paradigms, it may appear difficult to implement; in practice, however, the implementation is straightforward, and its effectiveness is shown through empirical evaluations on numerical function approximation, time-series prediction, and nonlinear plant modeling.
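
To make the construction concrete, below is a minimal sketch (Python/NumPy, not the author's code) of the kind of architecture the abstract describes: small feed-forward networks as experts, normalized Gaussian receptive fields as the gating network, and an ART-style vigilance test that spawns a new receptive field and expert when no existing field matches a sample well enough. The class names, the vigilance threshold rho, the learning rates, and the use of gate activation (rather than the thesis's error-driven Gaussian ART criterion) for the novelty test are all illustrative assumptions.

    import numpy as np


    class FNNExpert:
        """One-hidden-layer feed-forward network trained by plain gradient descent."""

        def __init__(self, n_in, n_hidden=5, lr=0.05, rng=None):
            rng = np.random.default_rng() if rng is None else rng
            self.W1 = rng.normal(scale=0.3, size=(n_hidden, n_in))
            self.b1 = np.zeros(n_hidden)
            self.w2 = rng.normal(scale=0.3, size=n_hidden)
            self.b2 = 0.0
            self.lr = lr

        def predict(self, x):
            self._h = np.tanh(self.W1 @ x + self.b1)   # cache hidden activations for update()
            return self.w2 @ self._h + self.b2

        def update(self, x, target, weight=1.0):
            # One responsibility-weighted gradient step on the squared prediction error.
            err = self.predict(x) - target
            g = weight * err
            self.w2 -= self.lr * g * self._h
            self.b2 -= self.lr * g
            dh = g * self.w2 * (1.0 - self._h ** 2)
            self.W1 -= self.lr * np.outer(dh, x)
            self.b1 -= self.lr * dh
            return err


    class GaussianGatedMixture:
        """Mixture of FNN experts gated by normalized Gaussian receptive fields."""

        def __init__(self, n_in, rho=0.1, init_sigma=0.5):
            self.n_in, self.rho, self.init_sigma = n_in, rho, init_sigma
            self.centers, self.sigmas, self.experts = [], [], []

        def _activations(self, x):
            # Unnormalized Gaussian activation of every receptive field at x.
            return np.array([np.exp(-0.5 * np.sum(((x - c) / s) ** 2))
                             for c, s in zip(self.centers, self.sigmas)])

        def predict(self, x):
            if not self.experts:
                return 0.0
            a = self._activations(x)
            g = a / (a.sum() + 1e-12)                   # normalized gating weights
            return sum(gi * e.predict(x) for gi, e in zip(g, self.experts))

        def learn(self, x, y):
            a = self._activations(x)
            if a.size == 0 or a.max() < self.rho:       # ART-like vigilance test fails
                self.centers.append(np.array(x, dtype=float))
                self.sigmas.append(np.full(self.n_in, self.init_sigma))
                self.experts.append(FNNExpert(self.n_in))
                a = self._activations(x)
            g = a / a.sum()
            k = int(np.argmax(g))                       # winning receptive field
            # Nudge the winning field toward the sample ("resonance"); widths kept fixed here.
            self.centers[k] += 0.05 * g[k] * (x - self.centers[k])
            for gi, e in zip(g, self.experts):          # responsibility-weighted expert training
                e.update(x, y, weight=gi)

Used incrementally, e.g. m = GaussianGatedMixture(n_in=1) followed by m.learn(np.array([xi]), yi) over a data stream, the number of experts grows only where existing receptive fields match the data poorly, which loosely mirrors the incremental model-selection behavior the abstract describes.
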
Advisors
Lee, Ju-Jang (이주장)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2008
Identifier
295403/325007  / 020025083
Language
eng
Description

Doctoral thesis - Korea Advanced Institute of Science and Technology: Department of Electrical and Electronic Engineering, 2008.2, [ix, 75 p.]

Keywords

Incremental Learning; Mixture of Experts; Adaptive Resonance Theory; Function Approximation; Model Selection

URI
http://hdl.handle.net/10203/35443
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=295403&flag=dissertation
Appears in Collection
EE-Theses_Ph.D.(박사논문)
Files in This Item
There are no files associated with this item.
