DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Lee, Ju-Jang | - |
dc.contributor.advisor | 이주장 | - |
dc.contributor.author | Kim, Cheol-Taek | - |
dc.contributor.author | 김철택 | - |
dc.date.accessioned | 2011-12-14 | - |
dc.date.available | 2011-12-14 | - |
dc.date.issued | 2008 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=295403&flag=dissertation | - |
dc.identifier.uri | http://hdl.handle.net/10203/35443 | - |
dc.description | Thesis (Ph.D.) - 한국과학기술원 : 전기및전자공학전공, 2008.2, [ ix, 75 p. ] | - |
dc.description.abstract | In this dissertation, a framework for automatic model selection is proposed to find an appropriate compromise between over-smoothing and over-fitting in online function approximation. Most recent model selection methods build an expert network composed of local linear functions and corresponding basis functions. In contrast, the proposed method builds an expert network composed of local feed-forward neural networks (FNNs) and corresponding basis functions. This strategy improves the accuracy of local approximation through the capability of FNNs (lower bias compared to local linear models) and compensates for the increased variance by generating smaller experts and preserving continuity. The basic structure of the proposed framework is a mixture of experts whose experts are FNNs. The framework generates and adjusts a gating network so that the FNNs fit the local data within the receptive fields of the gating network. For this generation and adjustment, a type of adaptive resonance theory, Gaussian ART, is incorporated into the framework with modifications. Using the error of the FNNs to generate and adjust the receptive fields is a key idea of the framework. As a result, the proposed framework constructs a function approximator from data in an incremental way. Because the framework combines two different paradigms, its design is not straightforward; its implementation, however, is simple, and its effectiveness is shown through empirical evaluations on numerical function approximation, time-series prediction, and nonlinear plant modeling. | eng |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.subject | Incremental Learning | - |
dc.subject | Mixture of Experts | - |
dc.subject | Adaptive Resonance Theory | - |
dc.subject | Function Approximation | - |
dc.subject | Model Selection | - |
dc.subject | 점증식 학습 | - |
dc.subject | 혼합 신경망 | - |
dc.subject | 적응공명이론 | - |
dc.subject | 함수근사화 | - |
dc.subject | 모델선택 | - |
dc.title | Incremental learning of expert networks with adaptive resonance theory competition | - |
dc.title.alternative | 적응 공명 경쟁을 통한 혼합 신경망 구조에 대한 점증식 학습 알고리즘 | - |
dc.type | Thesis(Ph.D) | - |
dc.identifier.CNRN | 295403/325007 | - |
dc.description.department | 한국과학기술원 : 전기및전자공학전공 | - |
dc.identifier.uid | 020025083 | - |
dc.contributor.localauthor | Lee, Ju-Jang | - |
dc.contributor.localauthor | 이주장 | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
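The abstract above describes a mixture of experts in which Gaussian basis functions (the receptive fields of a gating network) weight the outputs of local feed-forward neural networks. As a minimal sketch of that gating idea only: the experts here are tiny fixed-weight networks and every name, width, and center is an illustrative assumption, not the dissertation's actual algorithm (which additionally grows and adjusts receptive fields online via Gaussian ART).

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyFNN:
    """A fixed, untrained one-hidden-layer feed-forward network
    standing in for a local expert (hypothetical stand-in)."""
    def __init__(self, hidden=8):
        self.w1 = rng.normal(size=(1, hidden))
        self.b1 = rng.normal(size=hidden)
        self.w2 = rng.normal(size=(hidden, 1))

    def predict(self, x):
        h = np.tanh(x @ self.w1 + self.b1)  # hidden activations
        return (h @ self.w2).ravel()        # one scalar output per sample

def gaussian_gate(x, centers, widths):
    """Normalized Gaussian receptive-field activations: each row is a
    convex weighting over the experts (the gating network's output)."""
    act = np.exp(-((x - centers) ** 2) / (2.0 * widths ** 2))  # (n, k)
    return act / act.sum(axis=1, keepdims=True)

# One Gaussian receptive field per expert (assumed placements).
centers = np.array([-1.0, 0.0, 1.0])
widths = np.full(3, 0.5)
experts = [TinyFNN() for _ in centers]

x = np.linspace(-1.5, 1.5, 5).reshape(-1, 1)
gates = gaussian_gate(x, centers, widths)               # (5, 3), rows sum to 1
preds = np.column_stack([e.predict(x) for e in experts])  # each expert's output
y = (gates * preds).sum(axis=1)                         # gated mixture output
```

In the dissertation's framework the receptive fields would not be fixed as they are here: each expert's approximation error drives the creation and adjustment of the Gaussians through a modified Gaussian ART, so the approximator grows incrementally from data.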