Complexity reduction and performance improvement of multiclass pattern classification

Multiclass pattern classification consists of a dimensionality reduction stage, which reduces the dimension of the sample data, and a classification stage, which classifies new samples based on the training samples. The two representative dimensionality reduction techniques are feature selection and feature extraction. Feature selection reduces the number of features by removing irrelevant, redundant, or noisy features and retaining the important original ones. Conventional feature selection methods evaluate different feature subsets using some index and select the best among them. Linear feature extraction, in contrast, is a linear mapping of the data from the original space to a new subspace of smaller or equal dimension. Although feature selection preserves the original physical meaning of the selected features, an exhaustive comparison incurs high time complexity when a large number of features must be selected. Therefore, feature extraction is more widely used in supervised learning than feature selection. Feature extraction methods divide into parametric methods, which use statistical information of the sample data, and nonparametric methods, which use Euclidean distance information between samples. Parametric methods are easy to implement and faster than nonparametric methods, but they are not adequate for high-dimensional data. NWFE, a representative nonparametric method, resolves the disadvantages of parametric methods, but it requires too much computation time when a large number of samples is used to obtain high accuracy. To resolve this problem of NWFE, we suggest two methods in this dissertation. First, we save computation time by eliminating redundant or less important features from the sample data. Strictly speaking, this method is a kind of feature selection, but it takes far less time compared to previous feature selection methods. Second, by using only the samples in the boundary region when computing the transformation matrix of NWFE, we can ...
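To make the parametric-vs-nonparametric contrast concrete, the following is a minimal sketch of a *parametric* linear feature extraction (Fisher LDA-style): the transformation matrix is built from class means and scatter matrices, i.e. from statistical information of the samples, rather than from pairwise distance weights as in NWFE. This is an illustration of the parametric family the abstract describes, not the dissertation's method; the function name and toy data are our own.

```python
import numpy as np

def lda_transform(X, y, n_components):
    """Parametric linear feature extraction (Fisher LDA sketch).

    Builds a transformation matrix from class means and scatter
    matrices, then maps X from the original d-dimensional space
    to an n_components-dimensional subspace.
    """
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)
    # Directions maximizing between- vs within-class scatter:
    # eigenvectors of pinv(Sw) @ Sb with the largest eigenvalues.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]
    W = eigvecs[:, order[:n_components]].real  # transformation matrix
    return X @ W

# Toy example: two well-separated Gaussian classes in 3-D, reduced to 1-D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 3)), rng.normal(3, 1, (20, 3))])
y = np.array([0] * 20 + [1] * 20)
Z = lda_transform(X, y, 1)
print(Z.shape)  # (40, 1)
```

Because the projection depends only on the means and scatter matrices, it is fast to compute; the trade-off, as the abstract notes, is that such parametric statistics become unreliable in high-dimensional settings, which is what motivates nonparametric methods like NWFE.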
Advisors
Lim, Jong-Tae (임종태)
Description
KAIST: Department of Electrical Engineering
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2007
Identifier
264967/325007  / 020053239
Language
eng
Description

Master's thesis - KAIST: Department of Electrical Engineering, 2007.2, [x, 90 p.]

Keywords

NWFE; Pattern classification; Covariance estimator

URI
http://hdl.handle.net/10203/38459
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=264967&flag=dissertation
Appears in Collection
EE-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
