DC Field | Value | Language |
---|---|---|
dc.contributor.author | Liu, Yufeng | ko |
dc.contributor.author | Zhang, Hao Helen | ko |
dc.contributor.author | Park, Cheolwoo | ko |
dc.contributor.author | Ahn, Jeongyoun | ko |
dc.date.accessioned | 2021-06-02T02:50:43Z | - |
dc.date.available | 2021-06-02T02:50:43Z | - |
dc.date.created | 2021-06-02 | - |
dc.date.issued | 2007-08 | - |
dc.identifier.citation | COMPUTATIONAL STATISTICS & DATA ANALYSIS, v.51, no.12, pp.6380 - 6394 | - |
dc.identifier.issn | 0167-9473 | - |
dc.identifier.uri | http://hdl.handle.net/10203/285435 | - |
dc.description.abstract | The standard support vector machine (SVM) minimizes the hinge loss function subject to the L-2 penalty or the roughness penalty. Recently, the L-1 SVM was suggested for variable selection by producing sparse solutions [Bradley, P., Mangasarian, O., 1998. Feature selection via concave minimization and support vector machines. In: Shavlik, J. (Ed.), ICML'98. Morgan Kaufmann, Los Altos, CA; Zhu, J., Hastie, T., Rosset, S., Tibshirani, R., 2003. 1-norm support vector machines. Neural Inform. Process. Systems 16]. These learning methods are non-adaptive since their penalty forms are pre-determined before looking at the data, and they often perform well only in certain situations. For instance, the L-2 SVM generally works well except when there are too many noise inputs, while the L-1 SVM is preferred in the presence of many noise variables. In this article we propose and explore an adaptive learning procedure called the L-q SVM, where the best q > 0 is automatically chosen by the data. Both two- and multi-class classification problems are considered. We show that the new adaptive approach combines the benefits of a class of non-adaptive procedures and gives the best performance of this class across a variety of situations. Moreover, we observe that the proposed L-q penalty is more robust to noise variables than the L-1 and L-2 penalties. An iterative algorithm is suggested to solve the L-q SVM efficiently. Simulations and real data applications support the effectiveness of the proposed procedure. (C) 2007 Elsevier B.V. All rights reserved. | - |
dc.language | English | - |
dc.publisher | ELSEVIER | - |
dc.title | Support vector machines with adaptive L-q penalty | - |
dc.type | Article | - |
dc.identifier.wosid | 000249316000071 | - |
dc.identifier.scopusid | 2-s2.0-34547234238 | - |
dc.type.rims | ART | - |
dc.citation.volume | 51 | - |
dc.citation.issue | 12 | - |
dc.citation.beginningpage | 6380 | - |
dc.citation.endingpage | 6394 | - |
dc.citation.publicationname | COMPUTATIONAL STATISTICS & DATA ANALYSIS | - |
dc.identifier.doi | 10.1016/j.csda.2007.02.006 | - |
dc.contributor.localauthor | Park, Cheolwoo | - |
dc.contributor.localauthor | Ahn, Jeongyoun | - |
dc.contributor.nonIdAuthor | Liu, Yufeng | - |
dc.contributor.nonIdAuthor | Zhang, Hao Helen | - |
dc.description.isOpenAccess | N | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | adaptive penalty | - |
dc.subject.keywordAuthor | classification | - |
dc.subject.keywordAuthor | shrinkage | - |
dc.subject.keywordAuthor | support vector machine | - |
dc.subject.keywordAuthor | variable selection | - |
dc.subject.keywordPlus | VARIABLE SELECTION | - |
dc.subject.keywordPlus | REGULARIZATION | - |
dc.subject.keywordPlus | CLASSIFICATION | - |
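The abstract describes minimizing the hinge loss plus an L-q penalty, with q > 0 selected adaptively from the data. The following is a minimal sketch of that idea, not the paper's own iterative algorithm: it uses plain (sub)gradient descent on the penalized hinge loss, with a small `eps` smoothing of |w|^q near zero (needed when q < 1), and picks q by validation accuracy over a grid. All function names and parameter defaults here are illustrative assumptions.

```python
import numpy as np

def lq_svm_fit(X, y, q=1.0, lam=0.1, lr=0.01, iters=2000, eps=1e-8):
    """Sketch: minimize mean hinge loss + lam * sum_j |w_j|^q by
    (sub)gradient descent. Labels y must be in {-1, +1}. The eps term
    smooths the penalty gradient near w_j = 0 (assumption, not from the paper)."""
    n, p = X.shape
    w = np.zeros(p)
    b = 0.0
    for _ in range(iters):
        margins = y * (X @ w + b)
        active = margins < 1                      # samples violating the margin
        grad_w = -(X[active] * y[active, None]).sum(axis=0) / n
        grad_b = -y[active].sum() / n
        # subgradient of lam * |w|^q, smoothed so q < 1 stays finite
        grad_w += lam * q * np.sign(w) * (np.abs(w) + eps) ** (q - 1)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def choose_q(X_tr, y_tr, X_val, y_val, qs=(0.5, 1.0, 1.5, 2.0), lam=0.1):
    """Adaptive step: fit one L-q SVM per candidate q and keep the q with
    the best validation accuracy (grid search stand-in for the data-driven choice)."""
    best = None
    for q in qs:
        w, b = lq_svm_fit(X_tr, y_tr, q=q, lam=lam)
        acc = np.mean(np.sign(X_val @ w + b) == y_val)
        if best is None or acc > best[0]:
            best = (acc, q, w, b)
    return best  # (accuracy, chosen q, weights, intercept)
```

The grid search mirrors the adaptive flavor of the method (q chosen by data rather than fixed in advance); the paper's efficient iterative solver and multi-class extension are not reproduced here.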