Support vector machines with adaptive L-q penalty

Cited 55 times in Web of Science; cited 0 times in Scopus
  • Hits: 168
  • Downloads: 0
DC Field | Value | Language
dc.contributor.author | Liu, Yufeng | ko
dc.contributor.author | Zhang, Hao Helen | ko
dc.contributor.author | Park, Cheolwoo | ko
dc.contributor.author | Ahn, Jeongyoun | ko
dc.date.accessioned | 2021-06-02T02:50:43Z
dc.date.available | 2021-06-02T02:50:43Z
dc.date.created | 2021-06-02
dc.date.issued | 2007-08
dc.identifier.citation | COMPUTATIONAL STATISTICS & DATA ANALYSIS, v.51, no.12, pp.6380 - 6394
dc.identifier.issn | 0167-9473
dc.identifier.uri | http://hdl.handle.net/10203/285435
dc.description.abstract | The standard support vector machine (SVM) minimizes the hinge loss function subject to the L-2 penalty or the roughness penalty. Recently, the L-1 SVM was suggested for variable selection by producing sparse solutions [Bradley, P., Mangasarian, O., 1998. Feature selection via concave minimization and support vector machines. In: Shavlik, J. (Ed.), ICML'98. Morgan Kaufmann, Los Altos, CA; Zhu, J., Hastie, T., Rosset, S., Tibshirani, R., 2003. 1-norm support vector machines. Neural Inform. Process. Systems 16]. These learning methods are non-adaptive since their penalty forms are pre-determined before looking at the data, and they often perform well only in certain situations. For instance, the L-2 SVM generally works well except when there are too many noise inputs, while the L-1 SVM is preferred in the presence of many noise variables. In this article we propose and explore an adaptive learning procedure called the L-q SVM, where the best q > 0 is automatically chosen by the data. Both two- and multi-class classification problems are considered. We show that the new adaptive approach combines the benefits of a class of non-adaptive procedures and gives the best performance of this class across a variety of situations. Moreover, we observe that the proposed L-q penalty is more robust to noise variables than the L-1 and L-2 penalties. An iterative algorithm is suggested to solve the L-q SVM efficiently. Simulations and real data applications support the effectiveness of the proposed procedure. (C) 2007 Elsevier B.V. All rights reserved.
dc.language | English
dc.publisher | ELSEVIER
dc.title | Support vector machines with adaptive L-q penalty
dc.type | Article
dc.identifier.wosid | 000249316000071
dc.identifier.scopusid | 2-s2.0-34547234238
dc.type.rims | ART
dc.citation.volume | 51
dc.citation.issue | 12
dc.citation.beginningpage | 6380
dc.citation.endingpage | 6394
dc.citation.publicationname | COMPUTATIONAL STATISTICS & DATA ANALYSIS
dc.identifier.doi | 10.1016/j.csda.2007.02.006
dc.contributor.localauthor | Park, Cheolwoo
dc.contributor.localauthor | Ahn, Jeongyoun
dc.contributor.nonIdAuthor | Liu, Yufeng
dc.contributor.nonIdAuthor | Zhang, Hao Helen
dc.description.isOpenAccess | N
dc.type.journalArticle | Article
dc.subject.keywordAuthor | adaptive penalty
dc.subject.keywordAuthor | classification
dc.subject.keywordAuthor | shrinkage
dc.subject.keywordAuthor | support vector machine
dc.subject.keywordAuthor | variable selection
dc.subject.keywordPlus | VARIABLE SELECTION
dc.subject.keywordPlus | REGULARIZATION
dc.subject.keywordPlus | CLASSIFICATION
Appears in Collection
MA-Journal Papers (Journal Papers); IE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
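The abstract above describes the core idea: minimize the hinge loss plus an L-q penalty on the weights, with the exponent q > 0 selected by the data. The paper's own iterative algorithm is not reproduced in this record; the sketch below only illustrates the concept under stated assumptions, using plain subgradient descent and a small validation grid over q. The function names, the smoothing constant `eps` (added so the gradient of |w|^q stays bounded for q < 1), the grid of q values, and the synthetic data are all illustrative choices, not the authors' method.

```python
import numpy as np

def lq_svm_fit(X, y, q, lam, lr=0.01, n_iter=2000, eps=1e-3):
    """Linear SVM with an L-q weight penalty, fit by subgradient descent.

    Minimizes  mean(hinge(y_i * (x_i . w + b))) + lam * sum_j |w_j|^q.
    For q < 1 the penalty is non-convex and non-smooth at 0, so its
    gradient is smoothed with eps (an ad hoc choice for this sketch).
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(n_iter):
        margins = y * (X @ w + b)
        viol = margins < 1                      # points violating the margin
        grad_w = -(y[viol, None] * X[viol]).sum(axis=0) / n
        grad_b = -y[viol].sum() / n
        # (sub)gradient of lam * |w|^q, smoothed near zero
        grad_w += lam * q * np.sign(w) * (np.abs(w) + eps) ** (q - 1)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(X, w, b):
    return np.where(X @ w + b >= 0, 1, -1)

def select_q(X_tr, y_tr, X_val, y_val, qs=(0.5, 1.0, 2.0), lam=0.05):
    """Choose q from a small grid by validation accuracy (illustrative)."""
    best = None
    for q in qs:
        w, b = lq_svm_fit(X_tr, y_tr, q, lam)
        acc = (predict(X_val, w, b) == y_val).mean()
        if best is None or acc > best[0]:
            best = (acc, q, w, b)
    return best

# Toy demo: 2 informative dimensions plus 8 pure-noise dimensions,
# echoing the abstract's noise-variable setting.
rng = np.random.default_rng(0)
n, d_noise = 300, 8
X_pos = rng.normal(loc=2.0, scale=1.0, size=(n // 2, 2))
X_neg = rng.normal(loc=-2.0, scale=1.0, size=(n // 2, 2))
X = np.hstack([np.vstack([X_pos, X_neg]), rng.normal(size=(n, d_noise))])
y = np.concatenate([np.ones(n // 2), -np.ones(n // 2)])
perm = rng.permutation(n)
X, y = X[perm], y[perm]
X_tr, y_tr, X_val, y_val = X[:200], y[:200], X[200:], y[200:]

acc, q_best, w, b = select_q(X_tr, y_tr, X_val, y_val)
print(f"chosen q = {q_best}, validation accuracy = {acc:.2f}")
```

In this sketch, values of q below 1 shrink small (noise) coefficients toward zero more aggressively than the L-2 penalty, which is the behavior the abstract attributes to the adaptive L-q penalty.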