Minimum Distance Lasso for robust high-dimensional regression

Cited 13 times in Web of Science; cited 0 times in Scopus
  • Hits : 662
  • Downloads : 527
DC Field | Value | Language
dc.contributor.author | Lozano, Aurélie C. | ko
dc.contributor.author | Meinshausen, Nicolai | ko
dc.contributor.author | Yang, Eunho | ko
dc.date.accessioned | 2016-09-06T07:42:10Z | -
dc.date.available | 2016-09-06T07:42:10Z | -
dc.date.created | 2016-07-29 | -
dc.date.issued | 2016-05 | -
dc.identifier.citation | ELECTRONIC JOURNAL OF STATISTICS, v.10, no.1, pp.1296 - 1340 | -
dc.identifier.issn | 1935-7524 | -
dc.identifier.uri | http://hdl.handle.net/10203/212349 | -
dc.description.abstract | We propose a minimum distance estimation method for robust regression in sparse high-dimensional settings. Likelihood-based estimators lack resilience against outliers and model misspecification, a critical issue when dealing with high-dimensional noisy data. Our method, Minimum Distance Lasso (MD-Lasso), combines minimum distance functionals customarily used in nonparametric estimation for robustness, with ℓ1-regularization. MD-Lasso is governed by a scaling parameter capping the influence of outliers: the loss is locally convex and close to quadratic for small squared residuals, and flattens for squared residuals larger than the scaling parameter. As the parameter approaches infinity the estimator becomes equivalent to least-squares Lasso. MD-Lasso is able to maintain the robustness of minimum distance functionals in sparse high-dimensional regression. The estimator achieves maximum breakdown point and enjoys consistency with fast convergence rates under mild conditions on the model error distribution. These hold for any solution in a convexity region around the true parameter and in certain cases for every solution. We provide an alternative set of results that do not require the solutions to lie within the convexity region but where the ℓ2-norm of the feasible solutions is constrained within a safety radius. Thanks to this constraint, a first-order optimization method is able to produce local optima that are consistent. A connection is established with re-weighted least-squares that intuitively explains MD-Lasso robustness. The merits of our method are demonstrated through simulation and eQTL analysis. | -
dc.language | English | -
dc.publisher | INST MATHEMATICAL STATISTICS | -
dc.subject | VARIABLE SELECTION | -
dc.subject | SHRINKAGE | -
dc.subject | MODELS | -
dc.subject | RISK | -
dc.title | Minimum Distance Lasso for robust high-dimensional regression | -
dc.type | Article | -
dc.identifier.wosid | 000389914600046 | -
dc.identifier.scopusid | 2-s2.0-84971411266 | -
dc.type.rims | ART | -
dc.citation.volume | 10 | -
dc.citation.issue | 1 | -
dc.citation.beginningpage | 1296 | -
dc.citation.endingpage | 1340 | -
dc.citation.publicationname | ELECTRONIC JOURNAL OF STATISTICS | -
dc.identifier.doi | 10.1214/16-EJS1136 | -
dc.contributor.localauthor | Yang, Eunho | -
dc.contributor.nonIdAuthor | Lozano, Aurélie C. | -
dc.contributor.nonIdAuthor | Meinshausen, Nicolai | -
dc.description.isOpenAccess | Y | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Lasso | -
dc.subject.keywordAuthor | robust estimation | -
dc.subject.keywordAuthor | high-dimensional variable selection | -
dc.subject.keywordAuthor | sparse learning | -
dc.subject.keywordPlus | VARIABLE SELECTION | -
dc.subject.keywordPlus | SHRINKAGE | -
dc.subject.keywordPlus | MODELS | -
dc.subject.keywordPlus | RISK | -
This item is cited by 13 other documents in Web of Science.
