Trimming the $l_1$ regularizer: statistical analysis, optimization, and applications to deep learning

DC Field | Value | Language
dc.contributor.advisor | Yang, Eunho | -
dc.contributor.advisor | 양은호 | -
dc.contributor.author | Yun, Jihun | -
dc.date.accessioned | 2021-05-13T19:32:25Z | -
dc.date.available | 2021-05-13T19:32:25Z | -
dc.date.issued | 2020 | -
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=911002&flag=dissertation | en_US
dc.identifier.uri | http://hdl.handle.net/10203/284671 | -
dc.description | Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST): School of Computing, 2020.2, [iii, 27 p.] | -
dc.description.abstract | We study high-dimensional estimators with the trimmed $l_1$ penalty, which leaves the $h$ largest parameter entries penalty-free. While optimization techniques for this non-convex penalty have been studied, its statistical properties have not yet been analyzed. We present the first statistical analyses for M-estimation and characterize the support recovery, $l_{\infty}$, and $l_2$ error of the trimmed $l_1$ estimates as a function of the trimming parameter $h$. Our results show different regimes depending on how $h$ compares to the true support size. Our second contribution is a new algorithm for the trimmed regularization problem, which has the same theoretical convergence rate as difference-of-convex (DC) algorithms but is faster in practice and finds lower objective values. Empirical evaluation of $l_1$ trimming for sparse linear regression and graphical model estimation indicates that trimmed $l_1$ can outperform vanilla $l_1$ and non-convex alternatives. Our last contribution is to show that the trimmed penalty is beneficial beyond M-estimation, yielding promising results for two deep learning tasks: input structure recovery and network sparsification. (A minimal code sketch of the trimmed penalty appears below this record.) | -
dc.language | eng | -
dc.publisher | Korea Advanced Institute of Science and Technology (KAIST) | -
dc.subject | High-dimensional statistics; Learning with sparsity; Regularization | -
dc.subject | Optimization; Deep learning | -
dc.subject | High-dimensional data analysis; Sparse learning; Regularization; Optimization; Deep learning | -
dc.title | Trimming the $l_1$ regularizer | -
dc.title.alternative | Trimmed $l_1$ regularization: statistical analysis, optimization, and applications to deep-learning problems | -
dc.type | Thesis (Master) | -
dc.identifier.CNRN | 325007 | -
dc.description.department | Korea Advanced Institute of Science and Technology (KAIST): School of Computing | -
dc.contributor.alternativeauthor | 윤지훈 | -
dc.title.subtitle | statistical analysis, optimization, and applications to deep learning | -
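
The abstract above defines the trimmed $l_1$ penalty informally: the $h$ largest-magnitude parameters escape the penalty, and everything else is penalized as in ordinary $l_1$. The following is a minimal sketch of how such a penalty can be computed, assuming a NumPy-based setup; the function name `trimmed_l1` and the example values are illustrative and not taken from the thesis.

```python
import numpy as np

def trimmed_l1(theta, h):
    """Trimmed l_1 penalty: the l_1 norm of all but the h
    largest-magnitude entries of theta, leaving those h entries
    penalty-free (h is the trimming parameter).
    Illustrative sketch; not code from the thesis."""
    mags = np.sort(np.abs(theta))[::-1]  # magnitudes in descending order
    return mags[h:].sum()                # penalize everything past the top h

# Example: with h = 2 the two largest magnitudes (5 and 3) are exempt,
# so the penalty is |0.25| + |-0.5| = 0.75.
theta = np.array([5.0, -3.0, 0.25, -0.5])
print(trimmed_l1(theta, h=2))  # 0.75
```

Setting h = 0 recovers the vanilla $l_1$ norm, while choosing h at or above the true support size leaves every relevant coefficient unpenalized; the abstract's statistical analysis characterizes how the estimator's behavior changes between these regimes.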
Appears in Collection
CS-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
