Effects of depth, width, and initialization: A convergence analysis of layer-wise training for deep linear neural networks

Cited 7 times in Web of Science · Cited 0 times in Scopus
  • Hits: 155
  • Downloads: 0
DC Field | Value | Language
dc.contributor.author | Shin, Yeonjong | ko
dc.date.accessioned | 2022-07-06T02:00:16Z | -
dc.date.available | 2022-07-06T02:00:16Z | -
dc.date.created | 2022-07-06 | -
dc.date.issued | 2022-01 | -
dc.identifier.citation | ANALYSIS AND APPLICATIONS, v.20, no.01, pp.73 - 119 | -
dc.identifier.issn | 0219-5305 | -
dc.identifier.uri | http://hdl.handle.net/10203/297246 | -
dc.description.abstract | Deep neural networks have been used in various machine learning applications and have achieved tremendous empirical successes. However, training deep neural networks is a challenging task. Many alternatives to end-to-end back-propagation have been proposed. Layer-wise training is one of them: it trains a single layer at a time, rather than all layers simultaneously. In this paper, we study layer-wise training using block coordinate gradient descent (BCGD) for deep linear networks. We establish a general convergence analysis of BCGD and derive the optimal learning rate, which yields the fastest decrease in the loss. We identify the effects of depth, width, and initialization. When an orthogonal-like initialization is employed, we show that, beyond a certain threshold, the width of the intermediate layers plays no role in gradient-based training. Moreover, we find that deep networks can drastically accelerate convergence compared to a depth-1 network, even when the computational cost is taken into account. Numerical examples justify our theoretical findings and demonstrate the performance of layer-wise training by BCGD. (An illustrative sketch of layer-wise BCGD training follows this table.) | -
dc.language | English | -
dc.publisher | WORLD SCIENTIFIC PUBL CO PTE LTD | -
dc.title | Effects of depth, width, and initialization: A convergence analysis of layer-wise training for deep linear neural networks | -
dc.type | Article | -
dc.identifier.wosid | 000747226200004 | -
dc.identifier.scopusid | 2-s2.0-85122298458 | -
dc.type.rims | ART | -
dc.citation.volume | 20 | -
dc.citation.issue | 01 | -
dc.citation.beginningpage | 73 | -
dc.citation.endingpage | 119 | -
dc.citation.publicationname | ANALYSIS AND APPLICATIONS | -
dc.identifier.doi | 10.1142/S0219530521500263 | -
dc.contributor.localauthor | Shin, Yeonjong | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Deep linear neural networks | -
dc.subject.keywordAuthor | layer-wise training | -
dc.subject.keywordAuthor | block coordinate gradient descent | -
dc.subject.keywordPlus | GRADIENT DESCENT METHOD | -
dc.subject.keywordPlus | KACZMARZ | -
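
The abstract describes layer-wise training of a deep linear network, f(x) = W_L ⋯ W_1 x, in which block coordinate gradient descent (BCGD) updates one weight matrix at a time while all other layers are held fixed. The NumPy sketch below is a minimal illustration of that scheme, not the paper's code: the layer sizes, target map, step size lr, and the QR-based orth_init initializer are assumed for demonstration, and the paper's optimal learning rate and width threshold are analytical results that are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes (assumptions, not the paper's settings).
d_in, d_out, width, depth = 10, 5, 20, 4
n = 200                                   # number of training samples
X = rng.standard_normal((d_in, n))
A_true = rng.standard_normal((d_out, d_in))
Y = A_true @ X                            # targets from an underlying linear map

def orth_init(rows, cols):
    """Orthogonal-like initialization: orthonormal rows or columns via QR."""
    M = rng.standard_normal((max(rows, cols), min(rows, cols)))
    Q, _ = np.linalg.qr(M)                # Q has orthonormal columns
    return Q if rows >= cols else Q.T

dims = [d_in] + [width] * (depth - 1) + [d_out]
W = [orth_init(dims[l + 1], dims[l]) for l in range(depth)]

lr = 0.1                                  # illustrative step size, not the optimal rate

def loss(W):
    P = np.eye(d_in)
    for Wl in W:
        P = Wl @ P                        # end-to-end map W_L ... W_1
    return 0.5 * np.linalg.norm(P @ X - Y) ** 2 / n

for sweep in range(500):
    for l in range(depth):                # BCGD: update one layer (block) at a time
        below = np.eye(d_in)              # product of the layers below layer l
        for Wl in W[:l]:
            below = Wl @ below
        above = np.eye(d_out)             # product of the layers above layer l
        for Wl in reversed(W[l + 1:]):
            above = above @ Wl
        residual = above @ W[l] @ below @ X - Y
        grad = above.T @ residual @ (below @ X).T / n
        W[l] = W[l] - lr * grad           # gradient step on layer l only
    if sweep % 100 == 0:
        print(f"sweep {sweep:4d}   loss {loss(W):.3e}")
```

Each inner iteration is a plain gradient step on the squared loss restricted to one layer, so the loss is non-increasing for a sufficiently small step size; the paper's analysis characterizes the step size that gives the fastest decrease and how it depends on depth, width, and initialization.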
Appears in Collection
MA-Journal Papers(저널논문)
Files in This Item
There are no files associated with this item.
This item is cited by 7 other documents in Web of Science.
