DC Field | Value | Language |
---|---|---|
dc.contributor.author | Shin, Yeonjong | ko |
dc.date.accessioned | 2022-07-06T02:00:16Z | - |
dc.date.available | 2022-07-06T02:00:16Z | - |
dc.date.created | 2022-07-06 | - |
dc.date.issued | 2022-01 | - |
dc.identifier.citation | ANALYSIS AND APPLICATIONS, v.20, no.01, pp.73 - 119 | - |
dc.identifier.issn | 0219-5305 | - |
dc.identifier.uri | http://hdl.handle.net/10203/297246 | - |
dc.description.abstract | Deep neural networks have been used in various machine learning applications and have achieved tremendous empirical success. However, training deep neural networks is a challenging task. Many alternatives to end-to-end back-propagation have been proposed. Layer-wise training is one of them: it trains a single layer at a time rather than all layers simultaneously. In this paper, we study layer-wise training using block coordinate gradient descent (BCGD) for deep linear networks. We establish a general convergence analysis of BCGD and derive the optimal learning rate, which yields the fastest decrease in the loss. We identify the effects of depth, width, and initialization. When an orthogonal-like initialization is employed, we show that the width of the intermediate layers plays no role in gradient-based training beyond a certain threshold. Moreover, we find that the use of deep networks can drastically accelerate convergence compared to a depth-1 network, even when the computational cost is taken into account. Numerical examples are provided to justify our theoretical findings and demonstrate the performance of layer-wise training by BCGD. | - |
dc.language | English | - |
dc.publisher | WORLD SCIENTIFIC PUBL CO PTE LTD | - |
dc.title | Effects of depth, width, and initialization: A convergence analysis of layer-wise training for deep linear neural networks | - |
dc.type | Article | - |
dc.identifier.wosid | 000747226200004 | - |
dc.identifier.scopusid | 2-s2.0-85122298458 | - |
dc.type.rims | ART | - |
dc.citation.volume | 20 | - |
dc.citation.issue | 01 | - |
dc.citation.beginningpage | 73 | - |
dc.citation.endingpage | 119 | - |
dc.citation.publicationname | ANALYSIS AND APPLICATIONS | - |
dc.identifier.doi | 10.1142/S0219530521500263 | - |
dc.contributor.localauthor | Shin, Yeonjong | - |
dc.description.isOpenAccess | N | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | Deep linear neural networks | - |
dc.subject.keywordAuthor | layer-wise training | - |
dc.subject.keywordAuthor | block coordinate gradient descent | - |
dc.subject.keywordPlus | GRADIENT DESCENT METHOD | - |
dc.subject.keywordPlus | KACZMARZ | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
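The abstract describes layer-wise training of a deep linear network by block coordinate gradient descent: each sweep updates one weight matrix while the others are held fixed. The following is a minimal NumPy sketch of that scheme, not the authors' code; the problem sizes, the fixed step size, the teacher-target setup, and the `ortho_like` initializer are illustrative assumptions (the paper instead derives an optimal learning rate and a precise orthogonal-like initialization).

```python
import numpy as np

# Illustrative sketch (not the paper's code) of layer-wise training by
# block coordinate gradient descent (BCGD) on a deep linear network
# f(x) = W_L ... W_1 x. All sizes and hyperparameters below are
# assumptions chosen for demonstration only.

rng = np.random.default_rng(0)

d_in, d_out, width, depth = 4, 3, 8, 3   # assumed toy dimensions
n = 50                                    # number of samples
X = rng.standard_normal((d_in, n))
W_teacher = rng.standard_normal((d_out, d_in))
Y = W_teacher @ X                         # linear teacher targets

def ortho_like(m, k):
    """Orthogonal-like init: a QR factor with orthonormal columns or rows."""
    a = rng.standard_normal((max(m, k), min(m, k)))
    q, _ = np.linalg.qr(a)
    return q if m >= k else q.T

dims = [d_in] + [width] * (depth - 1) + [d_out]
Ws = [ortho_like(dims[i + 1], dims[i]) for i in range(depth)]

def end_to_end(Ws):
    P = np.eye(d_in)
    for W in Ws:
        P = W @ P
    return P

def loss(Ws):
    return 0.5 * np.linalg.norm(end_to_end(Ws) @ X - Y) ** 2 / n

lr = 1e-2                 # fixed step; the paper derives an optimal rate
loss0 = loss(Ws)
for _ in range(200):
    for l in range(depth):                 # one block = one layer
        # Factor the network as f = A @ W_l @ B around the active layer.
        A = np.eye(dims[l + 1])
        for W in Ws[l + 1:]:
            A = W @ A
        B = np.eye(d_in)
        for W in Ws[:l]:
            B = W @ B
        Z = B @ X                          # features entering layer l
        R = A @ Ws[l] @ Z - Y              # residual of the full network
        Ws[l] -= lr * (A.T @ R @ Z.T) / n  # gradient step on this block only
```

Each inner iteration is an exact gradient step for the quadratic subproblem in `Ws[l]` alone, which is the block-coordinate structure the abstract refers to; the loss is monotone under a small enough step size.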