Effects of depth, width, and initialization: A convergence analysis of layer-wise training for deep linear neural networks

Cited 7 times in Web of Science; cited 0 times in Scopus.
Deep neural networks have been used in various machine learning applications and have achieved tremendous empirical success. However, training deep neural networks is a challenging task, and many alternatives to end-to-end back-propagation have been proposed. Layer-wise training is one of them: it trains a single layer at a time, rather than all layers simultaneously. In this paper, we study layer-wise training using block coordinate gradient descent (BCGD) for deep linear networks. We establish a general convergence analysis of BCGD and derive the optimal learning rate, which yields the fastest decrease in the loss. We identify the effects of depth, width, and initialization. When an orthogonal-like initialization is employed, we show that beyond a certain threshold the width of the intermediate layers plays no role in gradient-based training. Moreover, we find that deep networks can drastically accelerate convergence compared to a depth-one network, even when the computational cost is taken into account. Numerical examples are provided to justify our theoretical findings and demonstrate the performance of layer-wise training by BCGD.
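The layer-wise scheme described in the abstract can be sketched as follows. This is an illustrative NumPy sketch, not the paper's exact algorithm: the dimensions, the fixed step size, the cyclic update order, and the orthogonal-like initializer are all assumptions made for the example.

```python
import numpy as np

# Sketch of layer-wise training by block coordinate gradient descent (BCGD)
# for a deep linear network f(x) = W_L ... W_1 x with squared loss.
rng = np.random.default_rng(0)

n_in, n_out, width, depth, n_samples = 5, 3, 8, 4, 100
X = rng.standard_normal((n_in, n_samples))   # inputs, one sample per column
A = rng.standard_normal((n_out, n_in))       # target linear map
Y = A @ X                                    # regression targets

def orth_init(m, n):
    """Orthogonal-like init: orthonormal columns (m >= n) or rows (m < n)."""
    q, _ = np.linalg.qr(rng.standard_normal((max(m, n), min(m, n))))
    return q if m >= n else q.T

dims = [n_in] + [width] * (depth - 1) + [n_out]
W = [orth_init(dims[l + 1], dims[l]) for l in range(depth)]

def loss(W):
    P = X
    for Wl in W:
        P = Wl @ P
    return 0.5 * np.linalg.norm(P - Y) ** 2 / n_samples

init_loss = loss(W)
lr = 0.1  # assumed constant step size; the paper derives the optimal one
for sweep in range(200):
    for l in range(depth):
        # Freeze all other layers; form the products before and after layer l.
        before = X
        for Wl in W[:l]:
            before = Wl @ before
        after = np.eye(dims[-1])
        for Wl in reversed(W[l + 1:]):
            after = after @ Wl
        residual = after @ W[l] @ before - Y
        grad = after.T @ residual @ before.T / n_samples
        W[l] = W[l] - lr * grad          # gradient step on this block only

final_loss = loss(W)
```

With this initialization the intermediate width (8 here) exceeds both the input and output dimensions, which is the regime where the abstract states width plays no further role; the loss decreases monotonically across sweeps.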
Publisher
WORLD SCIENTIFIC PUBL CO PTE LTD
Issue Date
2022-01
Language
English
Article Type
Article
Citation

ANALYSIS AND APPLICATIONS, v.20, no.01, pp.73 - 119

ISSN
0219-5305
DOI
10.1142/S0219530521500263
URI
http://hdl.handle.net/10203/297246
Appears in Collection
MA-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.