Expressive power of deep and narrow neural networks

The expressive power of neural networks is critical for understanding the empirical success of deep learning. In this thesis, we study the expressive power of deep and narrow networks as a dual of classical results for shallow and wide networks. First, we study the universal approximation property of deep and narrow networks. In particular, we provide the first exact characterization of the minimum width of ReLU networks required for universal approximation. Second, we study the memorization power of deep and narrow networks. In particular, we aim to characterize the number of parameters necessary for memorizing $N$ data points. We show that $O(N^{2/3})$ parameters are sufficient for deep and narrow ReLU networks to memorize $N$ data points, whereas $\Omega(N)$ parameters are necessary for their shallow and wide counterparts. We believe that our results provide new insight into the theory of the expressive power of deep and narrow networks.
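For concreteness, the following is a minimal PyTorch sketch (not from the thesis) of the kind of "deep and narrow" ReLU network the abstract refers to: many hidden layers, each of small fixed width. The choice of width as max(d_x + 1, d_y) is an assumption made here for illustration; the thesis itself provides the exact characterization of the critical width.

import torch
import torch.nn as nn

def deep_narrow_relu(d_x: int, d_y: int, depth: int) -> nn.Sequential:
    # Assumed illustrative width; the exact minimum width is characterized
    # in the thesis, not derived here.
    width = max(d_x + 1, d_y)
    layers = [nn.Linear(d_x, width), nn.ReLU()]
    for _ in range(depth - 1):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, d_y))
    return nn.Sequential(*layers)

# A network that is narrow (width 4 for d_x=3, d_y=2) but 50 layers deep.
net = deep_narrow_relu(d_x=3, d_y=2, depth=50)
print(sum(p.numel() for p in net.parameters()))  # total parameter count

The point of the sketch is the shape of the trade-off: expressive power is obtained by growing depth while keeping width fixed near its minimum, in contrast to the shallow-and-wide regime.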
Advisors
Shin, Jinwoo (신진우)
Description
Korea Advanced Institute of Science and Technology: School of Electrical Engineering
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2020
Identifier
325007
Language
eng
Description

Doctoral thesis - Korea Advanced Institute of Science and Technology: School of Electrical Engineering, 2020.8, [v, 55 p.]

Keywords

Deep and narrow neural networks; Memorization; Universal approximation

URI
http://hdl.handle.net/10203/284438
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=924526&flag=dissertation
Appears in Collection
EE-Theses_Ph.D. (Doctoral Theses)
Files in This Item
There are no files associated with this item.
