Provable Memorization via Deep Neural Networks using Sub-linear Parameters

DC Field | Value | Language
dc.contributor.author | Park, Sejun | ko
dc.contributor.author | Lee, Jaeho | ko
dc.contributor.author | Yun, Chulhee | ko
dc.contributor.author | Shin, Jinwoo | ko
dc.date.accessioned | 2022-01-14T06:55:40Z | -
dc.date.available | 2022-01-14T06:55:40Z | -
dc.date.created | 2021-12-02 | -
dc.date.issued | 2021-08 | -
dc.identifier.citation | Conference on Learning Theory (COLT) 2021 | -
dc.identifier.uri | http://hdl.handle.net/10203/291826 | -
dc.description.abstract | It is known that 𝑂(𝑁) parameters are sufficient for neural networks to memorize arbitrary 𝑁 input-label pairs. By exploiting depth, we show that 𝑂(𝑁2/3) parameters suffice to memorize 𝑁 pairs, under a mild condition on the separation of input points. In particular, deeper networks (even with width 3) are shown to memorize more pairs than shallow networks, which also agrees with the recent line of works on the benefits of depth for function approximation. We also provide empirical results that support our theoretical findings. | -
dc.language | English | -
dc.publisher | JMLR JOURNAL MACHINE LEARNING RESEARCH | -
dc.title | Provable Memorization via Deep Neural Networks using Sub-linear Parameters | -
dc.type | Conference | -
dc.type.rims | CONF | -
dc.citation.publicationname | Conference on Learning Theory (COLT) 2021 | -
dc.identifier.conferencecountry | US | -
dc.identifier.conferencelocation | Online | -
dc.contributor.localauthor | Shin, Jinwoo | -
dc.contributor.nonIdAuthor | Park, Sejun | -
dc.contributor.nonIdAuthor | Yun, Chulhee | -
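The abstract's headline claim is a parameter-count statement: a constant-width (width-3) network whose depth grows like 𝑁^(2/3) has only 𝑂(𝑁^(2/3)) parameters, far fewer than the classical 𝑂(𝑁) bound. A back-of-envelope sketch of that arithmetic (this is only an illustration of the scaling, not the paper's memorization construction; the input dimension and 𝑁 below are arbitrary choices):

```python
# Back-of-envelope parameter count for a fully connected network of
# constant width w and depth L (L hidden layers), mapping R^d -> R.
# Illustrates the O(N^(2/3)) scaling in the abstract only; it does NOT
# reproduce the paper's explicit construction.

def mlp_param_count(d, w, L):
    """Total weights and biases of a width-w, depth-L MLP from R^d to R."""
    params = d * w + w               # input layer: weights + biases
    params += (L - 1) * (w * w + w)  # hidden-to-hidden layers
    params += w + 1                  # scalar output layer
    return params

N = 10**6                      # number of input-label pairs to memorize
d = 10                         # input dimension (hypothetical choice)
depth = round(N ** (2 / 3))    # depth growing like N^(2/3), here 10^4

p = mlp_param_count(d, w=3, L=depth)
print(p, p / N)                # ~12 * N^(2/3) parameters, well below N
```

With width 3, the count is roughly 12·𝐿, so depth 𝐿 ≍ 𝑁^(2/3) yields about 12·𝑁^(2/3) parameters, a vanishing fraction of 𝑁 as 𝑁 grows.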
Appears in Collection
RIMS Conference Papers
Files in This Item
There are no files associated with this item.
