A Parareal Architecture for Very Deep Convolutional Neural Networks

DC Field: Value
dc.contributor.author: Lee, Chang-Ock
dc.contributor.author: Lee, YoungKyu
dc.contributor.author: Park, Jongho
dc.date.accessioned: 2023-09-14T08:01:55Z
dc.date.available: 2023-09-14T08:01:55Z
dc.date.created: 2023-09-14
dc.date.issued: 2020-12
dc.identifier.citation: 26th International Conference on Domain Decomposition Methods, 2020, pp. 407-415
dc.identifier.uri: http://hdl.handle.net/10203/312632
dc.description.abstract: Due to the large number of layers in deep neural networks (DNNs) [11, 12], DNN training is time-consuming, and there is growing demand to reduce training time. Recently, multi-GPU parallel computing has become an important approach to accelerating DNN training [2, 6]. In particular, Günther et al. [6] regarded the layer structure of ResNet [8] as a forward Euler discretization of a specific ODE and applied a nonlinear in-time multigrid method [3] by treating the learning process of the network as an optimal control problem.
dc.language: English
dc.publisher: Springer Science and Business Media Deutschland GmbH
dc.title: A Parareal Architecture for Very Deep Convolutional Neural Networks
dc.type: Conference
dc.identifier.scopusid: 2-s2.0-85151164071
dc.type.rims: CONF
dc.citation.beginningpage: 407
dc.citation.endingpage: 415
dc.citation.publicationname: 26th International Conference on Domain Decomposition Methods, 2020
dc.identifier.conferencecountry: CC
dc.identifier.conferencelocation: Virtual
dc.identifier.doi: 10.1007/978-3-030-95025-5_43
dc.contributor.localauthor: Lee, Chang-Ock
dc.contributor.localauthor: Park, Jongho
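The correspondence described in the abstract — a ResNet identity-skip block acting as one forward Euler step of an ODE — can be sketched as follows. This is a minimal illustrative example, not code from the paper; the toy residual branch `f` and the step size are assumptions made only to show the structural identity x_{n+1} = x_n + h·f(x_n) with h = 1.

```python
import numpy as np

def f(x, W):
    # Toy residual branch: a nonlinear transform standing in for conv + ReLU.
    return np.maximum(W @ x, 0.0)

def resnet_block(x, W):
    # ResNet identity-skip block: x_{n+1} = x_n + f(x_n).
    return x + f(x, W)

def forward_euler_step(x, W, h):
    # Forward Euler step for dx/dt = f(x): x_{n+1} = x_n + h * f(x_n).
    return x + h * f(x, W)

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
W = rng.standard_normal((4, 4))

# With step size h = 1, one Euler step coincides with one ResNet block,
# which is the observation underlying the ODE view of ResNet training.
assert np.allclose(resnet_block(x, W), forward_euler_step(x, W, 1.0))
```

In this view a network of L blocks discretizes the ODE on L time steps, which is what allows parallel-in-time methods such as parareal or multigrid-in-time to be applied across layers.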
Appears in Collections: MA - Conference Papers (학술회의논문); RIMS Conference Papers
Files in This Item
There are no files associated with this item.
