DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lee, Chang-Ock | ko |
dc.contributor.author | Lee, YoungKyu | ko |
dc.contributor.author | Park, Jongho | ko |
dc.date.accessioned | 2023-09-14T08:01:55Z | - |
dc.date.available | 2023-09-14T08:01:55Z | - |
dc.date.created | 2023-09-14 | - |
dc.date.issued | 2020-12 | - |
dc.identifier.citation | 26th International Conference on Domain Decomposition Methods, 2020, pp.407 - 415 | - |
dc.identifier.uri | http://hdl.handle.net/10203/312632 | - |
dc.description.abstract | Due to the large number of layers in deep neural networks (DNNs) [11, 12], DNN training is time-consuming, and reducing training time has become an important goal. Recently, multi-GPU parallel computing has become an important topic for accelerating DNN training [2, 6]. In particular, Günther et al. [6] interpreted the layer structure of ResNet [8] as a forward Euler discretization of a specific ODE and applied a nonlinear in-time multigrid method [3], regarding the training of the network as an optimal control problem. | - |
dc.language | English | - |
dc.publisher | Springer Science and Business Media Deutschland GmbH | - |
dc.title | A Parareal Architecture for Very Deep Convolutional Neural Networks | - |
dc.type | Conference | - |
dc.identifier.scopusid | 2-s2.0-85151164071 | - |
dc.type.rims | CONF | - |
dc.citation.beginningpage | 407 | - |
dc.citation.endingpage | 415 | - |
dc.citation.publicationname | 26th International Conference on Domain Decomposition Methods, 2020 | - |
dc.identifier.conferencecountry | CC | - |
dc.identifier.conferencelocation | Virtual | - |
dc.identifier.doi | 10.1007/978-3-030-95025-5_43 | - |
dc.contributor.localauthor | Lee, Chang-Ock | - |
dc.contributor.localauthor | Park, Jongho | - |
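The abstract's key observation, that a ResNet residual block `x_{k+1} = x_k + f(x_k; θ_k)` has the same form as one forward Euler step of an ODE `dx/dt = f(x; θ(t))`, can be illustrated with a minimal sketch. This is an illustrative toy example, not code from the paper: the layer function `f` and the parameters `thetas` are hypothetical stand-ins for real convolutional blocks and weights.

```python
import numpy as np

def f(x, theta):
    # Toy "layer" function; a real ResNet block would be conv + ReLU.
    return np.tanh(theta * x)

def resnet_forward(x0, thetas):
    # Sequential pass through the layers: each residual block adds
    # its output back onto the input (the skip connection).
    x = x0
    for theta in thetas:
        x = x + f(x, theta)  # residual update
    return x

def euler_solve(x0, thetas, h=1.0):
    # Explicit forward Euler on dx/dt = f(x, theta) with step size h.
    x = x0
    for theta in thetas:
        x = x + h * f(x, theta)
    return x

x0 = np.array([0.5, -0.2])
thetas = [0.3, 0.1, -0.2, 0.4]
# With step size h = 1, the two computations coincide exactly,
# which is the correspondence the abstract refers to.
assert np.allclose(resnet_forward(x0, thetas), euler_solve(x0, thetas))
```

This ODE viewpoint is what lets time-parallel integrators such as parareal or in-time multigrid be applied across a network's layers, treating depth as the time axis.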
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.