HetPipe: Enabling Large DNN Training on (Whimpy) Heterogeneous GPU Clusters through Integration of Pipelined Model Parallelism and Data Parallelism

Cited 59 times in Web of Science; cited 0 times in Scopus
  • Hits: 202
  • Downloads: 0
DC Field | Value | Language
dc.contributor.author | Park, Jay H | ko
dc.contributor.author | Yun, Gyeongchan | ko
dc.contributor.author | Yi, Chang M | ko
dc.contributor.author | Nguyen, Nguyen T | ko
dc.contributor.author | Lee, Seungmin | ko
dc.contributor.author | Choi, Jaesik | ko
dc.contributor.author | Noh, Sam H | ko
dc.contributor.author | Choi, Young-Ri | ko
dc.date.accessioned | 2020-12-24T06:10:09Z | -
dc.date.available | 2020-12-24T06:10:09Z | -
dc.date.created | 2020-12-02 | -
dc.date.issued | 2020-07-16 | -
dc.identifier.citation | USENIX Annual Technical Conference, ATC 2020, pp.307 - 321 | -
dc.identifier.uri | http://hdl.handle.net/10203/279090 | -
dc.description.abstract | Deep Neural Network (DNN) models have continuously been growing in size in order to improve the accuracy and quality of the models. Moreover, for training of large DNN models, the use of heterogeneous GPUs is inevitable due to the short release cycle of new GPU architectures. In this paper, we investigate how to enable training of large DNN models on a heterogeneous GPU cluster that possibly includes whimpy GPUs that, as a standalone, could not be used for training. We present a DNN training system, HetPipe (Heterogeneous Pipeline), that integrates pipelined model parallelism (PMP) with data parallelism (DP). In HetPipe, a group of multiple GPUs, called a virtual worker, processes minibatches in a pipelined manner, and multiple such virtual workers employ data parallelism for higher performance. We also propose a novel parameter synchronization model, which we refer to as Wave Synchronous Parallel (WSP), to accommodate both PMP and DP for virtual workers, and provide a convergence proof of WSP. Our experimental results on a given heterogeneous setting show that with HetPipe, DNN models converge up to 49% faster compared to the state-of-the-art DP technique. | -
dc.language | English | -
dc.publisher | USENIX Association | -
dc.title | HetPipe: Enabling Large DNN Training on (Whimpy) Heterogeneous GPU Clusters through Integration of Pipelined Model Parallelism and Data Parallelism | -
dc.type | Conference | -
dc.identifier.wosid | 000696712200021 | -
dc.identifier.scopusid | 2-s2.0-85091891049 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 307 | -
dc.citation.endingpage | 321 | -
dc.citation.publicationname | USENIX Annual Technical Conference, ATC 2020 | -
dc.identifier.conferencecountry | US | -
dc.identifier.conferencelocation | Virtual | -
dc.contributor.localauthor | Choi, Jaesik | -
dc.contributor.nonIdAuthor | Park, Jay H | -
dc.contributor.nonIdAuthor | Yun, Gyeongchan | -
dc.contributor.nonIdAuthor | Yi, Chang M | -
dc.contributor.nonIdAuthor | Nguyen, Nguyen T | -
dc.contributor.nonIdAuthor | Lee, Seungmin | -
dc.contributor.nonIdAuthor | Noh, Sam H | -
dc.contributor.nonIdAuthor | Choi, Young-Ri | -
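
The abstract above describes HetPipe's architecture: each virtual worker (a group of possibly whimpy GPUs) keeps several minibatches in flight via pipelined model parallelism, and the virtual workers are synchronized under data parallelism using the Wave Synchronous Parallel (WSP) model. The toy sketch below is not the authors' code; it only illustrates the synchronization idea under simplifying assumptions. All names (ParameterServer, VirtualWorker, wave_size, staleness_bound) are hypothetical, and the pipelined execution inside a virtual worker is abstracted to summing per-minibatch gradients into a single "wave" update.

```python
"""Toy sketch of the WSP-style synchronization idea described in the abstract.
Not HetPipe's implementation; all names and numbers are illustrative."""
import numpy as np


class ParameterServer:
    """Holds the global weights and a per-virtual-worker wave clock."""

    def __init__(self, dim, num_workers, staleness_bound):
        self.weights = np.zeros(dim)
        self.clocks = [0] * num_workers        # waves pushed by each virtual worker
        self.staleness_bound = staleness_bound  # assumed bound on clock distance

    def can_start_wave(self, worker_id):
        # A worker may start a new wave only if it is not more than
        # `staleness_bound` waves ahead of the slowest virtual worker.
        return self.clocks[worker_id] - min(self.clocks) <= self.staleness_bound

    def push_wave(self, worker_id, aggregated_grad, lr=0.1):
        # Apply one aggregated (wave) update and advance this worker's clock.
        self.weights -= lr * aggregated_grad
        self.clocks[worker_id] += 1

    def pull(self):
        return self.weights.copy()


class VirtualWorker:
    """A group of GPUs modeled as one worker that keeps `wave_size`
    minibatches in flight; the pipeline itself is abstracted away."""

    def __init__(self, worker_id, wave_size):
        self.worker_id = worker_id
        self.wave_size = wave_size

    def run_wave(self, weights, minibatches):
        # Compute one gradient per in-flight minibatch and sum them,
        # standing in for the aggregation a wave update performs.
        grads = [self._grad(weights, mb) for mb in minibatches[: self.wave_size]]
        return np.sum(grads, axis=0)

    @staticmethod
    def _grad(weights, minibatch):
        # Gradient of a least-squares loss on one synthetic minibatch (x, y).
        x, y = minibatch
        return x.T @ (x @ weights - y) / len(y)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim, wave_size = 4, 3
    ps = ParameterServer(dim, num_workers=2, staleness_bound=1)
    workers = [VirtualWorker(i, wave_size) for i in range(2)]
    data = [(rng.normal(size=(8, dim)), rng.normal(size=8)) for _ in range(6)]

    for step in range(4):
        for w in workers:
            if ps.can_start_wave(w.worker_id):
                wave_grad = w.run_wave(ps.pull(), data)
                ps.push_wave(w.worker_id, wave_grad)
    print("wave clocks:", ps.clocks)
```

The staleness check mirrors the bounded-staleness intuition behind WSP: the fastest virtual worker may run only a limited number of waves ahead of the slowest one, which is what lets heterogeneous (fast and whimpy) virtual workers make progress without fully synchronous barriers.
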
Appears in Collection
RIMS Conference Papers
Files in This Item
There are no files associated with this item.
This item is cited by other documents in Web of Science (59 citing articles).
