LazyBatching: An SLA-aware Batching System for Cloud Machine Learning Inference

In cloud ML inference systems, batching is an essential technique for increasing throughput, which in turn helps optimize total cost of ownership. Prior graph batching combines individual DNN graphs into a single graph, allowing multiple inputs to be executed concurrently. We observe that such coarse-grained graph batching becomes suboptimal at handling dynamic inference request traffic, leaving significant performance on the table. This paper proposes LazyBatching, an SLA-aware batching system that handles both scheduling and batching at the granularity of individual graph nodes, rather than the entire graph, enabling flexible batching. We show that LazyBatching can intelligently determine the set of nodes that can be efficiently batched together, achieving average improvements of 15×, 1.5×, and 5.5× over graph batching in response time, throughput, and SLA satisfaction, respectively.
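The node-granularity idea from the abstract can be illustrated with a small scheduler sketch. This is not the paper's algorithm, only a minimal hypothetical Python illustration: it assumes a linear graph of NUM_NODES layers, and the names (Request, schedule_step) and the greedy largest-batch policy are assumptions; the actual LazyBatching system is SLA-aware and would also weigh each request's deadline slack when choosing what to run.

    from collections import defaultdict

    NUM_NODES = 4  # length of the (assumed) linear DNN graph

    class Request:
        def __init__(self, rid, start_node=0):
            self.rid = rid
            self.node = start_node  # next node this request must execute

    def schedule_step(pending):
        # Group waiting requests by the node they are about to execute;
        # requests at the same node share that node's execution as a batch.
        by_node = defaultdict(list)
        for req in pending:
            by_node[req.node].append(req)
        # Greedy policy (an assumption): run the node with the largest batch.
        node, batch = max(by_node.items(), key=lambda kv: len(kv[1]))
        print(f"executing node {node} with batch {[r.rid for r in batch]}")
        for req in batch:
            req.node += 1
        return [r for r in pending if r.node < NUM_NODES]

    # Request 0 is already one node into the graph; requests 1 and 2 just
    # arrived. With per-node grouping, 1 and 2 execute node 0 together, then
    # all three merge into a single batch from node 1 onward, something a
    # whole-graph batch (formed only at request admission) could not do.
    pending = [Request(0, start_node=1), Request(1), Request(2)]
    while pending:
        pending = schedule_step(pending)

The point of the sketch is the grouping key: batching decisions are made per node rather than per graph, so requests that arrived at different times can still converge and share work at whichever node they have in common.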
Publisher
IEEE Computer Society
Issue Date
2021-03-02
Language
English
Citation
The 27th IEEE International Symposium on High-Performance Computer Architecture (HPCA-27), pp. 493-506
ISSN
1530-0897
DOI
10.1109/HPCA51647.2021.00049
URI
http://hdl.handle.net/10203/285733
Appears in Collection
EE-Conference Papers (학술회의논문)