PARIS and ELSA: An Elastic Scheduling Algorithm for Reconfigurable Multi-GPU Inference Servers

Providing low latency to end users while maximizing server utilization and system throughput is crucial for cloud ML inference servers. NVIDIA's recently announced Ampere GPU architecture provides the ability to "reconfigure" one large, monolithic GPU into multiple smaller "GPU partitions". This feature gives cloud ML service providers the flexibility to use a reconfigurable GPU not only for large-batch training but also for small-batch inference, with the potential to achieve high resource utilization. We study this emerging reconfigurable GPU architecture to develop a high-performance multi-GPU ML inference server, presenting PARIS, a sophisticated partitioning algorithm for reconfigurable GPUs, combined with ELSA, an elastic scheduling algorithm tailored for our heterogeneously partitioned GPU server.
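The abstract gives no implementation details, but the core idea of dispatching inference requests across heterogeneously sized GPU partitions can be sketched. The Python snippet below is a minimal, hypothetical illustration, not the paper's actual PARIS/ELSA algorithms: the partition mix, the latency model, the greedy policy, and all names are assumptions made for this example (the MIG profile names such as "4g.20gb" are real Ampere profiles, used here only as labels).

```python
"""Minimal, hypothetical sketch of SLA-aware dispatch across
heterogeneous GPU partitions. The latency model and scheduling
policy are illustrative assumptions, not the paper's algorithms."""

from dataclasses import dataclass

@dataclass
class Partition:
    name: str                # e.g., a MIG instance such as "4g.20gb"
    slices: int              # compute slices owned by this partition
    busy_until: float = 0.0  # simulated time at which it frees up

def service_time(batch_size: int, slices: int) -> float:
    # Toy latency model (assumption): fixed launch overhead plus
    # compute time that grows with batch size and shrinks with the
    # partition's share of the GPU.
    return 1.0 + 0.5 * batch_size / slices

def dispatch(partitions, batch_size, arrival, sla):
    """Greedy policy (assumption): among partitions that can finish
    the request within its SLA, pick the smallest one, keeping the
    large partitions free for large batches."""
    feasible = []
    for p in partitions:
        start = max(arrival, p.busy_until)
        finish = start + service_time(batch_size, p.slices)
        if finish - arrival <= sla:
            feasible.append((p.slices, finish, p))
    if not feasible:
        return None  # no partition meets the SLA; caller must decide
    feasible.sort(key=lambda f: (f[0], f[1]))
    _, finish, best = feasible[0]
    best.busy_until = finish
    return best, finish

# Usage: a 7-slice GPU reconfigured into a 4/2/1 partition mix.
gpu = [Partition("4g.20gb", 4), Partition("2g.10gb", 2), Partition("1g.5gb", 1)]
for t, bs in [(0.0, 1), (0.1, 8), (0.2, 2)]:
    result = dispatch(gpu, batch_size=bs, arrival=t, sla=5.0)
    print(bs, "->", result[0].name if result else "rejected")
```

Preferring the smallest feasible partition is just one plausible heuristic; an elastic scheduler in the paper's sense would additionally adapt to shifting load, which this fixed-mix sketch does not attempt.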
Publisher
ACM/IEEE/ESDA
Issue Date
2022-06-10
Language
English
Citation

59th ACM/IEEE Design Automation Conference (DAC 2022), pp. 607-612

ISSN
0738-100X
DOI
10.1145/3489517.3530510
URI
http://hdl.handle.net/10203/300886
Appears in Collection
EE-Conference Papers (Conference Papers)