
Kube-Knots: Resource Harvesting through Dynamic Container Orchestration in GPU-based Datacenters

Basic Information

DOI:
10.1109/cluster.2019.8891040
Publication date:
2019-09
Venue:
2019 IEEE International Conference on Cluster Computing (CLUSTER)
Impact factor:
--
Corresponding authors:
P. Thinakaran; Jashwant Raj Gunasekaran; Bikash Sharma; M. Kandemir; C. Das
CAS journal division:
Other
Document type:
--
Research area: --
MeSH terms: --
Keywords: --
Source link: PubMed detail page

Abstract

Compute heterogeneity is increasingly gaining prominence in modern datacenters due to the addition of accelerators like GPUs and FPGAs. We observe that datacenter schedulers are agnostic of these emerging accelerators, especially their resource utilization footprints, and thus not well equipped to dynamically provision them based on application needs. We observe that state-of-the-art datacenter schedulers fail to provide fine-grained resource guarantees for latency-sensitive tasks that are GPU-bound. Specifically for GPUs, this results in resource fragmentation and interference, leading to poor utilization of allocated GPU resources. Furthermore, GPUs exhibit highly linear energy efficiency with respect to utilization, and hence proactive management of these resources is essential to keep operational costs low while ensuring end-to-end Quality of Service (QoS) for user-facing queries. Towards addressing the GPU orchestration problem, we build Knots, a GPU-aware resource orchestration layer, and integrate it with the Kubernetes container orchestrator to build Kube-Knots. Kube-Knots can dynamically harvest spare compute cycles through dynamic container orchestration, enabling co-location of latency-critical and batch workloads while improving overall resource utilization. We design and evaluate two GPU-based scheduling techniques to schedule datacenter-scale workloads through Kube-Knots on a ten-node GPU cluster. Our proposed Correlation Based Prediction (CBP) and Peak Prediction (PP) schemes together improve both average and 99th-percentile cluster-wide GPU utilization by up to 80% for HPC workloads. In addition, CBP+PP improves the average job completion times (JCT) of deep learning workloads by up to 36% compared to state-of-the-art schedulers. This leads to 33% cluster-wide energy savings on average across three different workloads compared to state-of-the-art GPU-agnostic schedulers. Further, the proposed PP scheduler guarantees end-to-end QoS for latency-critical queries by reducing QoS violations by up to 53% compared to state-of-the-art GPU schedulers.
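The co-location idea in the abstract can be illustrated with a minimal sketch: predict the near-term peak GPU utilization of the resident latency-critical workload, and admit a batch container onto the same GPU only if that peak plus the batch job's demand fits under capacity with some headroom. This is a hedged illustration only; the function names, the percentile-based peak estimator, and the headroom parameter are assumptions, not the paper's actual CBP/PP implementation.

```python
def predicted_peak(util_history, percentile=0.99):
    # Peak-prediction sketch (assumption: the paper's PP scheme is more
    # sophisticated): estimate near-term peak GPU utilization as a high
    # percentile of recent utilization samples.
    s = sorted(util_history)
    idx = min(len(s) - 1, int(percentile * len(s)))
    return s[idx]

def admit_batch_job(util_history, batch_demand, capacity=100.0, headroom=10.0):
    # Co-locate a batch container only if the predicted peak of the
    # resident latency-critical workload plus the batch job's GPU demand
    # stays within capacity minus a safety headroom (all in % utilization).
    return predicted_peak(util_history) + batch_demand <= capacity - headroom
```

For example, a GPU whose latency-critical tenant recently spiked to 90% utilization would reject a 20%-demand batch job, while one idling around 15% would accept it, which is the spare-cycle harvesting behavior the abstract describes.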
References (49)
Cited by (29)

