Compute heterogeneity is gaining prominence in modern datacenters due to the addition of accelerators such as GPUs and FPGAs. We observe that datacenter schedulers are agnostic to these emerging accelerators, especially to their resource utilization footprints, and are therefore ill-equipped to dynamically provision them based on application needs. In particular, state-of-the-art datacenter schedulers fail to provide fine-grained resource guarantees for latency-sensitive, GPU-bound tasks. For GPUs, this results in resource fragmentation and interference, leading to poor utilization of the allocated GPU resources. Furthermore, GPU energy efficiency scales almost linearly with utilization; hence, proactive management of these resources is essential to keep operational costs low while ensuring end-to-end Quality of Service (QoS) for user-facing queries. To address the GPU orchestration problem, we build Knots, a GPU-aware resource orchestration layer, and integrate it with the Kubernetes container orchestrator to build Kube-Knots. Kube-Knots dynamically harvests spare compute cycles through dynamic container orchestration, enabling co-location of latency-critical and batch workloads while improving overall resource utilization. We design and evaluate two GPU-based scheduling techniques that schedule datacenter-scale workloads through Kube-Knots on a ten-node GPU cluster. Our proposed Correlation Based Prediction (CBP) and Peak Prediction (PP) schemes together improve both average and 99th-percentile cluster-wide GPU utilization by up to 80% for HPC workloads. In addition, CBP+PP improves the average job completion time (JCT) of deep learning workloads by up to 36% compared to state-of-the-art schedulers, which translates to average cluster-wide energy savings of 33% across three different workloads relative to state-of-the-art GPU-agnostic schedulers. Finally, the proposed PP scheduler guarantees end-to-end QoS for latency-critical queries, reducing QoS violations by up to 53% compared to state-of-the-art GPU schedulers.
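To make the two schemes concrete, the sketch below is a minimal, illustrative rendering of the ideas named in the abstract, not the Kube-Knots implementation. Here `cbp_forecast` stands in for Correlation Based Prediction (forecasting a pod's GPU utilization from a strongly correlated, cheaper-to-observe metric), and `pp_admits` stands in for Peak Prediction (admitting a co-located pod only if the predicted peak utilizations fit within device capacity). The function names, the 0.8 correlation threshold, and the linear model are all assumptions chosen for illustration.

```python
# Minimal sketch (not the paper's implementation) of the intuition behind
# Correlation Based Prediction (CBP) and Peak Prediction (PP). All metric
# names, thresholds, and the forecasting model are illustrative assumptions.
import numpy as np

def cbp_forecast(cpu_util, gpu_util, cpu_next):
    """CBP intuition: if a pod's GPU utilization correlates strongly with a
    cheaper-to-observe metric (here, CPU utilization), forecast near-future
    GPU utilization with a linear fit on the correlated metric."""
    r = np.corrcoef(cpu_util, gpu_util)[0, 1]
    if abs(r) < 0.8:                       # weak correlation: fall back to history
        return float(np.mean(gpu_util))
    slope, intercept = np.polyfit(cpu_util, gpu_util, 1)
    return float(slope * cpu_next + intercept)

def pp_admits(predicted_peaks, new_peak, capacity=100.0):
    """PP intuition: co-locate a new pod on a GPU only if the sum of predicted
    *peak* utilizations stays under device capacity, so latency-critical pods
    are not squeezed during their bursts."""
    return sum(predicted_peaks) + new_peak <= capacity

# Toy usage: history of a pod whose GPU usage tracks its CPU usage.
cpu_hist = np.array([20, 35, 50, 65, 80], dtype=float)
gpu_hist = np.array([25, 38, 55, 70, 85], dtype=float)
forecast = cbp_forecast(cpu_hist, gpu_hist, cpu_next=90.0)
print(f"forecast GPU util: {forecast:.1f}%")
print("admit on GPU with pods peaking at 40% + 30%?",
      pp_admits([40.0, 30.0], forecast))
```

Scheduling on predicted peaks rather than averages is what lets the batch/latency-critical co-location in the abstract harvest spare cycles without triggering QoS violations at utilization bursts.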