To reduce power and energy costs, large cloud providers now mix online and batch jobs on the same cluster. Although co-allocating such jobs improves machine utilization, it challenges the data center scheduler and workload assignment in terms of quality of service, fault tolerance, and failure recovery, especially for latency-critical online services. In this paper, we explore the characteristics of co-allocated online services and batch jobs from a production cluster of 1.3k servers in Alibaba Cloud. From the trace data, we find the following: 1) For batch jobs with multiple tasks and instances, 50.8% of failed tasks wait and are only halted after a very long time interval once their first and only instance fails; this wastes considerable time and resources, because the remaining instances keep running toward a termination that can never succeed. 2) Online service jobs cluster into 25 categories according to their requested CPU, memory, and disk resources; such clustering can guide the co-allocation of online service jobs with batch jobs. 3) Servers cluster into seven groups by CPU utilization, memory utilization, and the correlation between the two; machines with a strong correlation between CPU and memory utilization provide an opportunity for job co-allocation and resource utilization estimation. 4) The MTBF (mean time between failures) of instances lies in the interval [400, 800] seconds, while the 99th-percentile average completion time is 1,003 seconds. We also compare the cumulative distribution functions of jobs and servers and explain the differences and opportunities for workload assignment between them. The findings and insights presented in this paper can help the community and data center operators better understand workload characteristics, improve resource utilization, and design better failure recovery.
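Finding 3 rests on grouping machines by CPU utilization, memory utilization, and the correlation between the two. The snippet below is a minimal sketch of how such a grouping could be reproduced from a machine-usage trace; the file name, the column names (machine_id, cpu_util, mem_util), and the choice of k-means are illustrative assumptions rather than the paper's actual pipeline, although k = 7 mirrors the seven server groups reported above.

```python
# Minimal sketch: cluster machines by mean CPU utilization, mean memory
# utilization, and their correlation. The trace schema below is assumed,
# not the actual Alibaba trace format.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

# Hypothetical input: one row per (machine, timestamp) sample.
usage = pd.read_csv("machine_usage.csv")  # machine_id, timestamp, cpu_util, mem_util

def machine_features(group: pd.DataFrame) -> pd.Series:
    # Pearson correlation between the CPU and memory utilization series.
    corr = group["cpu_util"].corr(group["mem_util"])
    return pd.Series({
        "cpu_mean": group["cpu_util"].mean(),
        "mem_mean": group["mem_util"].mean(),
        "cpu_mem_corr": 0.0 if np.isnan(corr) else corr,
    })

features = usage.groupby("machine_id").apply(machine_features)

# k = 7 matches the seven server groups in the abstract; k-means itself is
# an assumption, not necessarily the clustering method used in the paper.
labels = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(features)
features["group"] = labels
print(features.groupby("group").mean())
```

Machines whose cpu_mem_corr is close to 1 are the ones the abstract highlights as good candidates for co-allocation, since their memory demand can be estimated from observed CPU load.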