Large Language Models (LLMs) are consistently improving at increasingly realistic software engineering (SE) tasks. In real-world software stacks, significant SE effort is spent developing foundational system software like the Linux kernel. Unlike application-level software, a systems codebase like Linux is multilingual (low-level C/Assembly/Bash/Rust), gigantic (>20 million lines), critical (impacting billions of devices worldwide), and highly concurrent (involving complex multi-threading). To evaluate whether ML models are useful when developing such large-scale systems-level software, we introduce kGym (a platform) and kBench (a dataset). The kGym platform provides an SE environment for large-scale experiments on the Linux kernel, including compiling and running kernels in parallel across several virtual machines, detecting operations and crashes, inspecting logs, and querying and patching the code base. We use kGym to facilitate evaluation on kBench, a crash-resolution benchmark drawn from real-world Linux kernel bugs. An example bug in kBench contains crashing stack traces, a bug-reproducer file, a developer-written fix, and other associated data. To understand current performance, we conduct baseline experiments by prompting LLMs to resolve Linux kernel crashes. Our initial evaluations reveal that the best-performing LLM achieves crash-resolution rates of 0.72% and 5.38% in the unassisted and assisted (i.e., buggy files disclosed to the model) settings, respectively. These results highlight the need for further research to enhance model performance in SE tasks. Improving performance on kBench requires models to master new learning skills, including understanding the cause of crashes and repairing faults, writing memory-safe and hardware-aware code, and understanding concurrency. As a result, this work opens up multiple avenues of research at the intersection of machine learning and systems software.
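To make the shape of the task concrete, the sketch below illustrates what a single kBench bug record and the kGym evaluation loop described above might look like. This is a minimal sketch under assumed names: every identifier here (KBenchBug, checkout, compile, run_in_vm, etc.) is a hypothetical illustration, not the actual kGym/kBench API.

```python
# Hypothetical sketch of one kBench sample and the kGym evaluation loop.
# All identifiers are illustrative assumptions, not the real kGym/kBench API.
from dataclasses import dataclass, field


@dataclass
class KBenchBug:
    """One crash-resolution task, with the fields the abstract lists."""
    bug_id: str
    kernel_commit: str                 # kernel tree state where the crash occurs
    crash_stack_traces: list[str]      # crashing stack traces from the report
    reproducer: str                    # bug-reproducer file contents
    developer_fix: str                 # ground-truth patch by kernel developers
    buggy_files: list[str] = field(default_factory=list)  # disclosed in assisted mode


def is_resolved(kgym, bug: KBenchBug, model_patch: str) -> bool:
    """Hypothetical evaluation loop: apply the model's patch, rebuild the
    kernel, re-run the reproducer in a VM, and count the bug as resolved
    only if the original crash no longer appears."""
    tree = kgym.checkout(bug.kernel_commit)       # query the code base
    tree.apply_patch(model_patch)                 # patch the code base
    image = kgym.compile(tree)                    # compile the kernel
    run = kgym.run_in_vm(image, bug.reproducer)   # run in a virtual machine
    return not run.crashed                        # crash detection on the logs
```

A patch counts toward the unassisted rate when the model is given only the crash report and reproducer; the assisted setting additionally discloses the buggy files, as reflected in the hypothetical buggy_files field.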