Large Language Models (LLMs) are consistently improving at increasingly realistic software engineering (SE) tasks. In real-world software stacks, significant SE effort is spent developing foundational system software like the Linux kernel. Unlike application-level software, a systems codebase like Linux is multilingual (low-level C, Assembly, Bash, and Rust); gigantic (over 20 million lines of code); critical (impacting billions of devices worldwide); and highly concurrent (involving complex multi-threading). To evaluate whether ML models are useful for developing such large-scale systems-level software, we introduce kGym (a platform) and kBench (a dataset). The kGym platform provides an SE environment for large-scale experiments on the Linux kernel, including compiling and running kernels in parallel across several virtual machines, detecting operations and crashes, inspecting logs, and querying and patching the code base. We use kGym to facilitate evaluation on kBench, a crash-resolution benchmark drawn from real-world Linux kernel bugs. An example bug in kBench contains a crashing stack trace, a bug-reproducer file, a developer-written fix, and other associated data. To understand current performance, we conduct baseline experiments by prompting LLMs to resolve Linux kernel crashes. Our initial evaluations show that the best-performing LLM achieves crash-resolution rates of 0.72% and 5.38% in the unassisted and assisted (i.e., buggy files disclosed to the model) settings, respectively. These results highlight the need for further research to enhance model performance on SE tasks. Improving performance on kBench requires models to acquire new skills, including understanding the causes of crashes and repairing faults, writing memory-safe and hardware-aware code, and reasoning about concurrency. As a result, this work opens up multiple avenues of research at the intersection of machine learning and systems software.
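To make the benchmark and workflow concrete, the sketch below shows what a single kBench sample and kGym's patch-build-run-check loop might look like. It is a minimal illustration only: the field names, crash markers, and the `KBenchBug`/`crash_resolved` helpers are assumptions made for exposition, not the actual kGym/kBench API.

```python
# A minimal, hypothetical sketch of a kBench sample and the kGym
# patch-build-run-check loop. All names below (KBenchBug, crash_resolved,
# the crash markers) are illustrative assumptions, not the real API.
from dataclasses import dataclass, field


@dataclass
class KBenchBug:
    """One crash-resolution task drawn from a real Linux kernel bug."""
    bug_id: str            # report identifier (e.g., from a fuzzing report)
    kernel_commit: str     # revision of the buggy kernel tree
    crash_report: str      # crashing stack trace captured from the kernel log
    reproducer: str        # bug-reproducer file that triggers the crash
    developer_fix: str     # ground-truth patch written by a kernel developer
    buggy_files: list[str] = field(default_factory=list)  # disclosed only in the assisted setting


def crash_resolved(vm_log: str) -> bool:
    """After kGym applies a model's patch, rebuilds the kernel, and reruns
    the reproducer in a VM, the crash counts as resolved if the resulting
    kernel log contains no crash markers. These markers are common kernel
    oops/sanitizer prefixes, not an official kGym list."""
    markers = ("BUG:", "KASAN:", "general protection fault", "Kernel panic")
    return not any(marker in vm_log for marker in markers)


# Example: a log with no crash markers counts as resolved.
assert crash_resolved("reproducer ran to completion; no oops recorded")
assert not crash_resolved("BUG: KASAN: use-after-free in ip6_dst_lookup")
```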