
KGym: A Platform and Dataset to Benchmark Large Language Models on Linux Kernel Crash Resolution


Basic Information

DOI:
--
Publication date:
2024
Journal:
--
Impact factor:
--
Corresponding author:
Baishakhi Ray
CAS journal division:
--
Document type:
--
Authors: Alex Mathai;Chenxi Huang;Petros Maniatis;A. Nogikh;Franjo Ivancic;Junfeng Yang;Baishakhi Ray
Research area: --
MeSH terms: --
Keywords: --
Source link: PubMed detail page

Abstract

Large Language Models (LLMs) are consistently improving at increasingly realistic software engineering (SE) tasks. In real-world software stacks, significant SE effort is spent developing foundational system software like the Linux kernel. Unlike application-level software, a systems codebase like Linux is multilingual (low-level C/Assembly/Bash/Rust); gigantic (>20 million lines); critical (impacting billions of devices worldwide), and highly concurrent (involving complex multi-threading). To evaluate if ML models are useful while developing such large-scale systems-level software, we introduce kGym (a platform) and kBench (a dataset). The kGym platform provides a SE environment for large-scale experiments on the Linux kernel, including compiling and running kernels in parallel across several virtual machines, detecting operations and crashes, inspecting logs, and querying and patching the code base. We use kGym to facilitate evaluation on kBench, a crash resolution benchmark drawn from real-world Linux kernel bugs. An example bug in kBench contains crashing stack traces, a bug-reproducer file, a developer-written fix, and other associated data. To understand current performance, we conduct baseline experiments by prompting LLMs to resolve Linux kernel crashes. Our initial evaluations reveal that the best performing LLM achieves 0.72% and 5.38% in the unassisted and assisted (i.e., buggy files disclosed to the model) settings, respectively. These results highlight the need for further research to enhance model performance in SE tasks. Improving performance on kBench requires models to master new learning skills, including understanding the cause of crashes and repairing faults, writing memory-safe and hardware-aware code, and understanding concurrency. As a result, this work opens up multiple avenues of research at the intersection of machine learning and systems software.
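The abstract describes each kBench task as a bundle of crashing stack traces, a bug-reproducer file, a developer-written fix, and associated data, with success measured as the fraction of crashes a model resolves. The sketch below illustrates that structure in Python; the field names, the `resolution_rate` helper, and the sample counts are all hypothetical, not the dataset's actual schema or size.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class KBenchBug:
    """One crash-resolution task, mirroring the fields the abstract lists.

    All names here are illustrative assumptions, not kBench's real schema.
    """
    bug_id: str
    crash_stack_traces: List[str]  # kernel crash reports for this bug
    reproducer: str                # contents of the bug-reproducer file
    developer_fix: str             # ground-truth developer-written patch
    associated_data: dict = field(default_factory=dict)


def resolution_rate(outcomes: List[bool]) -> float:
    """Fraction of bugs whose model-generated patch stopped the crash."""
    if not outcomes:
        return 0.0
    return sum(outcomes) / len(outcomes)


# Illustrative only: 1 resolved bug out of 139 attempts yields the same
# order of magnitude as the unassisted score reported in the abstract.
rate = resolution_rate([True] + [False] * 138)
print(f"{rate:.2%}")  # → 0.72%
```

A real evaluation would additionally compile the patched kernel and re-run the reproducer inside kGym's virtual machines to decide each boolean outcome; here that judgment is simply taken as given.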
References (1)
Cited by (0)
MultiPL-E: A Scalable and Polyglot Approach to Benchmarking Neural Code Generation
DOI:
10.1109/tse.2023.3267446
Publication date:
2023-07
Journal:
IEEE Transactions on Software Engineering
Impact factor:
7.4
Authors:
Federico Cassano;John Gouwar;Daniel Nguyen;S. Nguyen;Luna Phipps-Costin;Donald Pinckney;Ming-Ho Yee-M
Corresponding author:
Federico Cassano;John Gouwar;Daniel Nguyen;S. Nguyen;Luna Phipps-Costin;Donald Pinckney;Ming-Ho Yee-M


Baishakhi Ray
Mailing address:
--
Affiliation:
--
Email:
--