
DeepHammer: Depleting the Intelligence of Deep Neural Networks through Targeted Chain of Bit Flips

Basic Information

DOI:
--
Publication date:
2020-03
Venue:
ArXiv
Impact factor:
--
Corresponding authors:
Fan Yao; A. S. Rakin; Deliang Fan
CAS journal tier:
Other
Document type:
--
Authors: Fan Yao; A. S. Rakin; Deliang Fan
Research area: --
MeSH terms: --
Keywords: --
Source link: PubMed detail page

Abstract

Security of machine learning is increasingly becoming a major concern due to the ubiquitous deployment of deep learning in many security-sensitive domains. Many prior studies have shown external attacks, such as adversarial examples, that tamper with the integrity of DNNs using maliciously crafted inputs. However, the security implications of internal threats (i.e., hardware vulnerabilities) to DNN models are not yet well understood. In this paper, we demonstrate DeepHammer, the first hardware-based attack on quantized deep neural networks, which deterministically induces bit flips in model weights to compromise DNN inference by exploiting the rowhammer vulnerability. DeepHammer performs an aggressive bit search in the DNN model to identify the most vulnerable weight bits that are flippable under system constraints. To trigger deterministic bit flips across multiple pages within a reasonable amount of time, we develop novel system-level techniques that enable fast deployment of victim pages, memory-efficient rowhammering, and precise flipping of targeted bits. DeepHammer can deliberately degrade the inference accuracy of the victim DNN system to a level that is only as good as a random guess, thus completely depleting the intelligence of the targeted DNN system. We systematically demonstrate our attacks on real systems against 12 DNN architectures with 4 different datasets and different application domains. Our evaluation shows that DeepHammer is able to successfully tamper with DNN inference behavior at run time within a few minutes. We further discuss several mitigation techniques at both the algorithm and system levels to protect DNNs against such attacks. Our work highlights the need to incorporate security mechanisms in future deep learning systems to enhance the robustness of DNNs against hardware-based deterministic fault injection.
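The bit-search component described in the abstract is easy to illustrate: because the targeted models are quantized, flipping the most significant (sign) bit of a two's-complement int8 weight changes its integer value by 128, i.e., by 128 quantization steps in real terms, so a short chain of well-chosen flips can wreck the decision boundary. The sketch below is a minimal, self-contained illustration of a greedy "most damaging bit" search on a toy int8 linear classifier. It is not the paper's actual algorithm (DeepHammer additionally restricts the search to bits that rowhammer can physically flip on the victim machine, discovered via memory templating), and the toy model, names, and parameters here are invented for the example.

```python
# Minimal sketch (not the paper's implementation) of a greedy vulnerable-bit
# search on int8-quantized weights: try flipping the sign bit of every weight,
# keep the single flip that hurts accuracy the most, and repeat chain-style.
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": one int8-quantized linear layer with an argmax readout.
n_in, n_out, n_samples = 16, 4, 256
scale = 0.05  # quantization scale (assumed for the example)
W_q = rng.integers(-128, 128, size=(n_in, n_out), dtype=np.int8)

X = rng.normal(size=(n_samples, n_in)).astype(np.float32)
y = (X @ (W_q.astype(np.float32) * scale)).argmax(axis=1)  # labels from the clean model

def accuracy(W_int8: np.ndarray) -> float:
    logits = X @ (W_int8.astype(np.float32) * scale)
    return float((logits.argmax(axis=1) == y).mean())

def flip_bit(W_int8: np.ndarray, idx: tuple, bit: int) -> np.ndarray:
    out = W_int8.copy()
    out.view(np.uint8)[idx] ^= np.uint8(1 << bit)  # XOR one bit in place
    return out

W_attacked = W_q.copy()
for step in range(3):  # a short "chain" of bit flips
    # Exhaustively score a sign-bit (bit 7) flip at every weight position.
    best_W, best_idx = min(
        ((flip_bit(W_attacked, (i, j), 7), (i, j))
         for i in range(n_in) for j in range(n_out)),
        key=lambda t: accuracy(t[0]),
    )
    W_attacked = best_W
    print(f"flip {step + 1}: weight {best_idx}, accuracy -> {accuracy(W_attacked):.3f}")
```

In the real attack, the candidate bits would first be filtered by a profiling step that maps which physical DRAM cells the attacker can reliably flip, and the chosen flips are then realized through rowhammer (by steering the victim's weight pages onto vulnerable physical pages) rather than written directly as done above.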
References (73)
Cited by (99)


Fan Yao; A. S. Rakin; Deliang Fan
Correspondence address:
--
Affiliation:
--
Email address:
--