
DeepSteal: Advanced Model Extraction with Efficient Weight Stealing in Memories

Basic Information

DOI:
10.1109/sp46214.2022.9833743
Publication date:
2021-11
Venue:
2022 IEEE Symposium on Security and Privacy (SP)
Impact factor:
--
Corresponding authors:
A. S. Rakin; Md Hafizul Islam Chowdhuryy; Fan Yao; Deliang Fan
CAS journal tier:
Other
Document type:
--
Authors: A. S. Rakin; Md Hafizul Islam Chowdhuryy; Fan Yao; Deliang Fan
Research area: --
MeSH terms: --
Keywords: --

Abstract

Recent advancements in Deep Neural Networks (DNNs) have enabled widespread deployment in multiple security-sensitive domains. The need for resource-intensive training and the use of valuable domain-specific training data have made these models the top intellectual property (IP) for model owners. One of the major threats to DNN privacy is model extraction attacks, where adversaries attempt to steal sensitive information in DNN models. In this work, we propose an advanced model extraction framework, DeepSteal, that steals DNN weights remotely for the first time with the aid of a memory side-channel attack. Our proposed DeepSteal comprises two key stages. First, we develop a new weight bit information extraction method, called HammerLeak, by adopting the rowhammer-based fault technique as the information leakage vector. HammerLeak leverages several novel system-level techniques tailored for DNN applications to enable fast and efficient weight stealing. Second, we propose a novel substitute model training algorithm with a Mean Clustering weight penalty, which effectively leverages the partially leaked bit information and generates a substitute prototype of the target victim model. We evaluate the proposed model extraction framework on three popular image datasets (e.g., CIFAR-10/100/GTSRB) and four DNN architectures (e.g., ResNet-18/34/Wide-ResNet/VGG-11). The extracted substitute model achieves more than 90% test accuracy on deep residual networks for the CIFAR-10 dataset. Moreover, the extracted substitute model can also generate effective adversarial input samples to fool the victim model. Notably, it achieves performance (i.e., ~1-2% test accuracy under attack) similar to that of white-box adversarial input attacks (e.g., PGD/Trades).
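The abstract describes the Mean Clustering weight penalty only at a high level. As a rough illustration of the idea, the sketch below assumes 8-bit two's-complement quantized victim weights from which an attacker has recovered the top k bits of each weight (e.g., via HammerLeak); the known bits pin each weight to a contiguous integer interval, and the substitute model is regularized toward the midpoint (mean) of that interval. All function names, the quantization scheme, and the loss form here are hypothetical reconstructions for illustration, not the paper's implementation.

```python
# Minimal sketch (hypothetical): a "mean clustering" penalty for substitute-model
# training, assuming 8-bit two's-complement quantized victim weights whose top
# `k` bits per weight were leaked. Names, quantization scheme, and loss
# weighting are illustrative assumptions, not the paper's code.
import torch

def feasible_mean(leaked_msbs: torch.Tensor, k: int, bits: int = 8) -> torch.Tensor:
    """Mean of the integer levels consistent with the leaked top-k bits.

    `leaked_msbs` holds, per weight, the signed integer encoded by the known
    top-k bits. The unknown low (bits - k) bits range over [0, 2^(bits-k) - 1],
    so the feasible set is a contiguous interval; its mean is the midpoint.
    """
    unknown = bits - k
    lo = leaked_msbs * (1 << unknown)          # all unknown bits set to 0
    hi = lo + (1 << unknown) - 1               # all unknown bits set to 1
    return (lo + hi).float() / 2.0

def mean_clustering_penalty(w: torch.Tensor, level_mean: torch.Tensor,
                            scale: float) -> torch.Tensor:
    """L2 pull of substitute weights toward the feasible-interval means.

    `scale` maps integer levels to real weight values (w_real ~ level * scale),
    assuming uniform symmetric quantization.
    """
    return ((w - level_mean * scale) ** 2).mean()

# Usage sketch for one training step: cross-entropy against the victim's
# predicted labels plus the penalty, weighted by a hyperparameter `lam`:
#   loss = F.cross_entropy(substitute(x), victim_labels)
#   loss = loss + lam * sum(mean_clustering_penalty(p, m, s)
#                           for p, m, s in zip(params, level_means, scales))
```

The penalty only anchors the substitute weights to the information the side channel actually leaked; the cross-entropy term still drives functional imitation, so weights with few leaked bits remain free to move within their wide feasible intervals.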
References (97)
Cited by (60)

