
T-BFA: Targeted Bit-Flip Adversarial Weight Attack


Basic Information

DOI: --
Publication year: 2020
Impact factor: 23.6
Corresponding author: Deliang Fan
CAS journal division: Computer Science, Q1
Document type: --
Authors: A. S. Rakin; Zhezhi He; Jingtao Li; Fan Yao; C. Chakrabarti; Deliang Fan
Research area: --
MeSH terms: --
Keywords: --
Source link: PubMed detail page

Abstract

Traditional Deep Neural Network (DNN) security research has mostly focused on the well-known adversarial input example attack. Recently, another dimension of adversarial attack, namely an attack on DNN weight parameters, has been shown to be very powerful. As a representative example, the Bit-Flip-based adversarial weight Attack (BFA) injects an extremely small number of faults into the weight parameters to hijack the function of the executing DNN. Prior BFA works focus on untargeted attacks that can force all inputs into a random output class by flipping a very small number of weight bits stored in computer memory. This paper proposes the first targeted BFA-based (T-BFA) adversarial weight attack on DNNs, which can intentionally misclassify selected inputs into a target output class. This objective is achieved by identifying the weight bits that are highly associated with classification into the targeted output class, using a class-dependent vulnerable-weight-bit search algorithm. The performance of the proposed T-BFA is demonstrated on multiple DNN architectures for image classification tasks. For example, by flipping merely 27 out of the 88 million weight bits of ResNet-18, T-BFA misclassifies all images of the 'Hen' class into the 'Goose' class on the ImageNet dataset (i.e., a 100% attack success rate) while maintaining 59.35% validation accuracy. Moreover, the T-BFA attack is successfully demonstrated on a real computer prototype system running DNN computation, with an Ivy Bridge-based Intel i7 CPU and 8 GB of DDR3 memory.
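The vulnerable-weight-bit search described in the abstract can be illustrated with a toy sketch. The paper ranks candidate bits by the gradient of a targeted loss so the search scales to millions of quantized weights; the brute-force substitute below, with made-up 2x2 int8 weights and a linear "classifier", only conveys the core idea: find the single bit flip that most increases the target class's score.

```python
import numpy as np

def flip_bit(w_int8, bit):
    """Flip one bit of an 8-bit two's-complement quantized weight."""
    u = np.array(w_int8, dtype=np.int8).view(np.uint8)  # reinterpret bits
    u ^= np.uint8(1 << bit)
    return u.view(np.int8)[()]

def target_logit(W, x, target):
    """Score of the target class for a toy linear classifier y = W @ x."""
    return (W.astype(np.float32) @ x)[target]

def most_vulnerable_bit(W, x, target):
    """Exhaustively test every single-bit flip in W and return the one
    that most increases the target-class logit. T-BFA instead ranks
    candidate bits via gradients; brute force only works at toy size."""
    best_loc, best_score = None, target_logit(W, x, target)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            for b in range(8):
                W2 = W.copy()
                W2[i, j] = flip_bit(W[i, j], b)
                s = target_logit(W2, x, target)
                if s > best_score:
                    best_loc, best_score = (i, j, b), s
    return best_loc, best_score

# Toy 2-class linear model: each row is one class's weight vector.
W = np.array([[10, -3],
              [2,  5]], dtype=np.int8)
x = np.array([1.0, 1.0], dtype=np.float32)
loc, score = most_vulnerable_bit(W, x, target=1)  # push x toward class 1
```

At toy scale the winning flip sits in row 1 (the target class's weights), mirroring the paper's observation that only a handful of class-associated bits need to change; the 27-out-of-88-million result for ResNet-18 is the large-scale analogue.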
References (6)
Cited by (15)
Defending and Harnessing the Bit-Flip Based Adversarial Weight Attack
DOI: 10.1109/cvpr42600.2020.01410
Published: 2020-06
Venue: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Impact factor: 0
Authors: Zhezhi He; A. S. Rakin; Jingtao Li; C. Chakrabarti; Deliang Fan
Corresponding authors: Zhezhi He; A. S. Rakin; Jingtao Li; C. Chakrabarti; Deliang Fan

BadNets: Evaluating Backdooring Attacks on Deep Neural Networks
DOI: 10.1109/access.2019.2909068
Published: 2019-01-01
Venue: IEEE ACCESS
Impact factor: 3.9
Authors: Gu, Tianyu; Liu, Kang; Garg, Siddharth
Corresponding author: Garg, Siddharth


Deliang Fan
Mailing address: --
Affiliation: --
Email: --