Traditional Deep Neural Network (DNN) security research has mostly focused on the well-known adversarial input example attack. Recently, another dimension of adversarial attack, namely, attack on DNN weight parameters, has been shown to be very powerful. As a representative example, the Bit-Flip-based adversarial weight Attack (BFA) injects a very small number of faults into weight parameters to hijack the executing DNN function. Prior works on BFA focus on un-targeted attacks that misclassify all inputs into a random output class by flipping a very small number of weight bits stored in computer memory. This paper proposes the first targeted BFA-based (T-BFA) adversarial weight attack on DNNs, which can intentionally mislead selected inputs to a target output class. This objective is achieved by identifying the weight bits that are highly associated with the classification of the target output class, through a class-dependent vulnerable weight bit searching algorithm. Our proposed T-BFA is successfully demonstrated on multiple DNN architectures for image classification tasks. For example, by merely flipping 27 out of 88 million weight bits of ResNet-18, our T-BFA can misclassify all the images from the 'Hen' class into the 'Goose' class (i.e., 100% attack success rate) on the ImageNet dataset, while maintaining 59.35% validation accuracy. Moreover, we successfully demonstrate our T-BFA attack on a real computer prototype system running DNN computation, with an Ivy Bridge-based Intel i7 CPU and 8GB of DDR3 memory.
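To illustrate why flipping even a single weight bit is so damaging, the following minimal sketch (not the paper's search algorithm, just an assumed 8-bit quantized-weight setting) shows that flipping the most significant bit of a two's-complement int8 weight shifts its value by 128, enough to drastically perturb a layer's output:

```python
def flip_bit(value, bit, width=8):
    """Flip one bit of a two's-complement integer of the given width."""
    raw = value & ((1 << width) - 1)   # reinterpret as an unsigned bit pattern
    raw ^= 1 << bit                    # flip the chosen bit
    if raw >= 1 << (width - 1):        # convert back to signed
        raw -= 1 << width
    return raw

w = 3                                  # a small positive int8 weight
print(flip_bit(w, 7))                  # flipping the MSB: 3 -> -125
```

A vulnerable-bit search such as T-BFA's can be thought of as ranking candidate (weight, bit) pairs by how much a flip of this kind increases the attacker's targeted-misclassification objective.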