Improving the robustness of language models, including successfully defending them against adversarial attacks, remains an open problem. In computer vision settings, the stochastic noising and de-noising process provided by diffusion models has proven useful for purifying input images, thereby improving model robustness against adversarial attacks. Similarly, some initial work has explored the use of random noising and de-noising to mitigate adversarial attacks in an NLP setting, but improving the quality and efficiency of these methods is necessary for them to remain competitive. We build on methods of input-text purification inspired by diffusion processes, which randomly mask and refill portions of the input text before classification. Our novel method, MaskPure, matches or exceeds the robustness of other contemporary defenses, while requiring no adversarial classifier training and no knowledge of the attack type. In addition, we show that MaskPure is certifiably robust. To our knowledge, MaskPure is the first stochastic-purification method with demonstrated success against both character-level and word-level attacks, indicating the generalizable and promising nature of stochastic denoising defenses. In summary, the MaskPure algorithm bridges the literature on the strongest current certifiable and empirical adversarial defense methods, showing that theoretical and practical robustness can be obtained together. Code is available on GitHub at https://github.com/hubarruby/MaskPure.
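To make the mask-and-refill idea concrete, the following is a minimal sketch of one plausible purification step using a HuggingFace fill-mask pipeline. The mask rate, the majority-vote ensembling, and the helper names (`purify`, `vote`, `classify`) are illustrative assumptions for exposition, not the paper's exact implementation; consult the linked repository for the actual method.

```python
# Illustrative sketch of mask-and-refill text purification in the spirit
# of MaskPure. Assumptions: whitespace tokenization, a fixed mask rate,
# and top-1 refilling with a BERT masked LM; none of these specifics are
# taken from the paper itself.
import random
from collections import Counter
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
MASK = fill_mask.tokenizer.mask_token  # "[MASK]" for BERT

def purify(text: str, mask_rate: float = 0.3, seed: int = 0) -> str:
    """Randomly mask a fraction of tokens, then refill each masked
    position with the masked LM's top prediction."""
    rng = random.Random(seed)
    words = text.split()
    masked_idxs = [i for i in range(len(words)) if rng.random() < mask_rate]
    for i in masked_idxs:
        candidate = words.copy()
        candidate[i] = MASK
        # Refill one mask at a time with the highest-scoring token.
        prediction = fill_mask(" ".join(candidate), top_k=1)[0]["token_str"]
        words[i] = prediction
    return " ".join(words)

def vote(classify, text: str, n_samples: int = 5):
    """Classify several independently purified copies of the input and
    return the majority label; `classify` is a hypothetical text
    classifier mapping a string to a label."""
    labels = [classify(purify(text, seed=s)) for s in range(n_samples)]
    return Counter(labels).most_common(1)[0][0]
```

Because each purified copy is an independent random draw, aggregating predictions over several copies is what makes the stochastic defense amenable to certification-style arguments, at the cost of extra inference passes per input.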