Recent developments in the field of explainable artificial intelligence (XAI) have helped improve trust in Machine-Learning-as-a-Service (MLaaS) systems, in which an explanation is provided together with the model prediction in response to each query. However, XAI also opens the door for adversaries to gain insights into the black-box models in MLaaS, thereby making the models more vulnerable to several attacks. For example, feature-based explanations (e.g., SHAP) could expose the top important features that a black-box model focuses on. Such disclosure has been exploited to craft effective backdoor triggers against malware classifiers. To address this trade-off, we introduce a new concept of achieving local differential privacy (LDP) in the explanations, and building on it we establish a defense, called XRand, against such attacks. We show that our mechanism restricts the information that the adversary can learn about the top important features, while maintaining the faithfulness of the explanations.
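To make the idea of LDP over explanations concrete, the following is a minimal illustrative sketch, not the paper's actual XRand mechanism: it applies a randomized-response-style rule to the set of top-k features reported by a SHAP-like explanation, so that each reported feature only leaks epsilon-LDP-bounded information about whether it is truly among the most important ones. The function name `ldp_randomize_topk` and its parameters are hypothetical choices made for this example.

```python
import numpy as np

def ldp_randomize_topk(shap_values, k=5, epsilon=1.0, rng=None):
    """Illustrative sketch (not the exact XRand mechanism): randomize which
    features are reported as top-k important using a randomized-response-style
    rule, so the released explanation bounds what an adversary learns about
    the true top-k set while most reported features remain faithful."""
    rng = np.random.default_rng() if rng is None else rng
    d = len(shap_values)
    importance = np.abs(shap_values)
    true_topk = list(np.argsort(importance)[::-1][:k])

    # Keep a truly important feature with probability exp(eps) / (exp(eps) + 1),
    # the standard two-outcome randomized-response rate; otherwise swap it for
    # a randomly chosen non-top-k feature.
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)

    others = [i for i in range(d) if i not in true_topk]
    reported = []
    for i in true_topk:
        if rng.random() < p_keep or not others:
            reported.append(i)  # report the genuinely important feature
        else:
            j = others.pop(int(rng.integers(len(others))))
            reported.append(j)  # replace it with a less important feature
    return sorted(reported)

# Toy usage: a 10-dimensional attribution vector for a single query.
shap_values = np.array([0.9, 0.05, 0.4, 0.01, 0.7, 0.02, 0.3, 0.6, 0.08, 0.2])
print(ldp_randomize_topk(shap_values, k=3, epsilon=1.0, rng=np.random.default_rng(0)))
```

Under this sketch, larger epsilon keeps more of the true top-k features in the released explanation (higher faithfulness), while smaller epsilon swaps more of them out (stronger protection against trigger-crafting attacks that rely on knowing the model's most important features).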