Graph Neural Networks (GNNs) have made significant advances on several fundamental inference tasks. As a result, there is a surge of interest in using these models for making potentially important decisions in high-regret applications. However, despite GNNs' impressive performance, it has been observed that carefully crafted perturbations of graph structures (or node attributes) lead them to make wrong predictions. The presence of these adversarial examples raises serious security concerns. Most of the existing robust GNN design/training methods are only applicable to white-box settings, where model parameters are known and gradient-based methods can be used by performing a convex relaxation of the discrete graph domain. More importantly, these methods are neither efficient nor scalable, which makes them infeasible for time-sensitive tasks and massive graph datasets. To overcome these limitations, we propose a general framework which leverages greedy search algorithms and zeroth-order methods to obtain robust GNNs in a generic and efficient manner. On several applications, we show that the proposed techniques are significantly less computationally expensive and, in some cases, more robust than state-of-the-art methods, making them suitable for large-scale problems that were out of reach for traditional robust training methods.
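The zeroth-order methods mentioned above replace exact gradients with estimates built only from loss evaluations, which is what makes them applicable when the graph domain is discrete or the model is a black box. As a minimal sketch of the general idea (not the paper's specific algorithm), the following shows a random-direction finite-difference gradient estimator; the function and sample counts here are illustrative assumptions:

```python
import numpy as np

def zeroth_order_grad(f, x, mu=1e-4, n_samples=5000, rng=None):
    """Estimate the gradient of a black-box loss f at x using
    random-direction finite differences (no backpropagation).

    Each sample perturbs x along a random Gaussian direction u and
    uses (f(x + mu*u) - f(x)) / mu as a directional-derivative proxy.
    Averaging (proxy * u) over many directions approximates grad f(x).
    """
    rng = np.random.default_rng(rng)
    grad = np.zeros_like(x)
    fx = f(x)  # evaluate the base point once
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        grad += (f(x + mu * u) - fx) / mu * u
    return grad / n_samples

# Toy check on a quadratic loss, whose true gradient is 2*x.
f = lambda x: float(np.sum(x ** 2))
x = np.array([1.0, -2.0, 0.5])
g = zeroth_order_grad(f, x, mu=1e-4, n_samples=5000, rng=0)
```

In a robust-training loop, such an estimator would stand in for the attacker's (or defender's) gradient over the relaxed graph variables, at the cost of extra loss evaluations instead of a backward pass.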