
Towards an Efficient and General Framework of Robust Training for Graph Neural Networks


Basic Information

DOI: 10.1109/icassp40776.2020.9054465
Publication date: 2020
Journal: ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Impact factor: --
Corresponding author: X. Lin
CAS division: --
Document type: --
Authors: Kaidi Xu; Sijia Liu; Pin-Yu Chen; Mengshu Sun; Caiwen Ding; B. Kailkhura; X. Lin
Research area: --
MeSH terms: --
Keywords: --
Source link: PubMed detail page

Abstract

Graph Neural Networks (GNNs) have made significant advances on several fundamental inference tasks. As a result, there is a surge of interest in using these models for making potentially important decisions in high-regret applications. However, despite GNNs' impressive performance, it has been observed that carefully crafted perturbations on graph structures (or node attributes) lead them to make wrong predictions. The presence of these adversarial examples raises serious security concerns. Most of the existing robust GNN design/training methods are only applicable to white-box settings where model parameters are known and gradient-based methods can be used by performing convex relaxation of the discrete graph domain. More importantly, these methods are neither efficient nor scalable, which makes them infeasible for time-sensitive tasks and massive graph datasets. To overcome these limitations, we propose a general framework which leverages greedy search algorithms and zeroth-order methods to obtain robust GNNs in a generic and efficient manner. On several applications, we show that the proposed techniques are significantly less computationally expensive and, in some cases, more robust than the state-of-the-art methods, making them suitable for large-scale problems which were out of the reach of traditional robust training methods.
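The abstract names two generic building blocks: zeroth-order (gradient-free) estimation of how the loss responds to changes in the graph, and greedy search over discrete edge flips. The Python sketch below is only an illustration of how those two ingredients can be combined; it is not the authors' implementation. The two-point estimator, the relaxed (continuous) treatment of the adjacency matrix, the symmetric flip rule, and the toy loss in the demo are all assumptions made to keep the example self-contained.

import numpy as np

def zeroth_order_gradient(loss_fn, adj, num_samples=20, mu=1e-3, rng=None):
    # Two-point random-direction finite differences: estimates d loss / d adj
    # using only loss evaluations, no analytic gradient of the model.
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(adj, dtype=float)
    for _ in range(num_samples):
        u = rng.standard_normal(adj.shape)                  # random probing direction
        diff = loss_fn(adj + mu * u) - loss_fn(adj - mu * u)
        grad += (diff / (2.0 * mu)) * u
    return grad / num_samples

def greedy_edge_flips(loss_fn, adj, budget=5, **zo_kwargs):
    # Greedily flip `budget` edges: at each step, score every candidate flip by its
    # first-order effect on the loss under the zeroth-order gradient estimate.
    adj = adj.astype(float).copy()
    for _ in range(budget):
        grad = zeroth_order_gradient(loss_fn, adj, **zo_kwargs)
        score = grad * (1.0 - 2.0 * adj)                    # flipping entry (i, j) changes it by 1 - 2*adj
        np.fill_diagonal(score, -np.inf)                    # never create self-loops
        i, j = np.unravel_index(np.argmax(score), score.shape)
        adj[i, j] = adj[j, i] = 1.0 - adj[i, j]             # flip the edge symmetrically
    return adj

if __name__ == "__main__":
    # Toy usage with a stand-in loss; in robust training this would be the GNN's
    # loss evaluated on the perturbed graph (hypothetical placeholder here).
    rng = np.random.default_rng(0)
    n = 8
    upper = (rng.random((n, n)) < 0.3).astype(float)
    adj0 = np.triu(upper, 1) + np.triu(upper, 1).T          # symmetric, no self-loops
    target = adj0.sum()
    toy_loss = lambda a: float(abs(a.sum() - target))       # grows as edges are flipped
    adv = greedy_edge_flips(toy_loss, adj0, budget=3, num_samples=10)
    print("edges flipped:", int(np.abs(adv - adj0).sum() // 2))

In the setting the abstract describes, the loss evaluations would come from querying the GNN itself, which is what makes a zeroth-order estimator attractive when analytic gradients through the discrete graph domain are unavailable.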
References (6)
Cited by (6)
Adversarial Attack on Graph Structured Data
DOI: --
Publication date: 2018-06
Journal: --
Impact factor: 0
Authors: H. Dai; Hui Li; Tian Tian; Xin Huang; L. Wang; Jun Zhu; Le Song
Corresponding authors: H. Dai; Hui Li; Tian Tian; Xin Huang; L. Wang; Jun Zhu; Le Song
On the Design of Black-Box Adversarial Examples by Leveraging Gradient-Free Optimization and Operator Splitting Method
DOI: 10.1109/iccv.2019.00021
Publication date: 2019-07
Journal: 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
Impact factor: 0
Authors: Pu Zhao; Sijia Liu; Pin-Yu Chen; Nghia Hoang; Kaidi Xu; B. Kailkhura; Xue Lin
Corresponding authors: Pu Zhao; Sijia Liu; Pin-Yu Chen; Nghia Hoang; Kaidi Xu; B. Kailkhura; Xue Lin
Adversarial Attacks on Neural Networks for Graph Data
DOI: 10.1145/3219819.3220078
Publication date: 2018-01-01
Journal: KDD'18: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining
Impact factor: 0
Authors: Daniel Zuegner; Amir Akbarnejad; Stephan Guennemann
Corresponding author: Stephan Guennemann


X. Lin
Address: --
Affiliation: --
Email: --