
Say No to the Discrimination: Learning Fair Graph Neural Networks with Limited Sensitive Attribute Information

Basic Information

DOI:
10.1145/3437963.3441752
Publication date:
2021-01-01
Venue:
WSDM '21: PROCEEDINGS OF THE 14TH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING
Corresponding author:
Wang, Suhang
Document type:
Proceedings Paper
Authors: Dai, Enyan; Wang, Suhang

Abstract

Graph neural networks (GNNs) have shown great power in modeling graph structured data. However, similar to other machine learning models, GNNs may make predictions biased on protected sensitive attributes, e.g., skin color and gender. This is because machine learning algorithms, including GNNs, are trained to reflect the distribution of the training data, which often contains historical bias towards sensitive attributes. In addition, the discrimination in GNNs can be magnified by graph structures and the message-passing mechanism. As a result, the applications of GNNs in sensitive domains such as crime rate prediction would be largely limited. Though extensive studies of fair classification have been conducted on i.i.d. data, methods to address the problem of discrimination on non-i.i.d. data are rather limited. Furthermore, the practical scenario of sparse annotations of sensitive attributes is rarely considered in existing works. Therefore, we study the novel and important problem of learning fair GNNs with limited sensitive attribute information. FairGNN is proposed to eliminate the bias of GNNs whilst maintaining high node classification accuracy by leveraging graph structures and limited sensitive information. Our theoretical analysis shows that FairGNN can ensure the fairness of GNNs under mild conditions given limited nodes with known sensitive attributes. Extensive experiments on real-world datasets also demonstrate the effectiveness of FairGNN in debiasing and keeping high accuracy.
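The abstract measures success in terms of "debiasing while keeping high accuracy". Fair node classification of this kind is conventionally evaluated with two group-fairness metrics, statistical parity and equal opportunity, which compare positive-prediction rates across the sensitive groups. A minimal pure-Python sketch of these metrics is shown below; the function names are illustrative, not taken from the paper's code.

```python
def group_rate(pred, mask):
    """Fraction of positive predictions among entries where mask is True."""
    selected = [p for p, m in zip(pred, mask) if m]
    return sum(selected) / len(selected)

def statistical_parity_diff(pred, sens):
    """Delta_SP = |P(y_hat=1 | s=0) - P(y_hat=1 | s=1)|."""
    return abs(group_rate(pred, [s == 0 for s in sens])
               - group_rate(pred, [s == 1 for s in sens]))

def equal_opportunity_diff(pred, label, sens):
    """Delta_EO = |P(y_hat=1 | y=1, s=0) - P(y_hat=1 | y=1, s=1)|,
    i.e. the true-positive-rate gap between the two sensitive groups."""
    return abs(group_rate(pred, [y == 1 and s == 0 for y, s in zip(label, sens)])
               - group_rate(pred, [y == 1 and s == 1 for y, s in zip(label, sens)]))

# Toy example: binary predictions, labels, and a binary sensitive attribute.
pred  = [1, 0, 1, 1]
label = [1, 1, 1, 1]
sens  = [0, 0, 1, 1]
print(statistical_parity_diff(pred, sens))        # 0.5
print(equal_opportunity_diff(pred, label, sens))  # 0.5
```

A fair model drives both differences toward zero while the node classification accuracy stays high, which is the trade-off the paper's experiments report.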
References (56)
Cited by (0)

