CRII: RI: Bayesian Models for Fairness, and Fairness for Bayesian Models
Basic Information
- Award Number: 1850023
- Principal Investigator:
- Amount: $174,900
- Host Institution:
- Host Institution Country: United States
- Award Type: Standard Grant
- Fiscal Year: 2019
- Funding Country: United States
- Project Period: 2019-07-01 to 2023-06-30
- Status: Completed
- Source:
- Keywords:
Project Abstract
In our interconnected society, artificial intelligence (AI) and machine learning (ML) systems have become ubiquitous. Every day, machine learning systems influence our purchasing decisions, our navigation through virtual and physical spaces, the friendships we make, and even the romantic relationships we form. The decisions automated by these systems have increasingly important real-world consequences, from credit scoring, to college admissions, to the prediction of re-offending behavior in the criminal justice system, as is already being used for bail and sentencing decisions across the United States of America. With the growing impact of artificial intelligence and machine learning technologies on our society, and their importance to the economic competitiveness and technological leadership of the United States, it is imperative that we ensure that these systems behave in a fair and trustworthy manner. Recent studies have shown that data-driven AI and ML systems can in some cases exhibit unfair and unjust behavior, for example due to biases hidden in the input data, or because of flawed engineering decisions. This project develops a suite of tools for modeling, measuring, and correcting unfair and discriminatory behavior in AI and ML systems. The research focuses on simultaneously addressing algorithmic discrimination that may occur across several overlapping dimensions, including gender, race, national origin, sexual orientation, disability status, and socioeconomic class. The novel AI techniques developed in this project address the two main technical challenges which specifically arise in this context: uncertainty in the measurement of fairness, and correlations in the data.

When ensuring AI fairness regarding multiple protected dimensions such as gender and race, data sparsity rapidly becomes a challenge as the number of dimensions, or the number of values per dimension, increases. This data sparsity directly results in uncertainty in the measurement of fairness. The project will leverage Bayesian inference, a branch of statistics which specifically addresses uncertainty, to manage this issue. Correlations between the protected (and other) attributes will be leveraged using probabilistic graphical models, a class of machine learning models which encode dependence relationships. Using a novel Bayesian definition of fairness as a unifying framework, the project's contributions consist of three interdependent tracks. The first track will focus on developing general modeling techniques for the statistically efficient measurement of fairness, using latent variable models to produce parsimonious representations, and hierarchical modeling to achieve data efficiency. The second track develops adversarial optimization algorithms to train machine learning algorithms to respect fairness constraints when the data distribution is uncertain. In the third track, the project will develop methods for ensuring fairness in Bayesian inference, which can be used to prevent the inferences from reflecting negative stereotypes. The methods will be validated with case studies on applications across a wide range of data regimes, including modeling census income data, criminal justice recidivism prediction, and social media analytics.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
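A minimal sketch of what the Bayesian measurement of fairness described above could look like in practice, in the spirit of the differential fairness definition from the "Differential Fairness" paper listed under Project Outcomes: outcome rates for intersectional groups (e.g., gender × race) are smoothed with a symmetric Dirichlet prior so that sparse groups still yield well-defined estimates, and the reported epsilon is the largest absolute log-ratio of outcome probabilities between any pair of groups. The column names, toy data, and concentration value below are illustrative assumptions, not the project's released code.

```python
# Illustrative sketch (assumptions noted above), not the project's implementation.
import itertools

import numpy as np
import pandas as pd


def smoothed_group_rates(df, protected_cols, outcome_col, concentration=0.5):
    """Estimate P(outcome = 1 | intersectional group) with Dirichlet smoothing.

    The symmetric Dirichlet prior (here over a binary outcome) keeps estimates
    well-defined for sparse intersectional groups -- the data-sparsity and
    uncertainty issue the abstract highlights.
    """
    rates = {}
    for group, sub in df.groupby(protected_cols):
        positives = float((sub[outcome_col] == 1).sum())
        total = float(len(sub))
        rates[group] = (positives + concentration) / (total + 2.0 * concentration)
    return rates


def empirical_differential_fairness(rates):
    """Smallest epsilon such that every pair of groups' outcome probabilities
    (for outcome = 1 and outcome = 0) differ by at most a factor of exp(epsilon)."""
    eps = 0.0
    for (_, p1), (_, p2) in itertools.combinations(rates.items(), 2):
        eps = max(eps,
                  abs(np.log(p1) - np.log(p2)),          # outcome = 1
                  abs(np.log(1 - p1) - np.log(1 - p2)))  # outcome = 0
    return eps


if __name__ == "__main__":
    # Toy data standing in for, e.g., census income records.
    rng = np.random.default_rng(0)
    n = 1000
    df = pd.DataFrame({
        "gender": rng.choice(["F", "M"], size=n),
        "race": rng.choice(["A", "B", "C"], size=n),
        "outcome": rng.integers(0, 2, size=n),
    })
    rates = smoothed_group_rates(df, ["gender", "race"], "outcome")
    print(f"empirical differential fairness epsilon = "
          f"{empirical_differential_fairness(rates):.3f}")
```

Smaller epsilon means more similar outcome rates across intersectional groups; a fully Bayesian treatment, as proposed in the project, would additionally propagate posterior uncertainty in these rates rather than relying on point estimates.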
Project Outcomes
Journal articles (14)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
Do Humans Prefer Debiased AI Algorithms? A Case Study in Career Recommendation
- DOI: 10.1145/3490099.3511108
- Publication year: 2022
- Journal:
- Impact factor: 0
- Authors: Wang, Clarice; Wang, Kathryn; Bian, Andrew; Islam, Rashidul; Keya, Kamrun Naher; Foulds, James; Pan, Shimei
- Corresponding author: Pan, Shimei
Can We Obtain Fairness For Free?
- DOI: 10.1145/3461702.3462614
- Publication year: 2021
- Journal:
- Impact factor: 0
- Authors: Islam, Rashidul; Pan, Shimei; Foulds, James R.
- Corresponding author: Foulds, James R.
Equitable Allocation of Healthcare Resources with Fair Survival Models
- DOI: 10.1137/1.9781611976700.22
- Publication year: 2021
- Journal:
- Impact factor: 0
- Authors: Keya, Kamrun Naher; Islam, Rashidul; Pan, Shimei; Stockwell, Ian; Foulds, James
- Corresponding author: Foulds, James
Differential Fairness
- DOI:
- Publication year: 2019
- Journal:
- Impact factor: 0
- Authors: James R. Foulds; Rashidul Islam; Kamrun Keya; Shimei Pan
- Corresponding author: James R. Foulds; Rashidul Islam; Kamrun Keya; Shimei Pan
Equitable Allocation of Healthcare Resources with Fair Cox Models
- DOI:
- Publication year: 2020
- Journal:
- Impact factor: 0
- Authors: Keya, K.; Islam, R.; Pan, S.; Stockwell, I.; Foulds, J. R.
- Corresponding author: Foulds, J. R.
Other Publications by James Foulds
The Monitoring Illicit Substance Use Consortium: A Study Protocol
- DOI:
- Publication year: 2024
- Journal:
- Impact factor: 0
- Authors: C. Greenwood; P. Letcher; Esther Laurance; Joseph M. Boden; James Foulds; E. Spry; Jessica A. Kerr; J. Toumbourou; J. Heerde; Catherine Nolan; Yvonne Bonomo; Delyse M. Hutchinson; Tim Slade; S. Aarsman; Craig A. Olsson
- Corresponding author: Craig A. Olsson
Other Grants by James Foulds
CAREER: Fair Artificial Intelligence for Intelligent Humans: Removing the Barriers to Deployment of Fair AI Technologies
- Award Number: 2046381
- Fiscal Year: 2021
- Funding Amount: $174,900
- Award Type: Continuing Grant
AI-DCL: Fairness for the Allocation of Healthcare Resources
- Award Number: 1927486
- Fiscal Year: 2019
- Funding Amount: $174,900
- Award Type: Standard Grant
Similar NSFC Grants (National Natural Science Foundation of China)
Study of the extracellular domain of the transmembrane protein LRP5 regulating the membrane receptor TβRI to promote homing and differentiation of BMSCs on titanium surfaces
- Award Number: 82301120
- Approval Year: 2023
- Funding Amount: CNY 300,000
- Project Type: Young Scientists Fund
Exploring the anti-inflammatory mechanism by which eye acupuncture activates mast cells (MCs) in CI/RI rats to target H3R and regulate "immune surveillance," based on the "immune-neural" network
- Award Number: 82374375
- Approval Year: 2023
- Funding Amount: CNY 510,000
- Project Type: General Program
Mechanistic study of Dectin-2 exacerbating asthma attacks by promoting FcεRI aggregation and mast cell activation
- Award Number: 82300022
- Approval Year: 2023
- Funding Amount: CNY 300,000
- Project Type: Young Scientists Fund
Role and mechanism of UFMylation of TβRI in regulating the TGF-β signaling pathway and breast cancer metastasis
- Award Number: 32200568
- Approval Year: 2022
- Funding Amount: CNY 300,000
- Project Type: Young Scientists Fund
Discovery of β-carboline alkaloid TβRI inhibitors from the Tibetan medicinal plant 甘肃蚤缀 and their mechanism of anti-pulmonary-fibrosis action
- Award Number:
- Approval Year: 2022
- Funding Amount: CNY 300,000
- Project Type: Young Scientists Fund
Similar Overseas Grants
RI:Small:Exploring Efficient Bayesian Model-Augmentation Techniques for Decomposible Contrastive Representation Learning
- Award Number: 2223292
- Fiscal Year: 2022
- Funding Amount: $174,900
- Award Type: Standard Grant
RI: Small: Enabling Interpretable AI via Bayesian Deep Learning
- Award Number: 2127918
- Fiscal Year: 2021
- Funding Amount: $174,900
- Award Type: Continuing Grant
RI: Small: New Directions in Probabilistic Deep Learning: Exponential Families, Bayesian Nonparametrics and Empirical Bayes
- Award Number: 2127869
- Fiscal Year: 2021
- Funding Amount: $174,900
- Award Type: Standard Grant
CRII: RI: Self-Attention through the Bayesian Lens
- Award Number: 1850358
- Fiscal Year: 2019
- Funding Amount: $174,900
- Award Type: Standard Grant
RI: SMALL: Robust Reinforcement Learning Using Bayesian Models
- Award Number: 1815275
- Fiscal Year: 2018
- Funding Amount: $174,900
- Award Type: Standard Grant