CAREER: Fair Artificial Intelligence for Intelligent Humans: Removing the Barriers to Deployment of Fair AI Technologies
Basic Information
- Award Number: 2046381
- Principal Investigator:
- Amount: $546,700
- Host Institution:
- Host Institution Country: United States
- Award Type: Continuing Grant
- Fiscal Year: 2021
- Funding Country: United States
- Project Period: 2021-03-01 to 2026-02-28
- Status: Ongoing
- Source:
- Keywords:
Project Abstract
There is growing awareness that artificial intelligence (AI) and machine learning systems can in some cases behave in unfair and discriminatory ways, with harmful consequences in many areas including criminal justice, hiring, medicine, and college admissions. Techniques for ensuring AI fairness have received a lot of attention in the AI literature. However, these techniques have yet to see a substantial degree of deployment in real systems, which has thus far limited their real-world impact. This is likely due in part to several practical challenges for deploying fair AI technologies. First, the conventional wisdom is that fairness brings a cost in prediction performance, which could affect an organization's bottom line. Second, it is difficult to know which mathematical definition of AI fairness is appropriate to adopt, since the definitions conflict with each other and encode different value systems. Finally, there is a chicken-and-egg problem, in that public pressure for an organization to adopt fairness considerations into an AI system only increases after this has been successfully demonstrated elsewhere. This research will develop technical solutions to resolve these human-facing barriers to the adoption of AI fairness techniques, thereby increasing deployment and the subsequent positive real-world impact.

To resolve the practical limitations of fair AI techniques, this research incorporates human-centered considerations into the design and execution of fair AI algorithms, connecting and advancing the state of the art in statistical machine learning, fair AI, and human-centered AI. The first track of the project will develop methods for obtaining “fairness for free,” in which the fairest possible solution is found while sacrificing little to no performance. The researchers will design black-box, gray-box, and white-box approaches to this task. The second track of the research will focus on developing explainable AI and data visualization techniques to help humans assess and trade off the consequences of different competing notions of fairness. A key step toward accomplishing this is to create a unifying fairness framework that systematically encodes the space of possible fairness metrics. Finally, in the third track of the project, the researchers will develop practical solutions to several real-world applications of AI fairness, including the allocation of medical resources and AI-based career counseling. The solutions will involve both applied and fundamental AI research, and will facilitate the evaluation of the methods developed in the first two tracks. The project also includes initiatives for outreach, broadening participation in science, technology, engineering, and mathematics (STEM) fields, training and educating graduate students, and curriculum development.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
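As a concrete illustration of why the mathematical definitions mentioned above can conflict, the short Python sketch below computes two widely used group-fairness metrics, the demographic parity gap and the equalized odds gap, on a toy set of predictions. These metrics are standard examples from the fairness literature rather than necessarily the ones adopted by this project, and the label, prediction, and group arrays are hypothetical.

```python
import numpy as np

# Toy predictions for eight individuals in two protected groups (hypothetical data).
y_true = np.array([1, 1, 1, 0, 1, 0, 0, 0])  # ground-truth labels
y_pred = np.array([1, 1, 0, 0, 1, 1, 0, 0])  # classifier decisions
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected-group membership

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between the two groups."""
    rates = [y_pred[group == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap between the groups in true-positive or false-positive rate."""
    gaps = []
    for label in (1, 0):  # label=1 gives the TPR gap, label=0 the FPR gap
        rates = [y_pred[(group == g) & (y_true == label)].mean() for g in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

print("demographic parity gap:", demographic_parity_gap(y_pred, group))      # 0.0
print("equalized odds gap:    ", equalized_odds_gap(y_true, y_pred, group))  # ~0.33
```

On this toy data the two groups receive positive predictions at exactly the same rate (a zero demographic parity gap), yet their true- and false-positive rates differ by about 0.33, so the same classifier looks fair under one definition and unfair under another. This is the kind of tension between value systems that the abstract describes.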
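The phrase “fairness for free” can likewise be read as a simple model-selection rule: among candidate models, prefer the fairest one whose predictive performance is within a small tolerance of the best. The sketch below illustrates only that selection rule; the candidate models, penalty strengths, and tolerance are hypothetical, and the black-box, gray-box, and white-box methods mentioned in the abstract are not reproduced here.

```python
from typing import List, NamedTuple

class Candidate(NamedTuple):
    name: str            # label for the candidate model (hypothetical)
    accuracy: float      # validation accuracy (higher is better)
    fairness_gap: float  # e.g. demographic parity gap (lower is fairer)

def fairest_within_tolerance(candidates: List[Candidate], tol: float = 0.01) -> Candidate:
    """Return the fairest candidate whose accuracy is within `tol` of the best."""
    best_acc = max(c.accuracy for c in candidates)
    eligible = [c for c in candidates if c.accuracy >= best_acc - tol]
    return min(eligible, key=lambda c: c.fairness_gap)

# Hypothetical models trained with increasing fairness-penalty strength.
models = [
    Candidate("lambda=0.0", accuracy=0.842, fairness_gap=0.19),
    Candidate("lambda=0.5", accuracy=0.840, fairness_gap=0.07),
    Candidate("lambda=1.0", accuracy=0.835, fairness_gap=0.03),
    Candidate("lambda=2.0", accuracy=0.801, fairness_gap=0.01),
]

print(fairest_within_tolerance(models, tol=0.01))
# Picks lambda=1.0: nearly identical accuracy to the best model, far smaller fairness gap.
```

Under this reading, fairness comes “for free” whenever the eligible set contains a model whose fairness gap is much smaller than that of the unconstrained best model, as in the hypothetical numbers above.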
Project Outcomes
Journal Articles (5)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)
Neural Embedding Allocation: Distributed Representations of Topic Models
- DOI: 10.1162/coli_a_00457
- Publication Date: 2019-09
- Journal:
- Impact Factor: 9.3
- Authors: Kamrun Keya; Yannis Papanikolaou; James R. Foulds
- Corresponding Author: Kamrun Keya; Yannis Papanikolaou; James R. Foulds
Do Humans Prefer Debiased AI Algorithms? A Case Study in Career Recommendation
- DOI: 10.1145/3490099.3511108
- Publication Date: 2022
- Journal:
- Impact Factor: 0
- Authors: Wang, Clarice; Wang, Kathryn; Bian, Andrew; Islam, Rashidul; Keya, Kamrun Naher; Foulds, James; Pan, Shimei
- Corresponding Author: Pan, Shimei
Can We Obtain Fairness For Free?
- DOI: 10.1145/3461702.3462614
- Publication Date: 2021
- Journal:
- Impact Factor: 0
- Authors: Islam, Rashidul; Pan, Shimei; Foulds, James R.
- Corresponding Author: Foulds, James R.
When Biased Humans Meet Debiased AI: A Case Study in College Major Recommendation
- DOI: 10.1145/3611313
- Publication Date: 2023-09-01
- Journal:
- Impact Factor: 3.4
- Authors: Wang, Clarice; Wang, Kathryn; Pan, Shimei
- Corresponding Author: Pan, Shimei
Other Publications by James Foulds
The Monitoring Illicit Substance Use Consortium: A Study Protocol
- DOI:
- Publication Date: 2024
- Journal:
- Impact Factor: 0
- Authors: C. Greenwood; P. Letcher; Esther Laurance; Joseph M. Boden; James Foulds; E. Spry; Jessica A. Kerr; J. Toumbourou; J. Heerde; Catherine Nolan; Yvonne Bonomo; Delyse M. Hutchinson; Tim Slade; S. Aarsman; Craig A. Olsson
- Corresponding Author: Craig A. Olsson
Other Grants by James Foulds
CRII: RI: Bayesian Models for Fairness, and Fairness for Bayesian Models
- Award Number: 1850023
- Fiscal Year: 2019
- Amount: $546,700
- Award Type: Standard Grant
AI-DCL: Fairness for the Allocation of Healthcare Resources
- Award Number: 1927486
- Fiscal Year: 2019
- Amount: $546,700
- Award Type: Standard Grant
Similar NSFC Grants
The cognitive and neural mechanisms by which immediate emotions influence opportunity-fairness decisions, and the intervening role of emotion regulation
- Award Number: 32300857
- Year Approved: 2023
- Amount: ¥300,000
- Award Type: Young Scientists Fund
Research on intelligent learning strategies for online learning platforms from the perspective of educational equity
- Award Number: 72301269
- Year Approved: 2023
- Amount: ¥300,000
- Award Type: Young Scientists Fund
Efficient and fair personalized federated learning: algorithms and theory
- Award Number: 62376110
- Year Approved: 2023
- Amount: ¥490,000
- Award Type: General Program
Differentiation mechanisms and health effects of inter-city green-space equity
- Award Number: 42301238
- Year Approved: 2023
- Amount: ¥300,000
- Award Type: Young Scientists Fund
Rural development equity and benefit-coordination mechanisms under territorial spatial regulation and land market segmentation: a study of typical ecological conservation areas in Wuhan
- Award Number: 72374080
- Year Approved: 2023
- Amount: ¥410,000
- Award Type: General Program
Similar Overseas Grants
Maternal Health Data Innovation and Coordination Hub
- Award Number: 10748737
- Fiscal Year: 2023
- Amount: $546,700
- Award Type:
UZIMA-DS: UtiliZing health Information for Meaningful impact in East Africa through Data Science
- Award Number: 10490293
- Fiscal Year: 2021
- Amount: $546,700
- Award Type:
UZIMA-DS: UtiliZing health Information for Meaningful impact in East Africa through Data Science
- Award Number: 10659241
- Fiscal Year: 2021
- Amount: $546,700
- Award Type:
Improving AI/ML-readiness of Synthetic Data in a Resource-Constrained Setting
- Award Number: 10841728
- Fiscal Year: 2021
- Amount: $546,700
- Award Type:
UZIMA-DS: UtiliZing health Information for Meaningful impact in East Africa through Data Science
- Award Number: 10314084
- Fiscal Year: 2021
- Amount: $546,700
- Award Type: