FAI: Advancing Fairness in AI with Human-Algorithm Collaborations
Basic information
- Award number: 2125692
- Principal investigator:
- Amount: $565,000
- Host institution:
- Host institution country: United States
- Grant type: Standard Grant
- Fiscal year: 2020
- Funding country: United States
- Project period: 2020-10-01 to 2023-12-31
- Status: Completed
- Source:
- Keywords:
Project abstract
Artificial intelligence (AI) systems are increasingly used to assist humans in making high-stakes decisions, such as online information curation, resume screening, mortgage lending, police surveillance, public resource allocation, and pretrial detention. While the hope is that the use of algorithms will improve societal outcomes and economic efficiency, concerns have been raised that algorithmic systems might inherit human biases from historical data, perpetuate discrimination against already vulnerable populations, and generally fail to embody a given community's important values. Recent work on algorithmic fairness has characterized the manner in which unfairness can arise at different steps along the development pipeline, produced dozens of quantitative notions of fairness, and provided methods for enforcing these notions. However, there is a significant gap between the over-simplified algorithmic objectives and the complications of real-world decision-making contexts. This project aims to close the gap by explicitly accounting for the context-specific fairness principles of actual stakeholders, their acceptable fairness-utility trade-offs, and the cognitive strengths and limitations of human decision-makers throughout the development and deployment of the algorithmic system. To meet these goals, this project enables close human-algorithm collaborations that combine innovative machine learning methods with approaches from human-computer interaction (HCI) for eliciting feedback and preferences from human experts and stakeholders. There are three main research activities that naturally correspond to three stages of a human-in-the-loop AI system. First, the project will develop novel fairness elicitation mechanisms that will allow stakeholders to effectively express their perceptions on fairness. 
To go beyond the traditional approach of statistical group fairness, the investigators will formulate new fairness measures for individual fairness based on elicited feedback. Second, the project will develop algorithms and mechanisms to manage the trade-offs between the new fairness measures developed in the first step and multiple existing fairness and accuracy measures. Finally, the project will develop algorithms to detect and mitigate human operators' biases, and methods that rely on human feedback to correct and de-bias existing models during the deployment of the AI system. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
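To make the abstract's contrast between statistical group fairness and individual fairness concrete, here is a minimal illustrative sketch of two common group-fairness measures. This is not code from the project; the function names and toy data are invented for this example:

```python
def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    def rate(g):
        preds = [p for p, a in zip(y_pred, group) if a == g]
        return sum(preds) / len(preds)
    return abs(rate(0) - rate(1))

def equal_opportunity_gap(y_pred, y_true, group):
    """Absolute difference in true-positive rates between two groups."""
    def tpr(g):
        preds = [p for p, t, a in zip(y_pred, y_true, group) if a == g and t == 1]
        return sum(preds) / len(preds)
    return abs(tpr(0) - tpr(1))

# Toy data: binary predictions, true labels, and a binary group attribute.
y_pred = [1, 0, 1, 1, 0, 1]
y_true = [1, 0, 1, 0, 1, 1]
group  = [0, 0, 0, 1, 1, 1]
print(demographic_parity_gap(y_pred, group))            # → 0.0
print(equal_opportunity_gap(y_pred, y_true, group))     # → 0.5
```

Measures like these are defined only over group-level statistics, which is precisely why the project proposes eliciting stakeholder feedback to construct individual-level fairness notions that such aggregates cannot capture.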
Project outcomes
Journal articles (0)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
Other publications by Zhiwei Steven Wu
Provable Multi-Party Reinforcement Learning with Diverse Human Feedback
- DOI: 10.48550/arxiv.2403.05006
- Year: 2024
- Journal:
- Impact factor: 0
- Authors: Huiying Zhong; Zhun Deng; Weijie J. Su; Zhiwei Steven Wu; Linjun Zhang
- Corresponding author: Linjun Zhang
Inducing Approximately Optimal Flow Using Truthful Mediators
- DOI:
- Year: 2015
- Journal:
- Impact factor: 0
- Authors: Ryan M. Rogers; Aaron Roth; Jonathan Ullman; Zhiwei Steven Wu
- Corresponding author: Zhiwei Steven Wu
Logarithmic Query Complexity for Approximate Nash Computation in Large Games
- DOI:
- Year: 2016
- Journal:
- Impact factor: 0.5
- Authors: P. Goldberg; Francisco Javier Marmolejo; Zhiwei Steven Wu
- Corresponding author: Zhiwei Steven Wu
Competing Bandits: The Perils of Exploration Under Competition
- DOI:
- Year: 2019
- Journal:
- Impact factor: 0
- Authors: Guy Aridor; Y. Mansour; Aleksandrs Slivkins; Zhiwei Steven Wu
- Corresponding author: Zhiwei Steven Wu
The Externalities of Exploration and How Data Diversity Helps Exploitation
- DOI: 10.1145/3603195.3603199
- Year: 2018
- Journal:
- Impact factor: 0
- Authors: Manish Raghavan; Aleksandrs Slivkins; Jennifer Wortman Vaughan; Zhiwei Steven Wu
- Corresponding author: Zhiwei Steven Wu
Other grants by Zhiwei Steven Wu
CAREER: New Frontiers of Private Learning and Synthetic Data
- Award number: 2339775
- Fiscal year: 2024
- Amount: $565,000
- Grant type: Continuing Grant
Collaborative Research: SaTC: CORE: Medium: Private Model Personalization
- Award number: 2232693
- Fiscal year: 2023
- Amount: $565,000
- Grant type: Standard Grant
Collaborative Research: SaTC: CORE: Small: Foundations for the Next Generation of Private Learning Systems
- Award number: 2120611
- Fiscal year: 2021
- Amount: $565,000
- Grant type: Standard Grant
FAI: Advancing Fairness in AI with Human-Algorithm Collaborations
- Award number: 1939606
- Fiscal year: 2020
- Amount: $565,000
- Grant type: Standard Grant
Similar NSFC grants
Neural mechanisms underlying the initiation of forward locomotion in Drosophila larvae
- Award number:
- Year approved: 2022
- Amount: ¥540,000
- Grant type: General Program
Neural mechanisms underlying the initiation of forward locomotion in Drosophila larvae
- Award number: 32271041
- Year approved: 2022
- Amount: ¥540,000
- Grant type: General Program
Neural signal transmission pathways and feedback for "forward" motion control in robotic birds
- Award number: 61903230
- Year approved: 2019
- Amount: ¥240,000
- Grant type: Young Scientists Fund
Geochemical genesis and tectonic significance of the Early Carboniferous strongly peraluminous granite belt in the Maodeng-Qianjinchang area, central-eastern Inner Mongolia
- Award number: 41702054
- Year approved: 2017
- Amount: ¥230,000
- Grant type: Young Scientists Fund
Periodic pulsation and oscillation of advancing resistance during friction stir welding and its regulation
- Award number: 51675248
- Year approved: 2016
- Amount: ¥620,000
- Grant type: General Program
Similar overseas grants
Collaborative Research: Advancing Fairness for Emerging Infrastructure Systems with High Operational Dynamics
- Award number: 2309667
- Fiscal year: 2023
- Amount: $565,000
- Grant type: Standard Grant
Collaborative Research: Advancing Fairness for Emerging Infrastructure Systems with High Operational Dynamics
- Award number: 2309668
- Fiscal year: 2023
- Amount: $565,000
- Grant type: Standard Grant
FAI: Advancing Deep Learning Towards Spatial Fairness
- Award number: 2147195
- Fiscal year: 2022
- Amount: $565,000
- Grant type: Standard Grant
Advancing FAIRness and TRUST in the gEAR portal
- Award number: 10408360
- Fiscal year: 2021
- Amount: $565,000
- Grant type:
FAI: Advancing Fairness in AI with Human-Algorithm Collaborations
- Award number: 1939606
- Fiscal year: 2020
- Amount: $565,000
- Grant type: Standard Grant