FAI: Fair AI in Public Policy - Achieving Fair Societal Outcomes in ML Applications to Education, Criminal Justice, and Health and Human Services


Basic Information

  • Award Number:
    2040929
  • Principal Investigator:
  • Amount:
    $375,000
  • Host Institution:
  • Host Institution Country:
    United States
  • Project Type:
    Standard Grant
  • Fiscal Year:
    2021
  • Funding Country:
    United States
  • Project Period:
    2021-04-01 to 2024-03-31
  • Project Status:
    Completed

Project Summary

This project advances the potential for Machine Learning (ML) to serve the social good by improving understanding of how to apply ML methods to high-stakes, real-world settings in fair and responsible ways. Government agencies and nonprofits use ML tools to inform consequential decisions. However, a growing number of academics, journalists, and policymakers have expressed apprehension regarding the prominent (and growing) role that ML technology plays in the allocation of social benefits and burdens across diverse policy areas, including child welfare, health, and criminal justice. Many of these decisions impart long-lasting effects on the lives of their subjects. When applied inappropriately, they can harm already vulnerable and historically disadvantaged communities. These concerns have given rise to a growing number of research efforts aimed at understanding disparities and developing tools to minimize or mitigate them. To date, these efforts have had limited impact on real-world applications because they focus too narrowly on abstract technical concepts and computational methods at the expense of addressing the decisions and societal outcomes these methods affect. Such efforts also commonly fail to situate the work in real-world contexts or to draw input from the communities most affected by ML-assisted decision-making. This project seeks to fill these gaps in current research and practice in close partnership with government agencies and nonprofits.

This project draws upon disciplinary perspectives from computer science, statistics, and public policy. Its first aim explores the mapping between policy goals and ML formulations. This aim focuses on what facts must be consulted to make coherent determinations about fairness, and anchors those assessments of fairness to near- and long-term societal outcomes for people subject to decisions. This work offers practical ways to engage with partners, policymakers, and affected communities to translate desired fairness goals into computationally tractable measures. Its second aim investigates fairness through the entire ML decision-support pipeline, from policy goals to data to models to interventions. It explores how different approaches to data collection, imputation, model selection, and evaluation impact the fairness of resulting tools. The project's third aim is concerned with modeling the long-term societal outcomes of ML-assisted decision-making in policy domains, ultimately to guide a grounded approach to designing fairness-promoting methods. The project's overarching objective is to bridge the divide between active research in fair ML and applications in policy domains. It does so through innovative teaching and training activities, broadening the participation of under-represented groups in research and technology design, enhancing scientific and technological understanding among the public, practitioners, and legislators, and delivering a direct positive impact with partner agencies. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
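As one illustration of what translating a fairness goal into a computationally tractable measure can look like (a minimal sketch, not drawn from the project itself; all function names here are hypothetical), consider the demographic-parity gap, which compares positive-decision rates across groups:

```python
# Illustrative sketch: one common way to operationalize a fairness goal as a
# number. The "demographic parity difference" is the gap between the highest
# and lowest positive-decision rates across groups. Names are hypothetical.

def selection_rate(decisions, groups, g):
    """Fraction of subjects in group g who received a positive decision (1)."""
    picked = [d for d, grp in zip(decisions, groups) if grp == g]
    return sum(picked) / len(picked)

def demographic_parity_difference(decisions, groups):
    """Max gap in selection rates between any two groups (0 = parity)."""
    rates = [selection_rate(decisions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
# Group "a" is selected 3/4 of the time, group "b" 1/4: gap = 0.5.
print(demographic_parity_difference(decisions, groups))  # 0.5
```

A measure like this is only one candidate translation of a policy goal; choosing among such measures, and validating them against real-world outcomes, is exactly the kind of question the project's first aim addresses.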

Project Outcomes

Journal articles: 8
Monographs: 0
Research awards: 0
Conference papers: 0
Patents: 0
A Validity Perspective on Evaluating the Justified Use of Data-driven Decision-making Algorithms
  • DOI:
    10.1109/satml54575.2023.00050
  • Publication date:
    2023
  • Journal:
  • Impact factor:
    0
  • Authors:
    Coston, Amanda; Kawakami, Anna; Zhu, Haiyi; Holstein, Ken; Heidari, Hoda
  • Corresponding author:
    Heidari, Hoda
From Preference Elicitation to Participatory ML: A Critical Survey & Guidelines for Future Research
The AI Incident Database as an Educational Tool to Raise Awareness of AI Harms: A Classroom Exploration of Efficacy, Limitations, & Future Improvements
  • DOI:
    10.1145/3617694.3623223
  • Publication date:
    2023
  • Journal:
  • Impact factor:
    0
  • Authors:
    Feffer, Michael; Martelaro, Nikolas; Heidari, Hoda
  • Corresponding author:
    Heidari, Hoda
Informational Diversity and Affinity Bias in Team Growth Dynamics
  • DOI:
    10.1145/3617694.3623238
  • Publication date:
    2023
  • Journal:
  • Impact factor:
    0
  • Authors:
    Heidari, Hoda; Barocas, Solon; Kleinberg, Jon; Levy, Karen
  • Corresponding author:
    Levy, Karen
A Taxonomy of Human and ML Strengths in Decision-Making to Investigate Human-ML Complementarity

Other Publications by Hoda Heidari

On the Long-term Impact of Algorithmic Decision Policies: Effort Unfairness and Feature Segregation through Social Learning
The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment
Allocating Opportunities in a Dynamic Model of Intergenerational Mobility
No-regret Learning in Games
  • DOI:
  • Publication date:
    2016
  • Journal:
  • Impact factor:
    0
  • Authors:
    Hoda Heidari
  • Corresponding author:
    Hoda Heidari
A Unifying Framework for Combining Complementary Strengths of Humans and ML toward Better Predictive Decision-Making
  • DOI:
  • Publication date:
    2022
  • Journal:
  • Impact factor:
    0
  • Authors:
    Charvi Rastogi; Liu Leqi; Kenneth Holstein; Hoda Heidari
  • Corresponding author:
    Hoda Heidari


Similar NSFC Grants

Research on Resource Pooling and Allocation under Fairness Considerations
  • Award Number:
    72301240
  • Approval Year:
    2023
  • Funding Amount:
    ¥300,000
  • Project Type:
    Young Scientists Fund
Research on the Fairness of Pension Insurance Systems Given Life-Expectancy Differences across Economic Classes
  • Award Number:
    12301613
  • Approval Year:
    2023
  • Funding Amount:
    ¥300,000
  • Project Type:
    Young Scientists Fund
Interpretable Measures and Implementation Methods for Fairness in Recommendation Algorithms
  • Award Number:
    62302412
  • Approval Year:
    2023
  • Funding Amount:
    ¥300,000
  • Project Type:
    Young Scientists Fund
Rural Transportation Equity Characteristics and Mechanisms Based on Resident Travel: A Case Study of the Beijing-Tianjin-Hebei Region
  • Award Number:
    42301217
  • Approval Year:
    2023
  • Funding Amount:
    ¥300,000
  • Project Type:
    Young Scientists Fund
Research on Controllability and Fairness for Trustworthy Recommender Systems
  • Award Number:
    62372260
  • Approval Year:
    2023
  • Funding Amount:
    ¥500,000
  • Project Type:
    General Program

Similar Overseas Grants

FAI: Advancing Optimization for Threshold-Agnostic Fair AI Systems
  • Award Number:
    2147253
  • Fiscal Year:
    2022
  • Funding Amount:
    $375,000
  • Project Type:
    Standard Grant
FAI: Toward Fair Decision Making and Resource Allocation with Application to AI-Assisted Graduate Admission and Degree Completion
  • Award Number:
    2147276
  • Fiscal Year:
    2022
  • Funding Amount:
    $375,000
  • Project Type:
    Standard Grant
FAI: Advancing Optimization for Threshold-Agnostic Fair AI Systems
  • Award Number:
    2246757
  • Fiscal Year:
    2022
  • Funding Amount:
    $375,000
  • Project Type:
    Standard Grant
FAI: AI Algorithms for Fair Auctions, Pricing, and Marketing
  • Award Number:
    2147361
  • Fiscal Year:
    2022
  • Funding Amount:
    $375,000
  • Project Type:
    Standard Grant
FAI: Foundations of Fair AI in Medicine: Ensuring the Fair Use of Patient Attributes
  • Award Number:
    2040880
  • Fiscal Year:
    2021
  • Funding Amount:
    $375,000
  • Project Type:
    Standard Grant