FAI: Fair Representation Learning: Fundamental Trade-Offs and Algorithms


Basic Information

  • Award Number:
    2147116
  • Principal Investigator:
  • Amount:
    $331,700
  • Institution:
  • Institution Country:
    United States
  • Program Type:
    Standard Grant
  • Fiscal Year:
    2022
  • Funding Country:
    United States
  • Project Period:
    2022-08-15 to 2025-07-31
  • Project Status:
    Active

Project Summary

Artificial-intelligence-based computer systems increasingly rely on effective information representations to support decision making in domains ranging from image recognition to identity verification through face recognition. However, systems that rely on traditional statistics and prediction from historical or human-curated data also naturally inherit any past biased or discriminatory tendencies. The overarching goal of this award is to mitigate this problem by using information representations that maintain their utility while eliminating information that could lead to discrimination against subgroups in a population. Specifically, the project will characterize the trade-offs between the utility and fairness of different data representations, and then identify solutions that close the gap to the optimal trade-off. New representations and corresponding algorithms will then be developed, guided by this trade-off analysis. The investigators will provide performance limits based on the developed theory, along with evidence of efficacy, in order to obtain fair machine learning systems and gain societal trust. The application domain used in this research is face recognition. The undergraduate and graduate students who participate in the project will be trained to conduct cutting-edge research on integrating fairness into AI-based systems.
The research agenda of this project centers on two questions about learning fair representations: (i) What are the fundamental trade-offs between the utility and fairness of data representations? (ii) How can practical fair representation learning algorithms be devised that mitigate bias in machine learning systems and provably achieve the theoretical utility-fairness trade-offs? To answer the first question, the project will theoretically elucidate and empirically quantify the trade-offs inherent to utility and fairness, considering different fairness definitions such as demographic parity, equalized odds, and equality of opportunity. To answer the second question, the project will develop representation learning algorithms that (a) are analytically tractable and provably fair, (b) mitigate worst-case bias, as opposed to average bias over instances or demographic groups, (c) are fair with respect to demographic information that is only partially known or fully unknown, and (d) mitigate demographic bias due to imbalances in both samples and features through optimal data sampling and projection. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
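The fairness definitions named in the summary (demographic parity and equalized odds) can be made concrete with a small sketch. The following is an illustrative implementation of the standard group-fairness gap metrics, not code from the project itself; function names are my own.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """|P(Yhat=1 | A=0) - P(Yhat=1 | A=1)| for binary predictions
    and a binary sensitive attribute `group`."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_gap(y_true, y_pred, group):
    """Worst-case gap, over true labels y in {0, 1}, of
    P(Yhat=1 | Y=y, A=a) across the two groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    gaps = []
    for y in (0, 1):
        rates = [y_pred[(y_true == y) & (group == a)].mean() for a in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)
```

A representation is fairer, under these definitions, the closer the corresponding gap is to zero; the trade-off studied by the project is how small these gaps can be made without degrading the representation's utility for the target task.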
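Item (d) in the summary mentions mitigating demographic bias through data sampling and projection. A minimal linear sketch of the projection idea, assuming a binary sensitive attribute, is to remove the feature-space direction most correlated with that attribute; this is a simplified illustration, not the award's actual algorithm.

```python
import numpy as np

def remove_sensitive_direction(X, a):
    """Project centered features onto the subspace orthogonal to the
    direction most correlated with a sensitive attribute `a`.
    Illustrative linear debiasing sketch only."""
    X = np.asarray(X, dtype=float)
    a = np.asarray(a, dtype=float)
    Xc = X - X.mean(axis=0)
    ac = a - a.mean()
    # direction in feature space aligned with the sensitive attribute
    w = Xc.T @ ac
    norm = np.linalg.norm(w)
    if norm == 0:
        return Xc  # attribute is constant; nothing to remove
    w = w / norm
    # subtract each sample's component along w
    return Xc - np.outer(Xc @ w, w)
```

After this projection, the cross-covariance between the features and the sensitive attribute is exactly zero, i.e. no linear predictor of the attribute remains, at the cost of one feature-space direction of utility.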

Project Outcomes

Journal Articles (1)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)
On Characterizing the Trade-off in Invariant Representation Learning
  • DOI:
  • Publication Date:
    2021-09
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Bashir Sadeghi;Sepehr Dehdashtian;Vishnu Naresh Boddeti
  • Corresponding Authors:
    Bashir Sadeghi;Sepehr Dehdashtian;Vishnu Naresh Boddeti

Other Publications by Vishnu Boddeti

MOAZ: A Multi-Objective AutoML-Zero Framework

Similar NSFC Grants

Research on Resource Pooling and Allocation under Fairness Considerations
  • Approval Number:
    72301240
  • Approval Year:
    2023
  • Funding Amount:
    CNY 300,000
  • Program Type:
    Young Scientists Fund
Research on the Fairness of Pension Insurance Systems under Life-Expectancy Differences across Economic Classes
  • Approval Number:
    12301613
  • Approval Year:
    2023
  • Funding Amount:
    CNY 300,000
  • Program Type:
    Young Scientists Fund
Interpretable Metrics and Implementation Methods for Fairness in Recommendation Algorithms
  • Approval Number:
    62302412
  • Approval Year:
    2023
  • Funding Amount:
    CNY 300,000
  • Program Type:
    Young Scientists Fund
Rural Transportation Equity Characteristics and Mechanisms Based on Resident Travel: A Case Study of the Beijing-Tianjin-Hebei Region
  • Approval Number:
    42301217
  • Approval Year:
    2023
  • Funding Amount:
    CNY 300,000
  • Program Type:
    Young Scientists Fund
Research on Controllability and Fairness for Trustworthy Recommender Systems
  • Approval Number:
    62372260
  • Approval Year:
    2023
  • Funding Amount:
    CNY 500,000
  • Program Type:
    General Program

Similar Overseas Grants

ClinEX - Clinical Evidence Extraction, Representation, and Appraisal
  • Award Number:
    10754029
  • Fiscal Year:
    2023
  • Funding Amount:
    $331,700
  • Program Type:
Collaborative Research: NSF-CSIRO: RESILIENCE: Graph Representation Learning for Fair Teaming in Crisis Response
  • Award Number:
    2303038
  • Fiscal Year:
    2023
  • Funding Amount:
    $331,700
  • Program Type:
    Standard Grant
Collaborative Research: NSF-CSIRO: RESILIENCE: Graph Representation Learning for Fair Teaming in Crisis Response
  • Award Number:
    2303037
  • Fiscal Year:
    2023
  • Funding Amount:
    $331,700
  • Program Type:
    Standard Grant
Collaborative Research: SCH: Fair Federated Representation Learning for Breast Cancer Risk Scoring
  • Award Number:
    2205289
  • Fiscal Year:
    2022
  • Funding Amount:
    $331,700
  • Program Type:
    Standard Grant
Collaborative Research: SCH: Fair Federated Representation Learning for Breast Cancer Risk Scoring
  • Award Number:
    2205080
  • Fiscal Year:
    2022
  • Funding Amount:
    $331,700
  • Program Type:
    Standard Grant