AI-DCL: EAGER: Explanations through Diverse, Feasible, and Interactive Counterfactuals
Basic Information
- Award number: 1927322
- Principal investigator:
- Amount: $297,800
- Host institution:
- Host institution country: United States
- Project category: Standard Grant
- Fiscal year: 2019
- Funding country: United States
- Project period: 2019-10-01 to 2021-04-30
- Project status: Completed
- Source:
- Keywords:
Project Abstract
This award supports a research project that will help people to better understand decision algorithms that are developed using machine learning techniques. The research team will facilitate that understanding by making use of a promising class of explanations that use counterfactual scenarios. Such explanations provide understanding by showing how outcomes change when hypothetical changes are made in factors that together serve to determine the decision outcome. As a concrete example, consider a person who applies for a loan from a financial company but is rejected by the loan distribution algorithm used by the company. To help the person understand why the decision algorithm rejected the application, the explanation algorithm would generate counterfactual scenarios in which the applicant's situation is hypothetically changed in viable ways (such as moving to a nearby city, or changing jobs) to see whether this affects the decision outcome. If this approach is successful, it would be applicable to a variety of societally critical domains where machine learning holds promise for improving decision making, including healthcare, criminal justice, finance, and hiring. The project will have other impacts as well. The research team will release a public website to engage the public with human-centered machine learning approaches. The PI will work with the University of Colorado Boulder's Science Discovery to present demos at events such as "Family Engineering Day" and "Boulder Computer Science Week". In addition to training graduate students, the PI will host high-school students as summer interns, integrate findings from the proposed work into educational activities at the University of Colorado Boulder, and make educational materials publicly available for use by instructors at other institutions. This research project seeks to explain machine decisions by generating diverse and feasible counterfactuals and developing user-centered interactive processes.
The results of this project will constitute an important step towards building machine-in-the-loop methods to empower users in understanding algorithmic decisions. Specific contributions include developing diversity and distance metrics for generating diverse counterfactuals, integrating causal graphs to generate feasible counterfactuals that align with real-world processes, developing novel user-centered designs to examine human interaction with counterfactuals, and advancing design principles for explaining algorithmic decisions. The team will also develop human-centered designs that enable users to interact with counterfactual explanations. This will enable the researchers to conduct large-scale user studies to understand human preferences, which would in turn serve as an effective evaluation of their proposed method. The results of this research project will contribute to the emerging area of interpretable machine learning that emphasizes human-centered designs. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
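The loan example above can be sketched in code. The following is a minimal toy illustration of the idea of diverse, feasible counterfactuals, not the project's actual method: the decision rule, feature set, thresholds, and perturbation grid are all invented for illustration. It enumerates feasible single-feature edits (raise income, gain job tenure, move to a nearby city), keeps only edits that flip the decision, and greedily selects counterfactuals that change distinct features so the returned set is diverse.

```python
def approve(applicant):
    """Hypothetical loan rule: approve when a simple linear score clears a threshold."""
    score = 0.5 * applicant["income"] / 10_000 + 2.0 * applicant["years_employed"]
    score -= 1.0 if applicant["city"] == "rural" else 0.0
    return score >= 4.0

def counterfactuals(applicant, k=2):
    """Enumerate feasible single-feature edits, keep those that flip the decision,
    and greedily pick up to k counterfactuals that each change a distinct feature."""
    edits = {  # illustrative "viable changes" an applicant could actually make
        "income": [applicant["income"] + d for d in (10_000, 20_000, 40_000)],
        "years_employed": [applicant["years_employed"] + d for d in (1, 2)],
        "city": ["nearby_city"],
    }
    candidates = []
    for feat, vals in edits.items():
        for v in vals:
            cf = dict(applicant, **{feat: v})
            if approve(cf) != approve(applicant):
                # every candidate edits one feature; a real system would use a
                # distance metric weighing the effort of each change
                candidates.append((feat, cf))
    chosen, used = [], set()
    for feat, cf in candidates:  # diversity: never reuse an already-edited feature
        if feat not in used:
            chosen.append(cf)
            used.add(feat)
        if len(chosen) == k:
            break
    return chosen

applicant = {"income": 30_000, "years_employed": 1, "city": "rural"}
print(approve(applicant))              # the toy rule rejects this applicant
for cf in counterfactuals(applicant):  # two diverse ways to flip the decision
    print(cf)
```

A real system along the project's lines would replace the enumeration with an optimization over a trained classifier and constrain edits with a causal graph so that, say, raising income also updates causally downstream features.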
Project Outcomes
Journal articles (3)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
"Why is 'Chicago' deceptive?" Towards Building Model-Driven Tutorials for Humans
- DOI: 10.1145/3313831.3376873
- Publication date: 2020-01
- Journal:
- Impact factor: 0
- Authors: Lai, Vivian; Liu, Han; Tan, Chenhao
- Corresponding author: Tan, Chenhao
Explaining machine learning classifiers through diverse counterfactual explanations
- DOI: 10.1145/3351095.3372850
- Publication date: 2019-05-19
- Journal:
- Impact factor: 0
- Authors: R. Mothilal; Amit Sharma; Chenhao Tan
- Corresponding author: Chenhao Tan
Evaluating and Characterizing Human Rationales
- DOI: 10.18653/v1/2020.emnlp-main.747
- Publication date: 2020-10-09
- Journal:
- Impact factor: 0
- Authors: Samuel Carton; Anirudh Rathore; Chenhao Tan
- Corresponding author: Chenhao Tan
Other Publications by Chenhao Tan
Characterizing the Value of Information in Medical Notes
- DOI:
- Publication date: 2020
- Journal:
- Impact factor: 0
- Authors: Chao; Shantanu Karnwal; S. Mullainathan; Z. Obermeyer; Chenhao Tan
- Corresponding author: Chenhao Tan
Probing Classifiers are Unreliable for Concept Removal and Detection
- DOI: 10.48550/arxiv.2207.04153
- Publication date: 2022-07-08
- Journal:
- Impact factor: 0
- Authors: Abhinav Kumar; Chenhao Tan; Amit Sharma
- Corresponding author: Amit Sharma
THU-IMG at TRECVID 2009
- DOI:
- Publication date: 2009
- Journal:
- Impact factor: 0
- Authors: Yingyu Liang; Binbin Cao; Jianmin Li; Chenguang Zhu; Yongchao Zhang; Chenhao Tan; Ge Chen; Chen Sun; Jinhui Yuan; Mingxing Xu; Bo Zhang
- Corresponding author: Bo Zhang
No Permanent Friends or Enemies: Tracking Relationships between Nations from News
- DOI: 10.18653/v1/n19-1167
- Publication date: 2019-04-18
- Journal:
- Impact factor: 3.8
- Authors: Xiaochuang Han; Eunsol Choi; Chenhao Tan
- Corresponding author: Chenhao Tan
Understanding and Predicting Human Label Variation in Natural Language Inference through Explanation
- DOI: 10.48550/arxiv.2304.12443
- Publication date: 2023-04-24
- Journal:
- Impact factor: 0
- Authors: Nan Jiang; Chenhao Tan; M. Marneffe
- Corresponding author: M. Marneffe
Other Grants by Chenhao Tan
NSF-CSIRO: HCC: Small: From Legislations to Action: Responsible AI for Climate Change
- Award number: 2302785
- Fiscal year: 2023
- Amount: $297,800
- Project category: Standard Grant
FAI: Towards Adaptive and Interactive Post Hoc Explanations
- Award number: 2040989
- Fiscal year: 2021
- Amount: $297,800
- Project category: Standard Grant
CAREER: Harnessing Decision-focused Explanations as a Bridge between Humans and Artificial Intelligence
- Award number: 2126602
- Fiscal year: 2021
- Amount: $297,800
- Project category: Continuing Grant
CRII: CHS: Harnessing Machine Learning to Improve Human Decision Making: A Case Study on Deceptive Detection
- Award number: 2125113
- Fiscal year: 2021
- Amount: $297,800
- Project category: Standard Grant
AI-DCL: EAGER: Explanations through Diverse, Feasible, and Interactive Counterfactuals
- Award number: 2125116
- Fiscal year: 2021
- Amount: $297,800
- Project category: Standard Grant
CAREER: Harnessing Decision-focused Explanations as a Bridge between Humans and Artificial Intelligence
- Award number: 1941973
- Fiscal year: 2020
- Amount: $297,800
- Project category: Continuing Grant
CRII: CHS: Harnessing Machine Learning to Improve Human Decision Making: A Case Study on Deceptive Detection
- Award number: 1849931
- Fiscal year: 2019
- Amount: $297,800
- Project category: Standard Grant
Similar NSFC Grants
Molecular mechanism of virus-induced regulation of the tomato antiviral gene DCL2b
- Award number:
- Year approved: 2022
- Amount: ¥540,000
- Project category: General Program
Full-dimensional differential cross sections for the state-to-state OH+HCl/DCl↔H2O/HOD+Cl reactions
- Award number:
- Year approved: 2022
- Amount: ¥540,000
- Project category: General Program
Mechanism by which RNAi-mediated transgenic S1 soybean initiates broad-spectrum resistance to SMV
- Award number: 31801388
- Year approved: 2018
- Amount: ¥250,000
- Project category: Young Scientists Fund
Molecular mechanism by which lariat RNAs suppress plant miRNA biogenesis by antagonizing the DCL1 complex
- Award number: 31671261
- Year approved: 2016
- Amount: ¥630,000
- Project category: General Program
A novel DCL4-mediated, DRB4-independent antiviral RNA silencing mechanism in Arabidopsis
- Award number: 31570145
- Year approved: 2015
- Amount: ¥660,000
- Project category: General Program
Similar Overseas Grants
Education DCL: EAGER: Advancing Secure Coding Education: Empowering Students to Safely Utilize AI-powered Coding Assistant Tools
- Award number: 2335798
- Fiscal year: 2023
- Amount: $297,800
- Project category: Standard Grant
Education DCL: EAGER: Developing Experiential Cybersecurity and Privacy Training for AI Practitioners
- Award number: 2335700
- Fiscal year: 2023
- Amount: $297,800
- Project category: Standard Grant
Education DCL: EAGER: Generative AI-based Personalized Cybersecurity Tutor for Fourth Industrial Revolution
- Award number: 2335046
- Fiscal year: 2023
- Amount: $297,800
- Project category: Standard Grant
EAGER: AI-DCL: Exploratory research on the use of AI at the intersection of homelessness and child maltreatment
- Award number: 2127754
- Fiscal year: 2021
- Amount: $297,800
- Project category: Standard Grant
AI-DCL: EAGER: Explanations through Diverse, Feasible, and Interactive Counterfactuals
- Award number: 2125116
- Fiscal year: 2021
- Amount: $297,800
- Project category: Standard Grant