III: Medium: Collaborative Research: Towards Effective Interpretation of Deep Learning: Prediction, Representation, Modeling and Utilization


Basic Information

  • Award Number:
    1900767
  • Principal Investigator:
  • Amount:
    $200,000
  • Host Institution:
  • Host Country:
    United States
  • Project Type:
    Continuing Grant
  • Fiscal Year:
    2019
  • Funding Country:
    United States
  • Project Period:
    2019-08-15 to 2024-07-31
  • Project Status:
    Completed

Project Summary

While deep learning has achieved unprecedented prediction capabilities, it is often criticized as a black box for its lack of interpretability, which is essential in real-world applications such as healthcare and cybersecurity. For example, healthcare professionals can appropriately trust and effectively manage prediction results only if they understand why and how a patient is diagnosed with prediabetes. This project investigates the interpretability of deep learning by following the fundamental elements of data mining practice, from representation and modeling to prediction. The results are expected to improve the usability of deep learning in important applications, boosting the overall value of deep-learning-based information systems. The education program, which integrates data science, industrial engineering, and visualization, trains students in data analytics technologies for industrial systems and attracts and mentors members of underrepresented groups pursuing careers in STEM.
The research goal of this project is to systematically explore the interpretability of deep learning across the machine learning life cycle, i.e., representation, modeling, and prediction, as well as the deployment of interpretability in various tasks. Specifically, the project pursues this goal by developing a series of interpretation algorithms and methods along the following lines. It explores post-hoc interpretation methods that shed light on how deep learning models produce a specific prediction and generate a representation. It also investigates designing interpretable models from scratch, aiming to construct self-explanatory models and incorporate interpretability directly into the structure of a deep learning model. The interpretations derived from a deep learning model are then employed to improve model performance. In addition, interpretability is applied to debug model behaviors, ensuring that the model's decision-making process is consistent with human expert knowledge, and to improve model robustness against adversarial attacks. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
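The post-hoc interpretation methods mentioned in the summary can be illustrated with a minimal perturbation-based sketch: estimate each input feature's importance by measuring how much a black-box model's output changes when that feature is replaced by a baseline value. Everything below (the toy `predict` scorer, the zero baseline) is a hypothetical illustration, not the project's actual method.

```python
def predict(x):
    # Stand-in "black box": a fixed linear scorer. In practice this
    # would be a trained deep learning model's prediction function.
    weights = [0.8, -0.1, 0.05, 0.5]
    return sum(w * v for w, v in zip(weights, x))

def perturbation_importance(x, baseline=0.0):
    """Importance of feature i = |f(x) - f(x with x[i] := baseline)|.

    Treats the model purely as a black box: only its outputs are used,
    which is what makes the interpretation "post hoc".
    """
    base_score = predict(x)
    importances = []
    for i in range(len(x)):
        x_pert = list(x)
        x_pert[i] = baseline  # occlude one feature at a time
        importances.append(abs(base_score - predict(x_pert)))
    return importances

if __name__ == "__main__":
    # The feature with the largest score contributed most to this prediction.
    print(perturbation_importance([1.0, 2.0, 3.0, 4.0]))
```

Gradient-based saliency and attention inspection follow the same idea at finer granularity; the perturbation variant is shown here only because it requires no access to model internals.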

Project Outcomes

Journal Articles (7)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)
The role of domain expertise in user trust and the impact of first impressions with intelligent systems
  • DOI:
    10.1609/hcomp.v8i1.7469
  • Publication Date:
    2020-01-01
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Nourani, M.; King, J.; Ragan, E.
  • Corresponding Author:
    Ragan, E.
Soliciting Human-in-the-Loop User Feedback for Interactive Machine Learning Reduces User Trust and Impressions of Model Accuracy
  • DOI:
    10.1609/hcomp.v8i1.7464
  • Publication Date:
    2020-08
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Donald R. Honeycutt; Mahsan Nourani; E. Ragan
  • Corresponding Author:
    Donald R. Honeycutt; Mahsan Nourani; E. Ragan
DETOXER: A Visual Debugging Tool With Multiscope Explanations for Temporal Multilabel Classification
  • DOI:
    10.1109/mcg.2022.3201465
  • Publication Date:
    2022
  • Journal:
  • Impact Factor:
    1.8
  • Authors:
    Nourani, Mahsan; Roy, Chiradeep; Honeycutt, Donald R.; Ragan, Eric D.; Gogate, Vibhav
  • Corresponding Author:
    Gogate, Vibhav
On the Importance of User Backgrounds and Impressions: Lessons Learned from Interactive AI Applications
  • DOI:
    10.1145/3531066
  • Publication Date:
    2022-04
  • Journal:
  • Impact Factor:
    3.4
  • Authors:
    Mahsan Nourani; Chiradeep Roy; Jeremy E. Block; Donald R. Honeycutt; Tahrima Rahman; E. Ragan; Vibhav Gogate
  • Corresponding Author:
    Mahsan Nourani; Chiradeep Roy; Jeremy E. Block; Donald R. Honeycutt; Tahrima Rahman; E. Ragan; Vibhav Gogate
Anchoring Bias Affects Mental Model Formation and User Reliance in Explainable AI Systems


Other Publications by Eric Ragan


Other Grants by Eric Ragan

CRII: III: Evaluating Provenance Visualizations for the Presentation and Communication of Investigative Data Analysis Processes
  • Award Number:
    1929693
  • Fiscal Year:
    2018
  • Funding Amount:
    $200,000
  • Project Type:
    Standard Grant
CRII: III: Evaluating Provenance Visualizations for the Presentation and Communication of Investigative Data Analysis Processes
  • Award Number:
    1565725
  • Fiscal Year:
    2016
  • Funding Amount:
    $200,000
  • Project Type:
    Standard Grant

Similar NSFC Grants

Plasmon-Enhanced Optical Responses in Composite Low-Dimensional Topological Materials
  • Award Number:
    12374288
  • Approval Year:
    2023
  • Funding Amount:
    ¥520,000
  • Project Type:
    General Program
The Disappearing Mid-Sized Enterprises from the Perspectives of Market Governance and Intervention in the Division of Labor: Stylized Facts, Underlying Mechanisms, and Optimization Paths
  • Award Number:
    72374217
  • Approval Year:
    2023
  • Funding Amount:
    ¥410,000
  • Project Type:
    General Program
Multiscale Algorithms and Numerical Simulation of Plasmas in Tokamak Divertors
  • Award Number:
    12371432
  • Approval Year:
    2023
  • Funding Amount:
    ¥435,000
  • Project Type:
    General Program
Dark Matter Distribution near Intermediate-Mass Black Holes and Gravitational-Wave Echo Detection for IMRI Systems
  • Award Number:
    12365008
  • Approval Year:
    2023
  • Funding Amount:
    ¥320,000
  • Project Type:
    Regional Science Fund Program
Physical Mechanisms of Rapid Intensification of Asymmetric Tropical Cyclones under Moderate Vertical Wind Shear
  • Award Number:
    42305004
  • Approval Year:
    2023
  • Funding Amount:
    ¥300,000
  • Project Type:
    Young Scientists Fund Program

Similar International Grants

III : Medium: Collaborative Research: From Open Data to Open Data Curation
  • Award Number:
    2420691
  • Fiscal Year:
    2024
  • Funding Amount:
    $200,000
  • Project Type:
    Standard Grant
Collaborative Research: III: Medium: Designing AI Systems with Steerable Long-Term Dynamics
  • Award Number:
    2312865
  • Fiscal Year:
    2023
  • Funding Amount:
    $200,000
  • Project Type:
    Standard Grant
Collaborative Research: III: MEDIUM: Responsible Design and Validation of Algorithmic Rankers
  • Award Number:
    2312932
  • Fiscal Year:
    2023
  • Funding Amount:
    $200,000
  • Project Type:
    Standard Grant
Collaborative Research: III: Medium: Algorithms for scalable inference and phylodynamic analysis of tumor haplotypes using low-coverage single cell sequencing data
  • Award Number:
    2415562
  • Fiscal Year:
    2023
  • Funding Amount:
    $200,000
  • Project Type:
    Standard Grant
III: Medium: Collaborative Research: Integrating Large-Scale Machine Learning and Edge Computing for Collaborative Autonomous Vehicles
  • Award Number:
    2348169
  • Fiscal Year:
    2023
  • Funding Amount:
    $200,000
  • Project Type:
    Continuing Grant