CAREER: Ethical Machine Learning in Health: Robustness in Data, Learning and Deployment

Basic Information

  • Award Number:
    2339381
  • Principal Investigator:
  • Amount:
    $600,000
  • Host Institution:
  • Host Institution Country:
    United States
  • Project Category:
    Continuing Grant
  • Fiscal Year:
    2024
  • Funding Country:
    United States
  • Project Period:
    2024-07-01 to 2029-06-30
  • Project Status:
    Active (not yet concluded)

Project Abstract

Health is an area of immense potential for machine learning (ML), due to the increasing complexity of care management and the large volume of data becoming available. Recent work has shown that models in healthcare lack robustness and do not perform equally well across all patients and settings. Recent work on general model robustness has failed to translate to health settings, in part because it does not consider the diversity of patients, conditions, and contexts in which models will be used. This project will create new ways to improve model robustness and empower researchers to target more ethical deployments. This research will identify improvements to data use and model training that prioritize actionable models in health by focusing on the nuance and complexity of health data. Ultimately, these advances will also contribute to machine learning in other high-stakes areas, such as lending, education, and legal systems, that rely on routinely collected data to generate insights. Beyond the direct and long-term societal impact of these advances, this work will help lay the foundation for a new undergraduate-focused summer course aimed at bringing a larger, and more diverse, pipeline of students into machine learning in health.

The importance of patient safety combined with poor model robustness limits the practical utility of ML in healthcare, and ethical deployment requires developing methods and metrics to ensure state-of-the-art models are robust. This project targets three ways to develop robust health models: ensuring representations and downstream models withstand incorrect data associations, achieving fair and robust model learning, and enhancing post-hoc robustness to outlier data during testing. First, targeting representational robustness to data error and change, it will build models that are resilient across patient subpopulations and variations in care through contrastive self-supervision in deep metric models. Second, in model learning, it will improve algorithms for stable training, balancing fairness/robustness trade-offs by combining private and public data for clinical prediction tasks. Third, it will target test-time methods for outlier detection and extend pre-trained models to cover minority subgroups. The project will result in methods that address robustness in data, learning, and testing, as crucial steps toward ethically deploying health models. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
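To make the first thrust concrete, the sketch below illustrates the general idea of contrastive self-supervision over two augmented "views" of the same patient record, using a standard NT-Xent/InfoNCE-style loss. This is a minimal illustrative sketch only, assuming a PyTorch setup; the encoder, augmentation, and all names are hypothetical placeholders, not the project's actual method or code.

```python
# Minimal illustrative sketch (hypothetical, not the project's implementation):
# an NT-Xent / InfoNCE-style contrastive self-supervised loss over embeddings
# of two augmented "views" of the same patient record. Assumes PyTorch.
import torch
import torch.nn.functional as F


def ntxent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (batch, dim) embeddings of two views of the same records."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                      # (2B, dim)
    sim = (z @ z.T) / temperature                       # pairwise cosine similarities
    batch = z1.size(0)
    mask = torch.eye(2 * batch, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))          # exclude self-similarity
    # The positive for row i is the other view of the same record.
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)]).to(z.device)
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    # Toy usage: random features stand in for (augmented) clinical record encodings.
    encoder = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 16))
    x = torch.randn(8, 32)
    augment = lambda t: t + 0.05 * torch.randn_like(t)  # placeholder augmentation
    loss = ntxent_loss(encoder(augment(x)), encoder(augment(x)))
    print(f"contrastive loss: {loss.item():.4f}")
```

The design intuition, per the abstract, is that embeddings which pull together different views of the same record (and push apart different records) are less likely to latch onto spurious data associations, which supports robustness across patient subpopulations and care settings.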

Project Outcomes

Journal articles: 0
Monographs: 0
Research awards: 0
Conference papers: 0
Patents: 0

Other Publications by Marzyeh Ghassemi

Impact of Large Language Model Assistance on Patients Reading Clinical Notes: A Mixed-Methods Study
  • DOI:
  • Published:
    2024
  • Journal:
  • Impact factor:
    0
  • Authors:
    Niklas Mannhardt;Elizabeth Bondi;Barbara Lam;Chloe O'Connell;M. Asiedu;Hussein Mozannar;Monica Agrawal;Alejandro Buendia;Tatiana Urman;I. Riaz;Catherine E. Ricciardi;Marzyeh Ghassemi;David Sontag
  • Corresponding author:
    David Sontag
Fair Multimodal Checklists for Interpretable Clinical Time Series Prediction
  • DOI:
  • Published:
    2022
  • Journal:
  • Impact factor:
    0
  • Authors:
    Qixuan Jin;Haoran Zhang;Tom Hartvigsen;Marzyeh Ghassemi
  • Corresponding author:
    Marzyeh Ghassemi
Views Can Be Deceiving: Improved SSL Through Feature Space Augmentation
  • DOI:
  • Published:
    2024
  • Journal:
  • Impact factor:
    0
  • Authors:
    Kimia Hamidieh;Haoran Zhang;Swami Sankaranarayanan;Marzyeh Ghassemi
  • Corresponding author:
    Marzyeh Ghassemi
OP-JAMI160142 488..495
  • DOI:
  • Published:
    2017
  • Journal:
  • Impact factor:
    0
  • Authors:
    Mike Wu;Marzyeh Ghassemi;Mengling Feng;Leo A Celi;Peter Szolovits;Finale Doshi
  • Corresponding author:
    Finale Doshi
TRIPOD+AI statement: updated guidance for reporting clinical prediction models that use regression or machine learning methods
  • DOI:
  • Published:
    2024
  • Journal:
  • Impact factor:
    0
  • Authors:
    Gary S. Collins;K. Moons;Paula Dhiman;Richard D. Riley;A. L. Beam;B. Calster;Marzyeh Ghassemi;Xiaoxuan Liu;Johannes B Reitsma;M. Smeden;A. Boulesteix;Jennifer Catherine Camaradou;L. Celi;S. Denaxas;A. Denniston;Ben Glocker;Robert M Golub;Hugh Harvey;Georg Heinze;Michael M Hoffman;A. Kengne;Emily Lam;Naomi Lee;Elizabeth W Loder;Lena Maier;B. Mateen;M. Mccradden;Lauren Oakden;Johan Ordish;Richard Parnell;Sherri Rose;Karandeep Singh;L. Wynants;P. Logullo;Abhishek Gupta;Adrian Barnett;Adrian Jonas;Agathe Truchot;Aiden Doherty;Alan Fraser;Alex Fowler;Alex Garaiman;Alistair Denniston;Amin Adibi;André Carrington;Andre Esteva;Andrew Althouse;Andrew Soltan;A. Appelt;Ari Ercole;Armando Bedoya;B. Vasey;B. Desiraju;Barbara Seeliger;B. Geerts;Beatrice Panico;Benjamin Fine;Benjamin Goldstein;B. Gravesteijn;Benjamin Wissel;B. Holzhauer;Boris Janssen;Boyi Guo;Brooke Levis;Catey Bunce;Charles Kahn;Chris Tomlinson;Christopher Kelly;Christopher Lovejoy;Clare McGenity;Conrad Harrison Constanza;Andaur Navarro;D. Nieboer;Dan Adler;Danial Bahudin;Daniel Stahl;Daniel Yoo;Danilo Bzdok;Darren Dahly;D. Treanor;David Higgins;David McClernon;David Pasquier;David Taylor;Declan O’Regan;Emily Bebbington;Erik Ranschaert;E. Kanoulas;Facundo Diaz;Felipe Kitamura;Flavio Clesio;Floor van Leeuwen;Frank Harrell;Frank Rademakers;G. Varoquaux;Garrett S Bullock;Gary Weissman;George Fowler;George Kostopoulos;Georgios Lyratzaopoulos;Gianluca Di;Gianluca Pellino;Girish Kulkarni;G. Zoccai;Glen Martin;Gregg Gascon;Harlan Krumholz;H. Sufriyana;Hongqiu Gu;H. Bogunović;Hui Jin;Ian Scott;Ijeoma Uchegbu;Indra Joshi;Irene M. Stratton;James Glasbey;Jamie Miles;Jamie Sergeant;Jan Roth;Jared Wohlgemut;Javier Carmona Sanz;J. Bibault;Jeremy Cohen;Ji Eun Park;Jie Ma;Joel Amoussou;John Pickering;J. Ensor;J. Flores;Joseph LeMoine;Joshua Bridge;Josip Car;Junfeng Wang;Keegan Korthauer;Kelly Reeve;L. Ación;Laura J. Bonnett;Lief Pagalan;L. Buturovic;L. Hooft;Maarten Luke Farrow;Van Smeden;Marianne Aznar;Mario Doria;Mark Gilthorpe;M. Sendak;M. Fabregate;M. Sperrin;Matthew Strother;Mattia Prosperi;Menelaos Konstantinidis;Merel Huisman;Michael O. Harhay;Miguel Angel Luque;M. Mansournia;Munya Dimairo;Musa Abdulkareem;M. Nagendran;Niels Peek;Nigam Shah;Nikolas Pontikos;N. Noor;Oilivier Groot;Páll Jónsson;Patrick Bossuyt;Patrick Lyons;Patrick Omoumi;Paul Tiffin;Peter Austin;Q. Noirhomme;Rachel Kuo;Ram Bajpal;Ravi Aggarwal;Richiardi Jonas;Robert Platt;Rohit Singla;Roi Anteby;Rupa Sakar;Safoora Masoumi;Sara Khalid;Saskia Haitjema;Seong Park;Shravya Shetty;Stacey Fisher;Stephanie Hicks;Susan Shelmerdine;Tammy Clifford;Tatyana Shamliyan;Teus Kappen;Tim Leiner;Tim Liu;Tim Ramsay;Toni Martinez;Uri Shalit;Valentijn de Jong;Valentyn Bezshapkin;V. Cheplygina;Victor Castro;V. Sounderajah;Vineet Kamal;V. Harish;Wim Weber;W. Amsterdam;Xioaxuan Liu;Zachary Cohen;Zakia Salod;Zane Perkins
  • Corresponding author:
    Zane Perkins

Similar NSFC Grants

Selective Empathy Based on Moral Characteristics in Children Aged 4-8: Development and Mechanisms
  • Award Number:
    32371111
  • Year Approved:
    2023
  • Funding Amount:
    ¥500,000
  • Project Category:
    General Program
Moral Behavior under Uncertainty: Experimental Evidence and Theoretical Models from a Behavioral Economics Perspective
  • Award Number:
    72303195
  • Year Approved:
    2023
  • Funding Amount:
    ¥300,000
  • Project Category:
    Young Scientists Fund Project
The Pro-Poor Effect of Outpatient Mutual-Aid Insurance, Its Coordination Mechanism with Moral Hazard Prevention and Control, and Policy Optimization
  • Award Number:
    72304082
  • Year Approved:
    2023
  • Funding Amount:
    ¥300,000
  • Project Category:
    Young Scientists Fund Project
Moral Emotions of School Bullies and Their Neural Mechanisms: A Study Based on Virtual Reality Technology
  • Award Number:
    32300895
  • Year Approved:
    2023
  • Funding Amount:
    ¥300,000
  • Project Category:
    Young Scientists Fund Project
The Influence Mechanism of Morality-Related Negative Public Opinion on Consumer Decision-Making and Its Governance Strategies
  • Award Number:
    72261010
  • Year Approved:
    2022
  • Funding Amount:
    ¥270,000
  • Project Category:
    Regional Science Fund Project

Similar Overseas Grants

Training in Genomics Research (TiGeR)
  • Award Number:
    10411387
  • Fiscal Year:
    2022
  • Funding Amount:
    $600,000
  • Project Category:
Organizational and Cultural Dynamics in Genomics Companies: Industry Engagement in Navigating Social and Ethical Issues
  • Award Number:
    10328276
  • Fiscal Year:
    2021
  • Funding Amount:
    $600,000
  • Project Category:
Organizational and Cultural Dynamics in Genomics Companies: Industry Engagement in Navigating Social and Ethical Issues
  • Award Number:
    10542816
  • Fiscal Year:
    2021
  • Funding Amount:
    $600,000
  • Project Category:
Cherokee Nation Native American Research Centers for Health (NARCH) 11
  • Award Number:
    10491132
  • Fiscal Year:
    2021
  • Funding Amount:
    $600,000
  • Project Category:
Cherokee Nation Native American Research Centers for Health (NARCH) 11
  • Award Number:
    10223765
  • Fiscal Year:
    2021
  • Funding Amount:
    $600,000
  • Project Category: