Causality in Machine Learning
Basic Information
- Grant number: RGPIN-2022-03667
- Principal investigator:
- Amount: $21,100
- Host institution:
- Host institution country: Canada
- Program: Discovery Grants Program - Individual
- Fiscal year: 2022
- Funding country: Canada
- Duration: 2022-01-01 to 2023-12-31
- Status: Completed
- Source:
- Keywords:
Project Abstract
Causality in Machine Learning is often understood as the ability to interpret the decisions of a machine-learning model in terms of knowledge of the domain in which the model operates, and the ability to reason about those decisions. A causal model has an "introspective" ability to reason about itself. Learning a causal model is a much more difficult task than the one performed by current Machine Learning methods, including Deep Learning, which determine a "correlational" or "pattern-matching" relationship between the inputs of the model and its decision. I propose here a research program on causality in Machine Learning. Causality is one of the main challenges facing the field of Machine Learning. Moreover, having a causal representation of a model will allow Machine Learning to progress towards abilities of human intelligence, such as learning from a few examples. I propose to connect with the rich body of existing Artificial Intelligence work exploring the use of logic to reason about the causes of changing states of the world and of the variables describing it. The proposed research program is founded on my previous work; in particular, my active participation in a sub-area of Machine Learning known as Inductive Logic Programming (ILP) will be useful. I propose to interpret models obtained with Deep Learning using logic. ILP will enable us to build models that behave similarly to models obtained by Deep Learning. These ILP models will be "distilled" from the Deep Learning models and expressed as rules in first-order logic. This will make them interpretable by humans, and will also facilitate integrating prior knowledge expressed in logic with the learned models. Even partial success of research on causality is likely to have significant impact. Causality is necessary for broader social acceptance of Machine Learning models used for decision-making concerning humans.
For instance, the European Union's GDPR stipulates that any such model should be explainable, i.e. a person about whom the model has made a decision should be able to obtain an explanation of that decision understandable to them. Understanding models will eventually allow us to avoid models that make decisions about humans based on gender, ethnicity, etc. For example, the group in Pisa with which I collaborate has access to claim-processing data of one of the leading Italian insurance companies. We will look at the explainability of decisions taken by their automated insurance claim processing systems. Addressing causality is a huge challenge. In this program I propose to make inroads into distilling Deep Learning models into understandable models that also make causality explicit, and into assigning multiple factors as combined causes of a given effect predicted by a model. I will also train young researchers who will continue this important work on causality for Machine Learning.
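The distillation described in the abstract can be illustrated with a toy, hypothetical sketch: query a black-box model on labeled examples, then induce human-readable if-then rules that mimic its decisions. A real ILP system (e.g. Aleph or Popper) would learn first-order clauses from relational data; the stand-in below learns only propositional rules over three boolean features, and `black_box` is a hand-coded placeholder for a trained deep model, used purely for illustration.

```python
# Toy sketch of rule distillation from a black-box model (not the project's
# actual method): enumerate conjunctive rules, keep those that cover positive
# predictions of the black box without covering any negative ones.
import itertools

def black_box(x):
    # Placeholder for a trained deep model over boolean features a, b, c.
    a, b, c = x
    return int((a and b) or (not c))

# Query the black box on every input (in practice: a sample of the data).
examples = [(x, black_box(x)) for x in itertools.product([0, 1], repeat=3)]

def covers(rule, x):
    # A rule is a tuple of (feature_index, required_value) literals.
    return all(x[i] == v for i, v in rule)

def distill_rules(examples):
    """Greedily pick conjunctive rules that cover positives and no negatives."""
    positives = [x for x, y in examples if y == 1]
    negatives = [x for x, y in examples if y == 0]
    literals = [(i, v) for i in range(3) for v in (0, 1)]
    rules, uncovered = [], list(positives)
    # Try candidate conjunctions from most general (short) to most specific.
    for size in range(1, 4):
        for cand in itertools.combinations(literals, size):
            if len({i for i, _ in cand}) < size:
                continue  # skip contradictions like a=0 AND a=1
            if any(covers(cand, n) for n in negatives):
                continue  # a rule must not fire on any negative example
            newly = [p for p in uncovered if covers(cand, p)]
            if newly:
                rules.append(cand)
                uncovered = [p for p in uncovered if not covers(cand, p)]
        if not uncovered:
            break
    return rules

rules = distill_rules(examples)
names = "abc"
for rule in rules:
    body = " AND ".join(f"{names[i]}={v}" for i, v in rule)
    print(f"IF {body} THEN positive")
```

On this toy target the learner recovers the two rules `IF c=0 THEN positive` and `IF a=1 AND b=1 THEN positive`, i.e. a faithful, readable surrogate for the black box; the research program proposes the first-order analogue of this step for real Deep Learning models.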
Project Outcomes
- Journal articles: 0
- Monographs: 0
- Research awards: 0
- Conference papers: 0
- Patents: 0
Other Publications by Matwin, Stan
Unsupervised named-entity recognition: Generating gazetteers and resolving ambiguity
- DOI: 10.1007/11766247_23
- Published: 2006-01-01
- Journal:
- Impact factor: 0
- Authors: Nadeau, David; Turney, Peter D.; Matwin, Stan
- Corresponding author: Matwin, Stan

RECURRENT NEURAL NETWORKS WITH STOCHASTIC LAYERS FOR ACOUSTIC NOVELTY DETECTION
- DOI: 10.1109/icassp.2019.8682901
- Published: 2019-01-01
- Journal:
- Impact factor: 0
- Authors: Duong Nguyen; Kirsebom, Oliver S.; Matwin, Stan
- Corresponding author: Matwin, Stan

deepBioWSD: effective deep neural word sense disambiguation of biomedical text data
- DOI: 10.1093/jamia/ocy189
- Published: 2019-05-01
- Journal:
- Impact factor: 6.4
- Authors: Pesaranghader, Ahmad; Matwin, Stan; Pesaranghader, Ali
- Corresponding author: Pesaranghader, Ali

A review on particle swarm optimization algorithm and its variants to clustering high-dimensional data
- DOI: 10.1007/s10462-013-9400-4
- Published: 2015-06-01
- Journal:
- Impact factor: 12
- Authors: Esmin, Ahmed A. A.; Coelho, Rodrigo A.; Matwin, Stan
- Corresponding author: Matwin, Stan

A new algorithm for reducing the workload of experts in performing systematic reviews
- DOI: 10.1136/jamia.2010.004325
- Published: 2010-07-01
- Journal:
- Impact factor: 6.4
- Authors: Matwin, Stan; Kouznetsov, Alexandre; O'Blenis, Peter
- Corresponding author: O'Blenis, Peter
Other Grants Held by Matwin, Stan
Interpretability for Machine Learning
- Grant number: CRC-2019-00383
- Fiscal year: 2022
- Amount: $21,100
- Program: Canada Research Chairs

Automated Monitoring of the Naval Information Space (AMNIS)
- Grant number: 550722-2020
- Fiscal year: 2021
- Amount: $21,100
- Program: Alliance Grants

Research Challenges in Privacy-Aware Mobility Data Analysis and in Text Mining with Enriched Data
- Grant number: RGPIN-2016-03913
- Fiscal year: 2021
- Amount: $21,100
- Program: Discovery Grants Program - Individual

Interpretability For Machine Learning
- Grant number: CRC-2019-00383
- Fiscal year: 2021
- Amount: $21,100
- Program: Canada Research Chairs

Interpretability for Machine Learning
- Grant number: CRC-2019-00383
- Fiscal year: 2020
- Amount: $21,100
- Program: Canada Research Chairs

Research Challenges in Privacy-Aware Mobility Data Analysis and in Text Mining with Enriched Data
- Grant number: RGPIN-2016-03913
- Fiscal year: 2020
- Amount: $21,100
- Program: Discovery Grants Program - Individual

Automated Monitoring of the Naval Information Space (AMNIS)
- Grant number: 550722-2020
- Fiscal year: 2020
- Amount: $21,100
- Program: Alliance Grants

Research Challenges in Privacy-Aware Mobility Data Analysis and in Text Mining with Enriched Data
- Grant number: RGPIN-2016-03913
- Fiscal year: 2019
- Amount: $21,100
- Program: Discovery Grants Program - Individual

Interpretability for Machine Learning
- Grant number: CRC-2019-00383
- Fiscal year: 2019
- Amount: $21,100
- Program: Canada Research Chairs
Similar NSFC Grants
Predicting role transitions of scientific knowledge with interpretable machine learning
- Grant number: 72304108
- Year approved: 2023
- Amount: ¥300,000
- Program: Young Scientists Fund

Machine-learning prediction models based on imaging genetics for the response of negative symptoms of schizophrenia to transcranial magnetic stimulation
- Grant number: 82371510
- Year approved: 2023
- Amount: ¥490,000
- Program: General Program

Machine-learning-based exploration methods for alteration minerals and their application: a case study of the Gangjiang deposit, Tibet
- Grant number: 42363009
- Year approved: 2023
- Amount: ¥320,000
- Program: Regional Science Fund Program

Machine-learning-driven robust optimal control of complex quantum systems
- Grant number: 62373342
- Year approved: 2023
- Amount: ¥500,000
- Program: General Program

Machine-learning-based study of the environmental behavior and ecotoxicity of EPFRs in porous soil media
- Grant number: 42377385
- Year approved: 2023
- Amount: ¥490,000
- Program: General Program
Similar Overseas Grants
EAGER: North American Monsoon Prediction Using Causality Informed Machine Learning
- Grant number: 2313689
- Fiscal year: 2023
- Amount: $21,100
- Program: Standard Grant

Causality, Counterfactuals and Meta-learning to Address the Complexity of Fairness in Data Science and Machine Learning
- Grant number: 2751295
- Fiscal year: 2022
- Amount: $21,100
- Program: Studentship

Investigating the interplay between endometriosis, polycystic ovarian syndrome, infertility, gynaecological cancer, and their shared risk factors.
- Grant number: 449756
- Fiscal year: 2020
- Amount: $21,100
- Program: Studentship Programs

No causality in - no causality out: utility and limits of machine learning in HIV research using EMR data
- Grant number: 400576
- Fiscal year: 2019
- Amount: $21,100
- Program:

RTG: Advancing Machine Learning - Causality and Interpretability
- Grant number: 1745640
- Fiscal year: 2018
- Amount: $21,100
- Program: Continuing Grant