Causality in Machine Learning
Basic Information
- Approval number: RGPIN-2022-03667
- Principal investigator:
- Amount: $21,100
- Host institution:
- Host institution country: Canada
- Program: Discovery Grants Program - Individual
- Fiscal year: 2022
- Funding country: Canada
- Duration: 2022-01-01 to 2023-12-31
- Status: Completed
- Source:
- Keywords:
Project Abstract
Causality in Machine Learning is often understood as the ability to explain the decisions of a machine-learning model in terms of knowledge of the domain in which the model operates, and to reason about those decisions. A causal model has an "introspective" ability to reason about itself. Learning a causal model is a much more difficult task than the one performed by current Machine Learning methods, including Deep Learning, which determine a "correlational" or "pattern-matching" relationship between the inputs of the model and its decision. I propose here a research program on causality in Machine Learning. Causality is one of the main challenges facing the field of Machine Learning. Moreover, a causal representation of a model will allow Machine Learning to progress towards abilities of human intelligence, such as learning from a few examples. I propose to connect with the rich body of existing Artificial Intelligence work that uses logic to reason about the causes of changing states of the world and of the variables describing it. The proposed research program is founded on my previous work; in particular, my active participation in Inductive Logic Programming (ILP), a sub-area of Machine Learning, will be useful. I propose to interpret models obtained with Deep Learning using logic. ILP will enable us to build models that behave similarly to models obtained by Deep Learning. These ILP models will be "distilled" from the Deep Learning models and expressed as rules in first-order logic, which will make them interpretable by humans and will facilitate integrating prior knowledge expressed in logic with the learned models. Even partial success of research on causality is likely to have significant impact. Causality is necessary for broader social acceptance of Machine Learning models used in decision-making concerning humans.
For instance, the European Union's GDPR stipulates that any such model should be explainable, i.e. a person about whom the model has made a decision should be able to obtain an explanation of that decision that is understandable to them. Understanding models will eventually allow us to avoid models that make decisions about humans based on gender, ethnicity, etc. For example, the group in Pisa with which I collaborate has access to claim-processing data of one of the leading Italian insurance companies; we will study the explainability of decisions taken by their automated insurance claim processing systems. Addressing causality is a huge challenge. In this program I propose to make inroads into the distillation of Deep Learning models into understandable models that also make causality explicit, and into assigning multiple factors as combined causes of a given effect predicted by a model. I will also train young researchers who will continue this important work on causality for Machine Learning.
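The distillation idea described in the abstract can be illustrated with a minimal sketch: train a black-box neural network, then fit an interpretable rule-producing surrogate on the black box's own predictions and measure how faithfully the surrogate mimics it. Here a decision tree stands in for an ILP learner, and the dataset, models, and hyperparameters are illustrative assumptions rather than the project's actual method.

```python
# Sketch of model distillation into readable rules (decision tree as a
# stand-in for an ILP learner; all choices below are illustrative).
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for a real decision-making dataset.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# "Teacher": an opaque neural model trained on the ground-truth labels.
teacher = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000,
                        random_state=0).fit(X, y)

# "Student": an interpretable surrogate trained on the teacher's OUTPUTS,
# not the ground truth -- it is distilled from the black box.
student = DecisionTreeClassifier(max_depth=3, random_state=0)
student.fit(X, teacher.predict(X))

# Fidelity: how often the readable rules agree with the black box.
fidelity = (student.predict(X) == teacher.predict(X)).mean()
rules = export_text(student, feature_names=[f"x{i}" for i in range(5)])
print(f"fidelity = {fidelity:.2f}")
print(rules)
```

The printed rules are nested if-then tests over the input features, which is the spirit of the proposal's first-order-logic rules, though an ILP system would produce relational clauses rather than threshold splits.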
Project Outcomes
- Journal articles: 0
- Monographs: 0
- Research awards: 0
- Conference papers: 0
- Patents: 0
Other Publications by Matwin, Stan
Unsupervised named-entity recognition: Generating gazetteers and resolving ambiguity
- DOI: 10.1007/11766247_23
- Published: 2006-01-01
- Journal:
- Impact factor: 0
- Authors: Nadeau, David; Turney, Peter D.; Matwin, Stan
- Corresponding author: Matwin, Stan
Recurrent neural networks with stochastic layers for acoustic novelty detection
- DOI: 10.1109/icassp.2019.8682901
- Published: 2019-01-01
- Journal:
- Impact factor: 0
- Authors: Duong Nguyen; Kirsebom, Oliver S.; Matwin, Stan
- Corresponding author: Matwin, Stan
deepBioWSD: effective deep neural word sense disambiguation of biomedical text data
- DOI: 10.1093/jamia/ocy189
- Published: 2019-05-01
- Journal:
- Impact factor: 6.4
- Authors: Pesaranghader, Ahmad; Matwin, Stan; Pesaranghader, Ali
- Corresponding author: Pesaranghader, Ali
A review on particle swarm optimization algorithm and its variants to clustering high-dimensional data
- DOI: 10.1007/s10462-013-9400-4
- Published: 2015-06-01
- Journal:
- Impact factor: 12
- Authors: Esmin, Ahmed A. A.; Coelho, Rodrigo A.; Matwin, Stan
- Corresponding author: Matwin, Stan
A new algorithm for reducing the workload of experts in performing systematic reviews
- DOI: 10.1136/jamia.2010.004325
- Published: 2010-07-01
- Journal:
- Impact factor: 6.4
- Authors: Matwin, Stan; Kouznetsov, Alexandre; O'Blenis, Peter
- Corresponding author: O'Blenis, Peter
Other Grants Held by Matwin, Stan
Interpretability for Machine Learning
- Approval number: CRC-2019-00383
- Fiscal year: 2022
- Amount: $21,100
- Program: Canada Research Chairs
Automated Monitoring of the Naval Information Space (AMNIS)
- Approval number: 550722-2020
- Fiscal year: 2021
- Amount: $21,100
- Program: Alliance Grants
Research Challenges in Privacy-Aware Mobility Data Analysis and in Text Mining with Enriched Data
- Approval number: RGPIN-2016-03913
- Fiscal year: 2021
- Amount: $21,100
- Program: Discovery Grants Program - Individual
Interpretability for Machine Learning
- Approval number: CRC-2019-00383
- Fiscal year: 2021
- Amount: $21,100
- Program: Canada Research Chairs
Interpretability for Machine Learning
- Approval number: CRC-2019-00383
- Fiscal year: 2020
- Amount: $21,100
- Program: Canada Research Chairs
Research Challenges in Privacy-Aware Mobility Data Analysis and in Text Mining with Enriched Data
- Approval number: RGPIN-2016-03913
- Fiscal year: 2020
- Amount: $21,100
- Program: Discovery Grants Program - Individual
Automated Monitoring of the Naval Information Space (AMNIS)
- Approval number: 550722-2020
- Fiscal year: 2020
- Amount: $21,100
- Program: Alliance Grants
Research Challenges in Privacy-Aware Mobility Data Analysis and in Text Mining with Enriched Data
- Approval number: RGPIN-2016-03913
- Fiscal year: 2019
- Amount: $21,100
- Program: Discovery Grants Program - Individual
Interpretability for Machine Learning
- Approval number: CRC-2019-00383
- Fiscal year: 2019
- Amount: $21,100
- Program: Canada Research Chairs
Similar NSFC Grants
Machine learning methods for refined processing of massive gravity-satellite observation data
- Approval number: 42374004
- Approval year: 2023
- Amount: ¥510,000
- Program: General Program
Machine-learning-based adaptive trust evaluation mechanisms in vehicle-road-cloud collaborative systems
- Approval number: 62362030
- Approval year: 2023
- Amount: ¥320,000
- Program: Regional Science Fund Program
Machine-learning-based classification and evolution patterns of riparian dunes on the Qinghai-Tibet Plateau
- Approval number: 42371008
- Approval year: 2023
- Amount: ¥520,000
- Program: General Program
Machine-learning-assisted on-demand design of multi-enzyme-activity nanozymes for the treatment of diabetic foot ulcers
- Approval number: 32371465
- Approval year: 2023
- Amount: ¥500,000
- Program: General Program
Machine-learning-assisted study of the activation and conversion of nitrogen on metal sulfide clusters
- Approval number: 22303096
- Approval year: 2023
- Amount: ¥300,000
- Program: Young Scientists Fund
Similar Overseas Grants
EAGER: North American Monsoon Prediction Using Causality Informed Machine Learning
- Approval number: 2313689
- Fiscal year: 2023
- Amount: $21,100
- Program: Standard Grant
Causality, Counterfactuals and Meta-learning to Address the Complexity of Fairness in Data Science and Machine Learning
- Approval number: 2751295
- Fiscal year: 2022
- Amount: $21,100
- Program: Studentship
Investigating the interplay between endometriosis, polycystic ovarian syndrome, infertility, gynaecological cancer, and their shared risk factors.
- Approval number: 449756
- Fiscal year: 2020
- Amount: $21,100
- Program: Studentship Programs
No causality in - no causality out: utility and limits of machine learning in HIV research using EMR data
- Approval number: 400576
- Fiscal year: 2019
- Amount: $21,100
- Program:
RTG: Advancing Machine Learning - Causality and Interpretability
- Approval number: 1745640
- Fiscal year: 2018
- Amount: $21,100
- Program: Continuing Grant