Antecedents and Consequences of Trust in Artificial Agents

Basic Information

  • Grant Number:
    ES/V015176/1
  • Principal Investigator:
    Jim Everett
  • Amount:
    $306.8K
  • Host Institution:
  • Host Institution Country:
    United Kingdom
  • Project Category:
    Research Grant
  • Fiscal Year:
    2022
  • Funding Country:
    United Kingdom
  • Duration:
    2022 to (no data)
  • Project Status:
    Ongoing

Project Abstract

Machines powered by artificial intelligence (AI) are revolutionising the social world. We rely on AI when we check the traffic on Google Maps, when we connect with a driver on Uber, or when we apply for a credit check. But as the technological sophistication of AI increases, so too do the number and types of tasks we rely on AI agents for - for example, to allocate scarce medical resources, to assist with decisions about turning off life support, to recommend criminal sentences, and even to identify and kill enemy soldiers. AI agents are approaching a level of complexity that progressively requires them to embody not just artificial intelligence but also artificial morality, making decisions that would be described as moral or immoral if made by humans. The increased use of AI agents has the potential for tremendous economic and social benefits, but for society to reap these benefits, people need to be able to trust these AI agents.

While we know that trust is critical, we know very little about the specific antecedents and consequences of such trust in AI, especially when it comes to the increasing use of AI in morally relevant contexts. This matters because morality is far from simple: we live in a world replete with moral dilemmas, with different ethical theories favouring different, mutually exclusive actions. Previous work in humans shows that we use moral judgments as a cue for trustworthiness, so it is not enough simply to ask whether we trust someone to make moral decisions: we have to consider the type of moral decision they are making, how they are making it, and in what context. If we want to understand trust in AI, we need to ask the same questions - but there is no guarantee that the answers will be the same. We need to understand how trust in AI depends on what kind of moral decision an agent is making (e.g. consequentialist or deontological judgments; Research Question #1), how it is making it (e.g. based on a coarse and interpretable set of decision rules or on "black box" machine learning; Research Question #2), and in what relational and operational context (e.g. whether the machine performs close, personal tasks or abstract, impersonal ones; Research Question #3).

In this project I will conduct 11 experiments to investigate how trust in AI is sensitive to what moral decisions are made, how they are made, and in what relational contexts. I will use a number of different experimental approaches tapping both implicit and explicit trust, and recruit a range of populations: British laypeople; trained philosophers and AI industry experts; a convenience sample of participants from around the world; and an international experiment with participants representative for age and gender recruited simultaneously in 7 countries. At the end of the grant period, I will host a full-day interdisciplinary conference/workshop with both academic and non-academic attendees, bringing together experts working in AI to consider the psychological challenges of programming trustworthy AI and the philosophical issues of using public preferences as a basis for policy relating to ethical AI.

This work will have important theoretical and methodological implications for research on the antecedents and consequences of trust in AI, highlighting the necessity of moving beyond simply asking whether we could trust AI to instead asking what types of decisions we will trust AI to make, what kinds of AI systems we want making moral decisions, and in what contexts. These findings will have significant societal impact in helping public experts working on AI understand how, when, and why people trust AI agents, allowing us to reap the economic and social benefits of AI, which are fundamentally predicated on AI being trusted by the public.

Project Outcomes

Journal Articles (0)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)

Other Grants by Jim Everett

A Person-Centred Approach to Understanding Trust in Moral Machines
  • Grant Number:
    EP/Y00440X/1
  • Fiscal Year:
    2024
  • Funding Amount:
    $306.8K
  • Project Category:
    Research Grant

Similar NSFC Grants

The Consequences of Workplace Cyberloafing and Their Feedback Effects: An Integrative Study from the Actor and Observer Perspectives
  • Grant Number:
    72302108
  • Approval Year:
    2023
  • Funding Amount:
    ¥300K
  • Project Category:
    Young Scientists Fund
Relationships among Team Human Capital Hierarchical Structure Types, Team Collaboration Processes, and Team Effectiveness Outcomes in the Context of Digital Intelligence
  • Grant Number:
    72372084
  • Approval Year:
    2023
  • Funding Amount:
    ¥400K
  • Project Category:
    General Program
Causes of Conodont Size Reduction in the Baoshan Block during the Middle-Late Norian (Late Triassic): A Consequence of Warming Events?
  • Grant Number:
    42302131
  • Approval Year:
    2023
  • Funding Amount:
    ¥300K
  • Project Category:
    Young Scientists Fund
The Degree of Equilibrium during Garnet Nucleation and Early Growth in Eclogites and Its Influence on Garnet Isopleth Thermobarometry Results
  • Grant Number:
    42372062
  • Approval Year:
    2023
  • Funding Amount:
    ¥530K
  • Project Category:
    General Program
Integrating Transport Models and Statistical Models Based on Experimental Results from High-Energy Heavy-Ion Collisions
  • Grant Number:
    12365017
  • Approval Year:
    2023
  • Funding Amount:
    ¥310K
  • Project Category:
    Regional Science Fund Program

Similar Overseas Grants

Understanding Race-Related Stress as a Mechanism Associated with Alcohol Craving to Inform Culturally-Adapting Alcohol Treatment for Black Adults
  • Grant Number:
    10432044
  • Fiscal Year:
    2021
  • Funding Amount:
    $306.8K
  • Project Category:
Understanding Race-Related Stress as a Mechanism Associated with Alcohol Craving to Inform Culturally-Adapting Alcohol Treatment for Black Adults
  • Grant Number:
    10631090
  • Fiscal Year:
    2021
  • Funding Amount:
    $306.8K
  • Project Category:
Understanding Race-Related Stress as a Mechanism Associated with Alcohol Craving to Inform Culturally-Adapting Alcohol Treatment for Black Adults
  • Grant Number:
    10214970
  • Fiscal Year:
    2021
  • Funding Amount:
    $306.8K
  • Project Category:
Alzheimer's disease biomarker disclosure in African Americans and Whites – Personal and programmatic consequences of knowing ATN status
  • Grant Number:
    9895618
  • Fiscal Year:
    2019
  • Funding Amount:
    $306.8K
  • Project Category:
Uncovering the Causes, Contexts, and Consequences of Elder Mistreatment in People with Dementia
  • Grant Number:
    10396025
  • Fiscal Year:
    2018
  • Funding Amount:
    $306.8K
  • Project Category: