Rule of Law in the Age of AI: Principles of Distributive Liability for Multi-Agent Societies


Basic Information

  • Grant number:
    ES/T007079/1
  • Principal investigator:
  • Funding amount:
    $516,700
  • Host institution:
  • Host institution country:
    United Kingdom
  • Project type:
    Research Grant
  • Fiscal year:
    2020
  • Funding country:
    United Kingdom
  • Duration:
    2020 to (no data)
  • Project status:
    Completed

Project Abstract

The UK and Japan appeal to similar models of subjectivity in categorizing legal liability. Rooted historically and philosophically in the figure of a human actor capable of exercising free will within a given environment, such a model ascribes legal liability to human agents imagined as autonomous and independent. However, recent advances in artificial intelligence (AI) that augment the autonomy of artificial agents — such as autonomous driving systems, social robots equipped with artificial emotional intelligence, and intelligent surgical or diagnostic assistant systems — challenge this traditional notion of agency while presenting serious practical problems for determining legal liability within networks of distributed human-machine agency. For example, when an accident arises from cooperation between a human and an intelligent machine, current legal theory offers no way to distribute legal liability. Although legal theory assumes that the autonomous human agent should bear responsibility for the accident, in cases of human-machine interaction human subjectivity itself is influenced by the behaviour of intelligent machines, according to findings in cognitive psychology, the critical theory of subjectivity, and the anthropology of science and technology. This lack of transparent and clear distributive principles of legal liability may hamper the healthy development of a society in which human dignity and technological innovation can advance together, because without a workable legal liability regime no one can trust the behaviour and quality of machines that may cause bodily or lethal injury.
Faced with this challenge, which is caused and will be aggravated by the proliferation of AI in the UK and Japan, the objective of our study is to clarify the distributive principle of legal liability in the multi-agent society and to propose the relevant legal policy for establishing the rule of law in the age of AI, enabling us to construct a "Najimi society" in which humans and intelligent machines can cohabit, with sensitivity to the cultural diversity of the formation of subjectivity. To achieve this objective, we create three interrelated and collaborative research groups. Group 1: a Law-Economics-Philosophy group that proposes a stylized model for analysing and evaluating the multi-agent situation, based on dynamic game theory connected to the philosophy of the relativity of human subjectivity, in order to derive the distributive principle of legal liability and the legal policy for the rule of law in the age of AI, drawing on both the quantitative and qualitative data from the other groups, with support from experienced legal practitioners and policy makers. Group 2: a Cognitive Robotics, Human Factors and Cognitive Psychology group that implements various computer simulations and psychological experiments to capture data on human interaction and performance with, as well as attitudes towards and experience of, intelligent machines — in this case (simulated) autonomous vehicles. The outputs of this group will test the validity of the first group's model and provide mainly quantitative data relating to subjectivity to the first group, helping to construct a more reliable model and workable legal principles and policies. Group 3: a Cultural Anthropology group that engages in comparative ethnographic fieldwork on human-robot relations within Japan and the UK to better account for the cultural variability of distributed agency within differing social, legal, and scientific contexts.
The output of this group will aid the interpretation of the quantitative data and allow the first group to remain sensitive to that diversity. Through the inherently transdisciplinary and international cooperation described above, our project will help make UK and Japanese society more adaptive to emerging technology by clarifying the legal regime.

Project Outcomes

Journal articles (7)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
The Effects of Cyber Readiness and Response on Human Trust in Self Driving Cars
  • DOI:
    10.54941/ahfe1003719
  • Publication date:
    2023
  • Journal:
  • Impact factor:
    0
  • Authors:
    Marcinkiewicz V
  • Corresponding author:
    Marcinkiewicz V
Towards anthropomorphising autonomous vehicles: speech and embodiment on trust and blame after an accident
  • DOI:
  • Publication date:
    2022
  • Journal:
  • Impact factor:
    0
  • Authors:
    Wallbridge CD
  • Corresponding author:
    Wallbridge CD
Public perception of autonomous vehicle capability determines judgment of blame and trust in road traffic accidents
  • DOI:
    10.1016/j.tra.2023.103887
  • Publication date:
    2024-01
  • Journal:
  • Impact factor:
    0
  • Authors:
    Qiyuan Zhang;Christopher D. Wallbridge;Dylan M. Jones;Phillip L. Morgan
  • Corresponding author:
    Qiyuan Zhang;Christopher D. Wallbridge;Dylan M. Jones;Phillip L. Morgan
Judgements of Autonomous Vehicle Capability Determine Attribution of Blame in Road Traffic Accidents
  • DOI:
    10.2139/ssrn.4093012
  • Publication date:
    2022
  • Journal:
  • Impact factor:
    0
  • Authors:
    Zhang Q
  • Corresponding author:
    Zhang Q
Using Simulation-software-generated Animations to Investigate Attitudes Towards Autonomous Vehicles Accidents
  • DOI:
    10.1016/j.procs.2022.09.410
  • Publication date:
    2022
  • Journal:
  • Impact factor:
    0
  • Authors:
    Zhang Q
  • Corresponding author:
    Zhang Q

Other Publications by Phillip Morgan

Maladaptive Behaviour in Phishing Susceptibility: How Email Context Influences the Impact of Persuasion Techniques
  • DOI:
    10.54941/ahfe1003718
  • Publication date:
    2023
  • Journal:
  • Impact factor:
    0
  • Authors:
    George Raywood;Dylan Jones;Phillip Morgan
  • Corresponding author:
    Phillip Morgan
The protection of interests: Organizational change in the Australian services canteens organization
  • DOI:
    10.1007/bf01733496
  • Publication date:
    1986
  • Journal:
  • Impact factor:
    5.4
  • Authors:
    G. Kenny;Phillip Morgan;B. Hinings
  • Corresponding author:
    B. Hinings
Clinicians risk becoming ‘liability sinks’ for artificial intelligence
  • DOI:
  • Publication date:
    2024
  • Journal:
  • Impact factor:
    0
  • Authors:
    Tom Lawton;Phillip Morgan;Zoe Porter;Shireen Hickey;Alice Cunningham;Nathan Hughes;Ioanna Iacovides;Yan Jia;Vishal Sharma;I. Habli
  • Corresponding author:
    I. Habli
Cyclist and pedestrian trust in automated vehicles: An on-road and simulator trial
The uncertainty of students from a widening access context undertaking an integrated master’s degree in social studies
  • DOI:
  • Publication date:
    2019
  • Journal:
  • Impact factor:
    0
  • Authors:
    Caroline Lohmann;Phillip Morgan
  • Corresponding author:
    Phillip Morgan


Similar NSFC Grants

Legal Mechanisms and Independent Director Governance: Exogenous Shocks from the "Kangmei Pharmaceutical" Case
  • Grant number:
    72302105
  • Approval year:
    2023
  • Funding amount:
    ¥300,000
  • Project type:
    Young Scientists Fund
The Soft Binding Force of Law: An Experimental Study of Minimum Contribution Rules in Public Goods Games
  • Grant number:
    72003101
  • Approval year:
    2020
  • Funding amount:
    ¥240,000
  • Project type:
    Young Scientists Fund
Correlations Between Star Formation Indicators in Low Surface Brightness Galaxies and the Kennicutt-Schmidt Law
  • Grant number:
  • Approval year:
    2020
  • Funding amount:
    ¥240,000
  • Project type:
    Young Scientists Fund
Government-Enterprise Poverty Alleviation Ties and the Legal Risk of Listed Companies: Governance Mechanisms and Effects
  • Grant number:
  • Approval year:
    2020
  • Funding amount:
    ¥240,000
  • Project type:
    Young Scientists Fund
Legal Risks in the Administration of the National Natural Science Foundation and Their Prevention
  • Grant number:
  • Approval year:
    2019
  • Funding amount:
    ¥200,000
  • Project type:
    Special Fund Project

Similar Overseas Grants

Demographic Patterns of Eugenic Sterilization in Five U.S. States: Mixed Methods Investigation of Reproductive Control of the 'Unfit'
  • Grant number:
    10640886
  • Fiscal year:
    2023
  • Funding amount:
    $516,700
  • Project type:
Leveraging state drug overdose data to build a comprehensive case level national dataset to inform prevention and mitigation strategies.
  • Grant number:
    10701215
  • Fiscal year:
    2023
  • Funding amount:
    $516,700
  • Project type:
The role of adverse community-level policing exposure on disparities in Alzheimer's disease related dementias and deleterious multidimensional aging
  • Grant number:
    10642517
  • Fiscal year:
    2023
  • Funding amount:
    $516,700
  • Project type:
Equality in the Algorithmic Age: A New Frontier for European Union Law?
  • Grant number:
    1071088
  • Fiscal year:
    2023
  • Funding amount:
    $516,700
  • Project type:
    Studentship
Legal Minimum Age of Marriage and Female Education
  • Grant number:
    23K01447
  • Fiscal year:
    2023
  • Funding amount:
    $516,700
  • Project type:
    Grant-in-Aid for Scientific Research (C)