Rule of Law in the Age of AI: Principles of Distributive Liability for Multi-Agent Societies


Basic Information

  • Grant Number:
    ES/T007079/1
  • Principal Investigator:
  • Amount:
    $516,700
  • Host Institution:
  • Host Institution Country:
    United Kingdom
  • Project Type:
    Research Grant
  • Fiscal Year:
    2020
  • Funding Country:
    United Kingdom
  • Start/End Dates:
    2020 to (no data)
  • Project Status:
    Completed

Project Abstract

The UK and Japan appeal to similar models of subjectivity when categorizing legal liability. Rooted historically and philosophically in the figure of a human actor capable of exercising free will within a given environment, this model of subjectivity ascribes legal liability to human agents imagined as autonomous and independent. However, recent advances in artificial intelligence (AI) that augment the autonomy of artificial agents, such as autonomous driving systems, social robots equipped with artificial emotional intelligence, and intelligent surgical or diagnostic assistant systems, challenge this traditional notion of agency while presenting serious practical problems for determining legal liability within networks of distributed human-machine agency. For example, if an accident arises from cooperation between a human and an intelligent machine, current legal theory offers no clear way to distribute legal liability. Although legal theory assumes that the autonomous human agent should bear responsibility for the accident, in cases of human-machine interaction human subjectivity is itself influenced by the behaviour of intelligent machines, according to findings from cognitive psychology, the critical theory of subjectivity, and the anthropology of science and technology. This lack of transparent and clear distributive principles of legal liability may hamper the healthy development of a society in which human dignity and technological innovation can advance together, because without a workable legal liability regime no one can trust the behaviour and quality of machines that may cause bodily or lethal injury.
Faced with this challenge, which is caused and will be aggravated by the proliferation of AI in the UK and Japan, our study aims to clarify the distributive principle of legal liability in the multi-agent society and to propose the legal policy needed to establish the rule of law in the age of AI. This would enable us to construct a "Najimi society" in which humans and intelligent machines can cohabit, with sensitivity to the cultural diversity of the formation of subjectivity. To achieve this objective, we create three interrelated and collaborative research groups. Group 1, a Law-Economics-Philosophy group, proposes a stylized model for analysing and evaluating multi-agent situations, based on dynamic game theory connected to the philosophy of the relativity of human subjectivity, in order to work out the distributive principle of legal liability and the legal policy for the rule of law in the age of AI, drawing on both the quantitative and qualitative data produced by the other groups and on the support of experienced legal practitioners and policy makers. Group 2, a Cognitive Robotics, Human Factors, and Cognitive Psychology group, implements computer simulations and psychological experiments to capture data on human interaction and performance with, as well as attitudes towards and experience of, intelligent machines, in this case (simulated) autonomous vehicles. The outputs of this group will test the validity of the first group's model and provide mainly quantitative data relating to subjectivity, helping to construct a more reliable model and workable legal principles and policies. Group 3, a Cultural Anthropology group, engages in comparative ethnographic fieldwork on human-robot relations in Japan and the UK to better account for the cultural variability of distributed agency within differing social, legal, and scientific contexts.
The outputs of this group will aid the interpretation of the quantitative data and help the first group remain sensitive to this diversity. Through the inherently transdisciplinary and international cooperation described above, our project will contribute to making UK and Japanese society more adaptive to emerging technology by clarifying the legal regime.
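Group 1's plan to derive a distributive principle of liability from cooperative game theory can be made concrete with a standard device from that field. The sketch below is purely illustrative and is not taken from the project: it splits the cost of an accident between a human driver and an autonomous system using Shapley values, treating each agent's liability share as its average marginal contribution to the expected harm. The agent names and harm figures are invented for the example.

```python
from itertools import combinations
from math import factorial

def shapley_liability(agents, harm):
    """Distribute liability for an accident among agents via Shapley values.

    `agents` is a list of agent names; `harm(coalition)` returns the expected
    harm (e.g. accident cost) when exactly that set of agents is acting.
    Each agent's share is its average marginal contribution to the harm,
    averaged over all orders in which agents could be "added" to the scene.
    """
    n = len(agents)
    shares = {}
    for a in agents:
        others = [x for x in agents if x != a]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                # Probability weight of this coalition arising before agent a
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (harm(s | {a}) - harm(s))
        shares[a] = total
    return shares

# Hypothetical harm figures for a crash caused jointly by a human driver and
# an autonomous driving system; all numbers are invented for illustration.
harm_table = {
    frozenset(): 0.0,
    frozenset({"human"}): 40.0,
    frozenset({"machine"}): 20.0,
    frozenset({"human", "machine"}): 100.0,
}
shares = shapley_liability(["human", "machine"],
                           lambda s: harm_table[frozenset(s)])
# shares == {"human": 60.0, "machine": 40.0}
```

Under these invented numbers the human bears 60% and the machine's side 40% of the harm. The point is only that a formal allocation rule can be stated once expected harm is defined for each coalition of agents; which rule (if any) the project's own dynamic-game model adopts is a question for its published outputs.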

Project Outputs

Journal Articles (7)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)
The Effects of Cyber Readiness and Response on Human Trust in Self Driving Cars
  • DOI:
    10.54941/ahfe1003719
  • Publication Date:
    2023
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Marcinkiewicz V
  • Corresponding Author:
    Marcinkiewicz V
Towards anthropomorphising autonomous vehicles: speech and embodiment on trust and blame after an accident
  • DOI:
  • Publication Date:
    2022
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Wallbridge CD
  • Corresponding Author:
    Wallbridge CD
Public perception of autonomous vehicle capability determines judgment of blame and trust in road traffic accidents
  • DOI:
    10.1016/j.tra.2023.103887
  • Publication Date:
    2024-01
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Qiyuan Zhang;Christopher D. Wallbridge;Dylan M. Jones;Phillip L. Morgan
  • Corresponding Author:
    Qiyuan Zhang;Christopher D. Wallbridge;Dylan M. Jones;Phillip L. Morgan
Using Simulation-software-generated Animations to Investigate Attitudes Towards Autonomous Vehicles Accidents
  • DOI:
    10.1016/j.procs.2022.09.410
  • Publication Date:
    2022
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Zhang Q
  • Corresponding Author:
    Zhang Q
Judgements of Autonomous Vehicle Capability Determine Attribution of Blame in Road Traffic Accidents
  • DOI:
    10.2139/ssrn.4093012
  • Publication Date:
    2022
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Zhang Q
  • Corresponding Author:
    Zhang Q

Other Publications by Phillip Morgan

Maladaptive Behaviour in Phishing Susceptibility: How Email Context Influences the Impact of Persuasion Techniques
  • DOI:
    10.54941/ahfe1003718
  • Publication Date:
    2023
  • Journal:
  • Impact Factor:
    0
  • Authors:
    George Raywood;Dylan Jones;Phillip Morgan
  • Corresponding Author:
    Phillip Morgan
The impact on retention figures of the introduction of a comfort call during a contact lens trial
  • DOI:
    10.1016/j.clae.2018.03.078
  • Publication Date:
    2018-06-01
  • Journal:
  • Impact Factor:
  • Authors:
    Emma Cooney;Phillip Morgan
  • Corresponding Author:
    Phillip Morgan
The protection of interests: Organizational change in the Australian services canteens organization
  • DOI:
    10.1007/bf01733496
  • Publication Date:
    1986
  • Journal:
  • Impact Factor:
    5.4
  • Authors:
    G. Kenny;Phillip Morgan;B. Hinings
  • Corresponding Author:
    B. Hinings
Clinicians risk becoming ‘liability sinks’ for artificial intelligence
  • DOI:
  • Publication Date:
    2024
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Tom Lawton;Phillip Morgan;Zoe Porter;Shireen Hickey;Alice Cunningham;Nathan Hughes;Ioanna Iacovides;Yan Jia;Vishal Sharma;I. Habli
  • Corresponding Author:
    I. Habli
Cyclist and pedestrian trust in automated vehicles: An on-road and simulator trial


Similar NSFC Grants

Legal Mechanisms and Independent Director Governance: Exogenous Shocks from the "Kangmei Pharmaceutical Case"
  • Grant Number:
    72302105
  • Year Approved:
    2023
  • Funding Amount:
    CNY 300,000
  • Project Type:
    Young Scientists Fund Project
Evaluating the Effectiveness and Optimization Strategies of Capital Market Legal Regulation under the Registration System
  • Grant Number:
    72373021
  • Year Approved:
    2023
  • Funding Amount:
    CNY 410,000
  • Project Type:
    General Program
Research on International Trends in Science and Technology Law in 2023: Centered on a Comparative Study of Data Security Law
  • Grant Number:
    L2224024
  • Year Approved:
    2022
  • Funding Amount:
    CNY 300,000
  • Project Type:
    Special Project
Governance Mechanisms and Effects of Government-Business Poverty-Alleviation Ties on the Legal Risk of Listed Companies
  • Grant Number:
  • Year Approved:
    2020
  • Funding Amount:
    CNY 240,000
  • Project Type:
    Young Scientists Fund Project
Correlations among Star Formation Indicators in Low Surface Brightness Galaxies and the Kennicutt-Schmidt Law
  • Grant Number:
  • Year Approved:
    2020
  • Funding Amount:
    CNY 240,000
  • Project Type:
    Young Scientists Fund Project

Similar Overseas Grants

Demographic Patterns of Eugenic Sterilization in Five U.S. States: Mixed Methods Investigation of Reproductive Control of the 'Unfit'
  • Grant Number:
    10640886
  • Fiscal Year:
    2023
  • Funding Amount:
    $516,700
  • Project Type:
Leveraging state drug overdose data to build a comprehensive case level national dataset to inform prevention and mitigation strategies.
  • Grant Number:
    10701215
  • Fiscal Year:
    2023
  • Funding Amount:
    $516,700
  • Project Type:
The role of adverse community-level policing exposure on disparities in Alzheimer's disease related dementias and deleterious multidimensional aging
  • Grant Number:
    10642517
  • Fiscal Year:
    2023
  • Funding Amount:
    $516,700
  • Project Type:
Equality in the Algorithmic Age: A New Frontier for European Union Law?
  • Grant Number:
    1071088
  • Fiscal Year:
    2023
  • Funding Amount:
    $516,700
  • Project Type:
    Studentship
Legal Minimum Age of Marriage and Female Education
  • Grant Number:
    23K01447
  • Fiscal Year:
    2023
  • Funding Amount:
    $516,700
  • Project Type:
    Grant-in-Aid for Scientific Research (C)