Institute for Trustworthy AI in Law and Society (TRAILS)
Basic Information
- Award number: 2229885
- Principal investigator:
- Amount: $20 million
- Host institution:
- Host institution country: United States
- Project type: Cooperative Agreement
- Fiscal year: 2023
- Funding country: United States
- Duration: 2023-06-01 to 2028-05-31
- Project status: Ongoing
- Source:
- Keywords:
Project Abstract
Artificial Intelligence (AI) systems have the potential to enhance human capacity and increase productivity. They can also catalyze innovation and mitigate complex problems. Current AI systems, however, are not created in a transparent way, which makes public trust in them a challenge: the opaque processes they use produce results that are not well understood. Trust is further undermined by the harms AI systems can cause, which fall most heavily on the communities excluded from participating in AI system development. This lack of trustworthiness will slow the adoption of AI technologies, so it is critical to AI innovation to include the groups affected by the benefits and harms of these systems. The TRAILS (Trustworthy AI in Law and Society) Institute is a partnership of the University of Maryland, The George Washington University, Morgan State University, and Cornell University. It encourages community participation in the development of AI techniques, tools, and scientific theories, and the design and policy recommendations it produces will promote the trustworthiness of AI systems. The first goal of the TRAILS Institute is to discover ways to change the design and development of AI systems so that communities can make informed choices about AI technology adoption. The second goal is to develop best practices for industry and government that foster AI innovation while keeping communities safe, engaged, and informed. The TRAILS Institute has explicit plans for increasing the participation of affected communities, from K-12 students through Congressional staff. These plans will elicit the concerns and expectations of affected communities and provide an improved understanding of the risks and benefits of AI-enabled systems.

The TRAILS Institute's research program identifies four key thrusts targeting key aspects of the AI system development lifecycle. The first is Social Values: increasing participation throughout all aspects of AI development so that the values embodied in AI systems reflect community and stakeholder values. This includes participatory design with diverse communities, resulting in community-based interventions and adaptations for the AI development lifecycle. The second thrust is Technical Design: developing algorithms that promote transparency and trust in AI, including tools that increase the robustness of AI systems and promote user and developer understanding of how AI systems operate. The third thrust is Socio-Technical Perceptions: developing novel measures, including psychometric techniques and experimental paradigms, to assess the interpretability and explainability of AI systems. These measures will enable a deeper understanding of existing metrics and algorithms, and of the values perceived and held by participating community members. The fourth thrust is Governance: documenting and analyzing governance regimes for both data and technologies, providing the underpinnings for the development of platform and technology regulation. Ethnographers will analyze the institute itself and partner organizations, documenting the ways in which technical choices translate into governance impacts. The research focuses on two use-inspired areas: information dissemination systems (e.g., social media platforms) and energy-intensive systems (e.g., autonomous systems).

The institute's education and workforce development efforts in AI include new educational offerings catering to many markets, ranging from secondary through executive education. The TRAILS Institute is especially focused on expanding access to foundational education for historically marginalized and minoritized groups of learners and users. The institute will work with these communities to learn from, educate, and recruit participants, and to retain, support, and empower those marginalized in mainstream AI. Integrating these communities into this AI research program broadens participation in AI development and governance. The National Institute of Standards and Technology (NIST) is partnering with NSF to provide funding for this Institute. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
Project Outcomes
- Journal articles: 0
- Monographs: 0
- Research awards: 0
- Conference papers: 0
- Patents: 0
Other Publications by Hal Daume
Seamful XAI: Operationalizing Seamful Design in Explainable AI
- DOI:
- Publication year: 2024
- Journal:
- Impact factor: 0
- Authors: Upol Ehsan; Qingzi Vera Liao; Samir Passi; Mark O. Riedl; Hal Daume
- Corresponding author: Hal Daume
Other Grants by Hal Daume
RI: EAGER: Collaborative Research: Adaptive Heads-up Displays for Simultaneous Interpretation
- Award number: 1748663
- Fiscal year: 2017
- Funding amount: $20 million
- Project type: Standard Grant

RI: Small: Linguistic Semantics and Discourse from Leaky Distant Supervision
- Award number: 1618193
- Fiscal year: 2016
- Funding amount: $20 million
- Project type: Continuing Grant

EAGER: Discrete Algorithms in NLP
- Award number: 1451430
- Fiscal year: 2014
- Funding amount: $20 million
- Project type: Standard Grant

RI: SMALL: Statistical Linguistic Typology
- Award number: 1153487
- Fiscal year: 2011
- Funding amount: $20 million
- Project type: Continuing Grant

ICML 2011 Proposal for Student Poster Program and Travel Scholarships
- Award number: 1130109
- Fiscal year: 2011
- Funding amount: $20 million
- Project type: Standard Grant

Collaborative Research: EAGER: Computational Thinking Olympiad
- Award number: 1048401
- Fiscal year: 2010
- Funding amount: $20 million
- Project type: Standard Grant

RI: SMALL: Statistical Linguistic Typology
- Award number: 0916372
- Fiscal year: 2009
- Funding amount: $20 million
- Project type: Continuing Grant

Computational Thinking Olympiad: Brainstorming Workshop
- Award number: 0848473
- Fiscal year: 2008
- Funding amount: $20 million
- Project type: Standard Grant

Cross-Task Learning for Natural Language Processing
- Award number: 0712764
- Fiscal year: 2007
- Funding amount: $20 million
- Project type: Continuing Grant
Similar Overseas Grants
Toward Trustworthy Generative AI by Integrating Large Language Model with Knowledge Graph
- Award number: 24K20834
- Fiscal year: 2024
- Funding amount: $20 million
- Project type: Grant-in-Aid for Early-Career Scientists

Human-centric Digital Twin Approaches to Trustworthy AI and Robotics for Improved Working Conditions
- Award number: 10109582
- Fiscal year: 2024
- Funding amount: $20 million
- Project type: EU-Funded

Accelerating Trustworthy AI: developing a first-to-market AI System Risk Management Platform for Insurance Product creation
- Award number: 10093285
- Fiscal year: 2024
- Funding amount: $20 million
- Project type: Collaborative R&D

CAREER: An Integrated Trustworthy AI Research and Education Framework for Modeling Human Behavior in Climate Disasters
- Award number: 2338959
- Fiscal year: 2024
- Funding amount: $20 million
- Project type: Standard Grant

CAP: Capacity Building for Trustworthy AI in Medical Systems (TAIMS)
- Award number: 2334391
- Fiscal year: 2023
- Funding amount: $20 million
- Project type: Standard Grant