CRII: SaTC: RUI: Understanding and Collectively Mitigating Harms from Deepfake Imagery
Basic Information
- Award Number: 2348326
- Principal Investigator:
- Amount: $173.1K
- Host Institution:
- Host Institution Country: United States
- Project Type: Standard Grant
- Fiscal Year: 2024
- Funding Country: United States
- Project Period: 2024-06-01 to 2026-05-31
- Project Status: Not yet concluded
- Source:
- Keywords:
Project Abstract
This CRII research is taking an innovative approach to augmenting underserved communities' cybersecurity and privacy. Although generative AI can support creative expression and boost worker productivity, it also enables the creation of seemingly real images, video, and audio, known as deepfakes. Deepfakes are being used to silence, steal, extort, and defraud; to manipulate stock prices and public opinion; and to harm the reputations of people and organizations. These actions have a negative impact on national security, civic institutions, and people's personal and professional lives. As generative AI technologies become increasingly sophisticated and accessible, anyone can create a deepfake from a single photo or audio clip. This research systematically documents how deepfakes harm underserved communities and devises novel, community-centric solutions that help those communities become more resilient to these harms. The project also supports the training of diverse students in advanced social computing and AI technologies to address an urgent societal need.
This research transcends individualistic approaches to security and privacy by applying concepts from social computing, usable security, and community-based participatory research, and it uses a mixed-methods approach to synthesize the potentially prosocial as well as antisocial applications of deepfakes and their impact on people's personal and private lives. The findings could inform government and technology policy that addresses the needs of underserved groups, as well as the design of responsible generative AI tools. The research team is collaborating closely with two underserved groups in the Philadelphia area, Black communities and Asian immigrant/diaspora communities, to build a scalable platform that encourages and supports community-level cybersecurity practices. The site will also enable deployment and evaluation studies, advancing foundational knowledge about harm, trust, and security and privacy.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
Project Outcomes
Journal articles: 0
Monographs: 0
Research awards: 0
Conference papers: 0
Patents: 0
Other Publications by Sukrit Venkatagiri
It's QuizTime: A Study of Online Verification Practices on Twitter
- DOI:
- Publication date: 2019
- Journal:
- Impact factor: 0
- Authors: Sukrit Venkatagiri; Jacob Thebault; Sarwat Kazmi; Efua Akonor; Kurt Luther
- Corresponding author: Kurt Luther
The expanding circles of information behavior and human–computer interaction
- DOI: 10.1177/09610006211015782
- Publication date: 2021-05-24
- Journal:
- Impact factor: 1.7
- Authors: T. Gorichanaz; Sukrit Venkatagiri
- Corresponding author: Sukrit Venkatagiri
Promoting Ethical Technology Design Practices by Leveraging Human Psychology
- DOI: 10.1145/3656156.3663716
- Publication date: 2024-07-01
- Journal:
- Impact factor: 0
- Authors: Emily Foster; Sukrit Venkatagiri
- Corresponding author: Sukrit Venkatagiri
OSINT Research Studios: A Flexible Crowdsourcing Framework to Scale Up Open Source Intelligence Investigations
- DOI: 10.48550/arxiv.2401.00928
- Publication date: 2024-01-01
- Journal:
- Impact factor: 0
- Authors: Anirban Mukhopadhyay; Sukrit Venkatagiri; Kurt Luther
- Corresponding author: Kurt Luther
GroundTruth: Augmenting Expert Image Geolocation with Crowdsourcing and Shared Representations
- DOI: 10.1145/3359209
- Publication date: 2019-11-07
- Journal:
- Impact factor: 0
- Authors: Sukrit Venkatagiri; Jacob Thebault; Rachel Kohler; John Purviance; Rifat Sabbir Mansur; Kurt Luther
- Corresponding author: Kurt Luther
Similar Overseas Grants
CRII: SaTC: RUI: When Logic Locking Meets Hardware Trojan Mitigation and Fault Tolerance
- Award Number: 2245247
- Fiscal Year: 2023
- Funding Amount: $173.1K
- Project Type: Standard Grant
CRII: SaTC: RUI: Understanding and Addressing the Security and Privacy Needs of At-Risk Populations
- Award Number: 2334061
- Fiscal Year: 2023
- Funding Amount: $173.1K
- Project Type: Standard Grant
CRII: SaTC: RUI: An Intelligent Data-Driven Framework to Achieve Proactive Cybersecurity
- Award Number: 2246220
- Fiscal Year: 2023
- Funding Amount: $173.1K
- Project Type: Standard Grant
CRII: SaTC: RUI: Towards Trustworthy and Accountable IoT Data Marketplaces
- Award Number: 2153464
- Fiscal Year: 2022
- Funding Amount: $173.1K
- Project Type: Standard Grant
CRII: SaTC: RUI: Understanding and Addressing the Security and Privacy Needs of At-Risk Populations
- Award Number: 1948344
- Fiscal Year: 2020
- Funding Amount: $173.1K
- Project Type: Standard Grant