CompCog: HNDS-R: Self-Supervision of Visual Learning From Spatiotemporal Context


Basic Information

  • Award Number:
    2216127
  • Principal Investigator:
    Bradley Wyble
  • Amount:
    $497,000
  • Host Institution:
  • Host Institution Country:
    United States
  • Award Type:
    Standard Grant
  • Fiscal Year:
    2022
  • Funding Country:
    United States
  • Project Period:
    2022-09-15 to 2025-08-31
  • Status:
    Active

Project Abstract

Modern computer vision models are trained on sets of images numbering in the billions, and yet they are still far less robust than the visual systems of small children, who have a much narrower range of visual experience. This project will use our understanding of how infants experience the world in their first years of life to develop new methods of training artificial-intelligence programs to decode the information they receive from a camera. One advantage that children have over computers is that they experience the visual world as a journey through space rather than as a series of randomly collected, unrelated images. Children thus have a way to evaluate the similarity of two visual scenes based on their vantage point for each scene. The investigators will generate highly realistic scenes modeled on the perspective of a young child moving through a house; these scenes will be used to develop a computer algorithm that learns to recognize objects, surfaces, and other visual concepts. The work will provide new insights into improving computer vision for real-world problems, a field undergoing rapid growth due to applications in areas including household robots, assistive robots, and self-driving cars. The project will support interdisciplinary graduate and postdoctoral training as well as the production of widely accessible STEM educational resources through Neuromatch, a summer school that emerged during the pandemic as a way to reach students at minimal cost and with a low carbon footprint.

The investigators will develop a critical theory of visual learning, inspired by how human children learn, with the potential to reshape the fundamentals of learning in computer vision and machine learning. The research hypothesizes that a key ingredient in human visual learning is spatiotemporal contiguity: images of the world are experienced in sequence as a child moves through space.
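The spatiotemporal-contiguity idea is often operationalized in self-supervised learning as a time-contrastive objective, where temporally adjacent frames are treated as positive pairs and other frames in the sequence as negatives. The NumPy sketch below illustrates that general formulation; it is not the project's actual algorithm, and the embeddings and parameter values are illustrative assumptions:

```python
import numpy as np

def time_contrastive_loss(z, temperature=0.1):
    """InfoNCE-style objective over a sequence of frame embeddings z (T, d):
    each frame's positive pair is its temporal successor; every other
    frame in the sequence acts as a negative."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize embeddings
    sim = (z @ z.T) / temperature                      # pairwise cosine similarities
    losses = []
    for t in range(len(z) - 1):
        logits = np.delete(sim[t], t)                  # drop self-similarity
        # after removing index t, the successor (t+1) sits at position t
        losses.append(np.log(np.exp(logits).sum()) - logits[t])
    return float(np.mean(losses))

# toy comparison: frames from a smooth trajectory vs. unrelated random frames
steps = np.linspace(0.0, 1.0, 20)
smooth = np.stack([np.cos(steps), np.sin(steps)], axis=1)
rng = np.random.default_rng(0)
random_frames = rng.normal(size=(20, 2))
```

Under this objective, embeddings from a smooth, spatially contiguous trajectory (where successive frames resemble each other) incur a much lower loss than a bag of unrelated frames — the intuition behind learning from a child's journey through space rather than from shuffled images.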
The project has two components aimed at ultimately developing a new algorithm for visual learning based on human learning. First, a data set will be created using ray tracing to generate sequences of photorealistic images similar to those a child would experience. Then, these images will be coupled with recent innovations in self-supervised deep learning to determine how spatiotemporal image sequences can augment computer vision, using image classification and other tasks as tests. The resulting algorithm will produce artificial neural networks that respond to visual patterns. Those responses can be compared with the responses of neural networks in the human brain, as measured through fMRI, to determine through representational-similarity analysis whether the sequence-learning mechanism is a better approximation of human visual learning than state-of-the-art computer vision methods. Moreover, this analysis technique can be used as a searchlight to highlight the brain regions most similar to the newly developed artificial neural networks, which is helpful for determining how different brain areas contribute to visual learning. Students supported by this project will conduct research at the interface between psychology and computer science, and the project will also contribute to the development of STEM educational resources. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
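Representational-similarity analysis, as described above, compares a model and the brain not unit-by-unit but by how each represents the *relationships* among stimuli. A minimal pure-NumPy sketch, assuming model and fMRI responses are available as stimulus-by-feature matrices (the function names and the tie-free Spearman step are simplifications, not the project's actual pipeline):

```python
import numpy as np

def rdm(responses):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns for every pair of stimuli.
    responses: (n_stimuli, n_units) array."""
    return 1.0 - np.corrcoef(responses)

def spearman(a, b):
    """Spearman rank correlation (no tie handling; adequate for a sketch)."""
    rank = lambda x: np.argsort(np.argsort(x)).astype(float)
    return np.corrcoef(rank(a), rank(b))[0, 1]

def rsa_score(model_responses, brain_responses):
    """Correlate the upper triangles of the model and brain RDMs."""
    m, b = rdm(model_responses), rdm(brain_responses)
    iu = np.triu_indices_from(m, k=1)   # unique stimulus pairs only
    return spearman(m[iu], b[iu])
```

A higher `rsa_score` for the sequence-trained network than for a baseline network, against the same fMRI data, would support the claim that spatiotemporal sequence learning better approximates human visual learning; running this comparison within a moving spherical neighborhood of voxels is the searchlight variant mentioned above.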

Project Outcomes

Journal articles (0)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)


Other Publications by Bradley Wyble

Other Grants by Bradley Wyble

CompCog: Bridging the gap between behavioral and neural correlates of attention using a computational model of neural mechanisms
  • Award Number:
    1734220
  • Fiscal Year:
    2017
  • Funding Amount:
    $497,000
  • Award Type:
    Standard Grant
Integrating Spatial and Temporal Models of Visual Attention
  • Award Number:
    1331073
  • Fiscal Year:
    2013
  • Funding Amount:
    $497,000
  • Award Type:
    Standard Grant

Similar International Grants

Collaborative Research: HNDS-I: NewsScribe - Extending and Enhancing the Media Cloud Searchable Global Online News Archive
  • Award Number:
    2341858
  • Fiscal Year:
    2024
  • Funding Amount:
    $497,000
  • Award Type:
    Standard Grant
Collaborative Research: HNDS-I: NewsScribe - Extending and Enhancing the Media Cloud Searchable Global Online News Archive
  • Award Number:
    2341859
  • Fiscal Year:
    2024
  • Funding Amount:
    $497,000
  • Award Type:
    Standard Grant
Collaborative Research: HNDS-I. Mobility Data for Communities (MD4C): Uncovering Segregation, Climate Resilience, and Economic Development from Cell-Phone Records
  • Award Number:
    2420945
  • Fiscal Year:
    2024
  • Funding Amount:
    $497,000
  • Award Type:
    Standard Grant
Collaborative Research: HNDS-I: Cyberinfrastructure for Human Dynamics and Resilience Research
  • Award Number:
    2318203
  • Fiscal Year:
    2023
  • Funding Amount:
    $497,000
  • Award Type:
    Standard Grant
Collaborative Research: HNDS-R: Human Networks, Sustainable Development, and Lived Experience in a Nonindustrial Society
  • Award Number:
    2212898
  • Fiscal Year:
    2023
  • Funding Amount:
    $497,000
  • Award Type:
    Standard Grant