CompCog: Template Contrast and Saliency (TCAS) Toolbox: a tool to visualize parallel attentive evaluation of scenes
Basic Information
- Award number: 1921735
- Principal investigator:
- Amount: $656,900
- Host institution:
- Host institution country: United States
- Project type: Standard Grant
- Fiscal year: 2019
- Funding country: United States
- Project period: 2019-08-15 to 2023-07-31
- Project status: Completed
- Source:
- Keywords:
Project Summary
One of the most common visual tasks humans perform is using their eyes to find objects in the world around them. This task involves analyzing all the visual objects and backgrounds in a scene. It is a complicated task because the brain has to separate objects from the background and process the color, shape, and size of every object. The aim of the research is to build a mathematical model that can find objects in scenes despite the difficulty of the problem. The model is inspired by the visual system and uses two ways of processing information. First, it uses central vision to get a fine-grained analysis of the object it is looking at. Second, it uses peripheral vision, the area around and away from central vision, which can analyze several objects at the same time but is less precise than central vision. The ultimate goal of the project is to develop a free, open-source software toolbox that anyone can use. The toolbox will visualize how the visual system processes complex scenes and will determine which regions in a scene should be ignored and which regions the eyes should focus on. One strength of the proposal is that it makes specific predictions that can be tested in various fields of neuroscience. It might also lead to improvements in visual aids for visually impaired individuals because it can guide users toward areas in a scene that are likely to contain the target object.
The starting point for the proposed work is a mathematically explicit model of goal-directed visual processing. The model incorporates two components of visual complexity: a parameter that measures the visual difference between objects in the scene and the object the observer is looking for (the target), and a parameter that measures how similar objects in the scene are to one another. Preliminary work indicated that the model predicts well how long observers will take to find targets in visually complex scenes. The first two goals of the present research evaluate other components of visual complexity to improve the model and its ability to predict visual processing in more complex scenes. The experiments in Goals 1 and 2 will help determine how to combine the visual qualities of objects (such as color, shape, and texture) and how to account for the contrast between objects and their background. Results from Goals 1 and 2 will directly guide the development of a computational toolbox. The toolbox will allow users to visualize visual processing of simple and complex scenes and to predict where observers are likely to move their eyes as a function of their current goals (freely inspecting the scene or finding a specific object within it). The proposed work combines behavioral psychophysics and computational simulations (Goals 1 and 2) with toolbox implementation and eye-tracking validation (Goal 3). The merits of the toolbox include that: 1) it combines different types of visual processing (visual conspicuity contrast and target template contrast), 2) it can predict eye movements over different time scales, and 3) it can evaluate the contribution of these two types of processing to performance. This implementation is important because the contribution of these two processes is known to vary as a function of search goals (free viewing vs. goal-directed search) and of the search strategy adopted by observers (active vs. passive search). Finally, another innovation of the toolbox is that it will be able to make predictions when targets are defined only in abstract terms, that is, when observers have only a vague description of the item they are supposed to find in the scene, a situation that is particularly challenging for current computer vision systems.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
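The two complexity parameters described above (target-template contrast and inter-item similarity) can be illustrated with a short sketch. The Python code below is a minimal, hypothetical illustration of that idea, not the TCAS toolbox itself; the function name priority_scores, the weights w_target and w_homog, and the use of Euclidean feature distances are assumptions made only for the example.

```python
# Hypothetical illustration only (not the TCAS toolbox API): score each scene
# item for attentional priority from (1) its contrast with the target template
# and (2) its similarity to the other items in the display.
import numpy as np

def priority_scores(item_features, target_template, w_target=1.0, w_homog=0.5):
    """Return one priority score per scene item (higher = more likely fixated).

    item_features  : (n_items, n_dims) array of feature values (e.g., color,
                     shape, texture dimensions).
    target_template: (n_dims,) feature vector of the searched-for object.
    w_target, w_homog: assumed weights for the two contrast terms.
    """
    items = np.asarray(item_features, dtype=float)
    template = np.asarray(target_template, dtype=float)
    n = len(items)

    # Target-template contrast: distance of each item from the template.
    # Large contrast means the item is easy to reject in peripheral vision.
    target_contrast = np.linalg.norm(items - template, axis=1)

    # Inter-item heterogeneity: mean distance of each item to all other items.
    # Low values mean the item sits in a homogeneous group that can be
    # rejected efficiently; high values mean it stands out locally.
    pairwise = np.linalg.norm(items[:, None, :] - items[None, :, :], axis=2)
    heterogeneity = pairwise.sum(axis=1) / max(n - 1, 1)

    # Items that resemble the target (low contrast) and stand out from their
    # neighbors (high heterogeneity) receive the highest priority.
    return -w_target * target_contrast + w_homog * heterogeneity

# Toy display: three mutually similar distractors plus one target-like item
# in an arbitrary 3-dimensional feature space.
features = [[0.9, 0.1, 0.2], [0.85, 0.15, 0.25], [0.9, 0.2, 0.2], [0.1, 0.8, 0.9]]
template = [0.1, 0.8, 0.9]
print(priority_scores(features, template))  # last item scores highest
```

In this toy run, the three mutually similar, target-unlike items receive low scores (candidates for parallel rejection), while the target-matching item receives the highest score, mirroring the summary's description of which regions should be ignored and which should attract the eyes.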
Project Outcomes
Journal articles (7)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
Prioritization in visual attention does not work the way you think it does.
- DOI: 10.1037/xhp0000887
- Publication date: 2021
- Journal:
- Impact factor: 0
- Authors: Ng, Gavin J.; Buetti, Simona; Patel, Trisha N.; Lleras, Alejandro
- Corresponding author: Lleras, Alejandro
Predicting how color and shape combine in the human visual system to direct attention
- DOI: 10.1038/s41598-019-56238-9
- Publication date: 2019-12-30
- Journal:
- Impact factor: 4.6
- Authors: Buetti, Simona; Xu, Jing; Lleras, Alejandro
- Corresponding author: Lleras, Alejandro
Incorporating the properties of peripheral vision into theories of visual search
- DOI: 10.1038/s44159-022-00097-1
- Publication date: 2022
- Journal:
- Impact factor: 0
- Authors: Lleras, Alejandro; Buetti, Simona; Xu, Zoe Jing
- Corresponding author: Xu, Zoe Jing
Distractor–distractor interactions in visual search for oriented targets explain the increased difficulty observed in nonlinearly separable conditions.
- DOI: 10.1037/xhp0000941
- Publication date: 2021
- Journal:
- Impact factor: 0
- Authors: Xu, Zoe; Lleras, Alejandro; Shao, Yujie; Buetti, Simona
- Corresponding author: Buetti, Simona
A target contrast signal theory of parallel processing in goal-directed search
- DOI: 10.3758/s13414-019-01928-9
- Publication date: 2020-02-05
- Journal:
- Impact factor: 1.7
- Authors: Lleras, Alejandro; Wang, Zhiyuan; Buetti, Simona
- Corresponding author: Buetti, Simona
Similar National Natural Science Foundation of China (NSFC) Projects
Regulatory mechanisms of stop templates on attention
- Approval number: 32371107
- Approval year: 2023
- Funding amount: CNY 500,000
- Project type: General Program
Controllable construction of interface-coupled metal-carbon composites via a polymer self-templating strategy and their performance in 3D-printed micro-capacitors
- Approval number: 52303342
- Approval year: 2023
- Funding amount: CNY 300,000
- Project type: Young Scientists Fund
Template "passivation" construction of high-temperature BiMeO3-based textured piezoelectric ceramics and regulation of their electrical properties
- Approval number: 52372106
- Approval year: 2023
- Funding amount: CNY 500,000
- Project type: General Program
Construction of molecularly imprinted polymers with "inside-out" template removal driven by photogenerated carrier transport, and their applications in detection
- Approval number: 22374113
- Approval year: 2023
- Funding amount: CNY 500,000
- Project type: General Program
Filter-array template design and image reconstruction for snapshot multispectral imaging
- Approval number: 62375211
- Approval year: 2023
- Funding amount: CNY 480,000
- Project type: General Program