Eye movements during real-world visual search: A behavioral & computational study
Basic Information
- Grant Number: 8441609
- Principal Investigator:
- Amount: $253,600
- Host Institution:
- Host Institution Country: United States
- Project Category:
- Fiscal Year: 2002
- Funding Country: United States
- Project Period: 2002-09-05 to 2015-02-28
- Project Status: Completed
- Source:
- Keywords: Accounting, Address, Affect, Appearance, Appetitive Behavior, Attention, Behavior, Behavioral, Behavioral Model, Behavioral Research, Categories, Code, Cognition Disorders, Color, Communities, Complex, Computer Simulation, Data, Diagnosis, Disease, Emergency Situation, Emotional disorder, Event, Eye Movements, Goals, Human, Image, Joints, Knowledge, Label, Laboratories, Laboratory Study, Lead, Learning, Left, Literature, Location, Manuals, Methods, Modeling, Molds, Motor, Movement, Neurons, Pattern, Perceptual Disorders, Population, Process, Public Health, Research, Research Personnel, Retina, Shapes, Simulate, Sorting - Cell Movement, Staging, Stimulus, System, Techniques, Telephone, Text, Time, Uncertainty, Ursidae Family, Visual, Visual attention, Visual impairment, Work, aged, base, behavior prediction, computer studies, effective therapy, field study, frontier, gaze, improved, instrument, interest, neuropsychological, oculomotor behavior, payment, response, sample fixation, theories, tool, visual search
PROJECT SUMMARY / ABSTRACT
A joint behavioral/modeling approach is used to better understand the top-down constraints that guide overt
visual attention in realistic contexts. In previous work we developed a biologically-plausible model of eye
movements during search that used oriented and color-selective linear filters, population averaging over time,
and an artificial retina to represent stimuli of arbitrary complexity. The simulated fixation-by-fixation behavior of
this model compared well to human behavior, using stimuli ranging from Os and Qs to fully realistic scenes.
However, this model was limited in that it had to be shown the target's exact appearance, and it could not exploit
scene context to constrain attention to likely target locations. Consequently, it is largely unknown how people
shift their attention as they look for scene-constrained targets or targets that are defined categorically.
These limitations are addressed in six studies. Studies 1-2 explore how people use scene context to narrow
their search for a specific target in realistic scenes. A text precue provides information about the target's location
in relation to a region of the scene ("in the field"; Study 1) or a scene landmark ("next to the blue building"; Study
2). Behavioral work quantifies the effects of these informational manipulations on search guidance;
computational work implements the behavioral scene constraints and integrates them into the existing search
model. Studies 3-6 address the relationship between search guidance and the level of detail in a target's
description. Study 3 builds on previous work by designating targets either categorically (e.g., "find the teddy
bear") or through use of a preview (e.g., a picture of a specific teddy bear), but increases the number of target
categories to determine the boundary conditions on categorical search. Study 4 asks whether categorical targets
are coded at the basic or subordinate levels, and Study 5 analyzes the distractors fixated during search to
determine the features used to code these categorical targets. In Study 6 we use text labels to vary the degree
of information in a target precue (e.g., a work boot target might be described as "footwear", a "boot", or a "tan
work boot with red laces"). Study 7 describes the sorts of questions that can be asked once scene constraints
and categorical target descriptions are integrated under a single theoretical framework, and Study 8 points to an
entirely new research direction made possible by the modeling techniques that will be developed for this project.
All of these studies are synergistic in that model predictions are used to guide behavioral studies, which in turn
produce the data needed to refine the model and to make even more specific behavioral predictions. The
project's long term objective is to obtain an understanding of how people allocate their overt visual attention in
realistic contexts, specifically in terms of how partial information about an object's location in a scene or its
appearance can be used to acquire targets in a search task. This understanding is expressed in the form of a
computational model, one that can now use simple spatial relations and the visual features of learned target
classes to acquire semantically-defined targets.
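To make the abstract's high-level model description concrete, the following Python sketch illustrates one way a "target map" could be built from oriented and color-selective linear filter responses, smoothed as a crude stand-in for population averaging, and used to propose a next fixation. This is a hedged illustration only: the function names (gabor_kernel, feature_maps, next_fixation), the seven-channel filter bank, the cosine-style similarity, and the Gaussian smoothing are assumptions introduced here, not the project's actual implementation, and the artificial-retina transform mentioned in the abstract is omitted.

```python
# Minimal illustrative sketch (assumptions, NOT the project's model):
# pool oriented + color filter responses over a target image, correlate that
# feature vector with the same features over the scene, smooth the resulting
# map, and take its peak as a candidate next fixation.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import fftconvolve

def gabor_kernel(size=15, theta=0.0, wavelength=6.0, sigma=4.0):
    """A simple oriented linear filter (Gabor)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def feature_maps(rgb):
    """Per-pixel responses: four orientations on luminance plus raw R, G, B."""
    lum = rgb.mean(axis=2)
    maps = [fftconvolve(lum, gabor_kernel(theta=t), mode="same")
            for t in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
    maps += [rgb[..., c] for c in range(3)]  # crude color selectivity
    return np.stack(maps, axis=0)            # shape: (7, H, W)

def next_fixation(scene_rgb, target_rgb, smooth_sigma=20.0):
    """Correlate pooled target features with scene features; return the peak
    of the smoothed similarity map as a candidate next fixation."""
    scene_f = feature_maps(scene_rgb)
    target_vec = feature_maps(target_rgb).mean(axis=(1, 2))   # pooled target features
    sim = np.tensordot(target_vec, scene_f, axes=(0, 0))      # dot product at every pixel
    sim /= np.linalg.norm(target_vec) * np.linalg.norm(scene_f, axis=0) + 1e-8
    sim = gaussian_filter(sim, smooth_sigma)                   # stand-in for population averaging
    return np.unravel_index(np.argmax(sim), sim.shape), sim
```

A fuller treatment along the lines described in the abstract would additionally pass the scene through an artificial retina centered on the current fixation, so that features are represented more coarsely with distance from fixation, before the similarity map is computed.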
Project Outcomes
Journal Articles (33)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)
Clutter perception is invariant to image size.
- DOI: 10.1016/j.visres.2015.04.017
- Publication Date: 2015-11
- Journal:
- Impact Factor: 1.8
- Authors: Zelinsky GJ; Yu CP
- Corresponding Author: Yu CP
Modeling visual clutter perception using proto-object segmentation.
- DOI: 10.1167/14.7.4
- Publication Date: 2014-06
- Journal:
- Impact Factor: 1.8
- Authors: Chen-Ping Yu; D. Samaras; G. Zelinsky
- Corresponding Author: Chen-Ping Yu; D. Samaras; G. Zelinsky
New Evidence for Strategic Differences between Static and Dynamic Search Tasks: An Individual Observer Analysis of Eye Movements.
- DOI: 10.3389/fpsyg.2013.00008
- Publication Date: 2013
- Journal:
- Impact Factor: 3.8
- Authors: Dickinson CA; Zelinsky GJ
- Corresponding Author: Zelinsky GJ
Modeling guidance and recognition in categorical search: bridging human and computer object detection.
- DOI: 10.1167/13.3.30
- Publication Date: 2012-08
- Journal:
- Impact Factor: 1.8
- Authors: G. Zelinsky; Yifan Peng; A. Berg; D. Samaras
- Corresponding Author: G. Zelinsky; Yifan Peng; A. Berg; D. Samaras
Modelling eye movements in a categorical search task.
- DOI: 10.1098/rstb.2013.0058
- Publication Date: 2013
- Journal:
- Impact Factor: 0
- Authors: Zelinsky, Gregory J; Adeli, Hossein; Peng, Yifan; Samaras, Dimitris
- Corresponding Author: Samaras, Dimitris
Other Grants by GREGORY J ZELINSKY
Eye movements during real-world visual search
- Grant Number: 6507586
- Fiscal Year: 2002
- Funding Amount: $253,600
- Project Category:
Eye movements during real-world visual search
- Grant Number: 6778395
- Fiscal Year: 2002
- Funding Amount: $253,600
- Project Category:
Eye movements during real-world visual search: A behavioral & computational study
- Grant Number: 8035978
- Fiscal Year: 2002
- Funding Amount: $253,600
- Project Category:
Eye movements during real-world visual search: A behavioral & computational study
- Grant Number: 7789584
- Fiscal Year: 2002
- Funding Amount: $253,600
- Project Category:
Eye movements during real-world visual search: A behavioral & computational study
- Grant Number: 7653265
- Fiscal Year: 2002
- Funding Amount: $253,600
- Project Category:
Eye movements during real-world visual search: A behavioral & computational study
- Grant Number: 8235077
- Fiscal Year: 2002
- Funding Amount: $253,600
- Project Category:
Eye movements during real-world visual search
- Grant Number: 6931237
- Fiscal Year: 2002
- Funding Amount: $253,600
- Project Category:
Eye movements during real-world visual search
- Grant Number: 7118721
- Fiscal Year: 2002
- Funding Amount: $253,600
- Project Category:
Eye movements during real-world visual search
- Grant Number: 6655672
- Fiscal Year: 2002
- Funding Amount: $253,600
- Project Category:
Similar NSFC Grants (National Natural Science Foundation of China)
Research on Neuromorphic Visual Object Recognition Algorithms Driven by Spatiotemporal Sequences
- Grant Number: 61906126
- Approval Year: 2019
- Funding Amount: ¥240,000
- Project Category: Young Scientists Fund Project
Ontology-Driven Spatial Semantic Modeling of Address Data and Address Matching Methods
- Grant Number: 41901325
- Approval Year: 2019
- Funding Amount: ¥220,000
- Project Category: Young Scientists Fund Project
Research on Optimized Design of Address Mapping Tables and Memory Access Optimization for Large-Capacity Solid-State Drives
- Grant Number: 61802133
- Approval Year: 2018
- Funding Amount: ¥230,000
- Project Category: Young Scientists Fund Project
Research on Memory Safety Defense Techniques Against Memory Attack Targets
- Grant Number: 61802432
- Approval Year: 2018
- Funding Amount: ¥250,000
- Project Category: Young Scientists Fund Project
Research on IP Address-Driven Multipath Routing and Traffic Transmission Control
- Grant Number: 61872252
- Approval Year: 2018
- Funding Amount: ¥640,000
- Project Category: General Program
Similar Overseas Grants
Climate Change Effects on Pregnancy via a Traditional Food
- Grant Number: 10822202
- Fiscal Year: 2024
- Funding Amount: $253,600
- Project Category:
Differences in Hospital Nursing Resources among Black-Serving Hospitals as a Driver of Patient Outcomes Disparities
- Grant Number: 10633905
- Fiscal Year: 2023
- Funding Amount: $253,600
- Project Category:
Competitive Bidding in Medicare and the Implications for Home Oxygen Therapy in COPD
- Grant Number: 10641360
- Fiscal Year: 2023
- Funding Amount: $253,600
- Project Category:
Alzheimer's Disease and Related Dementia-like Sequelae of SARS-CoV-2 Infection: Virus-Host Interactome, Neuropathobiology, and Drug Repurposing
- Grant Number: 10661931
- Fiscal Year: 2023
- Funding Amount: $253,600
- Project Category:
NeuroMAP Phase II - Recruitment and Assessment Core
- Grant Number: 10711136
- Fiscal Year: 2023
- Funding Amount: $253,600
- Project Category: