Neural Systems in Auditory and Speech Categorization
Basic Information
- Grant Number: 10194447
- Principal Investigator:
- Amount: $286,300
- Host Institution:
- Host Institution Country: United States
- Project Category:
- Fiscal Year: 2017
- Funding Country: United States
- Project Period: 2017-07-01 to 2023-06-30
- Project Status: Completed
- Source:
- Keywords: Acoustics; Adult; Articulation; Auditory; Auditory area; Behavior; Behavioral; Biological Models; Brain; Categories; Classification; Corpus striatum structure; Cues; Data; Dimensions; Dorsal; Electrocorticogram; Feedback; Foundations; Functional Magnetic Resonance Imaging; Goals; Head; Hippocampus (Brain); Individual; Knowledge; Lead; Learning; Location; Maps; Measurement; Measures; Mediating; Methods; Modality; Monitor; Neurobiology; Neuronal Plasticity; Neurophysiology - biologic function; Outcome; Participant; Pathway interactions; Performance; Prefrontal Cortex; Process; Rewards; Role; Sensory; Signal Transduction; Speech; Speech Perception; Speech Sound; Stimulus; Stream; Structure; Superior temporal gyrus; System; Temporal Lobe; Testing; Time; Training; Ventral Striatum; Visual; base; caudate nucleus; density; design; experience; experimental study; innovation; learning outcome; multimodality; neural circuit; neural model; neurobiological mechanism; neuroimaging; neurophysiology; non-Native; novel; putamen; relating to nervous system; response; sound; spatiotemporal; speech accuracy
Project Summary
Using complementary multi-modal neuroimaging methods (functional magnetic resonance imaging (fMRI) and
electrocorticography (ECoG)) in conjunction with rigorous behavioral approaches, we will examine the role of
multiple cortico-striatal and sensory cortical networks in the acquisition and automatization of novel non-
speech and speech categories in the mature adult brain. We test the scientific premise of a dual-learning
systems (DLS) model by probing neural function using fMRI or ECoG during the process of feedback-dependent
category learning. In contrast to popular single-learning system (SLS) approaches, DLS posits that two neurally-
dissociable cortico-striatal systems are critical to speech learning: an explicit, sound-to-rule cortico-striatal
system, that maps sounds onto rules, and an implicit, sound-to-reward cortico-striatal system that implicitly
associates sounds with actions that lead to immediate reward. Per DLS, the two systems contribute to the
emerging expertise of the learner. Via closed loops, the highly plastic cortico-striatal systems ‘train’ key, less-
labile temporal lobe networks to categorize information by validated rules or rewards. Once categories are
learned to the point of automaticity, cortico-striatal networks are no longer required to mediate behavior.
Instead, abstract categorical information within the temporal cortex drives highly accurate speech categorization.
In Aim 1.1, we use fMRI to examine the relative dominance of the two cortico-striatal networks in learning
multidimensional non-speech category structures that are experimenter-constrained to either rely on rules (rule-
based, RB), or on implicit integration of multidimensional cues (information-integration, II). We predict that key
regions of the sound-to-rule network, the prefrontal cortex (PFC), hippocampus, and caudate nucleus show
greater activation during RB, relative to II learning; in contrast, key regions within the sound-to-reward network,
the putamen and the ventral striatum show greater activation during II, relative to RB learning. In Aims 1.2 and
1.3, we leverage the temporal precision of ECoG measurements from high-density grids in temporal, PFC, and
Hippocampal regions to examine the extent to which temporal lobe representational changes during RB learning
are an outcome of error-monitoring processes within the PFC and hippocampus. In Aim 2, we probe neural
function using fMRI or ECoG to assess network and representational changes during the acquisition of non-
native supra-segmental and segmental categories to native-like performance levels. We predict that early
‘novice’ speech acquisition involves sound-to-rule mapping, while later ‘experienced’ acquisition involves
sound-to-reward mapping. In contrast, only cortical networks are active at the point of ‘native-like automaticity’ in categorization.
Using innovative single-trial classification and network-level decoding analyses on ECoG data, we examine
learning-induced changes in speech representation within the temporal lobe. Further, we examine the extent to
which error monitoring processes within the PFC and the hippocampus drive emergent temporal lobe
representations of novel speech categories.
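The “single-trial classification” analyses mentioned above can be illustrated with a minimal, self-contained sketch. This is not the project’s actual pipeline: the data here are synthetic (simulated per-electrode high-gamma power features standing in for ECoG recordings), and a leave-one-out nearest-centroid classifier stands in for whatever decoders the project employs. All variable names and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-trial decoding of speech-category labels from ECoG-like
# features. Each trial is summarized as a vector of per-electrode high-gamma
# power (synthetic data, purely for illustration).
n_trials, n_electrodes = 200, 32
labels = rng.integers(0, 2, size=n_trials)   # two speech categories

# Simulate a category-dependent mean shift on a subset of electrodes,
# standing in for temporal-lobe sites that carry category information.
signal = np.zeros(n_electrodes)
signal[:8] = 1.0                             # informative electrodes
X = rng.normal(size=(n_trials, n_electrodes)) + np.outer(labels, signal)

# Leave-one-out nearest-centroid decoding: classify each held-out trial by
# the closer class centroid computed from the remaining trials.
correct = 0
for i in range(n_trials):
    mask = np.ones(n_trials, dtype=bool)
    mask[i] = False
    c0 = X[mask & (labels == 0)].mean(axis=0)
    c1 = X[mask & (labels == 1)].mean(axis=0)
    pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
    correct += pred == labels[i]

accuracy = correct / n_trials
print(f"single-trial decoding accuracy: {accuracy:.2f}")
```

Above-chance accuracy on held-out trials is the usual evidence that a neural population carries category information; in the project, the analogous question is how such decodability in temporal-lobe electrodes changes over the course of learning.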
Project Outcomes
- Journal articles: 0
- Monographs: 0
- Research awards: 0
- Conference papers: 0
- Patents: 0
Other Publications by Bharath Chandrasekaran
Other Grants by Bharath Chandrasekaran
SYMPOSIUM ON COGNITIVE AUDITORY NEUROSCIENCE (SCAN)
- Grant Number: 10078266
- Fiscal Year: 2020
- Funding Amount: $286,300
- Project Category:
SYMPOSIUM ON COGNITIVE AUDITORY NEUROSCIENCE (SCAN)
- Grant Number: 9914387
- Fiscal Year: 2020
- Funding Amount: $286,300
- Project Category:
SYMPOSIUM ON COGNITIVE AUDITORY NEUROSCIENCE (SCAN)
- Grant Number: 10319585
- Fiscal Year: 2020
- Funding Amount: $286,300
- Project Category:
Online modulation of auditory brainstem responses to speech
- Grant Number: 8698087
- Fiscal Year: 2014
- Funding Amount: $286,300
- Project Category:
Online modulation of auditory brainstem responses to speech
- Grant Number: 9040920
- Fiscal Year: 2014
- Funding Amount: $286,300
- Project Category:
Online modulation of auditory brainstem responses to speech
- Grant Number: 8827317
- Fiscal Year: 2014
- Funding Amount: $286,300
- Project Category:
Similar NSFC Grants
Mechanistic study of monocyte-derived S100A8/A9 amplifying the neutrophil inflammatory response in the onset and progression of adult-onset Still's disease
- Grant Number: 82373465
- Year Approved: 2023
- Funding Amount: ¥490,000
- Project Category: General Program
Plasticity of language function in patients with adult-type diffuse glioma
- Grant Number: 82303926
- Year Approved: 2023
- Funding Amount: ¥300,000
- Project Category: Young Scientists Fund
MRI-fused multi-omics features for quantifying the immune microenvironment of high-grade adult-type diffuse glioma and predicting postoperative recurrence risk
- Grant Number: 82302160
- Year Approved: 2023
- Funding Amount: ¥300,000
- Project Category: Young Scientists Fund
Role and mechanism of the SERPINF1/SRSF6/B7-H3 signaling pathway in immune escape in adult B-ALL
- Grant Number: 82300208
- Year Approved: 2023
- Funding Amount: ¥300,000
- Project Category: Young Scientists Fund
Deep-learning-assisted design of surgical plans for adult spinal deformity based on dynamic information
- Grant Number: 82372499
- Year Approved: 2023
- Funding Amount: ¥490,000
- Project Category: General Program
Similar Overseas Grants
Behavioral and neural characteristics of adaptive speech motor control
- Grant Number: 10562043
- Fiscal Year: 2023
- Funding Amount: $286,300
- Project Category:
Expanding articulatory information from ultrasound imaging of speech using MRI-based image simulations and audio measurements
- Grant Number: 10537976
- Fiscal Year: 2022
- Funding Amount: $286,300
- Project Category:
FACTORS INFLUENCING AUDIOVISUAL SPEECH BENEFIT IN CHILDREN WITH HEARING LOSS
- Grant Number: 10634697
- Fiscal Year: 2022
- Funding Amount: $286,300
- Project Category:
FACTORS INFLUENCING AUDIOVISUAL SPEECH BENEFIT IN CHILDREN WITH HEARING LOSS
- Grant Number: 10515800
- Fiscal Year: 2022
- Funding Amount: $286,300
- Project Category:
A multidimensional study on articulation deficits in Parkinson's disease
- Grant Number: 10388940
- Fiscal Year: 2021
- Funding Amount: $286,300
- Project Category: