Multisensory foundations of speech perception in infancy
Basic Information
- Approval number: 8575658
- Principal investigator:
- Amount: $149.6K
- Host institution:
- Host institution country: United States
- Project category:
- Fiscal year: 2013
- Funding country: United States
- Project period: 2013-08-12 to 2015-05-31
- Project status: Concluded
- Source:
- Keywords: Accounting, Acoustics, Adult, Age-Months, Air, Association Learning, Auditory, Auditory system, Birth, Bone Conduction, Development, Discrimination, Environment, Esthesia, Face, Feedback, Fetus, Financial compensation, Foundations, Gestures, Grant, Growth, Hearing, Human, Individual, Infant, Intervention, Joints, Knowledge, Language, Language Development, Language Disorders, Learning, Life, Literature, Maps, Mediating, Modality, Motor, Movement, Nature, Outcome, Pattern, Perception, Peripheral, Plant Roots, Pregnancy, Process, Production, Recording of previous events, Rest, Sensory, Signal Transduction, Speech, Speech Development, Speech Perception, Speech Sound, System, Tactile, Testing, Uterus, Vision, Visual, Voice, Work, auditory discrimination, base, classical conditioning, experience, infancy, language perception, motor impairment, multisensory, oral motor, preference, public health relevance, research study, sensory system, somatosensory, sound, speech processing, trait, visual information, visual motor
Project Summary
DESCRIPTION (provided by applicant): Infants are born with a preference for listening to speech over non-speech, and with a set of perceptual sensitivities that enable them to discriminate most of the speech sound differences used in the world's languages, thus preparing them to acquire any language. By 10 months of age, infants become experts at perceiving their native language. This involves improvements in discrimination of native consonant contrasts but, more importantly for this grant, a decline in discrimination of non-native
consonant distinctions. In the adult, speech perception is richly multimodal. What we hear is influenced by visual information in talking faces, by self-produced articulations, and even by external tactile stimulation. While speech perception is also multisensory in young infants, the genesis of this is debated. According to one view, multisensory perception is established through learned integration: seeing and hearing a particular speech sound allows learning of the commonalities in each. This grant proposes and tests the hypothesis that infant speech perception is multisensory without specific prior learning experience. Debates regarding the ontogeny of human language have centered on whether the perceptual building blocks of language are acquired through experience or are innate. Yet this nature vs. nurture controversy is rapidly being replaced by a much more nuanced framework. Here, it is proposed that the earliest-developing sensory system (likely somatosensory in the case of speech, including somatosensory feedback from the oral-motor movements first manifest in the fetus) provides an organization on which auditory speech can build once the peripheral auditory system comes online by 22 weeks' gestation. Heard speech, both the maternal voice via bone conduction and external (filtered) speech through the uterus, is organized in part by this somatosensory/motor foundation. At birth, when vision becomes available, seen speech maps onto this already established foundation. These interconnected perceptual systems thus provide a set of parameters for matching heard, seen, and felt speech at birth. Importantly, it is argued that these multisensory perceptual foundations are established for language-general perception: they set in place an organization that provides redundancy among the oral-motor gesture, the visible oral-motor movements, and the auditory percept of any speech sound.
Hence, specific learning of individual cross-modal matches is not required. Our thesis, then, is that while multisensory speech perception has a developmental history (and hence is not akin to an 'innate' starting point), the multisensory sensitivities should be in place without experience of specific speech sounds. Thus multisensory processing should be as evident for non-native, never-before-experienced speech sounds as it is for native and hence familiar ones. To test this hypothesis against the alternative hypothesis of learned integration, English-learning infants will be tested on non-native (unfamiliar) speech sound contrasts and compared to Hindi-learning infants, for whom these contrasts are native. Four sets of experiments, each using a multimodal Distributional Learning paradigm, are proposed. Infants will be tested at 6 months, an age at which they can still discriminate non-native speech sounds, and at 10 months, after this ability begins to decline. It is proposed that if speech perception is multisensory without specific
experience, the addition of matching visual, tactile, or motor information should facilitate discrimination of a non-native speech sound contrast at 10 months, while the addition of mismatching information should disrupt discrimination at 6 months. If multisensory speech perception is learned, this pattern should be seen only in Hindi-learning infants, for whom the contrasts are familiar and hence already intersensory. The Specific Aims are to test the influence of: 1) visual information on auditory speech perception (Experimental Set 1); 2) oral-motor gestures on auditory speech perception (Experimental Set 2); 3) oral-motor gestures on auditory-visual speech perception (Experimental Set 3); and 4) tactile information on auditory speech perception (Experimental Set 4). This work is of theoretical import for characterizing speech perception development in typically developing infants, and it provides a framework for understanding the roots of possible delay in infants born with a sensory or oral-motor impairment. The opportunities provided by, and constraints imposed by, an initial multisensory speech percept allow infants to rapidly acquire knowledge from their language-learning environment, while a deficit in one of the contributing modalities could compromise optimal speech and language development.
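The multimodal Distributional Learning paradigm named above exposes infants to a phonetic continuum whose presentation frequencies are either bimodal (evidence for two speech-sound categories) or unimodal (evidence for a single category). A minimal sketch of such a familiarization schedule follows; the 8-step continuum and per-step counts are illustrative assumptions, not the grant's actual stimulus parameters.

```python
# Hypothetical sketch of a distributional-learning familiarization schedule.
# An 8-step phonetic continuum is presented with either a bimodal frequency
# distribution (two peaks -> supports two categories) or a unimodal one
# (one peak -> supports a single category). Counts are illustrative only.
import random

BIMODAL = [1, 4, 8, 3, 3, 8, 4, 1]    # peaks near steps 3 and 6
UNIMODAL = [1, 3, 4, 8, 8, 4, 3, 1]   # single peak at the continuum midpoint


def familiarization_sequence(counts, seed=0):
    """Expand per-step presentation counts into a shuffled list of trials."""
    trials = [step for step, n in enumerate(counts, start=1) for _ in range(n)]
    random.Random(seed).shuffle(trials)
    return trials


# Both distributions yield the same total exposure; only the frequency
# shape over the continuum differs between conditions.
bimodal_trials = familiarization_sequence(BIMODAL)
unimodal_trials = familiarization_sequence(UNIMODAL)
```

Equal trial totals across conditions matter here: any discrimination difference at test can then be attributed to the shape of the distribution rather than the amount of exposure.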
Project Outcomes
- Journal articles: 0
- Monographs: 0
- Research awards: 0
- Conference papers: 0
- Patents: 0
Other grants by JANET F. WERKER
Multisensory foundations of speech perception in infancy
- Approval number: 8720041
- Fiscal year: 2013
- Funding amount: $149.6K
- Project category:
Similar NSFC grants
Bioinspired regulation of droplet bounce counts on metal surfaces based on in-situ acoustic testing
- Approval number: 52350039
- Year approved: 2023
- Funding amount: ¥800K
- Project category: Special fund project
Research on acoustic monitoring of gas-liquid states in aerospace cryogenic propellant loading systems
- Approval number: 62373276
- Year approved: 2023
- Funding amount: ¥500K
- Project category: General Program
Mechanisms by which acoustic signals modulate the speech-feedback brain network in compensatory articulation rehabilitation for cleft palate
- Approval number: 82302874
- Year approved: 2023
- Funding amount: ¥300K
- Project category: Young Scientists Fund
Topological physics in non-Hermitian acoustic lattice systems
- Approval number: 12374418
- Year approved: 2023
- Funding amount: ¥530K
- Project category: General Program
Strategic research on the development of marine acoustic functional materials
- Approval number: 52342304
- Year approved: 2023
- Funding amount: ¥300K
- Project category: Special project
Similar overseas grants
The Noisy Life of the Musician: Implications for Healthy Brain Aging
- Approval number: 10346105
- Fiscal year: 2022
- Funding amount: $149.6K
- Project category:
Continuous Photoacoustic Monitoring of Neonatal Stroke in Intensive Care Unit
- Approval number: 10548689
- Fiscal year: 2022
- Funding amount: $149.6K
- Project category:
Peripheral and central contributions to auditory temporal processing deficits and speech understanding in older cochlear implantees
- Approval number: 10444172
- Fiscal year: 2022
- Funding amount: $149.6K
- Project category:
SPEECH PERCEPTION AND AUDITORY ABILITIES IN INFANTS, CHILDREN, AND ADULTS WITH DOWN SYNDROME
- Approval number: 10420315
- Fiscal year: 2022
- Funding amount: $149.6K
- Project category:
Effects of Auditory Neuropathy and Cochlear Hearing Loss on Speech Perception
- Approval number: 10458872
- Fiscal year: 2022
- Funding amount: $149.6K
- Project category: