Sign Finding and Reading (SFAR) on GPU-Accelerated Mobile Devices
Basic Information
- Grant number: 8779810
- Principal investigator:
- Amount: $229,700
- Host institution:
- Host institution country: United States
- Project category:
- Fiscal year: 2014
- Funding country: United States
- Project period: 2014-09-30 to 2017-02-28
- Project status: Completed
- Source:
- Keywords: Acceleration, Access to Information, Algorithms, Antirrhinum, Architecture, Back, Code, Computer Vision Systems, Distant, Environment, Equipment, Eye, Feedback, Hybrids, Licensing, Light, Literature, Maps, Modification, Performance, Phase, Population, Printing, Process, Quality of life, Reader, Reading, Research, Research Institute, Risk, Running, SKI gene, Self-Help Devices, Services, Solutions, System, Techniques, Telephone, Test Result, Testing, Text, Time, Vision, Visually Impaired Persons, assistive device/technology, authority, base, blind, design, experience, handheld mobile device, improved, next generation, operation, phase 1 study, public health relevance, volunteer
Project Abstract
DESCRIPTION (provided by applicant): The inability to access information on printed signs directly impacts the mobility independence of the over 1.2 million blind persons in the U.S. Many previously proposed technological solutions to this problem either required physical modifications to the environment (talking signs or the placement of coded markers) or required the user to carry specialized computational equipment, which can be stigmatizing. A recently pursued strategy is to use the computational capabilities of commercially available, non-stigmatizing smart-phones together with techniques from computer vision to allow blind persons to read signs at a distance. However, despite the fact that sophisticated algorithms exist to recognize and extract sign text from cluttered video input (as evidenced, for example, by mapping services such as Google Maps automatically locating and blurring only license-plate text in street-view imagery), current mobile solutions for reading sign text at a distance perform relatively poorly. This poor performance is largely because, until recently, smart-phone processors simply could not execute state-of-the-art computer-vision text extraction and recognition algorithms at real-time rates, which forced previous mobile sign readers to rely on older, simpler, less effective algorithms. Next-generation smart-phones run on fundamentally different, hybrid processor architectures (such as the Tegra 4 and Snapdragon 800, both released in 2013) with dedicated embedded graphics processing units (GPUs) and multi-core CPUs, which make them well suited to high-performance, vision-heavy computation. In this study, we propose to develop a smart-phone-based system for finding and reading signs at a distance that significantly outperforms previous such readers by implementing state-of-the-art text extraction algorithms on modern smart-phone hybrid GPU/CPU processor architectures. In Phase I, the proposed system will be developed and tested with blind users. In Phase II, feedback from user testing will be integrated into the system design, and performance will be improved to permit operation in extremely challenging (such as low-light) environments.
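The technical thrust of the proposal is offloading scene-text detection to the phone's embedded GPU while the CPU handles the rest of the pipeline. Purely as an illustration of that idea, and not a description of the project's actual implementation, the sketch below uses OpenCV's MSER character-candidate detector (a common building block in scene-text extraction systems of this period), with cv2.UMat standing in for GPU offload via OpenCV's transparent OpenCL path; the library choice, function names, and filtering thresholds are all assumptions made for illustration.

```python
# Minimal illustrative sketch (assumed stack: Python + OpenCV); not the project's pipeline.
import cv2


def find_text_candidates(frame_bgr):
    """Return (x, y, w, h) bounding boxes of likely text regions in one video frame."""
    # Wrapping the frame in a UMat lets cvtColor run through OpenCV's transparent
    # OpenCL path, i.e. on a GPU when one is available, with a silent CPU fallback.
    gray = cv2.cvtColor(cv2.UMat(frame_bgr), cv2.COLOR_BGR2GRAY).get()

    # MSER (maximally stable extremal regions): a classic character-candidate
    # detector used by many scene-text extraction systems.
    mser = cv2.MSER_create()
    _, bboxes = mser.detectRegions(gray)

    # Crude geometric filtering to discard regions unlikely to be characters;
    # the thresholds below are arbitrary illustration values.
    return [(x, y, w, h) for (x, y, w, h) in bboxes
            if 8 < h < 200 and 0.1 < w / float(h) < 10.0]


if __name__ == "__main__":
    cap = cv2.VideoCapture(0)  # stand-in for the phone's camera stream
    ok, frame = cap.read()
    if ok:
        print(len(find_text_candidates(frame)), "candidate text regions")
    cap.release()
```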
Project Outcomes
Journal articles (0)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
Similar NSFC Grants
Research on Adaptive Access Control Mechanisms for Social Networks Based on Multimodal Sensitive Information Recognition
- Grant number: 62302540
- Year approved: 2023
- Funding amount: CNY 300,000
- Project category: Young Scientists Fund
Research on Verifiability and Privacy Protection in Supply-Chain Product Information Search Systems
- Grant number: 61902124
- Year approved: 2019
- Funding amount: CNY 270,000
- Project category: Young Scientists Fund
Research on Key Technologies for Access Control and Content Validity Protection in Information-Centric Networking
- Grant number:
- Year approved: 2019
- Funding amount: CNY 600,000
- Project category: General Program
Intelligent Data Access Control Mechanisms for Mobile Social Networks
- Grant number: 61802083
- Year approved: 2018
- Funding amount: CNY 210,000
- Project category: Young Scientists Fund
Research on Preventing Information Inference in Cryptography-Based Access Control for Outsourced Data
- Grant number: 61862059
- Year approved: 2018
- Funding amount: CNY 360,000
- Project category: Regional Science Fund Program
Similar Overseas Grants
Smartphone app to examine effects of cannabis use on driving behavior
- Grant number: 10458349
- Fiscal year: 2022
- Funding amount: $229,700
- Project category:
PREMIERE: A PREdictive Model Index and Exchange REpository
- Grant number: 10668938
- Fiscal year: 2019
- Funding amount: $229,700
- Project category:
CRCNS: Bayesian inference in spiking sensory neurons
- Grant number: 9124841
- Fiscal year: 2014
- Funding amount: $229,700
- Project category:
Machine Learning and Natural Language Processing for Biomedical Applications
- Grant number: 10927050
- Fiscal year:
- Funding amount: $229,700
- Project category: