Bayesian-centric Multimodal Hands-free Computer Interaction Technologies for People with Quadriplegia

Basic Information

  • Award number:
    2113485
  • Principal investigator:
  • Amount:
    $399,900
  • Host institution:
  • Host institution country:
    United States
  • Project type:
    Standard Grant
  • Fiscal year:
    2021
  • Funding country:
    United States
  • Project period:
    2021-08-01 to 2024-07-31
  • Project status:
    Completed

Project Abstract

Interacting with computers remains a challenge for people with quadriplegia. Assistive technologies that enable hands-free interaction with computers are primarily based on eye-gaze, voice, and orally controlled input modalities, each with its own strengths and weaknesses. However, these assistive technologies do not support the collaborative use of multiple input modalities, such as using eye gaze to quickly narrow down the region containing the intended target of a spoken command. The overarching goal of the proposed project is to research, design, and engineer intelligent, collaborative multimodal hands-free interaction techniques that synergistically combine inputs from different modalities to accurately predict and act on the user's interaction intent. Synergistic integration of the input modalities and intelligent inference of the user's interaction intent amplify the collective strengths of the individual modalities while mitigating their weaknesses. More importantly, these techniques will also learn user-specific interaction patterns from the user's interaction history to personalize the prediction of each individual user's intended action. Overall, the transformative assistive multimodal interaction system that will emerge from this project, SeeSayClick, will make it far easier for people with quadriplegia to create and consume digital information and thereby fully participate in the digitized economy. The resulting higher productivity of such users will lead to improved access to education and employment opportunities. Lastly, this project will serve as a platform for training students and exposing them to careers in assistive technology development and rehabilitation engineering.

The novelty of the envisioned SeeSayClick assistive technology will be the tight integration of multiple interaction modalities that work together synergistically to resolve ambiguities in interaction and, as a consequence, substantially reduce the interaction burden. The basis for the integration will be rooted in Bayesian inference methods for human-computer interaction. These methods provide a principled approach for combining multiple, possibly noisy, sources of information to predict the user's intended action, for example combining (1) the locational information from gaze with (2) the spoken commands and (3) prior knowledge from the interaction context to infer the intended target's precise location for selection and execution. By incorporating interaction history as a prior in the Bayesian methods, the proposed approach will also learn user-specific interaction patterns to personalize the prediction and further enhance its accuracy for each individual user. Besides cursor operations and command execution, the Bayesian methods will be coupled with a language model for text entry and editing operations. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
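To make the Bayesian integration concrete, below is a minimal sketch of the general idea, not the project's actual implementation: every target label, coordinate, click count, and noise parameter is invented for illustration. It scores each on-screen target t by the posterior P(t | gaze, speech) ∝ P(gaze | t) · P(speech | t) · P(t), treating the two modalities as conditionally independent given the intended target and using click frequencies from the interaction history as the prior P(t).

```python
import math

# Hypothetical on-screen targets: label -> (x, y) center in pixels.
# Every label, coordinate, and parameter here is invented for
# illustration; the project's actual models are not public.
TARGETS = {
    "Save": (120, 40),
    "Save As": (180, 40),
    "Share": (240, 40),
    "Search": (300, 40),
}

# Interaction history supplies the prior P(t): targets selected more
# often in the past start out more probable.
CLICK_COUNTS = {"Save": 12, "Save As": 8, "Share": 10, "Search": 10}


def gaze_likelihood(gaze_xy, target_xy, sigma=40.0):
    """P(gaze | t): isotropic Gaussian around the target center,
    modeling eye-tracker noise with standard deviation sigma pixels."""
    dx = gaze_xy[0] - target_xy[0]
    dy = gaze_xy[1] - target_xy[1]
    return math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))


def voice_likelihood(utterance, label):
    """P(speech | t): crude character-bigram overlap between the
    recognized utterance and the target label, standing in for a real
    speech-recognizer confusion model."""
    def bigrams(s):
        return {s[i:i + 2] for i in range(len(s) - 1)}
    a, b = bigrams(utterance.lower()), bigrams(label.lower())
    return (len(a & b) + 1) / (len(a | b) + 1)  # add-one smoothing


def infer_target(gaze_xy, utterance):
    """Posterior P(t | gaze, speech) over targets, assuming the two
    modalities are conditionally independent given the intent t."""
    total = sum(CLICK_COUNTS.values())
    scores = {
        label: gaze_likelihood(gaze_xy, xy)
        * voice_likelihood(utterance, label)
        * (CLICK_COUNTS[label] / total)  # prior from history
        for label, xy in TARGETS.items()
    }
    z = sum(scores.values())
    return {label: s / z for label, s in scores.items()}


if __name__ == "__main__":
    # Gaze lands between "Save" and "Save As"; the noisy utterance
    # disambiguates in favor of "Save As".
    posterior = infer_target(gaze_xy=(150, 42), utterance="save as")
    for label, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
        print(f"{label:8s} {p:.3f}")
```

In this toy run the gaze point is equidistant from two neighboring buttons and the history prior favors "Save", yet the spoken phrase tips the posterior to "Save As", showing how one modality resolves the ambiguity left by the others.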

Project Outcomes

Journal articles (2)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
GlanceWriter: Writing Text by Glancing Over Letters with Gaze
  • DOI:
    10.1145/3544548.3581269
  • Publication date:
    2023-04
  • Journal:
  • Impact factor:
    0
  • Authors:
    Wenzhe Cui;R. Liu;Zhi Li;Yifan Wang;Andrew Wang;Xia Zhao;S. Rashidian;Furqan Baig;I. Ramakrishnan;Fusheng Wang;Xiaojun Bi
  • Corresponding author:
    Wenzhe Cui;R. Liu;Zhi Li;Yifan Wang;Andrew Wang;Xia Zhao;S. Rashidian;Furqan Baig;I. Ramakrishnan;Fusheng Wang;Xiaojun Bi
EyeSayCorrect: Eye Gaze and Voice Based Hands-free Text Correction for Mobile Devices

Other Publications by Xiaojun Bi

Human Pose Estimation Based on Improved Hourglass Networks
Bayesian Hierarchical Pointing Models
A Wideband Balun Filter Based on Folded Ring Slotline Resonator and Dual-Feedback Stubs with >17.8 f0 Stopband Rejection
  • DOI:
    10.1109/tcsii.2022.3174954
  • Publication date:
    2022
  • Journal:
  • Impact factor:
    0
  • Authors:
    Shiming Guan;Xiaojun Bi;Qiang Ma;Qinfen Xu
  • Corresponding author:
    Qinfen Xu
NEUTRINO EMISSION FROM DARK MATTER ANNIHILATION/DECAY IN LIGHT OF COSMIC e± AND p̄ DATA
Automatically Generating and Improving Voice Command Interface from Operation Sequences on Smartphones

Other Grants by Xiaojun Bi

CHS: Small: Establishing Action Laws for Touch Interaction
  • Award number:
    1815514
  • Fiscal year:
    2018
  • Funding amount:
    $399,900
  • Project type:
    Standard Grant

Similar NSFC Grants

Research on Discriminative Analysis Methods for Patterns of Abnormal Brain Function in Multi-center Brain Imaging
  • Award number:
    62306327
  • Year approved:
    2023
  • Funding amount:
    CNY 300,000
  • Project type:
    Young Scientists Fund
Evolution Mechanisms and Development Patterns of the Polycentric Economic Structure of Urban Agglomerations from an Enterprise-Network Perspective: The Case of the Middle Yangtze River Urban Agglomeration
  • Award number:
    42101205
  • Year approved:
    2021
  • Funding amount:
    CNY 240,000
  • Project type:
    Young Scientists Fund
Research on the Polycentric Development Patterns and Synergy Mechanisms of Urban Commercial Space in the Era of Online Consumption
  • Award number:
  • Year approved:
    2020
  • Funding amount:
    CNY 550,000
  • Project type:
    General Program
Research on the Mechanisms, Pathways, and Countermeasures by Which Polycentric Spatial Development Patterns Promote China's Position in Global Value Chains
  • Award number:
    71903001
  • Year approved:
    2019
  • Funding amount:
    CNY 190,000
  • Project type:
    Young Scientists Fund

Similar Overseas Grants

Next Generation Tools For Genome-Centric Multimodal Data Integration In Personalised Cardiovascular Medicine
  • Award number:
    10104323
  • Fiscal year:
    2024
  • Funding amount:
    $399,900
  • Project type:
    EU-Funded
NEXT GENERATION TOOLS FOR GENOME-CENTRIC MULTIMODAL DATA INTEGRATION IN PERSONALISED CARDIOVASCULAR MEDICINE
  • Award number:
    10098097
  • Fiscal year:
    2024
  • Funding amount:
    $399,900
  • Project type:
    EU-Funded
Intersubjective AI-driven multimodal interaction for advanced user-centric human robot collaborative applications (Jarvis)
  • Award number:
    10099311
  • Fiscal year:
    2024
  • Funding amount:
    $399,900
  • Project type:
    EU-Funded
Integrated Passenger-Centric Planning of Multimodal Transport Networks (MultiModX)
  • Award number:
    10091678
  • Fiscal year:
    2023
  • Funding amount:
    $399,900
  • Project type:
    EU-Funded
The Influence of the Expansion of the Jiangnan Painting Market in the Late Ming Dynasty on the Spread of Literati Painting Styles: Focusing on Individualist Painters
  • Award number:
    22K00187
  • Fiscal year:
    2022
  • Funding amount:
    $399,900
  • Project type:
    Grant-in-Aid for Scientific Research (C)