Collaborative Research: CCSS: Continuous Facial Sensing and 3D Reconstruction via Single-ear Wearable Biosensors

Basic Information

  • Award Number:
    2401415
  • Principal Investigator:
  • Amount:
    $250,000
  • Host Institution:
  • Host Institution Country:
    United States
  • Project Type:
    Standard Grant
  • Fiscal Year:
    2023
  • Funding Country:
    United States
  • Project Period:
    2023-10-01 to 2025-01-31
  • Project Status:
    Ongoing

Project Abstract

Facial landmark tracking and 3D reconstruction are popular and well-studied fields at the intersection of computer vision, graphics, and machine learning. Despite their countless applications, such as human-computer interaction, facial expression analysis, and emotion recognition, existing camera-based solutions require users to be confined to a particular location and to face a camera at all times without occlusions. This highly constrained setting prevents them from being deployed in many emerging application scenarios in which users are likely to engage in three-dimensional body/head movements. This project aims to provide a new form of single-ear biosensing system that can unobtrusively, continuously, and reliably sense entire facial and eye movements, track major facial landmarks, and further render 3D facial animations via cross-modal transfer learning. The research outcome of this project will push the limits of ear-worn biosensing to enable rich sensing capabilities that are currently infeasible, such as camera-free facial landmark tracking and real-time 3D facial reconstruction. Relying on the learning model studied in this project, the project team is building two representative applications, i.e., facial sensing for mobile virtual reality (VR)/augmented reality (AR), and speech enhancement using the reconstructed facial landmark dynamics. The project will substantially advance wearable and biosensing techniques as well as transfer learning across multiple sensing modalities.

The project is bridging the gap between the anatomical and muscular knowledge of the human face and electrical and computational modeling techniques to develop analytical models, hardware, and software libraries for sensing face-based physiological signals. In particular, the project team is building a low-power, low-noise circuit to sense entire facial muscle activities using single-ear biosensors. The team is also developing a compression algorithm that activates the sensing and communication components only when facial changes are detected, which can significantly increase the battery lifetime and reduce the computational cost of the wearable system. Moreover, to enable camera-free 3D facial reconstruction, the team is developing a cross-modal learning model that consists of a visual facial landmark detection network and a biosignal network, in which knowledge embodied in the vision model can be transferred to the biosignal domain during training. To further enhance the model's robustness, the team is integrating a third modality (i.e., inertial sensors) into the cross-modal learning model and exploring domain adaptation and continual learning techniques. Additionally, the team is exploring model compression and acceleration techniques to enable on-device deployment on existing head-worn devices such as VR/AR headsets.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
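The abstract describes a compression scheme that keeps the sensing and communication components asleep until facial activity is detected. As a rough illustration of that idea (not the project's actual algorithm), the Python sketch below gates fixed-length windows of biosignal samples on a simple RMS-energy threshold; the window length, energy statistic, and threshold ratio are assumptions made for illustration only.

```python
# A minimal, illustrative sketch of event-triggered duty cycling for an
# ear-worn biosensor: only windows whose short-term energy rises above a
# resting baseline are passed on for transmission/processing.
import numpy as np

def facial_change_detected(window, baseline_rms, ratio=1.5):
    """Flag a facial change when the window's RMS energy exceeds the
    resting baseline by a configurable ratio (both values assumed)."""
    rms = np.sqrt(np.mean(np.square(window)))
    return rms > ratio * baseline_rms

def duty_cycle(stream, baseline_rms, window_len=64):
    """Yield only the windows worth waking the radio / model for;
    everything else is dropped so the system stays mostly idle."""
    buf = []
    for sample in stream:
        buf.append(sample)
        if len(buf) == window_len:
            window = np.asarray(buf)
            buf = []
            if facial_change_detected(window, baseline_rms):
                yield window  # wake communication / run the landmark model
```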
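The cross-modal learning model pairs a visual facial-landmark detection network with a biosignal network so that knowledge in the vision model transfers to the biosignal domain during training. The PyTorch sketch below is a minimal teacher-student version of that setup, under assumed shapes (8 biosignal channels, 68 landmarks) and an assumed regression loss; it is not the project's published architecture.

```python
# Illustrative cross-modal transfer: a frozen vision teacher labels
# synchronized video frames with 2D landmarks, and a biosignal student
# learns to regress the same landmarks from ear-worn signals alone.
import torch
import torch.nn as nn

class BiosignalLandmarkNet(nn.Module):
    """Student: maps a window of multi-channel ear biosignals to 2D landmarks."""
    def __init__(self, n_channels=8, n_landmarks=68, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(64, hidden, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(hidden, n_landmarks * 2)  # (x, y) per landmark

    def forward(self, x):                # x: (batch, channels, time)
        z = self.encoder(x).squeeze(-1)  # (batch, hidden)
        return self.head(z).view(x.size(0), -1, 2)

def train_step(student, teacher, biosignal, frames, optimizer, loss_fn=nn.MSELoss()):
    """One transfer step: teacher produces pseudo-labels from video,
    student regresses them from the synchronized biosignal window."""
    with torch.no_grad():
        target = teacher(frames)         # (batch, n_landmarks, 2) pseudo-labels
    pred = student(biosignal)
    loss = loss_fn(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the camera is used only to supervise training, the biosignal student can run camera-free at inference time, which is the property the abstract emphasizes.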

Project Outcomes

Journal Articles (0)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)

Other Publications by Phuc Nguyen

Governance, Boards of Directors and the Impact of Contracting on Not-for-profit Organizations - An Australian Study
  • DOI:
    10.1111/spol.12055
  • Publication Date:
    2014-04-01
  • Journal:
  • Impact Factor:
    3.2
  • Authors:
    Considine, Mark;O'Sullivan, Siobhan;Phuc Nguyen
  • Corresponding Author:
    Phuc Nguyen
Photometry based Blood Oxygen Estimation through Smartphone Cameras
Compulsory land acquisition for urban expansion: livelihood reconstruction after land loss in Hue’s peri-urban areas, Central Vietnam
  • DOI:
    10.3828/idpr.2016.32
  • Publication Date:
    2017
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Phuc Nguyen;A. V. Westen;A. Zoomers
  • Corresponding Author:
    A. Zoomers
Rethinking Image-based Table Recognition Using Weakly Supervised Methods
  • DOI:
    10.5220/0011682600003411
  • Publication Date:
    2023
  • Journal:
  • Impact Factor:
    3.9
  • Authors:
    N. Ly;A. Takasu;Phuc Nguyen;H. Takeda
  • Corresponding Author:
    H. Takeda

Other Grants by Phuc Nguyen

Collaborative Research: CCSS: Continuous Facial Sensing and 3D Reconstruction via Single-ear Wearable Biosensors
  • Award Number:
    2132112
  • Fiscal Year:
    2021
  • Funding Amount:
    $250,000
  • Project Type:
    Standard Grant
Nonlinear harmonic analysis and partial differential equations of Lane-Emden and Riccati type
  • Award Number:
    0901083
  • Fiscal Year:
    2009
  • Funding Amount:
    $250,000
  • Project Type:
    Standard Grant

Similar NSFC Grants

Research on Highly Integrated Microwave/Millimeter-Wave Antennas Supporting Two-Dimensional Millimeter-Wave Beam Scanning
  • Award Number:
    62371263
  • Approval Year:
    2023
  • Funding Amount:
    CNY 520,000
  • Project Type:
    General Program
Study of Heck/Denitrogenative Rearrangement Cascade Reactions of Hydrazones
  • Award Number:
    22301211
  • Approval Year:
    2023
  • Funding Amount:
    CNY 300,000
  • Project Type:
    Young Scientists Fund Program
Research on Synergistic Performance Regulation and Dendrite Suppression Mechanisms of Aqueous Zinc-Ion Batteries
  • Award Number:
    52364038
  • Approval Year:
    2023
  • Funding Amount:
    CNY 330,000
  • Project Type:
    Regional Science Fund Program
Investigating the Pathogenic Role and Mechanism of TSPYL1 Mutations in Sudden Infant Death Syndrome Using a Human Serotonergic Neuron Reporter System
  • Award Number:
    82371176
  • Approval Year:
    2023
  • Funding Amount:
    CNY 490,000
  • Project Type:
    General Program
Mechanistic Study of FOXO3 m6A Methylation-Induced Trophoblast Senescence in Kidney-Tonifying Treatment of Spontaneous Abortion
  • Award Number:
    82305286
  • Approval Year:
    2023
  • Funding Amount:
    CNY 300,000
  • Project Type:
    Young Scientists Fund Program

Similar Overseas Grants

Collaborative Research: ECCS-CCSS Core: Resonant-Beam based Optical-Wireless Communication
  • Award Number:
    2332172
  • Fiscal Year:
    2024
  • Funding Amount:
    $250,000
  • Project Type:
    Standard Grant
Collaborative Research: ECCS-CCSS Core: Resonant-Beam based Optical-Wireless Communication
  • Award Number:
    2332173
  • Fiscal Year:
    2024
  • Funding Amount:
    $250,000
  • Project Type:
    Standard Grant
Collaborative Research: CCSS: When RFID Meets AI for Occluded Body Skeletal Posture Capture in Smart Healthcare
  • Award Number:
    2245607
  • Fiscal Year:
    2023
  • Funding Amount:
    $250,000
  • Project Type:
    Standard Grant
Collaborative Research: CCSS: Hierarchical Federated Learning over Highly-Dense and Overlapping NextG Wireless Deployments: Orchestrating Resources for Performance
  • Award Number:
    2319780
  • Fiscal Year:
    2023
  • Funding Amount:
    $250,000
  • Project Type:
    Standard Grant
Collaborative Research: CCSS: Hierarchical Federated Learning over Highly-Dense and Overlapping NextG Wireless Deployments: Orchestrating Resources for Performance
  • Award Number:
    2319781
  • Fiscal Year:
    2023
  • Funding Amount:
    $250,000
  • Project Type:
    Standard Grant