EAT: A Reliable Eating Assessment Technology for Free-living Individuals.


Basic Information

  • Approval number: 10280789
  • Principal investigator:
  • Amount: $701.7K
  • Host institution:
  • Host institution country: United States
  • Project category:
  • Fiscal year: 2021
  • Funding country: United States
  • Project period: 2021-08-01 to 2026-07-31
  • Project status: Active (not yet concluded)

Project Summary

Project Summary/Abstract

Overeating and unhealthy eating are often associated with health risk conditions such as obesity, high blood pressure, and several chronic diseases. To better understand overeating and unhealthy eating, researchers often rely on self-reports provided by individuals, and suggestions for lifestyle changes are often based on observations from these self-reports. However, self-reports are known to be error-prone and subject to reporting biases, so an objective way to measure eating activity and validate self-reports is necessary. Recently, there has been growing interest in moving beyond self-reports and monitoring eating activity automatically. To monitor automatically and in real time, researchers have used sensor data from wrist-worn, neck-worn, or ear-worn devices to detect eating. These devices can capture eating periods, but they seldom capture images, limiting the possibility of visually confirming the consumed food and its quantity. With the increasing popularity of wearable cameras, it is gradually becoming possible to capture eating activities and their associated context automatically, without any user intervention, and advances in machine learning enable automatic extraction of eating-related information from the captured images. However, wearable cameras often capture more information than necessary, such as images of bystanders. Capturing this unnecessary information reduces participants' willingness to wear the camera. Currently, no camera can capture eating activity while limiting the capture of unnecessary information. Obfuscating the unnecessary information might increase participants' willingness to wear the camera.
However, it is unclear whether, and which, obfuscation technique will increase participants' willingness to don the wearable camera while still allowing automatic context determination. In this project, we will determine whether machine learning can detect eating in videos, and identify an obfuscation technique that allows detecting eating activity without collecting unnecessary information. To this end, we will first develop an activity detection algorithm that detects eating using data from an infrared (IR) sensor array and RGB images. Next, we will test various obfuscation methods in a cross-over trial and select the method with the greatest participant acceptability. We will then deploy the eating detection algorithm, with the best obfuscation approach, on a novel wearable camera equipped with an IR sensor array, and use this camera to test eating detection in a real-world setting. To validate our algorithm, we will ask participants to confirm or refute predicted eating and non-eating moments, and we will compare the algorithm's performance against both real-time user responses and 24-hour dietary recall. The proposed system will improve current research practices for evaluating dietary intake and pave the way for personalized interventions in behavioral medicine.
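The abstract leaves the candidate obfuscation techniques unspecified. As an illustration only (the `pixelate` helper below is hypothetical and not part of the project), tile-based pixelation is one common low-cost obfuscation: it removes fine detail such as bystander faces while preserving the coarse scene structure an eating detector might still use.

```python
import numpy as np

def pixelate(image: np.ndarray, block: int = 16) -> np.ndarray:
    """Obfuscate an H x W x C image by replacing each block x block
    tile with its mean color, destroying fine detail (faces, text)
    while keeping coarse scene structure."""
    h, w = image.shape[:2]
    out = image.copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = image[y:y + block, x:x + block]
            # Mean over the spatial axes; keepdims lets the (1, 1, C)
            # result broadcast back over the whole tile on assignment.
            out[y:y + block, x:x + block] = tile.mean(axis=(0, 1), keepdims=True)
    return out

# A toy 64x64 RGB frame: shape and dtype are preserved, detail is not.
frame = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
obfuscated = pixelate(frame, block=16)
```

Coarser blocks give stronger obfuscation at the cost of more lost context; a cross-over trial like the one described above would compare such settings by participant acceptability.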

Project Outcomes

Journal articles: 0
Monographs: 0
Research awards: 0
Conference papers: 0
Patents: 0


Other publications by Nabil Alshurafa

Other grants by Nabil Alshurafa

EAT: A Reliable Eating Assessment Technology for Free-living Individuals.
  • Approval number: 10457404
  • Fiscal year: 2021
  • Funding amount: $701.7K
  • Project category:

EAT: A Reliable Eating Assessment Technology for Free-living Individuals.
  • Approval number: 10663089
  • Fiscal year: 2021
  • Funding amount: $701.7K
  • Project category:

BehaviorSight: Privacy enhancing wearable system to detect health risk behaviors in real-time.
  • Approval number: 10043674
  • Fiscal year: 2020
  • Funding amount: $701.7K
  • Project category:

SenseWhy: Overeating in Obesity Through the Lens of Passive Sensing.
  • Approval number: 10406434
  • Fiscal year: 2018
  • Funding amount: $701.7K
  • Project category:

SenseWhy: Overeating in Obesity Through the Lens of Passive Sensing
  • Approval number: 10063429
  • Fiscal year: 2018
  • Funding amount: $701.7K
  • Project category:

SenseWhy: Overeating in Obesity Through the Lens of Passive Sensing
  • Approval number: 10310490
  • Fiscal year: 2018
  • Funding amount: $701.7K
  • Project category:

Similar NSFC Grants

Research on Age-of-Information-based joint energy and data scheduling algorithms in wirelessly powered edge networks
  • Approval number: 62372118
  • Year approved: 2023
  • Funding amount: ¥500K
  • Project category: General Program

Role and mechanism of CHCHD2 in age-related disorders of hepatic cholesterol metabolism
  • Approval number: 82300679
  • Year approved: 2023
  • Funding amount: ¥300K
  • Project category: Young Scientists Fund

Mechanism of the granulosa-cell palmitoylated protein FXR1 targeting CX43 mRNA in age-related decline of oocyte quality
  • Approval number: 82301784
  • Year approved: 2023
  • Funding amount: ¥300K
  • Project category: Young Scientists Fund

Dual-targeted drug delivery strategies and their mechanisms in the treatment of age-related macular degeneration
  • Approval number: 82301217
  • Year approved: 2023
  • Funding amount: ¥300K
  • Project category: Young Scientists Fund

Effects of polychlorinated biphenyl-organism interactions on biological age and their mechanistic role in aging
  • Approval number: 82373667
  • Year approved: 2023
  • Funding amount: ¥490K
  • Project category: General Program

Similar Overseas Grants

Couples Motivational Interviewing to reduce drug use and HIV risk in vulnerable male couples
  • Approval number: 10757544
  • Fiscal year: 2023
  • Funding amount: $701.7K
  • Project category:

Community-based Medication Adherence Support for Older Adults Living with HIV and Hypertension (CBA Intervention)
  • Approval number: 10752723
  • Fiscal year: 2023
  • Funding amount: $701.7K
  • Project category:

Clinical Trial Readiness - Primary Ciliary Dyskinesia (CTR-PCD)
  • Approval number: 10418833
  • Fiscal year: 2022
  • Funding amount: $701.7K
  • Project category:

Quantifying and Understanding Glaucoma Eye Drop Medication Instillation and Adherence
  • Approval number: 10338660
  • Fiscal year: 2022
  • Funding amount: $701.7K
  • Project category:

Quantifying and Understanding Glaucoma Eye Drop Medication Instillation and Adherence
  • Approval number: 10652250
  • Fiscal year: 2022
  • Funding amount: $701.7K
  • Project category: