CAREER: Discriminative Spatiotemporal Models for Recognizing Humans, Objects, and their Interactions


Basic Information

  • Award Number:
    1551290
  • Principal Investigator:
  • Amount:
    $106K
  • Host Institution:
  • Host Institution Country:
    United States
  • Project Type:
    Continuing Grant
  • Fiscal Year:
    2015
  • Funding Country:
    United States
  • Project Period:
    2015-09-01 to 2016-05-31
  • Project Status:
    Completed

Project Summary

One of the goals of computer vision is to build a system that can see people and recognize their activities. Human actions are rarely performed in isolation -- the surrounding environment, nearby objects, and nearby humans affect the nature of the performed activity. Examples include actions such as "eating" and "shaking hands." The research goal of this project is to approach human performance in understanding videos of activities defined by human-object and human-human interactions. This project makes use of structured, contextual representations to make predictions given spatiotemporal data. It does so by extending recent successful work on object recognition to the space-time domain, introducing extensions for spatiotemporal grouping and contextual modeling. Video enables the extraction of additional dynamic cues absent in static images, but this poses additional computational burdens that are addressed through algorithmic innovations for approximate parsing and large-scale discriminative learning. To place activity recognition on firm quantitative ground, the proposed models are evaluated using concrete metrics based on activities of daily living (ADL) and human proxemic models from the medical and anthropological communities. Examples include systems for automated monitoring of stroke patients interacting with everyday objects and automated analysis of crisis response team interactions during emergency drills. This project produces non-scripted, real-world, labeled action recognition datasets, of benefit to the research community as a whole.
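To make the phrase "discriminative spatiotemporal models" more concrete, the sketch below shows one simple instance of the general idea: a learned linear template scored against space-time feature windows of a video, in the spirit of extending sliding-window object detectors to the space-time domain. Everything here (the toy motion features, array shapes, and function names) is an illustrative assumption, not the project's actual model.

```python
import numpy as np


def spatiotemporal_features(clip, cell=8):
    """Toy space-time features: per-cell averages of frame-difference magnitude.

    clip: (T, H, W) grayscale video volume.
    Returns a (T-1, H//cell, W//cell) feature volume capturing coarse motion.
    """
    motion = np.abs(np.diff(clip.astype(np.float32), axis=0))  # frame differences
    T, H, W = motion.shape
    h, w = H // cell, W // cell
    # Average motion magnitude within each cell x cell spatial block.
    return motion[:, :h * cell, :w * cell].reshape(T, h, cell, w, cell).mean(axis=(2, 4))


def score_window(window, template, bias):
    """Linear discriminative score w . phi(x) + b for one space-time window."""
    return float(np.dot(template.ravel(), window.ravel()) + bias)


def detect(clip, template, bias=0.0, stride=2):
    """Slide the space-time template over the feature volume; return the best window."""
    feats = spatiotemporal_features(clip)
    t, h, w = template.shape
    best_score, best_loc = -np.inf, None
    for ti in range(0, feats.shape[0] - t + 1, stride):
        for yi in range(feats.shape[1] - h + 1):
            for xi in range(feats.shape[2] - w + 1):
                s = score_window(feats[ti:ti + t, yi:yi + h, xi:xi + w], template, bias)
                if s > best_score:
                    best_score, best_loc = s, (ti, yi, xi)
    return best_score, best_loc


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clip = rng.random((16, 64, 64))            # stand-in for a short grayscale clip
    template = rng.standard_normal((4, 4, 4))  # stand-in for a learned space-time template
    score, loc = detect(clip, template)
    print(f"best score {score:.3f} at (t, y, x) = {loc}")
```

In a full system of the kind the summary describes, the template weights and bias would be learned discriminatively from labeled space-time windows, and the single template would be replaced by structured, contextual models over interacting people and objects.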

Project Outcomes

Journal Articles (0)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)

Other Publications by Deva Ramanan

Using Segmentation to Verify Object Hypotheses
Recognizing Tiny Faces
ViSER: Video-Specific Surface Embeddings for Articulated 3D Shape Reconstruction
  • DOI:
  • Publication Year:
    2021
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Gengshan Yang;Deqing Sun;Varun Jampani;Daniel Vlasic;Forrester Cole;Ce Liu;Deva Ramanan
  • Corresponding Author:
    Deva Ramanan
Discriminative Latent Variable Models for Object Detection
  • DOI:
  • Publication Year:
    2010
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Pedro F. Felzenszwalb;Ross B. Girshick;David A. McAllester;Deva Ramanan
  • Corresponding Author:
    Deva Ramanan
Safe Local Motion Planning with Self-Supervised Freespace Forecasting

Other Grants by Deva Ramanan

RI: Small: Probabilistic Hierarchical Models for Multi-Task Visual Recognition
  • Award Number:
    1618903
  • Fiscal Year:
    2016
  • Funding Amount:
    $106K
  • Project Type:
    Standard Grant
CAREER: Discriminative Spatiotemporal Models for Recognizing Humans, Objects, and their Interactions
  • Award Number:
    0954083
  • Fiscal Year:
    2010
  • Funding Amount:
    $106K
  • Project Type:
    Continuing Grant
RI-Small: Collaborative Research: Discriminative Latent Variable Object Detection
  • Award Number:
    0812428
  • Fiscal Year:
    2008
  • Funding Amount:
    $106K
  • Project Type:
    Standard Grant

Similar NSFC Grants

Discriminatory Household Registration Policies, Urban Subsidy Competition, and the Spatial Distribution of Differently Skilled Labor in China: Mechanisms, Effects, and Policy Analysis
  • Award Number:
    72273129
  • Approval Year:
    2022
  • Funding Amount:
    ¥450,000
  • Project Type:
    General Program
Optimal Mechanism Design for Competitive Bidding: Two Issues on Non-Discriminatory Clauses and Information Disclosure
  • Award Number:
    71973040
  • Approval Year:
    2019
  • Funding Amount:
    ¥480,000
  • Project Type:
    General Program

Similar Overseas Grants

Development of Discriminative Pattern Mining Techniques as a Foundation of Human-Centric Machine Learning
  • Award Number:
    20K11941
  • Fiscal Year:
    2020
  • Funding Amount:
    $106K
  • Project Type:
    Grant-in-Aid for Scientific Research (C)
How do we learn? Combining generative and discriminative models for visual and audio perception.
  • Award Number:
    488062-2016
  • Fiscal Year:
    2019
  • Funding Amount:
    $106K
  • Project Type:
    Postgraduate Scholarships - Doctoral
Construction of a computational model to deal with the cocktail-party problem for intelligent speech interface
  • Award Number:
    19K12035
  • Fiscal Year:
    2019
  • Funding Amount:
    $106K
  • Project Type:
    Grant-in-Aid for Scientific Research (C)
Performance improvement of discriminative distributed Brillouin fiber sensing of temperature/strain
  • Award Number:
    19K14999
  • Fiscal Year:
    2019
  • Funding Amount:
    $106K
  • Project Type:
    Grant-in-Aid for Early-Career Scientists
Development of discriminative method for distinguishing between bleeding and thrombotic tendency in cases with prolonged aPTT
  • Award Number:
    19K16962
  • Fiscal Year:
    2019
  • Funding Amount:
    $106K
  • Project Type:
    Grant-in-Aid for Early-Career Scientists