NRI: Rich Task Perception for Programming by Demonstration
Basic Information
- Award Number: 1525251
- Principal Investigator:
- Amount: $1.2M
- Host Institution:
- Host Institution Country: United States
- Project Type: Standard Grant
- Fiscal Year: 2015
- Funding Country: United States
- Duration: 2015-09-01 to 2019-08-31
- Project Status: Concluded
- Source:
- Keywords:
Project Abstract
Robots that can work alongside humans and take on repetitive, time-consuming tasks could greatly improve productivity and reliability in task-oriented environments such as laboratories, manufacturing facilities, or commercial kitchens. One of the key challenges in realizing this vision is that every combination of environment, user, and task presents unique requirements for the robot's behavior and it is impractical to employ traditional approaches for programming these robots. Instead, the PIs envision robots that are programmable by their end-users in their particular operation environment and for the particular tasks they are needed for. To overcome limitations of existing approaches, the PIs propose to develop a framework for rich task perception, which is able to extract detailed task descriptions from intuitive human demonstrations. Building on recent advances in depth camera sensing, GPU-optimized visual processing, and language understanding, the proposed framework will track all objects and people in a scene, recognize their goals and task context, and parse speech to extract higher-level task structure from a demonstration. The PIs will also introduce new programming by demonstration techniques that take full advantage of such rich task information and enable users to program robots by demonstrating their desired behavior. The proposed research has the potential to advance national health, prosperity and welfare by developing research and commercial robotic systems for use in factories, laboratories, and households. It will be an enabling technology for a new generation of highly flexible robots that can be programmed on-the-job to increase the productivity of task environments, such as laboratories or manufacturing facilities. The proposed work will also promote the progress of science by enabling reliable documentation and replication of experiments performed in scientific research wet-labs. 
Through a new undergraduate capstone course, this project will educate students to develop and program this next generation of robots. To motivate participation in STEM careers, the PIs will demonstrate their work at yearly public outreach events at the University of Washington, and will organize a summer camp for K-16 students through the UW DawgBytes program.
A popular end-user programming approach in robotics is Programming by Demonstration (PbD), which enables users to program robots by demonstrating their desired behavior. While state-of-the-art PbD techniques have generated impressive robotic behaviors, current approaches have limitations that prevent them from becoming practical and widely adopted. Many of these limitations relate specifically to perception, preventing robots from understanding the detailed context of human demonstrations. To overcome these limitations, the PIs propose to develop a framework for rich task perception that extracts detailed task descriptions from intuitive human demonstrations. Building on recent advances in RGB-D camera sensing, GPU-optimized visual processing, and language grounding, the proposed framework will track all objects and people in a scene at a very fine granularity, and parse speech to extract higher-level task structure from a demonstration. The PIs will also introduce new PbD techniques that better take advantage of such rich task information in both the programming and execution of tasks.
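The abstract describes fusing fine-grained object/person tracking with parsed speech to recover higher-level task structure from a single demonstration. As a hedged illustration only (every name below is hypothetical and not from the project), one minimal sketch of that fusion step pairs timestamped tracking events with temporally nearby speech cues to yield labeled task steps:

```python
from dataclasses import dataclass

# Hypothetical sketch of demonstration segmentation: names, thresholds, and
# data shapes are illustrative assumptions, not the project's actual design.

@dataclass
class TrackedEvent:
    time: float   # seconds into the demonstration
    obj: str      # identifier of the tracked object
    action: str   # low-level action inferred from motion, e.g. "grasp"

@dataclass
class SpeechCue:
    time: float
    text: str     # parsed utterance, e.g. "pour into the beaker"

def segment_demonstration(events, cues, window=2.0):
    """Attach each speech cue to tracking events within `window` seconds,
    producing labeled task steps -- a toy stand-in for rich task perception."""
    steps = []
    for cue in cues:
        nearby = [e for e in events if abs(e.time - cue.time) <= window]
        steps.append({"label": cue.text,
                      "actions": [(e.action, e.obj) for e in nearby]})
    return steps

events = [TrackedEvent(1.0, "pipette", "grasp"),
          TrackedEvent(3.5, "beaker", "pour")]
cues = [SpeechCue(1.2, "pick up the pipette"),
        SpeechCue(3.8, "pour into the beaker")]
program = segment_demonstration(events, cues)
```

In this toy version, each resulting step carries a natural-language label plus the low-level actions it groups, which is the kind of structure a PbD system could replay or generalize; the real framework would obtain the events from RGB-D tracking and the cues from a speech parser.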
Project Outcomes
Journal Articles (2)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)
Single Modality Performance on Visual Navigation & QA
- DOI:
- Published: 2019
- Journal:
- Impact Factor: 0
- Authors: Thomason, J.; Gordon, D.; Bisk, Y.
- Corresponding Author: Bisk, Y.
What Should I Do Now? Marrying Reinforcement Learning and Symbolic Planning
- DOI:
- Published: 2020
- Journal:
- Impact Factor: 0
- Authors: Gordon, D.; Fox, D.; Farhadi, A.
- Corresponding Author: Farhadi, A.
Other Publications by Dieter Fox
Manipulate-Anything: Automating Real-World Robots using Vision-Language Models
- DOI:
- Published: 2024
- Journal:
- Impact Factor: 0
- Authors: Jiafei Duan; Wentao Yuan; Wilbert Pumacay; Yi Ru Wang; Kiana Ehsani; Dieter Fox; Ranjay Krishna
- Corresponding Author: Ranjay Krishna
Sonar-Based Mapping of Large-Scale Mobile Robot Environments using EM
- DOI:
- Published: 1999
- Journal:
- Impact Factor: 0
- Authors: Wolfram Burgard; Dieter Fox; Hauke Jans; Christian Matenar; Sebastian Thrun
- Corresponding Author: Sebastian Thrun
RVT-2: Learning Precise Manipulation from Few Demonstrations
- DOI:
- Published: 2024
- Journal:
- Impact Factor: 0
- Authors: Ankit Goyal; Valts Blukis; Jie Xu; Yijie Guo; Yu; Dieter Fox
- Corresponding Author: Dieter Fox
Fast Joint Space Model-Predictive Control for Reactive Manipulation
- DOI:
- Published: 2021
- Journal:
- Impact Factor: 0
- Authors: M. Bhardwaj; Balakumar Sundaralingam; Arsalan Mousavian; Nathan D. Ratliff; Dieter Fox; Fabio Ramos; Byron Boots
- Corresponding Author: Byron Boots
PerAct2: A Perceiver Actor Framework for Bimanual Manipulation Tasks
- DOI:
- Published: 2024
- Journal:
- Impact Factor: 0
- Authors: Markus Grotz; Mohit Shridhar; Tamim Asfour; Dieter Fox
- Corresponding Author: Dieter Fox
Other Grants by Dieter Fox
Collaborative Research: NRI: FND: Graph Neural Networks for Multi-Object Manipulation
- Award Number: 2024057
- Fiscal Year: 2020
- Amount: $1.2M
- Project Type: Standard Grant
NRI: Collaborative Research: Experiential Learning for Robots: From Physics to Actions to Tasks
- Award Number: 1637479
- Fiscal Year: 2016
- Amount: $1.2M
- Project Type: Standard Grant
NRI-Large: Collaborative Research: Purposeful Prediction: Co-robot Interaction via Understanding Intent and Goals
- Award Number: 1227234
- Fiscal Year: 2012
- Amount: $1.2M
- Project Type: Continuing Grant
RI-Small: Statistical Relational Models for Semantic Robot Mapping
- Award Number: 0812671
- Fiscal Year: 2008
- Amount: $1.2M
- Project Type: Continuing Grant
Collaborative Research: BPC-A: ARTSI: Advancing Robotics Technology for Societal Impact
- Award Number: 0742075
- Fiscal Year: 2007
- Amount: $1.2M
- Project Type: Continuing Grant
CAREER: Probabilistic Methods for Multi-Robot Collaboration
- Award Number: 0093406
- Fiscal Year: 2001
- Amount: $1.2M
- Project Type: Continuing Grant
Similar NSFC Grants
Mechanisms by which the anterior cingulate GTPase-activating protein RICH2 mediates social behavior deficits in Shank3-/- autism model mice
- Approval Number: 82301350
- Approval Year: 2023
- Amount: ¥300,000
- Project Type: Young Scientists Fund
Mechanisms by which the integrin β1/RICH1 complex senses extracellular matrix stiffness signals to regulate breast cancer invasion and metastasis
- Approval Number: 82303462
- Approval Year: 2023
- Amount: ¥300,000
- Project Type: Young Scientists Fund
Molecular mechanisms by which the transcription factor NtMYB305 regulates NtPMT expression and nicotine biosynthesis via AT-rich elements
- Approval Number:
- Approval Year: 2021
- Amount: ¥300,000
- Project Type: Young Scientists Fund
Molecular mechanisms by which the transcription factor NtMYB305 regulates NtPMT expression and nicotine biosynthesis via AT-rich elements
- Approval Number: 32101643
- Approval Year: 2021
- Amount: ¥240,000
- Project Type: Young Scientists Fund
Mechanisms by which the Rich1/Amot-p80/Merlin axis regulates breast cancer stem cell-like properties through the Hippo pathway
- Approval Number:
- Approval Year: 2020
- Amount: ¥240,000
- Project Type: Young Scientists Fund
Similar Overseas Grants
IGF::OT::IGF LTRC CLINICAL CENTER - TASK ORDER 2 - 03/01/2017 - 02/28/2018 - CAN 17-8470204
- Approval Number: 9514439
- Fiscal Year: 2017
- Amount: $1.2M
- Project Type:
IGF::OT::IGF LTRC CLINICAL CENTER - TASK ORDER 2 - 03/01/2017 - 02/28/2018 - CAN 17-8470204
- Approval Number: 9514443
- Fiscal Year: 2017
- Amount: $1.2M
- Project Type:
IGF::OT::IGF LUNG TISSUE RESEARCH CONSORTIUM RENEWAL, TISSUE REPOSITORY, TASK ORDER 02, JANUARY 1, 2017 - DECEMBER 31, 2017
- Approval Number: 9514435
- Fiscal Year: 2017
- Amount: $1.2M
- Project Type:
IGF::OT::IGF LTRC CLINICAL CENTER - TASK ORDER 001; SUBJECT CHARACTERIZATION AND TISSUE PROCUREMENT
- Approval Number: 9304632
- Fiscal Year: 2016
- Amount: $1.2M
- Project Type:
IGF::OT::IGF LTRC CLINICAL CENTER - TASK ORDER 001; SUBJECT CHARACTERIZATION AND TISSUE PROCUREMENT
- Approval Number: 9304634
- Fiscal Year: 2016
- Amount: $1.2M
- Project Type: