Modeling Diverse, Personalized and Expressive Animations for Virtual Characters through Motion Capture, Synthesis and Perception

Basic Information

  • Grant number:
    RGPIN-2022-04920
  • Principal investigator:
  • Amount:
    $21,100
  • Host institution:
  • Host institution country:
    Canada
  • Project category:
    Discovery Grants Program - Individual
  • Fiscal year:
    2022
  • Funding country:
    Canada
  • Duration:
    2022-01-01 to 2023-12-31
  • Project status:
    Completed

Project Abstract

Character animation plays a key role in delivering motions for virtual characters in game development, robotics, Virtual Reality (VR) and Augmented Reality (AR) applications. Despite the large effort spent on making motions natural and realistic, previous research has focused on generic motion content learning, where the stylistic features of how each individual performs that content are largely ignored. Without individual styles, different virtual characters move the same way whenever the same motion content is needed, which falls far short of creating a diverse virtual world. One major challenge in stylizing motions is the lack of large-scale motion style databases, and consequently insufficient knowledge of how to effectively model and transfer styles. This proposal focuses on motion style learning, synthesis and transfer. In the long term, our goal is to generate stylized motions whose variations match the diversity of the real world. The short-term objectives are: we will first establish large-scale motion style databases through motion capture; from the data, we will develop methods to learn effective motion representations and explore generative models that conditionally generate motions with desired styles; we will further develop style transfer models that edit styles while keeping the original motion content. Over the five-year period, we will specifically capture, learn and model three kinds of stylistic features: demographic styles belonging to different groups of people, e.g. age, gender and race; personalized styles resulting from the personalities and body builds of different individuals; and expressive styles demonstrating the varied emotional and physical states of the same individual under different scenarios. We will set up our databases to cover these style variations, release them for open access, and provide labelling, documentation and technical support to the public.
Research findings and source code will be published, addressing the problems of extracting style features, generating motion styles in a controllable manner, and transferring styles to novel motions. Beyond the five-year term, we will continue adding styles to our databases to model a broader picture of motion styles. Our database can be used directly to animate diverse characters in AR/VR and game scenes; it also helps other researchers model motion styles and stimulates interdisciplinary research in psychology, kinesiology, art, and beyond. Research findings in motion style modeling support intelligent applications such as action recognition, style recognition and person identification from motion input. Style synthesis and transfer technology can also be widely applied in the game and entertainment industries and in AR/VR applications in education, media and social networks. By creating virtual characters that authentically embody people in the real world, this work can promote Diversity, Equity and Inclusion in the virtual world.
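The style-transfer objective above (edit the style while keeping the motion content) can be illustrated with a minimal sketch. This is not the proposal's method; it shows one common formulation from the style-transfer literature, adaptive instance normalization (AdaIN), which replaces a content motion's per-channel statistics with those of a style clip while preserving the content's temporal structure. The function name and the toy "neutral"/"proud" clips are illustrative assumptions.

```python
import numpy as np

def adain_style_transfer(content, style, eps=1e-5):
    """Transfer a style clip's per-channel statistics onto a content motion.

    content, style: arrays of shape (frames, channels), e.g. joint-rotation
    features over time. The content sequence keeps its frame-to-frame
    structure; only the per-channel mean and standard deviation are
    replaced by the style clip's statistics.
    """
    c_mean, c_std = content.mean(axis=0), content.std(axis=0) + eps
    s_mean, s_std = style.mean(axis=0), style.std(axis=0) + eps
    normalized = (content - c_mean) / c_std   # strip the content's own style stats
    return normalized * s_std + s_mean        # impose the style clip's stats

# Toy data: a 120-frame "neutral" walk and a 60-frame exaggerated "proud" clip,
# each with 8 feature channels.
rng = np.random.default_rng(0)
neutral_walk = rng.normal(0.0, 1.0, size=(120, 8))
proud_clip = rng.normal(2.0, 3.0, size=(60, 8))

stylized = adain_style_transfer(neutral_walk, proud_clip)
```

In practice such a normalization is applied to learned latent features inside a network rather than to raw pose data, so that "content" and "style" are disentangled by training rather than by raw statistics.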

Project Outcomes

Journal articles (0)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)

Other Publications by Wang, Yingying

Low thermal budget lead zirconate titanate thick films integrated on Si for piezo-MEMS applications
  • DOI:
    10.1016/j.mee.2019.111145
  • Publication date:
    2020-01-15
  • Journal:
  • Impact factor:
    2.3
  • Authors:
    Wang, Yingying;Yan, Jing;Ouyang, Jun
  • Corresponding author:
    Ouyang, Jun
Development and Application of Well-Test Model after Injection Biological Nanomaterials
  • DOI:
    10.1155/2022/9717061
  • Publication date:
    2022-04-11
  • Journal:
  • Impact factor:
    1.7
  • Authors:
    Feng, Qing;Gao, Ping;Wang, Yingying
  • Corresponding author:
    Wang, Yingying
Covalent Organic Frameworks-Based Electrochemical Sensors for Food Safety Analysis.
  • DOI:
    10.3390/bios13020291
  • Publication date:
    2023-02-17
  • Journal:
  • Impact factor:
    5.4
  • Authors:
    Lu, Zhenyu;Wang, Yingying;Li, Gongke
  • Corresponding author:
    Li, Gongke
Two new species and records of Neoperla (Plecoptera, Perlidae) from Yunnan, China.
  • DOI:
    10.3897/zookeys.1092.78069
  • Publication date:
    2022
  • Journal:
  • Impact factor:
    1.3
  • Authors:
    Wang, Yingying;Li, Wenliang;Li, Weihai
  • Corresponding author:
    Li, Weihai
Stellettin B Sensitizes Glioblastoma to DNA-Damaging Treatments by Suppressing PI3K-Mediated Homologous Recombination Repair.
  • DOI:
    10.1002/advs.202205529
  • Publication date:
    2023-01
  • Journal:
  • Impact factor:
    15.1
  • Authors:
    Peng, Xin;Zhang, Shaolu;Wang, Yingying;Zhou, Zhicheng;Yu, Zixiang;Zhong, Zhenxing;Zhang, Liang;Chen, Zhe-Sheng;Claret, Francois X.;Elkabets, Moshe;Wang, Feng;Sun, Fan;Wang, Ran;Liang, Han;Lin, Hou-Wen;Kong, Dexin
  • Corresponding author:
    Kong, Dexin

Other Grants by Wang, Yingying

Modeling Diverse, Personalized and Expressive Animations for Virtual Characters through Motion Capture, Synthesis and Perception
  • Grant number:
    DGECR-2022-00415
  • Fiscal year:
    2022
  • Funding amount:
    $21,100
  • Project category:
    Discovery Launch Supplement

Similar International Grants

Data Integration Core
  • Grant number:
    10555808
  • Fiscal year:
    2023
  • Funding amount:
    $21,100
  • Project category:
Linking Social-Behavior Contextual Factors and Allostatic Load to Chronic Diseases in Diverse Asian Americans: A Socioecological Approach to Advancing Precision Medicine and Health Equity
  • Grant number:
    10799170
  • Fiscal year:
    2023
  • Funding amount:
    $21,100
  • Project category:
PAGE-G: Precision Approach combining Genes and Environment in Glaucoma
  • Grant number:
    10797646
  • Fiscal year:
    2023
  • Funding amount:
    $21,100
  • Project category:
Machine Learning for Ventricular Arrhythmias
  • Grant number:
    10658931
  • Fiscal year:
    2023
  • Funding amount:
    $21,100
  • Project category:
Scalable and Interoperable framework for a clinically diverse and generalizable sepsis Biorepository using Electronic alerts for Recruitment driven by Artificial Intelligence (short title: SIBER-AI)
  • Grant number:
    10576015
  • Fiscal year:
    2023
  • Funding amount:
    $21,100
  • Project category: