Swing in a crew boat, a good jazz riff, a fluid conversation: these tasks require extracting sensory information about how others flow in order to mimic and respond. To determine what factors influence coordination, we build an environment to manipulate incoming sensory information by combining virtual reality and motion capture. We study how people mirror the motion of a human avatar’s arm as we occlude the avatar. We efficiently map the transition from successful mirroring to failure using Gaussian process regression. Then, we determine the change in behaviour when we introduce audio cues with a frequency proportional to the speed of the avatar’s hand or train individuals with a practice session. Remarkably, audio cues extend the range of successful mirroring to regimes where visual information is sparse. Such cues could facilitate joint coordination when navigating visually occluded environments, improve reaction speed in human–computer interfaces or measure altered physiological states and disease.
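As a rough illustration of the kind of analysis described above, the sketch below shows how a Gaussian process model could be used to locate the transition from successful mirroring to failure as a function of a single occlusion parameter, and how an audio cue pitch could be made proportional to hand speed. This is a minimal, hypothetical example, not the authors' implementation: the "visibility" parameterisation, the synthetic trial data, the kernel choices, and the `audio_frequency` mapping are all assumptions made for illustration.

```python
# Minimal sketch (assumed setup, not the paper's code): estimate where mirroring
# breaks down as visual information becomes sparse, using GP regression.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical trials: fraction of time the avatar is visible (0..1) and a
# binary label for whether the participant mirrored the arm motion successfully.
visibility = rng.uniform(0.0, 1.0, size=(40, 1))
success = (visibility[:, 0] + 0.1 * rng.normal(size=40) > 0.4).astype(float)

# A smooth GP estimate of P(success | visibility) lets the success-to-failure
# boundary be located from relatively few trials.
kernel = RBF(length_scale=0.2) + WhiteKernel(noise_level=1e-2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(visibility, success)

grid = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
p_success = gp.predict(grid)
transition = grid[np.argmin(np.abs(p_success - 0.5)), 0]
print(f"Estimated mirroring transition near visibility ~ {transition:.2f}")

def audio_frequency(hand_speed, base_hz=220.0, gain_hz_per_mps=200.0):
    """Illustrative sonification: cue pitch proportional to the avatar's hand speed
    (base frequency and gain are arbitrary choices for this sketch)."""
    return base_hz + gain_hz_per_mps * hand_speed
```

In practice, an adaptive design would refit the GP after each trial and place the next occlusion level where the predicted probability of success is most uncertain; the fixed synthetic dataset here is only for demonstration.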