Continuous authentication using biometrics is receiving renewed attention owing to recent advances in mobile technology. However, the context in which biometric inputs are acquired can affect the quality of information available for authentication. For example, in multi-speaker environments, face or gait may be a better authenticator than voice. Unfortunately, existing fusion methods do not take this into account. In this paper, we propose a novel fusion method that accounts for context and that can operate at both the decision and score levels. We present theoretical bounds on the proposed method, along with experiments on synthetic and real multi-modal biometric data. The results show that our method outperforms commonly used fusion methods, even those based on state-of-the-art deep learners. Moreover, our method outperforms score-level fusion methods even when operating at the decision level, debunking the common belief that decision-level fusion is inferior and showcasing the power of contextual learning.