Deep neural networks (DNNs), while increasingly deployed in many applications, struggle with robustness against anomalous and out-of-distribution (OOD) data. Current OOD benchmarks are often oversimplified, focusing on single-object tasks and failing to represent complex real-world anomalies. This paper introduces a new, straightforward method that employs graph structures and topological features to effectively detect both far-OOD and near-OOD data. We convert images into networks of interconnected, human-understandable features or visual concepts. Through extensive testing on two novel tasks, including ablation studies with large vocabularies and diverse tasks, we demonstrate the method's effectiveness. This approach enhances DNN resilience to OOD data and promises improved performance in various applications.