Thanks to the success of object detection technology, we can retrieve objects of specified classes even from huge image collections. However, current state-of-the-art object detectors (such as Faster R-CNN) can only handle pre-specified classes. In addition, large amounts of positive and negative visual samples are required for training. In this paper, we address the problem of open-vocabulary object retrieval and localization, where the target object is specified by a textual query (e.g., a word or phrase). We first propose Query-Adaptive R-CNN, a simple extension of Faster R-CNN adapted to open-vocabulary queries, which transforms the text embedding vector into an object classifier and a localization regressor. Then, for discriminative training, we propose negative phrase augmentation (NPA), which mines hard negative samples that are visually similar to the query yet semantically mutually exclusive with it. The proposed method can retrieve and localize objects specified by a textual query from one million images in only 0.5 seconds with high precision.
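The core idea, turning a query's text embedding into query-specific detection weights, can be illustrated with a minimal sketch. This assumes a word2vec-style phrase embedding and RoI-pooled features from a Faster R-CNN backbone; the class `QueryAdaptiveHead` and its layer names are hypothetical illustrations, not the paper's implementation.

```python
import torch
import torch.nn as nn

class QueryAdaptiveHead(nn.Module):
    """Hypothetical sketch: map a query text embedding to a per-query
    classifier weight vector and box-regressor weights."""

    def __init__(self, text_dim=300, roi_dim=4096):
        super().__init__()
        self.roi_dim = roi_dim
        # Text embedding -> classifier weight vector over RoI features.
        self.to_classifier = nn.Linear(text_dim, roi_dim)
        # Text embedding -> regressor weights (4 box deltas per RoI).
        self.to_regressor = nn.Linear(text_dim, roi_dim * 4)

    def forward(self, text_emb, roi_feats):
        # text_emb:  (text_dim,)        phrase embedding of the query
        # roi_feats: (num_rois, roi_dim) RoI-pooled features from the detector
        w = self.to_classifier(text_emb)                      # (roi_dim,)
        scores = roi_feats @ w                                # (num_rois,) query-specific scores
        r = self.to_regressor(text_emb).view(self.roi_dim, 4)
        deltas = roi_feats @ r                                # (num_rois, 4) box refinements
        return scores, deltas

# Toy usage with random stand-ins for the embedding and RoI features:
head = QueryAdaptiveHead()
scores, deltas = head(torch.randn(300), torch.randn(128, 4096))
```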
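The NPA step can likewise be sketched abstractly: select vocabulary phrases that the current model tends to confuse with the query while filtering out semantically compatible ones. The helpers `confusion_score` and `is_mutually_exclusive` below are hypothetical stand-ins for the paper's confusion statistics and semantic-exclusivity check.

```python
def mine_negative_phrases(query, vocab, confusion_score, is_mutually_exclusive, k=5):
    """Hypothetical NPA-style sketch: return the k phrases most visually
    confusable with `query` that are semantically mutually exclusive with it."""
    candidates = [p for p in vocab
                  if p != query and is_mutually_exclusive(query, p)]
    # Hardest negatives first: phrases the detector most often fires on
    # when searching for `query`.
    candidates.sort(key=lambda p: confusion_score(query, p), reverse=True)
    return candidates[:k]

# Toy usage: "kitten" is filtered out because it is not mutually
# exclusive with "cat"; the rest are ranked by confusion.
negs = mine_negative_phrases(
    "cat", ["cat", "dog", "tiger", "kitten"],
    confusion_score=lambda q, p: {"dog": 0.4, "tiger": 0.7}.get(p, 0.0),
    is_mutually_exclusive=lambda q, p: p != "kitten",
)
```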