In forming learning objectives, one often needs to aggregate a set of individual values into a single output. Such cases occur in the aggregate loss, which combines the individual losses of a learning model over each training sample, and in the individual loss for multi-label learning, which combines prediction scores over all class labels. In this work, we introduce the sum of ranked range (SoRR) as a general approach to forming learning objectives. A ranked range is a consecutive sequence of sorted values of a set of real numbers. The minimization of SoRR is solved with the difference-of-convex algorithm (DCA). We explore two machine learning applications of the SoRR framework, namely the AoRR aggregate loss for binary classification and the TKML individual loss for multi-label/multi-class classification. Our empirical results highlight the effectiveness of the proposed optimization framework and demonstrate the applicability of the proposed losses on synthetic and real datasets.
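To make the "ranked range" notion concrete, here is a minimal sketch of how such a sum could be computed. It assumes the range is indexed by two bounds k and m (with m < k), selecting the (m+1)-th through k-th largest values; the function name and signature are illustrative, not from the paper.

```python
import numpy as np

def sum_of_ranked_range(values, k, m):
    """Sum of the (m+1)-th through k-th largest entries of `values`.

    Illustrative sketch: this equals the sum of the top-k values
    minus the sum of the top-m values.
    """
    assert 0 <= m < k <= len(values)
    # Sort in descending order, then sum the consecutive ranked slice.
    sorted_desc = np.sort(np.asarray(values, dtype=float))[::-1]
    return float(sorted_desc[m:k].sum())

# Example: values sorted descending -> [9, 4, 3, 1.5, 1];
# with k=4, m=1 the ranked range is [4, 3, 1.5], summing to 8.5.
print(sum_of_ranked_range([3.0, 1.0, 4.0, 1.5, 9.0], k=4, m=1))
```

With m = 0 this reduces to the familiar sum of the top-k values, so the ranked range strictly generalizes top-k aggregation by also discarding the m largest (potentially outlying) values.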