Continual learning and multi-task learning are commonly used machine learning techniques for learning from multiple tasks. However, the existing literature treats multi-task learning as a reasonable performance upper bound for various continual learning algorithms, without rigorous justification. Moreover, in a multi-task setting, a small subset of tasks can behave adversarially and degrade overall learning performance. Continual learning approaches, by contrast, can avoid the negative impact of such adversarial tasks while maintaining performance on the remaining tasks, and can therefore outperform multi-task learning. This paper introduces a novel continual self-supervised learning approach in which each task consists of learning a representation that is invariant to a specific class of data augmentations. We demonstrate that this construction yields naturally conflicting tasks and that, in this setting, continual learning often outperforms multi-task learning on benchmark datasets including MNIST, CIFAR-10, and CIFAR-100.
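To make the setup concrete, the sketch below shows one possible way to train such augmentation-invariance tasks sequentially in PyTorch. It is not the paper's implementation: the encoder architecture, the choice of augmentation classes, and the `invariance_loss` helper are illustrative assumptions, and a practical self-supervised objective would also include a mechanism (contrastive negatives, variance regularization, a stop-gradient branch, etc.) to prevent representational collapse.

```python
# Minimal sketch (not the paper's exact method): sequential self-supervised
# training where each "task" learns invariance to one augmentation class.
# All names (SmallEncoder, AUGMENTATION_TASKS, invariance_loss) are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# One augmentation class per task. Invariance to rotation, invariance to color
# changes, and invariance to cropping can pull the representation in
# conflicting directions, which is what makes the tasks clash.
AUGMENTATION_TASKS = {
    "rotation": transforms.RandomRotation(degrees=90),
    "color_jitter": transforms.ColorJitter(brightness=0.8, contrast=0.8),
    "crop": transforms.RandomResizedCrop(32, scale=(0.3, 1.0)),
}


class SmallEncoder(nn.Module):
    """Tiny CNN encoder mapping 3x32x32 images to a 128-d embedding."""

    def __init__(self, dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return self.net(x)


def invariance_loss(encoder, x, augment):
    """Pull together embeddings of two augmented views of the same batch.

    Note: torchvision transforms applied to a batched tensor reuse the same
    random parameters for the whole batch; per-sample augmentation would give
    a stronger signal but is omitted for brevity.
    """
    z1 = F.normalize(encoder(augment(x)), dim=1)
    z2 = F.normalize(encoder(augment(x)), dim=1)
    # Squared Euclidean distance between unit vectors: 2 - 2 * cosine similarity.
    return (2 - 2 * (z1 * z2).sum(dim=1)).mean()


def train_continually(epochs_per_task: int = 1, device: str = "cpu"):
    data = datasets.CIFAR10("./data", train=True, download=True,
                            transform=transforms.ToTensor())
    loader = DataLoader(data, batch_size=256, shuffle=True)
    encoder = SmallEncoder().to(device)
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

    # Tasks are visited one after another (continual learning). A multi-task
    # baseline would instead minimize the sum of the per-augmentation losses
    # on every batch.
    for task_name, augment in AUGMENTATION_TASKS.items():
        for _ in range(epochs_per_task):
            for x, _ in loader:
                x = x.to(device)
                loss = invariance_loss(encoder, x, augment)
                opt.zero_grad()
                loss.backward()
                opt.step()
        print(f"finished task: {task_name}")
    return encoder


if __name__ == "__main__":
    train_continually()
```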