Tensor robust principal component analysis (RPCA), which seeks to separate a low-rank tensor from its sparse corruptions, has become crucial in data science and machine learning, where tensor-structured data are increasingly prevalent. While powerful, existing tensor RPCA algorithms can be difficult to use in practice: their performance is sensitive to additional hyperparameters that are not straightforward to tune. In this paper, we describe a fast and simple self-supervised model for tensor RPCA based on deep unfolding that learns only four hyperparameters. Despite its simplicity, our model removes the need for ground-truth labels while matching or exceeding the performance of supervised deep unfolding. Furthermore, it can operate in extremely data-starved scenarios. We demonstrate these claims on a mix of synthetic data and real-world tasks, comparing against previously studied supervised deep-unfolding methods and Bayesian optimization baselines.
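To make the idea concrete, the sketch below shows a minimal matrix-RPCA analogue of deep unfolding: classical RPCA iterations (singular value thresholding for the low-rank part, soft-thresholding for the sparse part) are unrolled into a fixed number of "layers," each governed by a small number of scalar thresholds. In the paper's setting it is a handful of such scalars (four hyperparameters) that are learned; the function names, the matrix (rather than tensor) formulation, and the per-layer parameterization here are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def soft_threshold(X, tau):
    # Elementwise soft-thresholding: the proximal operator of the l1 norm,
    # which promotes sparsity in the corruption estimate S.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    # Singular value thresholding: the proximal operator of the nuclear
    # norm, which promotes a low-rank estimate L.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * soft_threshold(s, tau)) @ Vt

def unfolded_rpca(M, thetas):
    # Unrolled RPCA: each "layer" k applies SVT with threshold thetas[k][0]
    # and soft-thresholding with thetas[k][1]. In a deep-unfolding model
    # these few scalars would be the learned parameters (here they are
    # simply supplied by the caller).
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for tau_L, tau_S in thetas:
        L = svt(M - S, tau_L)       # low-rank update on the residual
        S = soft_threshold(M - L, tau_S)  # sparse update on the residual
    return L, S
```

A usage note: after the final layer, every entry of the residual `M - L - S` is bounded in magnitude by the last sparse threshold, since the soft-thresholding step absorbs everything above it into `S`.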