
Scaling and Scalability: Provable Nonconvex Low-Rank Tensor Estimation from Incomplete Measurements

Basic Information

DOI: --
Publication date: 2021-04
Journal: J. Mach. Learn. Res.
Impact factor: --
Corresponding authors: Tian Tong; Cong Ma; Ashley Prater-Bennette; Erin E. Tripp; Yuejie Chi
CAS journal tier: Other
Document type: --
Research area: --
MeSH terms: --
Keywords: --

Abstract

Tensors, which provide a powerful and flexible model for representing multi-attribute data and multi-way interactions, play an indispensable role in modern data science across various fields in science and engineering. A fundamental task is to faithfully recover the tensor from highly incomplete measurements in a statistically and computationally efficient manner. Harnessing the low-rank structure of tensors in the Tucker decomposition, this paper develops a scaled gradient descent (ScaledGD) algorithm to directly recover the tensor factors with tailored spectral initializations, and shows that it provably converges at a linear rate independent of the condition number of the ground truth tensor for two canonical problems -- tensor completion and tensor regression -- as soon as the sample size is above the order of $n^{3/2}$ ignoring other parameter dependencies, where $n$ is the dimension of the tensor. This leads to an extremely scalable approach to low-rank tensor estimation compared with prior art, which suffers from at least one of the following drawbacks: extreme sensitivity to ill-conditioning, high per-iteration costs in terms of memory and computation, or poor sample complexity guarantees. To the best of our knowledge, ScaledGD is the first algorithm that achieves near-optimal statistical and computational complexities simultaneously for low-rank tensor completion with the Tucker decomposition. Our algorithm highlights the power of appropriate preconditioning in accelerating nonconvex statistical estimation, where the iteration-varying preconditioners promote desirable invariance properties of the trajectory with respect to the underlying symmetry in low-rank tensor factorization.
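The preconditioning idea the abstract highlights is easiest to see in the simpler low-rank *matrix* completion setting, the matrix analog of the paper's Tucker-tensor algorithm. The sketch below is illustrative only, not the paper's implementation; the step size `eta`, iteration count, and sampling rate `p` are assumed demo values.

```python
import numpy as np

def scaled_gd_completion(Y_obs, mask, r, p, eta=0.3, iters=500):
    """ScaledGD sketch for low-rank matrix completion (matrix analog of the
    paper's Tucker-tensor algorithm; parameters are illustrative).
    Y_obs: observed entries (zeros elsewhere); mask: 0/1 observation pattern;
    r: target rank; p: sampling rate."""
    # Spectral initialization: top-r SVD of the rescaled observations,
    # mirroring the tailored spectral initialization used in the paper.
    U, s, Vt = np.linalg.svd(Y_obs / p, full_matrices=False)
    L = U[:, :r] * np.sqrt(s[:r])
    R = Vt[:r].T * np.sqrt(s[:r])
    for _ in range(iters):
        E = mask * (L @ R.T) - Y_obs          # residual on observed entries
        # The (R^T R)^{-1} and (L^T L)^{-1} preconditioners rescale each
        # factor's gradient direction, which is what removes the dependence
        # of the convergence rate on the ground truth's condition number.
        L_next = L - (eta / p) * E @ R @ np.linalg.inv(R.T @ R)
        R = R - (eta / p) * E.T @ L @ np.linalg.inv(L.T @ L)
        L = L_next
    return L, R
```

Without the two small r-by-r preconditioners, plain gradient descent on the factors slows down as the condition number grows; inserting them makes the iterates invariant to how the low-rank factorization is scaled, which is the symmetry-respecting property the abstract attributes to the iteration-varying preconditioners.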
References: 86
Cited by: 28

