We give lower bounds on the amount of memory required by one-pass streaming algorithms for solving several natural learning problems. In a setting where examples lie in $\{0,1\}^d$ and the optimal classifier can be encoded using $\kappa$ bits, we show that algorithms that learn from a near-minimal number of examples, $\tilde O(\kappa)$, must use $\tilde \Omega(d\kappa)$ bits of space. Our space bounds match the dimension of the ambient space of the problem's natural parametrization, even when that dimension is quadratic in the size of the examples and of the final classifier. For instance, in the setting of $d$-sparse linear classifiers over degree-2 polynomial features, for which $\kappa=\Theta(d\log d)$, our space lower bound is $\tilde\Omega(d^2)$. Our bounds degrade gracefully with the stream length $N$, generally taking the form $\tilde\Omega\left(d\kappa \cdot \frac{\kappa}{N}\right)$. Bounds of the form $\Omega(d\kappa)$ were known for learning parity and other problems defined over finite fields. Bounds that apply only in a narrow range of sample sizes are also known for linear regression. Ours are the first such bounds that apply over a wide range of input sizes to problems of the type commonly seen in recent learning applications.
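To see how the general trade-off specializes to the sparse degree-2 example above, one can plug in the stated parameters; this is a back-of-the-envelope instantiation under the abstract's own quantities, not an additional result:
\[
\tilde\Omega\!\left(d\kappa \cdot \frac{\kappa}{N}\right)
\;=\; \tilde\Omega(d\kappa)
\;=\; \tilde\Omega\!\left(d^{2}\log d\right)
\;=\; \tilde\Omega\!\left(d^{2}\right)
\qquad \text{when } \kappa = \Theta(d\log d) \text{ and } N = \tilde O(\kappa),
\]
where the last equality uses the fact that $\tilde\Omega$ suppresses polylogarithmic factors.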