We give lower bounds on the amount of memory required by one-pass streaming algorithms for solving several natural learning problems. In a setting where examples lie in $\{0,1\}^d$ and the optimal classifier can be encoded using $\kappa$ bits, we show that algorithms which learn using a near-minimal number of examples, $\tilde O(\kappa)$, must use $\tilde\Omega(d\kappa)$ bits of space. Our space bounds match the dimension of the ambient space of the problem's natural parametrization, even when that dimension is quadratic in the size of the examples and of the final classifier. For instance, in the setting of $d$-sparse linear classifiers over degree-2 polynomial features, for which $\kappa=\Theta(d\log d)$, our space lower bound is $\tilde\Omega(d^2)$. Our bounds degrade gracefully with the stream length $N$, generally taking the form $\tilde\Omega\left(d\kappa \cdot \frac{\kappa}{N}\right)$. Bounds of the form $\Omega(d\kappa)$ were previously known for learning parity and other problems defined over finite fields. For linear regression, bounds are known that apply only in a narrow range of sample sizes. Ours are the first such bounds that hold over a wide range of input sizes for problems of the type commonly seen in recent learning applications.
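As a quick illustration of how these statements fit together (an instantiation of the stated bounds, not a separate result): taking the general form of the bound with a near-minimal stream length $N = \tilde O(\kappa)$, and then specializing to the sparse-classifier example with $\kappa = \Theta(d\log d)$, recovers the quadratic bound quoted above:
\[
\tilde\Omega\!\left(d\kappa \cdot \frac{\kappa}{N}\right)
\;=\;
\tilde\Omega\!\left(d\kappa \cdot \frac{\kappa}{\tilde O(\kappa)}\right)
\;=\;
\tilde\Omega(d\kappa)
\;=\;
\tilde\Omega\!\left(d \cdot d\log d\right)
\;=\;
\tilde\Omega(d^2).
\]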