We consider the problem of one-dimensional function approximation using shallow neural networks (NN) with a rectified linear unit (ReLU) activation function and compare their training with traditional methods such as univariate Free Knot Splines (FKS). ReLU NNs and FKS span the same function space, and thus have the same theoretical expressivity. In the case of ReLU NNs, we show that their conditioning degrades rapidly as the width of the network increases. This often leads to significantly poorer approximations than the FKS representation, which remains well-conditioned as the number of knots increases. We leverage the theory of optimal piecewise linear interpolants to improve the training procedure for a ReLU NN. Using the equidistribution principle, we propose a two-level procedure for training the FKS by first solving the nonlinear problem of finding the optimal knot locations of the interpolating FKS. These optimal knots then provide a good starting point for training the weights of the FKS. The training of the FKS gives insights into how we can train a ReLU NN effectively to give an equally accurate approximation. More precisely, we show that combining the training of the ReLU NN with an equidistribution-based loss to find the breakpoints of the ReLU functions, together with preconditioning the ReLU NN approximation (to take an FKS form) to find the scalings of the ReLU functions, leads to a well-conditioned and reliable method for finding an accurate ReLU NN approximation to a target function. We test this method on a series of regular, singular, and rapidly varying target functions and obtain good results that realise the expressivity of the network in this case.
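To illustrate the first (interpolation) level of the two-level idea, the following is a minimal NumPy sketch, not the paper's implementation: it places knots by equidistributing a curvature-based monitor function and then fits hat-function (FKS) coefficients by linear least squares. The function names, the finite-difference estimate of u'', and the |u''|^(2/5) monitor exponent are illustrative assumptions.

```python
import numpy as np

def equidistribute_knots(u, a, b, n_knots, n_sample=2000, eps=1e-8):
    """Place knots so each subinterval carries an equal share of a
    curvature-based monitor function (equidistribution principle)."""
    x = np.linspace(a, b, n_sample)
    # Second derivative of the target estimated by finite differences.
    d2 = np.gradient(np.gradient(u(x), x), x)
    # Monitor density; |u''|^(2/5) is one common choice for piecewise
    # linear interpolation (an illustrative assumption here).
    rho = (np.abs(d2) + eps) ** 0.4
    # Cumulative monitor (trapezoid rule), normalised to [0, 1].
    cdf = np.concatenate([[0.0], np.cumsum(0.5 * (rho[1:] + rho[:-1]) * np.diff(x))])
    cdf /= cdf[-1]
    # Invert the cumulative monitor to get equidistributed knot locations.
    return np.interp(np.linspace(0.0, 1.0, n_knots), cdf, x)

def fit_fks_coefficients(u, knots, n_sample=2000):
    """Least-squares fit of hat-function (FKS) coefficients on fixed knots."""
    x = np.linspace(knots[0], knots[-1], n_sample)
    A = np.zeros((n_sample, len(knots)))
    for j in range(len(knots)):
        e = np.zeros(len(knots))
        e[j] = 1.0
        A[:, j] = np.interp(x, knots, e)  # piecewise linear hat basis
    c, *_ = np.linalg.lstsq(A, u(x), rcond=None)
    return c

# Example: a rapidly varying target function.
u = lambda x: np.tanh(50.0 * (x - 0.5))
knots = equidistribute_knots(u, 0.0, 1.0, 20)
coeffs = fit_fks_coefficients(u, knots)
```

In the paper's terminology, the knots produced this way would serve as the starting point for the subsequent nonlinear training of the FKS weights, and the same equidistribution idea motivates the breakpoint loss used when training the ReLU NN.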