
Equidistribution-based training of Free Knot Splines and ReLU Neural Networks


Basic information

DOI: --
Publication date: 2024
Journal:
Impact factor: --
Corresponding author: L. Kreusser
CAS journal tier:
Document type: --
Authors: Simone Appella; S. Arridge; Chris Budd; Teo Deveney; L. Kreusser
Research area: --
MeSH terms: --
Keywords: --
Source link: PubMed detail page

Abstract

We consider the problem of one-dimensional function approximation using shallow neural networks (NN) with a rectified linear unit (ReLU) activation function and compare their training with traditional methods such as univariate Free Knot Splines (FKS). ReLU NNs and FKS span the same function space, and thus have the same theoretical expressivity. In the case of ReLU NNs, we show that their conditioning degrades rapidly as the width of the network increases. This often leads to significantly poorer approximation, in contrast to the FKS representation, which remains well-conditioned as the number of knots increases. We leverage the theory of optimal piecewise linear interpolants to improve the training procedure for a ReLU NN. Using the equidistribution principle, we propose a two-level procedure for training the FKS by first solving the nonlinear problem of finding the optimal knot locations of the interpolating FKS. The optimal knots then act as a good starting point for training the weights of the FKS. The training of the FKS gives insights into how we can train a ReLU NN effectively to give an equally accurate approximation. More precisely, we show that combining the training of the ReLU NN with an equidistribution-based loss to find the breakpoints of the ReLU functions, together with preconditioning the ReLU NN approximation (to take an FKS form) to find the scalings of the ReLU functions, leads to a well-conditioned and reliable method of finding an accurate ReLU NN approximation to a target function. We test this method on a series of regular, singular, and rapidly varying target functions and obtain good results, realising the expressivity of the network in this case.
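The equidistribution principle referred to in the abstract places the FKS knots (and, analogously, the ReLU breakpoints) so that each subinterval carries an equal share of the integral of a monitor function derived from the target's curvature. The sketch below illustrates the idea on a rapidly varying target; the specific monitor M(x) ~ |u''(x)|^{1/2}, the regularisation eps, and the helper equidistributed_knots are illustrative assumptions, not the paper's implementation or loss function.

```python
import numpy as np

def equidistributed_knots(u, a, b, n_knots, n_fine=10_000, eps=1e-8):
    """Place knots so that each subinterval carries an equal share of the
    integral of a curvature-based monitor function M(x) ~ |u''(x)|**0.5.
    Sketch of the equidistribution principle; the paper's exact monitor
    function and regularisation may differ."""
    x = np.linspace(a, b, n_fine)
    y = u(x)
    d2 = np.gradient(np.gradient(y, x), x)            # finite-difference u''
    M = (np.abs(d2) + eps) ** 0.5                      # regularised monitor
    # Cumulative integral of M (trapezoidal rule), normalised to [0, 1].
    C = np.concatenate([[0.0], np.cumsum(0.5 * (M[1:] + M[:-1]) * np.diff(x))])
    C /= C[-1]
    # Invert C at equally spaced levels: equal "monitor mass" per interval.
    levels = np.linspace(0.0, 1.0, n_knots)
    return np.interp(levels, C, x)

if __name__ == "__main__":
    u = lambda x: np.tanh(50 * (x - 0.5))              # rapidly varying target
    knots = equidistributed_knots(u, 0.0, 1.0, 33)
    uniform = np.linspace(0.0, 1.0, 33)
    # Compare piecewise linear interpolation on adapted vs uniform knots.
    xs = np.linspace(0.0, 1.0, 5000)
    err_adapt = np.max(np.abs(u(xs) - np.interp(xs, knots, u(knots))))
    err_unif = np.max(np.abs(u(xs) - np.interp(xs, uniform, u(uniform))))
    print(f"uniform knots  L_inf error: {err_unif:.3e}")
    print(f"adapted knots  L_inf error: {err_adapt:.3e}")
```

On a target of this kind the adapted knots typically give a much smaller interpolation error than a uniform mesh with the same number of knots; per the abstract, such knots then serve as a starting point for training the FKS weights and, via the equidistribution-based loss and preconditioning, the ReLU NN.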

