We propose GRAph Neural Diffusion with a source term (GRAND++) for graph deep learning with a limited number of labeled nodes, i.e., a low labeling rate. GRAND++ is a class of continuous-depth graph deep learning architectures whose theoretical underpinning is the diffusion process on graphs with a source term. The source term guarantees two notable theoretical properties of GRAND++: (i) under the GRAND++ dynamics, the node representations do not converge to a single constant vector across all nodes even as time goes to infinity, which mitigates the over-smoothing issue of graph neural networks and enables graph learning with very deep architectures; and (ii) GRAND++ provides accurate classification even when trained with very limited labeled data. We experimentally verify these two advantages on a variety of graph deep learning benchmarks, showing significant improvements over many existing graph neural networks.
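For concreteness, a minimal sketch of the dynamics the abstract refers to, written in our own notation (the attention weights a(·,·), the labeled index set I, and the source vectors s_k are notational assumptions, not taken verbatim from the paper):

\[
  \frac{\partial \mathbf{x}_i(t)}{\partial t}
  = \sum_{j:(i,j)\in\mathcal{E}} a\!\big(\mathbf{x}_i(t),\mathbf{x}_j(t)\big)\,\big(\mathbf{x}_j(t)-\mathbf{x}_i(t)\big)
  \;+\; \sum_{k\in\mathcal{I}} \delta_{ik}\,\mathbf{s}_k .
\]

The first term is the attention-weighted graph diffusion driving the node features \(\mathbf{x}_i(t)\); the second is a source term supported on the labeled set \(\mathcal{I}\), where \(\delta_{ik}\) is the Kronecker delta and \(\mathbf{s}_k\) is a fixed source vector built from labeled node \(k\). With the source set to zero the dynamics reduce to plain diffusion, whose solution smooths toward a constant over all nodes as \(t \to \infty\); a nonzero source supported on the labeled nodes is what prevents this collapse.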