NVIDIA Tensor Cores and AMD Matrix Cores (together called Matrix Accelerators) are of growing interest in high-performance computing and machine learning owing to their high performance. Unfortunately, their numerical behaviors are not publicly documented: the number of extra precision bits maintained, the accumulation order of additions, and whether subnormal numbers are handled predictably during computation are all unspecified. This makes it impossible to reliably port code across these differing accelerators. This paper contributes a collection of {\em Feature Targeted Tests for Numerical Properties} (FTTN) that help determine these features across five floating-point formats and four rounding modes, plus additional tests that highlight rounding behaviors and the preservation of extra precision bits. To show the practical relevance of FTTN, we design a simple matrix-multiplication test informed by insights gathered from our feature tests. Executing this simple test on five platforms produced different answers: the V100, A100, and MI250X produced 0, the MI100 produced 255.875, and the Hopper H100 produced 191.875. Our matrix-multiplication tests employ patterns found in iterative refinement-based algorithms, highlighting the need to check for significant result variability when porting code across GPUs.
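As a concrete illustration of one such feature-targeted probe, the sketch below checks whether a Tensor Core carries extra precision bits (or reorders additions) inside an FP16 accumulator. This is a minimal sketch, assuming an NVIDIA GPU (sm\_70 or later) and the CUDA WMMA API; the input pattern and the \texttt{probe} kernel name are our own illustrative choices, not the paper's exact FTTN tests.

\begin{verbatim}
// Illustrative probe: multiply a row holding one large term (1.0) and
// fifteen half-ulp terms (2^-11) by a column of ones, accumulating in FP16.
// If each addition rounds immediately to FP16 (round-to-nearest-even) with
// the large term accumulated first, every 2^-11 increment is a tie that
// rounds back down, so C[0][0] == 1.0. If extra precision bits are carried
// internally (or the small terms are summed first), C[0][0] == 1.0078125,
// the FP16 rounding of 1 + 15 * 2^-11.
#include <cstdio>
#include <cmath>
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

__global__ void probe(const half *a, const half *b, half *c) {
    // One warp computes a 16x16x16 MMA with an FP16 accumulator.
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> fa;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> fb;
    wmma::fragment<wmma::accumulator, 16, 16, 16, half> fc;
    wmma::load_matrix_sync(fa, a, 16);
    wmma::load_matrix_sync(fb, b, 16);
    wmma::fill_fragment(fc, __float2half(0.0f));
    wmma::mma_sync(fc, fa, fb, fc);
    wmma::store_matrix_sync(c, fc, 16, wmma::mem_row_major);
}

int main() {
    half ha[256], hb[256], hc[256];
    for (int i = 0; i < 256; ++i) ha[i] = hb[i] = __float2half(0.0f);
    ha[0] = __float2half(1.0f);                      // A[0][0] = 1.0
    for (int k = 1; k < 16; ++k)
        ha[k] = __float2half(ldexpf(1.0f, -11));     // A[0][1..15] = 2^-11
    for (int k = 0; k < 16; ++k)
        hb[k] = __float2half(1.0f);                  // B column 0 = ones
    half *da, *db, *dc;
    cudaMalloc(&da, sizeof ha); cudaMalloc(&db, sizeof hb);
    cudaMalloc(&dc, sizeof hc);
    cudaMemcpy(da, ha, sizeof ha, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, sizeof hb, cudaMemcpyHostToDevice);
    probe<<<1, 32>>>(da, db, dc);                    // one full warp required
    cudaMemcpy(hc, dc, sizeof hc, cudaMemcpyDeviceToHost);
    printf("C[0][0] = %.10f\n", __half2float(hc[0])); // 1.0 vs. 1.0078125
    return 0;
}
\end{verbatim}

Both input patterns are exactly representable in FP16, so any deviation in the printed value isolates the accumulator's internal precision and ordering rather than input conversion error.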