Most modern high performance computing platforms can be described as clusters of multi-core compute nodes. The trend for compute nodes is towards greater numbers of lower-power cores, with a decreasing memory-to-core ratio. This imposes strong evolutionary pressure on numerical algorithms and software to utilise the available memory and network bandwidth efficiently. Unstructured finite element codes have long been parallelised effectively with domain decomposition methods using libraries such as the Message Passing Interface (MPI). However, many algorithmic and implementation optimisation opportunities arise when threading is used for intra-node parallelisation on the latest multi-core/many-core platforms, including reduced memory requirements, cache sharing, fewer mesh partitions and less MPI communication. While OpenMP is promoted as being easy to use and as allowing incremental parallelisation of codes, naive implementations frequently yield poor performance. In practice, as with MPI, the same care and attention to algorithm and hardware details must be exercised when programming with OpenMP. In this paper, we report progress on a hybrid OpenMP-MPI implementation of finite element matrix assembly within the unstructured finite element application Fluidity. The OpenMP parallel algorithm uses graph colouring to identify independent sets of elements that can be assembled simultaneously without race conditions. Unstructured finite element codes are well known to be memory bound; therefore, particular attention is paid to ccNUMA architectures, where data locality is essential for good intra-node scaling. Profiling and benchmark results on the latest Cray platforms show that the best performance within a node is achieved with pure OpenMP.
Keywords-Fluidity; FEM; OpenMP; MPI; ccNUMA; Graph Colouring
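To make the colouring-based assembly concrete, the following is a minimal sketch (not Fluidity's actual code) of colour-by-colour OpenMP element assembly in C. Elements carrying the same colour share no mesh nodes, so their contributions can be scattered into the global right-hand side concurrently without atomics or locks. The data structure, the `element_kernel` stub and all identifiers here are illustrative assumptions, not the paper's implementation.

```c
#include <omp.h>

typedef struct {
    int        nelem;    /* number of elements                          */
    int        nloc;     /* local nodes per element                     */
    const int *enlist;   /* element-node list, length nelem * nloc      */
    const int *colour;   /* colour of each element, 0 .. ncolour - 1    */
    int        ncolour;  /* number of colours produced by the colouring */
} mesh_t;

/* Hypothetical per-element kernel: fills the local contribution vector. */
static void element_kernel(const mesh_t *m, int e, double *local)
{
    for (int i = 0; i < m->nloc; ++i)
        local[i] = 1.0;                      /* placeholder physics */
}

void assemble_rhs(const mesh_t *m, double *global_rhs)
{
    /* Loop over colours sequentially; parallelise within each colour. */
    for (int c = 0; c < m->ncolour; ++c) {
        #pragma omp parallel for schedule(static)
        for (int e = 0; e < m->nelem; ++e) {
            if (m->colour[e] != c)
                continue;                    /* element belongs to another colour */
            double local[64];                /* assumes nloc <= 64 */
            element_kernel(m, e, local);
            /* Safe scatter: no other element of colour c touches these rows. */
            for (int i = 0; i < m->nloc; ++i)
                global_rhs[m->enlist[e * m->nloc + i]] += local[i];
        }
        /* Implicit barrier of the parallel-for separates successive colours. */
    }
}
```

On ccNUMA nodes, the achieved bandwidth of such a loop also depends on where `global_rhs` and the mesh arrays are first touched; a first-touch initialisation consistent with the assembly loop's thread schedule is one common way to keep data local to each socket.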