
Developing Hybrid OpenMP-MPI Parallelism for Fluidity-Next Generation Geophysical Fluid Modelling Technology


Basic Information

DOI:
--
Publication Year:
2012
Journal:
Impact Factor:
--
Corresponding Author:
M. Ashworth
CAS Journal Division:
Document Type:
--
Authors: Xiaohu Guo; G. Gorman; A. Sunderland; M. Ashworth
Research Area: --
MeSH Terms: --
Keywords: --
Source Link: PubMed detail page

Abstract

Most modern high performance computing platforms can be described as clusters of multi-core compute nodes. The trend for compute nodes is towards greater numbers of lower-power cores, with a decreasing memory-to-core ratio. This imposes strong evolutionary pressure on numerical algorithms and software to use the available memory and network bandwidth efficiently. Unstructured finite element codes have long been parallelised effectively with domain decomposition methods using libraries such as the Message Passing Interface (MPI). However, many algorithmic and implementation optimisation opportunities arise when threading is used for intra-node parallelisation on the latest multi-core/many-core platforms: for example, reduced memory requirements, cache sharing, fewer partitions, and less MPI communication. While OpenMP is promoted as being easy to use and as allowing incremental parallelisation of codes, naive implementations frequently yield poor performance. In practice, as with MPI, equal care and attention should be exercised over algorithm and hardware details when programming with OpenMP. In this paper, we report progress in implementing hybrid OpenMP-MPI parallelism for finite element matrix assembly within the unstructured finite element application software Fluidity. The OpenMP parallel algorithm uses graph colouring to identify independent sets of elements that can be assembled simultaneously with no race conditions. Unstructured finite element codes are well known to be memory bound; therefore, particular attention is paid to ccNUMA architectures, where data locality is particularly important for achieving good intra-node scaling characteristics. The profiling and benchmark results on the latest Cray platforms show that the best performance within a node is achieved by pure OpenMP.

Keywords: Fluidity; FEM; OpenMP; MPI; ccNUMA; graph colouring
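The colouring scheme described in the abstract can be illustrated with a minimal sketch (this is not Fluidity's actual implementation, and the mesh and function names are hypothetical): elements that share a mesh node would write to the same rows of the global matrix during assembly, so a greedy colouring of the element-conflict graph yields colour classes that can each be assembled in a single parallel loop with no race conditions.

```python
from collections import defaultdict

# Toy triangle-strip mesh: each element lists the global node IDs it touches.
elements = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (3, 4, 5)]

def colour_elements(elements):
    """Greedily colour elements so that no two elements sharing a node
    receive the same colour."""
    # Map each node to the elements touching it.
    node_to_elems = defaultdict(list)
    for e, nodes in enumerate(elements):
        for n in nodes:
            node_to_elems[n].append(e)
    # Two elements conflict if they share any node.
    neighbours = defaultdict(set)
    for elems in node_to_elems.values():
        for e in elems:
            neighbours[e].update(x for x in elems if x != e)
    # Greedy colouring: give each element the smallest colour not used
    # by an already-coloured neighbour.
    colours = {}
    for e in range(len(elements)):
        used = {colours[n] for n in neighbours[e] if n in colours}
        c = 0
        while c in used:
            c += 1
        colours[e] = c
    return colours

colours = colour_elements(elements)
# Elements of one colour form an independent set: in the paper's setting
# each colour class would be assembled by one OpenMP parallel loop, with
# a barrier (or new parallel region) between successive colours.
```

The number of colours bounds the number of sequential passes over the mesh, so in practice one would prefer a colouring heuristic that keeps the colour count small while keeping each class large enough to load-balance across threads.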
References (0)
Cited by (7)


M. Ashworth
Corresponding address:
--
Affiliation:
--
Email:
--