
Anatomical Attention Guided Deep Networks for ROI Segmentation of Brain MR Images


Basic Information

DOI:
10.1109/tmi.2019.2962792
Publication date:
2020-06-01
Impact factor:
10.6
Corresponding author:
Liu, Mingxia
CAS journal tier:
Engineering & Technology, Tier 1
Document type:
Article
Authors: Sun, Liang; Shao, Wei; Liu, Mingxia
Research area: --
MeSH terms: --
Keywords: --
Source link: PubMed detail page

Abstract

Brain region-of-interest (ROI) segmentation based on structural magnetic resonance imaging (MRI) scans is an essential step for many computer-aided medical image analysis applications. Due to low intensity contrast around ROI boundaries and large inter-subject variance, it remains a challenging task to effectively segment brain ROIs from structural MR images. Even though several deep learning methods for brain MR image segmentation have been developed, most of them do not incorporate shape priors to take advantage of the regularity of brain structures, thus leading to sub-optimal performance. To address this issue, we propose an anatomical attention guided deep learning framework for brain ROI segmentation of structural MR images, containing two subnetworks. The first one is a segmentation subnetwork, used to simultaneously extract discriminative image representations and segment ROIs for each input MR image. The second one is an anatomical attention subnetwork, designed to capture the anatomical structure information of the brain from a set of labeled atlases. To utilize the anatomical attention knowledge learned from atlases, we develop an anatomical gate architecture to fuse feature maps derived from a set of atlas label maps with those from the to-be-segmented image for brain ROI segmentation. In this way, the anatomical prior learned from atlases can be explicitly employed to guide the segmentation process for performance improvement. Within this framework, we develop two anatomical attention guided segmentation models, denoted as anatomical gated fully convolutional network (AG-FCN) and anatomical gated U-Net (AG-UNet), respectively. Experimental results on both the ADNI and LONI-LPBA40 datasets suggest that the proposed AG-FCN and AG-UNet methods achieve superior performance in ROI segmentation of brain MR images, compared with several state-of-the-art methods.
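The abstract describes an anatomical gate that fuses feature maps derived from atlas label maps with feature maps of the to-be-segmented image, so that atlas-derived anatomical priors act as attention over the image features. As a rough illustration only, the sketch below shows one way such a gated fusion could be written in PyTorch; the module name AnatomicalGate, the single 1x1x1 convolution, and the sigmoid-weighted blending are assumptions made for illustration and are not taken from the paper.

# Minimal sketch (not the authors' code) of a gated fusion between
# image features and atlas-derived features.
import torch
import torch.nn as nn

class AnatomicalGate(nn.Module):
    """Hypothetical gating block: atlas features modulate image features."""
    def __init__(self, channels):
        super().__init__()
        # 1x1x1 conv maps the concatenated features to a per-voxel gate
        self.gate_conv = nn.Conv3d(2 * channels, channels, kernel_size=1)

    def forward(self, image_feat, atlas_feat):
        # image_feat, atlas_feat: (batch, channels, D, H, W)
        gate = torch.sigmoid(
            self.gate_conv(torch.cat([image_feat, atlas_feat], dim=1))
        )
        # gated fusion: the atlas prior reweights the image features voxel-wise
        return gate * image_feat + (1 - gate) * atlas_feat

if __name__ == "__main__":
    # toy usage with random tensors standing in for real feature maps
    img = torch.randn(1, 16, 8, 32, 32)
    atl = torch.randn(1, 16, 8, 32, 32)
    fused = AnatomicalGate(16)(img, atl)
    print(fused.shape)  # torch.Size([1, 16, 8, 32, 32])

In this toy form the gate acts as a per-voxel attention map: where the learned gate is close to one the fused output follows the image branch, and where it is close to zero the atlas-derived features dominate.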
References (54)
Cited by (0)

