
Prostate MR image segmentation network with edge information enhancement

Objective Accurate segmentation of prostate images is essential for assessing patient health and formulating treatment plans. However, the traditional U-Net model suffers from overfitting and loss of edge information in prostate MR (magnetic resonance) image segmentation. To address these problems, an improved 2D U-Net segmentation model is proposed to enhance edge information and reduce the influence of noise, thereby improving prostate segmentation. Method To alleviate overfitting, the new model modifies the standard U-Net architecture by replacing ordinary convolutions with depthwise separable convolutions and redesigning the encoder and decoder structures, reducing the number of model parameters. To preserve edge information in the segmentation results, the decoder features are refined with an ECA (efficient channel attention) mechanism that amplifies and retains information on small-scale targets, and an edge information module and an edge information pyramid module are proposed to recover and enhance edge information, alleviating the edge degradation caused by repeated downsampling and the semantic gap between encoder and decoder features. An atrous spatial pyramid pooling (ASPP) module resamples the features and enlarges the receptive field to suppress feature noise. Result The effectiveness of the model is verified on the PROMISE 12 (prostate MR image segmentation 2012) dataset and compared with six U-Net-based image segmentation methods. The experiments show improvements in the Dice coefficient (DC), HD95 (95% Hausdorff distance), recall, Jaccard coefficient, and accuracy: the DC is 8.87% higher than that of U-Net, and the HD95 is 12.04 mm and 3.03 mm lower than those of U-Net++ and Attention U-Net, respectively. Conclusion A prostate MR image segmentation network with edge information enhancement (attention mechanism and marginal information fusion U-Net, AIM-U-Net) is proposed. The segmentation maps it produces contain rich edge and spatial information, and both its subjective results and objective metrics outperform those of comparable methods, helping to improve the accuracy of clinical diagnosis.
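The parameter saving from swapping ordinary convolutions for depthwise separable ones can be illustrated with a short PyTorch sketch. This is an illustration of the general technique, not the authors' released code; the BatchNorm/ReLU placement and the channel sizes are assumptions.

import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3 x 3 convolution followed by a 1 x 1 pointwise convolution.

    Replacing a standard 3 x 3 convolution with this pair reduces the weight
    count from C_in * C_out * 9 to C_in * 9 + C_in * C_out, which is the
    parameter-reduction idea described in the abstract.
    """

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        # One 3 x 3 filter per input channel (groups=in_channels).
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   padding=1, groups=in_channels, bias=False)
        # 1 x 1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1,
                                   bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# A 64 -> 128 channel block uses 64*9 + 64*128 = 8 768 weights, versus
# 64*128*9 = 73 728 for a standard 3 x 3 convolution.
block = DepthwiseSeparableConv(64, 128)
y = block(torch.randn(1, 64, 96, 96))   # -> torch.Size([1, 128, 96, 96])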
Prostate MR image segmentation network with edge information enhancement
Objective Prostate cancer, an epithelial malignancy arising in the prostate, is one of the most common malignant diseases. Early detection of a potentially cancerous prostate is important for reducing prostate cancer mortality. Magnetic resonance imaging (MRI) is one of the most commonly used imaging methods for examining the prostate in clinical practice and is widely used for the detection, localization, and segmentation of prostate cancer; it also supports the formulation of suitable treatment plans and postoperative records. In computer-aided diagnosis, extracting the prostate region from the image and computing the corresponding characteristics are often necessary for physiological analysis and pathological research to assist clinicians in making accurate judgments. Current methods for MR prostate segmentation can be divided into two categories: traditional methods and deep-learning-based methods. Traditional segmentation methods analyze features extracted from the image using image processing knowledge; their effectiveness depends on the quality of the extracted features, and they sometimes require manual interaction. In recent years, with the continuous development of computer technology, deep learning has been widely applied to image segmentation. Unlike visible-light images, medical images have special characteristics: a large grayscale range, unclear boundaries, and a relatively stable distribution of organs within the human body. Considering these characteristics, the fully convolutional U-Net was first proposed in 2015 as a neural network model for medical image segmentation. Compared with other networks, U-Net has clear advantages for medical image segmentation, but it still has weaknesses that must be overcome. On the one hand, medical image datasets are not large, whereas the traditional U-Net model has numerous parameters, which can easily lead to overfitting. On the other hand, edge information is lost during feature extraction, small-scale information about the target object is difficult to preserve, and the feature maps passed through U-Net's skip connections usually contain noise, which lowers segmentation accuracy. To solve these problems, this paper proposes an improved 2D U-Net prostate segmentation model, AIM-U-Net, which enhances the edge information between tissues and organs and reduces the influence of image noise, thereby improving prostate segmentation.

Method To address overfitting, we redesign the encoder and decoder structure of the original U-Net and replace ordinary convolutions with depthwise separable convolutions, which effectively reduce the number of network parameters and thereby improve the computational efficiency, generalization ability, and accuracy of the model. In addition, we refine the decoder features with an efficient channel attention (ECA) module to amplify and retain information on small-scale targets. Edge information can provide fine-grained constraints to guide feature extraction during segmentation. The features of shallow coding units retain sufficient edge information because of their high resolution, while the features extracted by deep coding units capture global information. We therefore design an edge information module (EIM) that fuses the shallow encoder features with high-level semantic information to obtain and enhance the edge information, so that the resulting feature map carries both rich edge information and high-level semantics. The EIM has two main functions: it provides edge information to guide the segmentation process in the decoding path, and it supervises the edge detection loss of the early convolutional layers through a deep supervision mechanism. Moreover, the features extracted by different modules have their own advantages. The features of deep coding units capture the global, high-level discriminative information of the prostate, which is extremely helpful for segmenting small lesions, while the multi-scale features of the decoding units contain rich spatial semantic information, which improves segmentation accuracy. Building on the fused representation produced by the EIM, we design an edge information pyramid module (EIPM) that comprehensively exploits these complementary sources by fusing the edge information, the deep features of the coding units, and the multi-scale features of the decoding units, so that the segmentation model can understand the image more comprehensively and improve the accuracy and robustness of segmentation. The EIPM guides the segmentation process in the decoding path by fusing multi-scale information and supervises the region segmentation loss of the decoder's convolutional layers through the deep supervision mechanism. In neural network segmentation tasks, feature maps obtained by feature fusion usually contain noise, which decreases segmentation accuracy. To solve this problem, we use atrous spatial pyramid pooling (ASPP) to process the enhanced edge feature map produced by the EIPM and concatenate the resulting multi-scale features. ASPP resamples the fused feature map through dilated convolutions with different dilation rates, which captures multi-scale context, suppresses the noise of multi-scale features, and yields a more accurate representation of the prostate. The segmentation result is then obtained by a 1 × 1 convolution with one output channel, whose spatial dimensions match those of the input image. Finally, to accelerate convergence, we design a deep supervision mechanism implemented with 1 × 1 convolutions and activation functions. For the loss function of the whole model, we use a hybrid of Dice loss and cross-entropy loss; the total loss comprises the final segmentation loss, the edge segmentation loss, and four region segmentation losses.

Result We use the PROMISE 12 dataset to verify the effectiveness of the model and compare the results with those of six other U-Net-based medical image segmentation methods. The experimental results show marked improvements in Dice coefficient (DC), 95% Hausdorff distance (HD95), recall, Jaccard coefficient (Jac), and accuracy. The DC is 8.87% higher than that of U-Net, and the HD95 is 12.04 mm and 3.03 mm lower than those of U-Net++ and Attention U-Net, respectively.

Conclusion The edges of the prostate segmented by the proposed AIM-U-Net are more refined than those produced by other methods. By exploiting the EIM and the EIPM, AIM-U-Net extracts more edge details of the prostate and effectively suppresses similar background information and noise surrounding the prostate.
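A minimal sketch of the ECA refinement applied to the decoder features, following the published ECA-Net design; the adaptive kernel-size rule and the exact point where AIM-U-Net inserts the module are assumptions here, not details taken from the paper.

import math
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient channel attention: channel weights come from a 1D convolution
    over the globally pooled channel descriptor, avoiding the dimensionality
    reduction used in SE blocks."""

    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        # Kernel size adapts to the channel count, as in the ECA-Net paper.
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (N, C, H, W) -> (N, C, 1, 1) -> (N, 1, C) for the 1D convolution.
        w = self.pool(x).squeeze(-1).transpose(-1, -2)
        w = self.sigmoid(self.conv(w)).transpose(-1, -2).unsqueeze(-1)
        return x * w   # rescale each channel of the decoder feature map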
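The ASPP resampling step can likewise be sketched as parallel dilated convolutions over the EIPM output whose responses are concatenated and fused; the dilation rates (1, 6, 12, 18) below are typical values and an assumption, not taken from the paper.

import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Atrous spatial pyramid pooling: parallel dilated convolutions enlarge
    the receptive field, and a 1 x 1 convolution fuses the concatenated
    multi-scale responses of the enhanced edge feature map."""

    def __init__(self, in_channels: int, out_channels: int, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=3,
                          padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_channels),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.project = nn.Conv2d(out_channels * len(rates), out_channels,
                                 kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))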
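Finally, a hedged sketch of the hybrid Dice plus cross-entropy objective with the deeply supervised terms listed in the abstract. Equal weighting of the final, edge, and four region losses is an assumption, and region_outs is a hypothetical name for the four auxiliary decoder outputs.

import torch
import torch.nn as nn

def dice_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss for a single-channel (binary) prediction of shape (N, 1, H, W)."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(2, 3))
    union = prob.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

bce = nn.BCEWithLogitsLoss()

def hybrid_loss(logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Dice + cross-entropy, applied to every supervised output."""
    return dice_loss(logits, target) + bce(logits, target)

def total_loss(final_out, edge_out, region_outs, mask, edge_mask):
    """Final segmentation loss + edge loss + four deeply supervised region losses.
    Auxiliary outputs are assumed to be already upsampled to the mask resolution."""
    loss = hybrid_loss(final_out, mask) + hybrid_loss(edge_out, edge_mask)
    for r in region_outs:
        loss = loss + hybrid_loss(r, mask)
    return loss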

medical image segmentation; prostate; magnetic resonance images (MRI); U-Net; edge information

张蝶、黄慧、马燕、黄丙仓、陆炜平


College of Information, Mechanical and Electrical Engineering, Shanghai Normal University, Shanghai 201400, China

Department of Radiology, Gongli Hospital of Pudong New Area, Shanghai 200120, China

medical image segmentation; prostate; MR image; U-Net; edge information

Young Scientists Fund of the National Natural Science Foundation of China

61501297

2024

Journal of Image and Graphics
Sponsored by the Institute of Remote Sensing Applications, Chinese Academy of Sciences; the China Society of Image and Graphics; and the Institute of Applied Physics and Computational Mathematics, Beijing


Indexed by CSTPCD and the Peking University Core Journals list
Impact factor: 1.111
ISSN: 1006-8961
Year, volume (issue): 2024, 29(3)