Prostate MR image segmentation network with edge information enhancement
Objective Prostate cancer, an epithelial malignancy arising in the prostate, is one of the most common malignant diseases. Early detection of a potentially cancerous prostate is important for reducing prostate cancer mortality. Magnetic resonance imaging (MRI) is one of the most commonly used imaging methods for examining the prostate in clinical practice and is widely used for the detection, localization, and segmentation of prostate cancer; it is also important for formulating suitable medical plans for patients and for postoperative records. In computer-aided diagnosis, extracting the prostate region from the image and computing the corresponding characteristics are often necessary for physiological analysis and pathological research to assist clinicians in making accurate judgments. Current methods for MRI prostate segmentation can be divided into two categories: traditional methods and deep-learning-based methods. Traditional segmentation methods are based on the analysis of features extracted from the image using image-processing knowledge. The effect of this kind of method depends on the quality of the extracted features, and such methods sometimes require manual interaction. In recent years, with the continuous development of computer technology, deep learning has been widely applied to image segmentation. Unlike visible-light images, medical images have special characteristics: a large grayscale range, unclear boundaries, and a relatively stable distribution of organs within the human body. Considering these characteristics, the fully convolutional U-Net was proposed in 2015 as a neural network model for medical image segmentation. Compared with other networks, U-Net has obvious advantages for medical image segmentation, but it still has weaknesses that must be overcome. On the one hand, medical image datasets are not large, whereas the traditional U-Net model has numerous parameters, which can easily lead to overfitting. On the other hand, edge information is lost during feature extraction, and small-scale information about the target object is difficult to preserve. Moreover, the feature maps obtained through U-Net's skip connections usually contain noise, which lowers segmentation accuracy. To solve these problems, this paper proposes an improved U-Net 2D prostate segmentation model, AIM-U-Net, which enhances the edge information between tissues and organs. AIM-U-Net can also reduce the influence of image noise, thereby improving prostate segmentation.

Method To address the overfitting problem, we redesign the encoder and decoder of the original U-Net and replace ordinary convolution with depthwise separable convolution. Depthwise separable convolution effectively reduces the number of parameters in the network, thereby improving the computational efficiency, generalization ability, and accuracy of the model. In addition, we refine the decoder features with an efficient channel attention (ECA) module to amplify and retain information on small-scale targets. Moreover, edge information can provide fine-grained constraints to guide feature extraction during segmentation. The features of shallow coding units retain sufficient edge information owing to their high resolution, while the features extracted by deep coding units capture global information.
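The abstract does not give implementation details, but the two building blocks named above are standard. The following PyTorch sketch shows a minimal depthwise separable convolution block and an ECA-style channel attention gate of the kind the Method describes; the class names, channel arguments, and kernel size are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution: a per-channel (depthwise) 3x3 conv
    followed by a 1x1 pointwise conv, cutting parameters versus a dense conv."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class ECA(nn.Module):
    """Efficient channel attention: global average pooling, a 1-D convolution
    over the channel descriptor, and a sigmoid gate that rescales channels."""
    def __init__(self, k_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # (N, C, H, W) -> (N, C, 1, 1) channel descriptor
        y = self.pool(x)
        # 1-D convolution across the channel dimension
        y = self.conv(y.squeeze(-1).transpose(-1, -2)).transpose(-1, -2).unsqueeze(-1)
        return x * self.sigmoid(y)
```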
Therefore, we design an edge information module (EIM) that integrates the shallow features of the encoder with high-level semantic information to obtain and enhance edge information, so the resulting feature map contains both rich edge information and high-level semantic information. The EIM has two main functions. First, it provides edge information to guide the segmentation process in the decoding path. Second, the edge detection loss of the early convolutional layers is supervised through a deep supervision mechanism. Moreover, the features extracted by different modules have their own advantages. The features of the deep coding unit capture the global, high-level discriminative information of the prostate, which is extremely helpful for segmenting small lesions. The multi-scale features of the decoding units carry rich spatial semantic information, which improves segmentation accuracy. The fused features obtained by the EIM contain rich edge information and high-level semantic information. Therefore, we design an edge information pyramid module (EIPM), which comprehensively exploits these different sources of information by fusing the edge information, the deep features of the coding unit, and the multi-scale features of the decoding units, so that the segmentation model can understand the image more comprehensively and improve the accuracy and robustness of segmentation. The EIPM guides the segmentation process in the decoding path by fusing multi-scale information and supervises the region segmentation loss of the decoder's convolutional layers through the deep supervision mechanism. In neural network segmentation tasks, the feature map obtained by feature fusion usually contains noise, which decreases segmentation accuracy. To solve this problem, we use atrous spatial pyramid pooling (ASPP) to process the enhanced edge feature map produced by the EIPM, and the resulting multi-scale features are concatenated. ASPP resamples the fused feature map through dilated convolutions with different dilation rates, which captures multi-scale context information, suppresses the noise in the multi-scale features, and yields a more accurate representation of the prostate. The segmentation result is then obtained by a 1 × 1 convolution with one output channel, producing an output with the same spatial dimensions as the input image. Finally, to accelerate network convergence, we design a deep supervision mechanism, implemented through 1 × 1 convolutions and activation functions, to improve the convergence speed of the model. For the loss function of the whole model, we use a hybrid of Dice loss and cross-entropy loss. The total loss of the model comprises the final segmentation loss, the edge segmentation loss, and four region segmentation losses.

Result We use the PROMISE12 dataset to verify the effectiveness of the model and compare the results with those of six other U-Net-based medical image segmentation methods. The experimental results show that the segmented images are remarkably improved in terms of the Dice coefficient (DC), 95% Hausdorff distance (HD95), recall, Jaccard coefficient (Jac), and accuracy. The DC is 8.87% higher than that of U-Net, and the HD95 is 12.04 mm and 3.03 mm lower than those of U-Net++ and Attention U-Net, respectively.

Conclusion The edge of the prostate segmented by the proposed AIM-U-Net is more refined than that produced by other methods. AIM-U-Net extracts more edge details of the prostate by utilizing the EIM and the EIPM and effectively suppresses similar background information and the noise surrounding the prostate.
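As an illustration of the hybrid loss described in the Method section, the sketch below combines Dice loss and cross-entropy loss and sums the final segmentation term, the edge term, and the deeply supervised region terms. It is a minimal sketch, not the authors' released code: it assumes a single-channel (binary) output with sigmoid activation, that the side outputs are already resized to the mask resolution, and that all loss terms are weighted equally, none of which is specified in the abstract.

```python
import torch
import torch.nn.functional as F

def dice_loss(prob, target, eps=1e-6):
    """Soft Dice loss for a single-channel probability map."""
    prob = prob.flatten(1)
    target = target.flatten(1)
    inter = (prob * target).sum(dim=1)
    union = prob.sum(dim=1) + target.sum(dim=1)
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

def hybrid_loss(logits, target):
    """Hybrid Dice + binary cross-entropy loss computed on raw logits."""
    prob = torch.sigmoid(logits)
    return dice_loss(prob, target) + F.binary_cross_entropy_with_logits(logits, target)

def total_loss(final_logits, edge_logits, region_logits_list, mask, edge_mask):
    """Total training loss: final segmentation loss + edge loss (EIM deep
    supervision) + deeply supervised region losses (four side outputs).
    Equal weighting is assumed; the abstract does not state the weights."""
    loss = hybrid_loss(final_logits, mask)
    loss = loss + hybrid_loss(edge_logits, edge_mask)
    for side_logits in region_logits_list:
        loss = loss + hybrid_loss(side_logits, mask)
    return loss
```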
Keywords: medical image segmentation; prostate; magnetic resonance images (MRI); U-Net; edge information