
Fundus Microvascular Image Segmentation Method Based on a Parallel U-Net Model

The fundus vasculature is the only tissue that can be directly observed noninvasively in medicine. Fundus images not only directly reflect ocular disease but also have clinical value for monitoring systemic vascular diseases. Retinal vessel segmentation is a fundamental task in intelligent medical diagnosis based on fundus images. To address the low contrast, unclear boundaries, and low segmentation sensitivity of microvessels in fundus images, this paper designs a parallel-network microvessel segmentation model based on an improved U-Net, consisting of a main network and an auxiliary microvessel feature extraction network. A morphological image processing method is designed to obtain microvessel labels and improve microvessel feature extraction. To enrich the contextual information of the feature space, a multiscale feature shuffling and fusion module is introduced into the main network to fuse microvessel feature information into the main network's feature stream, enhancing its feature representation and improving microvessel segmentation sensitivity. Evaluation on the public DRIVE, CHASE_DB1, and STARE datasets shows that the proposed method performs well on fundus microvessel segmentation, achieving accuracies of 0.9710, 0.9764, and 0.9768 on the three datasets, respectively.
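The abstract does not specify which morphological operations produce the microvessel labels. One common construction, sketched below as an assumption rather than the paper's actual method, is a morphological top-hat of the gold-standard vessel mask: an opening with a small structuring element erases structures thinner than the element, so subtracting the opening from the mask keeps only the thin vessels. The `radius` parameter and the pure-NumPy 3x3 structuring element are illustrative choices.

```python
import numpy as np

def _shift_reduce(mask, erode):
    """One 3x3 morphological step: erosion if erode is True, else dilation."""
    h, w = mask.shape
    p = np.pad(mask, 1, constant_values=False)
    out = np.ones((h, w), bool) if erode else np.zeros((h, w), bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            win = p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            out = (out & win) if erode else (out | win)
    return out

def opening(mask, radius=1):
    """Erosion then dilation; removes structures thinner than ~2*radius+1 px."""
    m = mask.astype(bool)
    for _ in range(radius):
        m = _shift_reduce(m, True)   # erosion
    for _ in range(radius):
        m = _shift_reduce(m, False)  # dilation
    return m

def microvessel_label(vessel_mask, radius=1):
    """Top-hat: keep only what the opening removes, i.e. the thin vessels."""
    m = vessel_mask.astype(bool)
    return m & ~opening(m, radius)
```

For example, applied to a mask containing a 7x7 blob and a one-pixel-wide line, `microvessel_label(mask, radius=1)` keeps only the line, since the 3x3 opening restores the blob but erases the line entirely.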
Fundus Microvascular Image Segmentation Method Based on Parallel U-Net Model
Objective The fundus is the only part of the human body where arteries, veins, and capillaries can be directly observed. Information on the vascular structure of the retina plays an important role in the diagnosis of fundus diseases and is closely related to systemic vascular diseases such as diabetes, hypertension, and cardiovascular and cerebrovascular diseases. Accurate segmentation of blood vessels in retinal images aids in analyzing the geometric parameters of retinal blood vessels and, consequently, in evaluating systemic diseases. Deep learning algorithms have strong adaptability and generalization and have been widely used for fundus retinal blood vessel segmentation in recent years. Digital image processing based on deep learning can extract blood vessels from fundus images quickly; however, the contrast of fundus images is mostly low at the boundaries of blood vessels and the microvasculature, so vessel extraction errors are large. In particular, the microvasculature, which is similar in color to the background and has a smaller diameter, is especially difficult to separate from the background. To solve this problem, this study improves the classical U-Net for medical-image semantic segmentation. To effectively extract the spatial context information of color fundus images, a multiscale feature shuffling and fusion module is designed to alleviate the locality of feature extraction by the convolution kernel. Moreover, to address the low contrast of microvessels in color fundus images, an auxiliary microvessel feature extraction network is designed to help the network learn more detailed microvessel information and improve its vessel segmentation performance.

Methods A microvascular segmentation model based on a parallel U-Net (MPU-Net) was designed to address the loss of microvascular detail and the limitations of convolution kernels. The U-Net model was improved in two ways. First, the U-Net was paralleled with an auxiliary network for microvascular feature extraction (Mic-Net). Microvascular labels were obtained from the gold-standard fundus vessel images via morphological processing and were used by the auxiliary network to learn microvascular feature information. Second, a multiscale feature shuffling and fusion module (MSF) was introduced into the main network. By learning feature information from multiple receptive fields, the module alleviates the spatial limitations of the convolution kernels; in addition, a channel-shuffling mechanism increases the interaction between channels to better integrate features from different receptive field sizes with the microvascular features. MPU-Net comprises two parallel U-Net branches: the main network and the auxiliary microvascular feature extraction network. The branch that uses the whole-vessel label to compute the loss function is the main network, whereas Mic-Net uses the microvessel label. Each branch has one fewer sampling layer than the standard U-Net architecture to reduce the loss of detail. The MSF module was introduced into the main network to alleviate the locality of convolutional feature extraction and to fuse microvessel feature information into the main network more effectively. The module has two input features: the first is the encoder output feature, which contains more spatial detail and represents thick blood vessels better; the other is the decoder feature, or the microvascular feature output by the decoder of Mic-Net, which contains more high-level semantic information.

Results and Discussions Three publicly available datasets, DRIVE, CHASE_DB1, and STARE, were used to validate the proposed MPU-Net. The comparison results (see Tables 1, 2, and 3) show that MPU-Net performs well in terms of accuracy. As presented in Table 1, for the DRIVE test set, the accuracy, sensitivity, specificity, and AUC of MPU-Net are 0.9710, 0.8243, 0.9853, and 0.9889, respectively. Compared with an existing segmentation method (TDCAU-Net), MPU-Net obtains the highest accuracy, sensitivity, specificity, and AUC, improved by 0.0154, 0.0056, 0.0097, and 0.0094, respectively. Further, compared with DG-Net, which exhibits better overall segmentation performance, MPU-Net increases these values by 0.0106, 0.0629, 0.0016, and 0.0043, respectively. These results indicate that MPU-Net performs well on the DRIVE dataset and that microvascular feature extraction and multiscale feature shuffling and fusion help improve its vessel segmentation accuracy. As presented in Table 2, for the CHASE_DB1 test set, the accuracy, sensitivity, specificity, and AUC of MPU-Net are 0.9764, 0.8593, 0.9844, and 0.9913, respectively. Compared with TDCAU-Net, MPU-Net obtains the highest accuracy, sensitivity, and AUC, increased by 0.0026, 0.0350, and 0.0035, respectively. Further, compared with ACCA-MLA-D-U-Net, which exhibits better sensitivity, it increases these values by 0.0091, 0.0191, and 0.0039, respectively. These results show that MPU-Net segments the CHASE_DB1 dataset well; although its specificity is slightly lower than that reported by Mao et al., its sensitivity and AUC are higher by 0.0352 and 0.0020, respectively. As shown in Table 3, for the STARE test set, MPU-Net achieves 0.9768, 0.7844, 0.9907, and 0.9905 for accuracy, sensitivity, specificity, and AUC, respectively. Compared with an existing segmentation method (LUVS-Net), MPU-Net obtains the highest accuracy, specificity, and AUC, increased by 0.0015, 0.0046, and 0.1718, respectively. Further, compared with CS2-Net, which has the best sensitivity, it increases these values by 0.0098, 0.0094, and 0.0030, respectively. These results show that MPU-Net is better than existing mainstream methods in terms of accuracy, specificity, and AUC, but its sensitivity is not sufficiently good: a gap remains relative to CS2-Net, although the other indicators are better than those of CS2-Net. This indicates that on the STARE dataset the model is significantly affected by the imbalance between vessel and background pixels and improves specificity at the expense of sensitivity. Nevertheless, judging by the overall evaluation indices, namely accuracy and AUC, the MPU-Net model performs better, and in terms of overall segmentation performance MPU-Net is superior to existing mainstream methods on STARE. This shows that microvascular feature extraction and multiscale feature shuffling and fusion benefit overall segmentation performance on the STARE dataset. Across the three datasets, MPU-Net outperforms existing mainstream methods in terms of accuracy and AUC, indicating that the proposed method improves the model's overall segmentation performance and has a certain generalization ability. On both DRIVE and CHASE_DB1, the sensitivity is also superior to existing mainstream methods, indicating that MPU-Net can further improve vessel segmentation sensitivity. Thus, this study effectively improves the vessel segmentation performance for color fundus images from the perspectives of microvascular feature extraction and multiscale feature shuffling and fusion.

Conclusions From the perspective of retinal vessel segmentation, microvascular lesions have important reference value for the diagnosis of systemic vascular diseases, yet microvessel segmentation remains difficult. Therefore, this study examines the shortcomings of deep convolutional neural networks for microvascular segmentation and proposes a parallel-network microvascular segmentation model based on U-Net for vessel segmentation tasks. To alleviate the limitations of feature extraction in convolutional neural networks, a multiscale feature shuffling and fusion module exploits the feature information extracted by the network, and the continuity of vessel segmentation is effectively improved by increasing the interaction between channels and combining spatial multiscale information. To alleviate the loss of detail caused by the pooling operations in the U-Net encoder, an auxiliary microvascular feature extraction network is proposed to further extract microvascular feature information. Test results on the DRIVE, CHASE_DB1, and STARE datasets demonstrate that the proposed network effectively improves vessel segmentation performance compared with existing high-performing networks. In the future, the auxiliary microvascular feature extraction network should be studied further to extract more refined and comprehensive microvascular features.
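The channel-shuffling mechanism in the MSF module is not detailed on this page; the standard ShuffleNet-style shuffle is one plausible reading. The sketch below, a NumPy illustration under that assumption, interleaves the channels of two concatenated feature groups (e.g., main-branch and microvessel-branch features) so that a subsequent fusion convolution sees channels from both branches side by side.

```python
import numpy as np

def channel_shuffle(x, groups):
    """ShuffleNet-style channel shuffle on a (C, H, W) feature map.

    Splits C channels into `groups` groups, then interleaves them:
    with groups=2, channel order [0, 1, 2, 3] becomes [0, 2, 1, 3].
    """
    c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    return (x.reshape(groups, c // groups, h, w)
             .transpose(1, 0, 2, 3)   # swap group and within-group axes
             .reshape(c, h, w))

# Hypothetical fusion step: concatenate two branch features along the
# channel axis and shuffle so their channels interleave before mixing.
main_feat = np.zeros((2, 4, 4))            # stand-in for main-branch features
micro_feat = np.ones((2, 4, 4))            # stand-in for microvessel features
fused = channel_shuffle(np.concatenate([main_feat, micro_feat]), groups=2)
```

After the shuffle, `fused` alternates main-branch and microvessel channels, so even a grouped or 1x1 convolution that follows would mix information from both branches; in the actual MSF module the shuffled features would then pass through such a fusion convolution.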

retinal vessel segmentation; microvascular feature extraction; deep learning; morphological processing; multi-scale feature mixing and fusion

Liu Xinjuan, Han Xu, Fang Erxi


School of Electronic and Information Engineering, Soochow University, Suzhou 215006, Jiangsu, China


2024

Chinese Journal of Lasers
Sponsored by the Chinese Optical Society and the Shanghai Institute of Optics and Fine Mechanics, CAS


Indexed in: CSTPCD; Peking University Core Journals
Impact factor: 2.204
ISSN:0258-7025
Year, Volume (Issue): 2024, 51(21)