
Dual-Energy CT Base Material Decomposition Method Based on Multi-Channel Cross-Convolution UCTransNet

We propose an image-domain dual-material decomposition method based on a multi-channel cross-convolution UCTransNet (MC-UCTransNet). Built on the UCTransNet architecture, the network employs a channel cross-fusion transformer and a channel cross-attention module to improve base material decomposition, realizing an end-to-end double-input-double-output mapping. These two modules better capture complex inter-channel correlations, allowing fuller feature extraction and fusion and enabling information exchange between the base material generation paths. To further improve fitting performance, the network is trained with a hybrid loss and a Sigmoid-based normalization method. Experimental results show that, in the bone and soft-tissue/iodine base material decomposition tasks, the proposed method obtains high-quality base material images; compared with the competing methods, the decomposed images perform better in accuracy and in noise and artifact suppression.
Objective Dual-energy computed tomography (DECT) is a medical imaging technology that provides richer tissue contrast and material decomposition capability by simultaneously acquiring X-ray absorption information at two different energy levels, and it is increasingly widely used. In DECT, the scanned object can be decomposed into different base material components, such as bone and soft tissue, based on the energy-dependent absorption differences of the materials. However, accurate decomposition and reconstruction of base material images remain challenging owing to factors such as noise, artifacts, and material overlap. Existing base material decomposition methods can fall short in complex scenarios: overlapping materials may not be decomposed accurately, the results are vulnerable to noise interference, and the image quality is often poor. To address these problems, we propose a new base material decomposition method that aims to improve the quality and accuracy of base material decomposition in DECT imaging.

Methods We propose a method based on the multi-channel cross-convolution UCTransNet (MC-UCTransNet), which performs decomposition by fitting the mapping function between dual-energy images and base material images. The network adopts a double-in-double-out architecture built on UCTransNet. During training, the reference decomposition images serve as labels, and a pair of dual-energy images, concatenated into multi-channel form, serves as the input; this multi-channel structure realizes information exchange between the two material generation paths inside the network. A channel cross-fusion transformer and a channel cross-attention module are employed to improve base material decomposition, realizing an end-to-end, double-input-double-output mapping. These two modules better capture the complex inter-channel correlations, allowing fuller feature extraction and fusion and enabling information exchange between the base material generation paths. To improve fitting performance, the network is trained with a hybrid loss. In addition, to better accommodate the particularities of CT image data, the model preprocesses the network input with a Sigmoid-based normalization method.

Results and Discussions To verify the decomposition accuracy of each method, we compare not only the base material images decomposed by the various methods but also the low-energy images re-synthesized from those base material images against the original low-energy image; the resulting difference maps give an intuitive view of each method's decomposition quality. The experimental results show that the proposed method obtains high-quality base material images. Compared with the competing methods, the decomposed images perform better in accuracy and in noise and artifact suppression. The ablation experiments further demonstrate the effectiveness of the attention mechanism, the hybrid loss, and the Sigmoid normalization in this task. The attention mechanism enables the network to better capture key image features and improves decomposition accuracy. The hybrid loss, combining mean absolute error (MAE) and the structural similarity index measure (SSIM), improves the network's decomposition performance. The Sigmoid normalization better accommodates the particularities of CT image data: while preserving the distribution characteristics of the data, it reduces the interference of abnormal values and improves the stability and accuracy of the model. The loss and peak signal-to-noise ratio (PSNR) of the proposed method are superior on both the training and validation sets, with fast convergence and good stability, and the method also decomposes well on different test sets, showing strong generalization ability. This indicates that the dual-energy MC-UCTransNet method is highly practical for the base material decomposition task in DECT imaging.

Conclusions Aiming to improve the quality and accuracy of base material decomposition in DECT, we make notable progress by proposing a dual-material decomposition method based on MC-UCTransNet. Our study innovatively integrates multi-channel cross-convolution with cross-attention modules in MC-UCTransNet to better capture complex inter-channel correlations and realize information exchange between the base material generation paths. Moreover, the multi-channel cross structure avoids using separate networks to extract high- and low-energy information, making the network model more compact. We further improve the model's fitting performance with the hybrid loss and the Sigmoid-based normalization. The experimental results show that the proposed method delivers a promising improvement in the bone and soft-tissue/iodine base material decomposition tasks.
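The double-in-double-out arrangement described in Methods — a low-/high-energy image pair concatenated into one multi-channel input, with one output channel per base material — can be sketched as follows. This is a minimal illustration of the data layout only; the function names and nested-list image representation are assumptions, not the paper's implementation.

```python
def stack_dual_energy(low_img, high_img):
    """Stack a low-/high-energy CT slice pair (each an H x W nested
    list) into one multi-channel input of shape (2, H, W):
    channel 0 = low-energy image, channel 1 = high-energy image."""
    assert len(low_img) == len(high_img)        # same height
    assert len(low_img[0]) == len(high_img[0])  # same width
    return [low_img, high_img]

def split_base_materials(output):
    """Split a (2, H, W) network output into its two base-material
    images, e.g. bone and soft-tissue/iodine."""
    return output[0], output[1]
```

Because both energy channels travel through one network, the channel cross-fusion and cross-attention modules can exchange information between the two generation paths instead of training a separate network per material.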
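The training details above — Sigmoid-based input normalization and a hybrid MAE/SSIM loss — can be sketched as below. The `center`, `width`, and `alpha` parameters are illustrative placeholders (not values from the paper), and the SSIM here is a simplified single-window version computed over the whole image rather than the usual sliding-window form.

```python
import math

def sigmoid_normalize(pixels, center=0.0, width=1000.0):
    """Map raw CT values into (0, 1) with a sigmoid. Unlike min-max
    scaling, extreme values (air, metal artifacts) are compressed
    smoothly, so outliers perturb the network input less while the
    bulk of the value distribution keeps its shape."""
    return [1.0 / (1.0 + math.exp(-(v - center) / width)) for v in pixels]

def mae(pred, target):
    """Mean absolute error over flattened images."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def global_ssim(pred, target, c1=1e-4, c2=9e-4):
    """Single-window SSIM over the whole (flattened) image."""
    n = len(pred)
    mu_p, mu_t = sum(pred) / n, sum(target) / n
    var_p = sum((p - mu_p) ** 2 for p in pred) / n
    var_t = sum((t - mu_t) ** 2 for t in target) / n
    cov = sum((p - mu_p) * (t - mu_t) for p, t in zip(pred, target)) / n
    return ((2 * mu_p * mu_t + c1) * (2 * cov + c2)) / \
           ((mu_p ** 2 + mu_t ** 2 + c1) * (var_p + var_t + c2))

def hybrid_loss(pred, target, alpha=0.5):
    """Weighted sum of MAE and an SSIM penalty: low MAE keeps the
    pixel values accurate, high SSIM keeps the structure intact."""
    return alpha * mae(pred, target) + (1.0 - alpha) * (1.0 - global_ssim(pred, target))
```

A perfect prediction gives `hybrid_loss == 0` (MAE is 0 and SSIM is 1); in practice one would use a framework's differentiable SSIM implementation so the loss can be backpropagated.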

machine vision; dual-energy computed tomography; base material decomposition; multi-channel cross convolution; attention; noise suppression

Wu Fan, Jin Tong, Zhan Guorui, Xie Jingjing, Liu Jin, Zhang Yikun


School of Computer and Information, Anhui Polytechnic University, Wuhu 241000, Anhui, China

Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, Nanjing 210096, Jiangsu, China

Laboratory of Image Science and Technology, Southeast University, Nanjing 210096, Jiangsu, China


National Natural Science Foundation of China (61801003); Scientific Research Project of Anhui Higher Education Institutions (2022AH050968)

2024

Acta Optica Sinica
Chinese Optical Society; Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences


Indexed in: CSTPCD; Peking University Core Journals
Impact factor: 1.931
ISSN:0253-2239
Year, volume (issue): 2024, 44(5)