Synthesis of dual-energy material decomposition images based on sparse-view cone beam CT reconstruction and deep learning
Objective To synthesize dual-energy material decomposition images (MDI) whose anatomy is consistent with the low-dose single-energy cone beam CT (CBCT) images acquired on the treatment day, so as to provide quantitative images for clinical application scenarios such as online adaptive radiotherapy (ART) and dose reconstruction.

Methods Anthropomorphic data for 70 male and female cases were generated by varying the anatomical input parameters of a 4D extended cardiac-torso (XCAT) phantom. These data were divided into a training set, a validation set, and an independent test set at a ratio of 5∶1∶1. Each set consisted of pre-treatment dual-energy CT (DECT) images and CBCT images after physiological deformation, reflecting anatomical changes in the patients during radiotherapy. An iterative decomposition algorithm was used to perform material decomposition of the DECT images, yielding material decomposition images of bone (MDIB) and of soft tissue (MDIST). A 2D CycleGAN network operating on tomographic slices was constructed to convert CBCT images into MDI while preserving the real treatment-day anatomy reflected in the CBCT images. CBCT images, MDIB, and MDIST were fed into the network, which output the treatment-day MDIB and MDIST. For the patients in the independent test set, DECT images consistent with the anatomy reflected in the CBCT images were constructed, and the MDIB and MDIST decomposed from them served as the ground truth images for quantitatively evaluating the model's dual-energy MDI synthesis.

Results Using only about 13.8% of the conventional number of projections and the corresponding radiation dose, the model converted the 10 sets of single-energy CBCT images reconstructed from sparse views in the test set into MDIB and MDIST consistent with the treatment-day anatomy. Compared with the ground truth images, the synthesized MDIB and MDIST achieved structural similarity index (SSIM) values of 0.983±0.006 and 0.988±0.005, root-mean-square error (RMSE) values of 0.017±0.005 and 0.019±0.004, and peak signal-to-noise ratio (PSNR) values of 35.515±2.081 and 34.409±1.510, respectively. Training the model took about 18 hours and 51 minutes, and synthesizing each MDI took about 0.65 s.

Conclusions The 2D CycleGAN network developed in this study can synthesize cross-modal, high-fidelity dual-energy MDI from low-dose CBCT images reconstructed from sparse views. It is therefore expected to provide, on existing clinical platforms, a novel intelligent imaging approach for clinical applications such as online adaptive radiotherapy, ion therapy planning, and dose reconstruction and monitoring.
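For readers unfamiliar with the material decomposition step described in the Methods, the sketch below illustrates image-domain two-material decomposition of DECT data into bone and soft-tissue fractions. It uses a direct per-voxel 2×2 solve rather than the iterative decomposition algorithm the study employs, and the basis attenuation coefficients are placeholder values, not the authors' calibration.

```python
# Minimal sketch of image-domain two-material decomposition from DECT.
# The paper uses an iterative decomposition algorithm; this direct 2x2
# per-voxel solve is a simplified stand-in to illustrate the concept.
# The basis attenuation coefficients are placeholder values (1/cm),
# not the calibration used by the authors.
import numpy as np

def decompose_two_materials(mu_low, mu_high,
                            mu_bone=(0.81, 0.52),   # (low-kVp, high-kVp) -- assumed
                            mu_soft=(0.24, 0.20)):  # (low-kVp, high-kVp) -- assumed
    """Return per-voxel bone and soft-tissue volume-fraction images.

    mu_low, mu_high: linear-attenuation images at the low and high kVp.
    The model is  mu_E = f_bone * mu_bone_E + f_soft * mu_soft_E  at each voxel.
    """
    A = np.array([[mu_bone[0], mu_soft[0]],
                  [mu_bone[1], mu_soft[1]]], dtype=np.float64)
    A_inv = np.linalg.inv(A)
    stacked = np.stack([mu_low, mu_high], axis=-1)   # (..., 2)
    fractions = stacked @ A_inv.T                    # (..., 2) -> (f_bone, f_soft)
    f_bone, f_soft = fractions[..., 0], fractions[..., 1]
    return np.clip(f_bone, 0, None), np.clip(f_soft, 0, None)
```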
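The abstract names a 2D slice-based CycleGAN as the translation model. The following sketch shows one plausible form of its per-step losses, assuming PyTorch; the module names, the two-channel stacking of MDIB and MDIST, and the loss weight are illustrative assumptions rather than the authors' implementation.

```python
# A minimal sketch of one training step's losses for an unpaired 2D CycleGAN
# translating CBCT slices to material-decomposition slices. Generator and
# discriminator modules (G_cbct2mdi, G_mdi2cbct, D_cbct, D_mdi), the two-channel
# MDI stacking, and lambda_cyc are assumptions for illustration only.
import torch
import torch.nn.functional as F

def cyclegan_losses(G_cbct2mdi, G_mdi2cbct, D_cbct, D_mdi,
                    cbct, mdi, lambda_cyc=10.0):
    """cbct: (N, 1, H, W) single-energy CBCT slices.
    mdi:  (N, 2, H, W) stacked MDIB / MDIST slices."""
    # Cross-modal translations in both directions.
    fake_mdi = G_cbct2mdi(cbct)
    fake_cbct = G_mdi2cbct(mdi)
    # Cycle reconstructions; the L1 cycle loss is what keeps the
    # treatment-day anatomy of the CBCT intact in the synthesized MDI.
    rec_cbct = G_mdi2cbct(fake_mdi)
    rec_mdi = G_cbct2mdi(fake_cbct)

    # Least-squares adversarial terms for the generators.
    pred_fake_mdi = D_mdi(fake_mdi)
    pred_fake_cbct = D_cbct(fake_cbct)
    g_adv = (F.mse_loss(pred_fake_mdi, torch.ones_like(pred_fake_mdi)) +
             F.mse_loss(pred_fake_cbct, torch.ones_like(pred_fake_cbct)))
    g_cyc = F.l1_loss(rec_cbct, cbct) + F.l1_loss(rec_mdi, mdi)
    g_loss = g_adv + lambda_cyc * g_cyc

    # Discriminators: real slices labelled 1, detached fakes labelled 0.
    d_loss = 0.0
    for D, real, fake in ((D_mdi, mdi, fake_mdi), (D_cbct, cbct, fake_cbct)):
        pred_real, pred_fake = D(real), D(fake.detach())
        d_loss = d_loss + 0.5 * (F.mse_loss(pred_real, torch.ones_like(pred_real)) +
                                 F.mse_loss(pred_fake, torch.zeros_like(pred_fake)))
    return g_loss, d_loss
```

The cycle-consistency term is the design choice that matters most here: because the synthesized MDI must map back to the input CBCT, the network is discouraged from hallucinating pre-treatment anatomy and instead preserves the anatomy of the treatment day.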
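The quantitative comparison against the ground truth images uses SSIM, RMSE, and PSNR. A minimal evaluation routine along these lines, assuming slices normalized to [0, 1] (a data range the abstract does not state), could look like this:

```python
# Sketch of the quantitative evaluation of synthesized MDI against ground
# truth, using standard definitions of SSIM, RMSE, and PSNR from scikit-image
# and NumPy. data_range=1.0 assumes images normalized to [0, 1].
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate(synthetic, ground_truth, data_range=1.0):
    ssim = structural_similarity(ground_truth, synthetic, data_range=data_range)
    rmse = float(np.sqrt(np.mean((ground_truth - synthetic) ** 2)))
    psnr = peak_signal_noise_ratio(ground_truth, synthetic, data_range=data_range)
    return ssim, rmse, psnr
```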