A novel approach for intraoperative X-ray and preoperative CT image registration
Objective The aim of this study is to register intraoperative X-ray and preoperative CT images in thoracic endovascular aortic repair (TEVAR) procedures, providing accurate and safe navigation for the implantation of TEVAR stents. Existing registration algorithms struggle to bridge the domain gap between X-ray images and the digitally reconstructed radiograph (DRR) images generated from CT, and they depend on image segmentation labels that are difficult to obtain. A new method is therefore needed to address these issues.

Methods We propose a novel registration framework that combines a domain-adaptation network based on a generative adversarial network (GAN) with a Transformer-based registration network. The GAN-based domain-adaptation network transfers the style of X-ray images onto DRR images, making the two image types more similar in appearance. The registration network combines a CNN with a cross-modal transformer (CMT), allowing direct registration of X-ray and CT images without the need for image segmentation.

Results We validate the new registration method on 208 pairs of X-ray and preoperative CT images obtained from patients who underwent TEVAR. Compared with other domain-adaptation methods, using CycleGAN as the style-transfer module more effectively reduces the inter-domain discrepancy between DRR and X-ray images. Ablation experiments further demonstrate that the global-local perception module (GLPM) plays a significant role in improving registration accuracy and that the spatial reduction (SR) block is effective in reducing registration time. When comparing the registration performance of our method with existing methods on X-ray and CT image pairs from real patients, our method achieves superior registration accuracy and success rate.

Conclusions Our proposed method for X-ray and CT image registration effectively overcomes the domain gap and the difficulty of obtaining segmentation labels that existing methods face.
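For readers unfamiliar with the CycleGAN objective underlying the style-transfer module, the following is a minimal sketch of the standard CycleGAN generator loss applied to unpaired DRR-to-X-ray style transfer. It illustrates the generic cycle-consistency formulation only, not this paper's implementation; the names `G_dx`, `G_xd`, `D_x`, `D_d` and the weight `lam` are hypothetical placeholders.

```python
import torch
import torch.nn as nn

adv = nn.MSELoss()  # least-squares adversarial loss, common in CycleGAN
l1 = nn.L1Loss()    # cycle-consistency loss

def generator_loss(G_dx, G_xd, D_x, D_d, drr, xray, lam=10.0):
    """CycleGAN generator objective for unpaired DRR <-> X-ray style transfer.

    G_dx maps DRR -> X-ray style, G_xd maps X-ray -> DRR style; D_x and D_d
    are discriminators for the X-ray and DRR domains. All four are assumed
    to be image-to-image / image-to-score CNNs with matching shapes.
    """
    fake_x = G_dx(drr)    # DRR rendered in X-ray style
    fake_d = G_xd(xray)   # X-ray rendered in DRR style

    # Adversarial terms: each generator tries to make the opposing
    # discriminator label its translated image as real.
    pred_x, pred_d = D_x(fake_x), D_d(fake_d)
    loss_adv = (adv(pred_x, torch.ones_like(pred_x))
                + adv(pred_d, torch.ones_like(pred_d)))

    # Cycle-consistency terms: translating to the other domain and back
    # must approximately recover the original image.
    loss_cyc = l1(G_xd(fake_x), drr) + l1(G_dx(fake_d), xray)
    return loss_adv + lam * loss_cyc
```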
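The abstract also names a cross-modal transformer (CMT) with a spatial reduction (SR) block that shortens registration time. The paper's exact design is not specified here, so the sketch below shows one common way such a block is built, in the style of PVT's spatial-reduction attention: queries come from one modality's feature map and keys/values from the other, with the key/value map downsampled before attention. All module and parameter names are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class CrossModalSRAttention(nn.Module):
    """Cross-modal attention with a spatial-reduction (SR) step on keys/values.

    Queries come from one modality (e.g. X-ray features), keys/values from
    the other (e.g. DRR/CT features). A strided conv shrinks the key/value
    map by `sr_ratio`, reducing the attention matrix from (HW x HW) to
    (HW x HW / sr_ratio**2).
    """
    def __init__(self, dim, num_heads=8, sr_ratio=4):
        super().__init__()
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, dim * 2)
        self.proj = nn.Linear(dim, dim)
        # Spatial reduction: strided conv + norm on the key/value map.
        self.sr = nn.Conv2d(dim, dim, kernel_size=sr_ratio, stride=sr_ratio)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x_query, x_kv, H, W):
        # x_query, x_kv: (B, H*W, C) token sequences from the two modalities.
        B, N, C = x_query.shape
        q = (self.q(x_query)
             .reshape(B, N, self.num_heads, C // self.num_heads)
             .transpose(1, 2))

        # Downsample the key/value tokens spatially before attention.
        kv_map = x_kv.transpose(1, 2).reshape(B, C, H, W)
        kv_map = self.sr(kv_map).reshape(B, C, -1).transpose(1, 2)
        kv_map = self.norm(kv_map)
        kv = (self.kv(kv_map)
              .reshape(B, -1, 2, self.num_heads, C // self.num_heads)
              .permute(2, 0, 3, 1, 4))
        k, v = kv[0], kv[1]

        attn = ((q @ k.transpose(-2, -1)) * self.scale).softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)
```

With `sr_ratio=4` the key/value sequence shrinks by a factor of 16, which is the kind of saving that would plausibly account for the reduced registration time attributed to an SR-style block.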