IDDNet: a deep interactive dual-domain convolutional neural network with auxiliary modality for fast MRI reconstruction
Reconstructing a complete image accurately from an undersampled k-space matrix is a viable approach for magnetic resonance imaging (MRI) acceleration. In recent years, numerous deep learning (DL)-based methods have been employed to improve MRI reconstruction, and among them the cross-domain approach has proven effective. However, existing cross-domain reconstruction algorithms link the image-domain and k-space networks sequentially, disregarding the interplay between the two domains and consequently limiting reconstruction accuracy. In this work, we propose a deep interactive dual-domain network (IDDNet) with an auxiliary modality for accelerated MRI reconstruction, which effectively extracts relevant information from multiple MR domains and modalities. IDDNet first extracts shallow features from the low-resolution target modality in the image domain to obtain visual representation information. In the subsequent feature processing, a parallel interactive architecture with dual branches extracts deep features from both domains simultaneously, avoiding the redundant ordering priors imposed by sequential links. Furthermore, the model uses additional information from the auxiliary modality to refine structural details and improve reconstruction accuracy. Extensive experiments with different sampling masks and acceleration rates on the MICCAI BraTS 2019 brain and fastMRI knee datasets show that IDDNet achieves excellent accelerated MRI reconstruction performance.
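To make the parallel interactive dual-branch idea concrete, the following PyTorch sketch shows one possible interactive dual-domain stage: an image-domain branch and a k-space branch are updated in parallel, and each then receives the other's output mapped into its own domain, rather than one domain being processed first as in sequential cross-domain cascades. The module names, layer widths, and fusion scheme are illustrative assumptions, not the authors' published implementation; the auxiliary-modality input and data-consistency steps of IDDNet are omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.fft


def img_to_ksp(x):
    """2-channel (real, imag) image -> 2-channel k-space via a centred 2-D FFT."""
    cplx = torch.complex(x[:, 0], x[:, 1])
    k = torch.fft.fftshift(torch.fft.fft2(cplx, norm="ortho"), dim=(-2, -1))
    return torch.stack((k.real, k.imag), dim=1)


def ksp_to_img(k):
    """Inverse of img_to_ksp: 2-channel k-space -> 2-channel image."""
    cplx = torch.complex(k[:, 0], k[:, 1])
    x = torch.fft.ifft2(torch.fft.ifftshift(cplx, dim=(-2, -1)), norm="ortho")
    return torch.stack((x.real, x.imag), dim=1)


class InteractiveDualDomainBlock(nn.Module):
    """One parallel interactive dual-domain stage (illustrative sketch).

    The image branch and the k-space branch run simultaneously; their
    outputs are then exchanged through FFT/IFFT and fused, so neither
    domain receives a fixed priority as in a sequential cascade.
    """

    def __init__(self, feats: int = 32):
        super().__init__()

        def branch():
            # Small residual CNN on a 2-channel (real, imag) representation.
            return nn.Sequential(
                nn.Conv2d(2, feats, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(feats, 2, 3, padding=1),
            )

        self.img_branch, self.ksp_branch = branch(), branch()
        # 1x1 convolutions fuse each branch with the other branch's
        # contribution after the domain transform.
        self.img_fuse = nn.Conv2d(4, 2, 1)
        self.ksp_fuse = nn.Conv2d(4, 2, 1)

    def forward(self, x_img, x_ksp):
        i = x_img + self.img_branch(x_img)   # residual image-domain update
        k = x_ksp + self.ksp_branch(x_ksp)   # residual k-space update
        # Cross-domain interaction: exchange information in both directions.
        i_out = self.img_fuse(torch.cat((i, ksp_to_img(k)), dim=1))
        k_out = self.ksp_fuse(torch.cat((k, img_to_ksp(i)), dim=1))
        return i_out, k_out


if __name__ == "__main__":
    zero_filled = torch.randn(1, 2, 256, 256)   # undersampled, zero-filled image
    block = InteractiveDualDomainBlock()
    img, ksp = block(zero_filled, img_to_ksp(zero_filled))
    print(img.shape, ksp.shape)                 # both: torch.Size([1, 2, 256, 256])
```

In a full reconstruction network, several such stages would typically be stacked, with the auxiliary-modality features concatenated or attended to in the image branch and a data-consistency layer enforcing agreement with the acquired k-space samples; those components are left out of this sketch.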