Counterfactual reasoning model for Alzheimer's disease diagnosis and pathological region detection
Objective Alzheimer's disease (AD) is a neurodegenerative disease that commonly occurs in middle-aged and elderly populations and is accompanied by cognitive impairment and memory loss. With the global population aging, timely diagnosis of AD and accurate localization and visualization of its pathological regions are of considerable clinical importance. In current research, one conventional approach is to extract patch-level features based on voxel morphology and prior knowledge to detect structural changes and identify AD-related voxel structures. Another approach is to learn AD-related pathological regions by focusing the network on specific brain regions of interest (e.g., cortical and hippocampal regions) based on regional features. However, these approaches ignore other pathological locations in the brain and fail to obtain accurate global structural information for the diagnosis of AD. A joint learning model for the localization and diagnosis of AD pathological regions is proposed using the idea of counterfactual reasoning to obtain a convincing model architecture, increase the interpretability of the output, and highlight information about the pathological regions. An attention-guided cycle generative adversarial network (ACGAN) is constructed based on a foreground-background attention mask. Method In the vast majority of image classification methods, the network model aims to find which part of the input X influences the decision of the classifier and thus determines the final result Y. From another viewpoint, in a hypothetical scenario where the input X were instead C, would the result be Z rather than Y? This idea is defined as counterfactual reasoning. The AD classification model was first trained as a classifier to construct its output in the hypothetical scenario, from which the pathological features of AD were obtained. The hypothetical scenario was constructed using a generative adversarial network to learn the mapping of images from the source domain to
the target domain. However, directly generating an image-to-image transformation rarely achieves good results because of the complexity of whole-brain structural magnetic resonance imaging (sMRI) images and the considerable amount of information in 3D space. Drawing inspiration from two models, namely CycleGAN and AttentionGAN, the image can be mapped from the source to the target domain by changing only the region in the original image that affects the category judgment and by using foreground-background attention to guide the model to focus on the dynamically changing region, which reduces model complexity and eases model fitting. Therefore, this paper proposes an attention-guided cycle generative adversarial network to construct a counterfactual mapping model for AD, thereby outputting the corresponding pathological regions. If a counterfactual map conditioned on the target label (i.e., the hypothetical scenario) is generated, then adding this counterfactual map to the input image causes the transformed image to be diagnosed as the target type. For example, when the counterfactual map is added to the sMRI image of a subject with AD, modifying the corresponding region changes the input sMRI image so that the classifier diagnoses it as a normal subject. The pathological regions represented by the counterfactual map were used as privileged information (i.e., the location information of the counterfactual map that influenced the category determination) to further optimize the diagnostic model. The diagnostic model therefore focused on learning and discovering disease-related discriminative regions, combining the pathological region generation and AD diagnostic models. Result The proposed model was evaluated against traditional convolutional neural network (CNN) models and several highly advanced AD diagnostic models on the publicly available ADNI dataset using quantitative evaluation metrics, including accuracy (ACC), F1-score, and area under the curve (AUC). Experimental results showed
that the model improved ACC, F1-score, and AUC by 3.60%, 5.02%, and 1.94%, respectively, compared with the best-performing method. The generated pathological region images were also evaluated qualitatively and quantitatively, and the normalized correlation scores and peak signal-to-noise ratios of the pathological region images obtained by the method were better than those of the compared methods. More importantly, compared with the benchmark model, the proposed AD diagnostic model visualized the global features and fine-grained discriminative regions of the pathological regions, and the average accuracy after three iterations was improved by +4.90%, +11.03%, and +11.08% compared with the benchmark method. Conclusion Compared with existing methods, the ACGAN model can learn the transformation of sMRI images between the source and target domains and accurately capture global features and pathological regions. The learned knowledge of the pathological regions is used to improve the AD diagnosis model. Therefore, the classification diagnosis model achieves excellent performance.
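The core mechanism described in the Method section (a counterfactual map added to the input image, with a foreground-background attention mask restricting changes to pathology-related voxels) can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: the function names, the constant-shift "generator", and the tiny 4×4×4 volume are all hypothetical stand-ins for the learned 3D ACGAN components.

```python
import numpy as np


def toy_counterfactual_map(x, target_shift, attention):
    """Toy stand-in for the learned generator: produce a counterfactual
    map that is nonzero only at attended (pathology-related) voxels."""
    # The real model learns this map conditioned on the target label;
    # here we simply scale a constant intensity shift by the mask.
    return attention * target_shift


def apply_counterfactual(x, cf_map):
    """Transformed image = input sMRI volume + counterfactual map,
    intended to flip the classifier's decision toward the target label."""
    return x + cf_map


def attention_blend(generated, x, fg_attention):
    """Foreground-background blending used to guide generation:
    foreground regions come from the generator, background from the input."""
    return fg_attention * generated + (1.0 - fg_attention) * x


# Toy 3D "sMRI" volume and a mask marking a hypothetical pathological region.
x = np.zeros((4, 4, 4))
attention = np.zeros_like(x)
attention[1:3, 1:3, 1:3] = 1.0  # pathology-related voxels

cf = toy_counterfactual_map(x, target_shift=0.5, attention=attention)
x_cf = apply_counterfactual(x, cf)

# Only the attended region is modified; background anatomy is untouched.
print(x_cf[2, 2, 2])  # 0.5
print(x_cf[0, 0, 0])  # 0.0
```

The blending step mirrors the role of the foreground-background attention mask in the paper: the generator only has to model the dynamically changing (disease-related) region, which is what reduces model complexity and eases fitting.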