Deep feature-guided multi-scale context-aggregated image change detection network
Multi-scale fusion methods have been extensively studied for image change detection, but direct feature fusion cannot accurately detect target changes because features at different scales are imbalanced; in complex scenes in particular, detection still suffers from missed and false detections. In this paper, a deep feature-guided multi-scale context aggregation network (DF-MCANet) is proposed for the change detection task, which improves the network's understanding of targets at different scales by using deep features to guide feature fusion at different stages. The network contains two key modules: a feature fusion module (FFM) and a feature correction module (FCM). The FFM, combined with contextual features, extracts and enhances change information, while the FCM uses deep semantic features to guide the features extracted by the FFM at each stage, fusing semantic, detail, and contextual representations. Experimental results show that DF-MCANet improves the F1 score by 0.73% on the CDD dataset and by 1.43% on the DSIFN dataset compared with the current best model, A2Net.
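The core idea of deep-feature-guided fusion can be illustrated with a toy NumPy sketch. The abstract does not specify the internals of the FFM or FCM, so the gating mechanism, pooling choice, and upsampling below are purely illustrative assumptions, not the paper's actual design:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def deep_guided_fusion(shallow, deep):
    """Illustrative sketch of deep-feature-guided fusion (hypothetical
    design; the paper's FCM details are not given in the abstract).
    A deep, low-resolution semantic feature produces a per-channel gate
    that re-weights the shallow, detail-rich feature before the two
    scales are combined."""
    # Global-average-pool the deep feature into a per-channel descriptor,
    # then squash it into a (C, 1, 1) gate in [0, 1]
    gate = sigmoid(deep.mean(axis=(1, 2), keepdims=True))
    # Upsample the deep feature to the shallow resolution
    # (nearest-neighbor repeat, assuming an integer scale factor)
    scale = shallow.shape[1] // deep.shape[1]
    deep_up = deep.repeat(scale, axis=1).repeat(scale, axis=2)
    # Gated fusion: semantics decide which shallow channels to emphasize
    return gate * shallow + deep_up

# Example: 8-channel features, shallow at 16x16 and deep at 4x4
shallow = np.random.rand(8, 16, 16)
deep = np.random.rand(8, 4, 4)
fused = deep_guided_fusion(shallow, deep)
print(fused.shape)  # (8, 16, 16)
```

The gate lets semantic context suppress shallow channels that respond to irrelevant detail, which is one plausible reading of how deep features could "guide" earlier fusion stages.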