Multi-branch and Dual-task Method for Road Extraction from Multimodal Remote Sensing Images
Optical and SAR images have rich complementary attributes, and an effective data fusion strategy can provide a solid information base for object interpretation. Roads, as strip-like features, often pose challenges to interpretation because of their topology, distribution patterns, and application scenarios. Accordingly, this paper proposes a multi-branch, dual-task method for road extraction from multimodal remote sensing images. First, encoder-decoder networks with identical structures but independent parameters are constructed to extract features from the optical and SAR images, respectively, supervised by road-surface segmentation labels. Second, the encoder features of the SAR branch are fed into a road edge detection task, and its intermediate features are injected into the SAR decoder, sharpening the discrimination between roads and background. Finally, the designed Channel Attention-Strip Spatial Attention (CA-SSA) module fully fuses the shallow and deep features of the optical and SAR branches to predict the final road extraction result. Experiments on the Dongying dataset show that the proposed method outperforms the comparative methods on quantitative evaluation metrics, has clear advantages in challenging areas such as road intersections and low-grade roads, and achieves the best road extraction results when the optical imagery is affected by clouds.
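The CA-SSA fusion step can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the paper's exact module: it assumes the channel attention is a squeeze-and-excitation-style bottleneck gate and the strip spatial attention is built from row-wise and column-wise strip pooling (which favors elongated, road-like structures); the weights here are random placeholders standing in for learned parameters.

```python
import numpy as np

def channel_attention(feat, reduction=4):
    # feat: (C, H, W). Global average pool -> two-layer bottleneck -> sigmoid
    # gate per channel (squeeze-and-excitation style; an assumed design).
    C = feat.shape[0]
    z = feat.mean(axis=(1, 2))                       # (C,) channel descriptor
    rng = np.random.default_rng(0)                   # placeholder for learned weights
    w1 = rng.standard_normal((C // reduction, C)) * 0.1
    w2 = rng.standard_normal((C, C // reduction)) * 0.1
    h = np.maximum(w1 @ z, 0.0)                      # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ h)))           # sigmoid, (C,)
    return feat * gate[:, None, None]

def strip_spatial_attention(feat):
    # Strip pooling: average along rows and columns separately, then combine
    # the two strips into one spatial gate over (H, W).
    row = feat.mean(axis=(0, 2))                     # (H,) horizontal strip
    col = feat.mean(axis=(0, 1))                     # (W,) vertical strip
    gate = 1.0 / (1.0 + np.exp(-(row[:, None] + col[None, :])))  # (H, W)
    return feat * gate[None, :, :]

def ca_ssa_fuse(opt_feat, sar_feat):
    # Fuse optical and SAR feature maps of matching spatial size:
    # concatenate along channels, then apply channel attention followed
    # by strip spatial attention.
    fused = np.concatenate([opt_feat, sar_feat], axis=0)  # (2C, H, W)
    return strip_spatial_attention(channel_attention(fused))
```

In the full method this fusion would be applied at both shallow and deep stages of the two branches before the final road prediction head.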