Road Information Extraction Method for Remote Sensing Images Combining Attention and Dilated Convolution
To address the problems that semantic segmentation of high-resolution remote sensing images produces discontinuous segmentation of ground-object edges, and that the complexity and diversity of road and background features lead to low road extraction and segmentation accuracy, a semantic segmentation network (A2DU-Net) integrating dual-channel attention and dilated convolution is proposed for road information extraction from remote sensing images. First, a coordinate attention (CA) module is introduced in the feature extraction part to capture road position, direction, and cross-channel information, so that road information can be located accurately. Second, to address the network's sensitivity to the loss of detailed features, a multi-scale atrous spatial pyramid pooling module (MASPPM) for multi-scale feature fusion is constructed at the end of the encoder using dilated convolutions with different dilation rates, which enlarges the receptive field and improves network performance. Finally, to avoid the fusion of semantically dissimilar features caused by the plain skip connections in U-Net, a dual-channel attention mechanism is added to the skip connections between the encoder and decoder to perform gated screening, suppress features of non-target regions, and improve the segmentation accuracy of the network. The model is evaluated on the public Massachusetts road dataset, where overall accuracy (OA), intersection over union (IoU), mean intersection over union (mIoU), and F1 reach 98.07%, 64.39%, 81.20%, and 88.67%, respectively. Compared with mainstream methods such as U-Net and DDUNet, mIoU increases by 3.07% and 0.22%, and IoU increases by 1.98% and 0.52%, respectively. Experimental results show that the proposed method outperforms all comparison methods and can effectively improve the accuracy of road segmentation.
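To make the three architectural ideas in the abstract concrete, the PyTorch sketch below illustrates (1) a coordinate attention block, (2) an ASPP-style module built from dilated convolutions with different dilation rates, and (3) an attention gate applied to an encoder skip feature before it is fused in the decoder. This is a minimal sketch, not the authors' implementation: the reduction ratio, dilation rates (1, 6, 12, 18), channel widths, and the exact gate formulation (here in the style of Attention U-Net) are assumptions, since the abstract does not specify the internal design of the CA, MASPPM, or dual-channel attention modules.

```python
import torch
import torch.nn as nn


class CoordinateAttention(nn.Module):
    """Coordinate attention: pool along H and W separately to encode
    position/direction-aware channel weights (Hou et al., 2021 style)."""
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        x_h = self.pool_h(x)                       # (B, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)   # (B, C, W, 1)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                      # (B, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (B, C, 1, W)
        return x * a_h * a_w


class MultiScaleASPP(nn.Module):
    """ASPP-style multi-scale fusion with parallel dilated convolutions,
    placed at the end of the encoder to enlarge the receptive field."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True))
            for r in rates])
        self.project = nn.Sequential(
            nn.Conv2d(out_ch * len(rates), out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True))

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))


class AttentionGate(nn.Module):
    """Gated screening of an encoder skip feature, conditioned on the
    decoder feature, to suppress responses from non-road regions."""
    def __init__(self, enc_ch, dec_ch, mid_ch):
        super().__init__()
        self.w_enc = nn.Conv2d(enc_ch, mid_ch, 1, bias=False)
        self.w_dec = nn.Conv2d(dec_ch, mid_ch, 1, bias=False)
        self.psi = nn.Sequential(nn.Conv2d(mid_ch, 1, 1), nn.Sigmoid())

    def forward(self, enc_feat, dec_feat):
        # dec_feat is assumed already upsampled to enc_feat's spatial size
        g = torch.relu(self.w_enc(enc_feat) + self.w_dec(dec_feat))
        return enc_feat * self.psi(g)


if __name__ == "__main__":
    x = torch.randn(1, 64, 128, 128)
    dec = torch.randn(1, 64, 128, 128)
    print(CoordinateAttention(64)(x).shape)         # torch.Size([1, 64, 128, 128])
    print(MultiScaleASPP(64, 64)(x).shape)          # torch.Size([1, 64, 128, 128])
    print(AttentionGate(64, 64, 32)(x, dec).shape)  # torch.Size([1, 64, 128, 128])
```

In a U-Net-shaped network, the coordinate attention blocks would sit inside the encoder stages, the multi-scale dilated module at the bottleneck, and the attention gate on each skip connection just before concatenation with the upsampled decoder feature; the gated output, rather than the raw encoder feature, is what gets fused.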