Global-guidance multi-feature fusion network for road extraction from remote sensing images
Because roads in remote sensing images closely resemble buildings, and because shadows and occlusion are common, existing deep learning semantic segmentation networks generally suffer a high false segmentation rate on road extraction. A global-guidance multi-feature fusion network (GGMNet) was proposed for road extraction from remote sensing images. To reduce the network's misjudgment of road-like objects near roads, the feature map was divided into several local features, each of which was multiplied by global context information to strengthen feature extraction. Multi-stage features were fused to localize roads accurately in space and to lower the probability of identifying other ground objects as roads. An adaptive global channel attention module was designed in which global information guides local information, enriching the context available to each pixel. In the decoding stage, a multi-feature fusion module was designed to make full use of the location and semantic information in the feature maps of the four backbone stages, and the correlations between layers were exploited to improve segmentation accuracy. The network was trained and tested on the CITY-OSM dataset, the DeepGlobe Road Extraction dataset, and the CHN6-CUG dataset. Test results show that GGMNet delivers excellent road segmentation performance and reduces the false segmentation rate more effectively than the comparison networks.
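The abstract gives no implementation details for the adaptive global channel attention module, but the core idea it states, using a globally pooled descriptor to re-weight local features channel by channel, is commonly realized in squeeze-and-excitation style. The following NumPy sketch illustrates that general pattern only; the function name, bottleneck shape, and weights are hypothetical and not the paper's actual design.

```python
import numpy as np

def global_channel_attention(feat, w1, w2):
    """Illustrative SE-style channel attention (hypothetical, not GGMNet's
    exact module): global average pooling yields a per-channel descriptor,
    a two-layer bottleneck turns it into a gate in (0, 1), and the local
    feature map is scaled channel-wise by that gate.

    feat: (C, H, W) feature map
    w1:   (C // r, C) bottleneck reduction weights
    w2:   (C, C // r) bottleneck expansion weights
    """
    # Global context: average over the spatial dimensions -> (C,)
    g = feat.mean(axis=(1, 2))
    # Bottleneck MLP with ReLU, then a sigmoid gate per channel
    h = np.maximum(w1 @ g, 0.0)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ h)))
    # Broadcast the per-channel gate over the spatial grid
    return feat * gate[:, None, None]

# Tiny demo with random weights (shapes only; not trained values)
rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
feat = rng.standard_normal((C, H, W))
out = global_channel_attention(feat,
                               rng.standard_normal((C // r, C)),
                               rng.standard_normal((C, C // r)))
```

Each output channel is the corresponding input channel scaled by a single global gate, which is how global information "guides" the local features in this family of modules.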
remote sensing image; deep learning; road extraction; attention mechanism; context information