Multi-region-scale Collaborative Semantic Segmentation of High-resolution Remote Sensing Images from Multi-source Data
In the semantic segmentation of high-resolution remote sensing images, two problems must be addressed: how to effectively fuse spectral and elevation information to distinguish ground objects with similar spectra, and how to improve local feature recognition accuracy by capturing long-range dependencies. To this end, this paper proposes a multi-region-scale collaborative semantic segmentation method for multi-source data. The method comprises: an unequal-length multi-branch semantic segmentation network that efficiently extracts multi-source features and fully exploits the complementary information between multi-source data; a lightweight collaborative attention feature fusion module that efficiently fuses multi-branch features in the feature fusion stage; and a multi-region-scale collaborative data augmentation method that guides the network to capture long-range dependencies. Experimental results on the publicly available ISPRS Vaihingen and Potsdam datasets show that, compared with mainstream methods of the same type, the proposed method achieves better segmentation performance, preserves more complete object detail, and uses fewer parameters.
semantic segmentation; high-resolution remote sensing images; digital surface model; multi-source data fusion; collaborative attention
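As a rough illustration of the fusion idea summarized in the abstract (not the authors' implementation, whose details are given in the body of the paper), the sketch below fuses features from a hypothetical RGB branch and a DSM branch with a lightweight channel-attention gate; the module name, channel sizes, and gating design are illustrative assumptions, assuming PyTorch and feature maps of matching spatial size.

```python
# Minimal sketch, not the paper's method: attention-gated fusion of two
# single-source feature maps (e.g. an RGB branch and a DSM/elevation branch).
import torch
import torch.nn as nn

class CollaborativeFusion(nn.Module):
    """Hypothetical lightweight channel-attention fusion of two branches."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Channel-wise gate computed from the concatenated branch features.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, rgb_feat: torch.Tensor, dsm_feat: torch.Tensor) -> torch.Tensor:
        # The gate decides, per channel, how much elevation information to mix
        # into the spectral features before decoding.
        w = self.gate(torch.cat([rgb_feat, dsm_feat], dim=1))
        return rgb_feat + w * dsm_feat

# Usage example: fuse 64-channel feature maps of spatial size 64x64.
fuse = CollaborativeFusion(channels=64)
out = fuse(torch.randn(1, 64, 64, 64), torch.randn(1, 64, 64, 64))
print(out.shape)  # torch.Size([1, 64, 64, 64])
```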