An adversarial learning-based unsupervised domain adaptation method for semantic segmentation of high-resolution remote sensing images
The high performance of semantic segmentation models for high-resolution remote sensing images hinges on strong domain consistency between the training and testing datasets. Domain discrepancies between datasets, including differences in geographic location, sensor imaging patterns, and weather conditions, cause a significant drop in accuracy when a model trained on one dataset is applied to another. Domain adaptation is an effective strategy for addressing this issue. This study developed an adversarial learning-based unsupervised domain adaptation framework for the semantic segmentation of high-resolution remote sensing images. The framework fuses an entropy-weighted attention mechanism and a class-wise domain feature aggregation mechanism into the global and local domain alignment modules, respectively, alleviating the discrepancies between the source and target domains. Additionally, object context representation (OCR) and atrous spatial pyramid pooling (ASPP) modules are incorporated to fully exploit object-level and spatial contextual information in the images, and a strategy combining OCR and ASPP is employed to further improve segmentation accuracy. Experimental results on two publicly available datasets indicate that the proposed method achieves superior cross-domain segmentation, outperforming other methods of the same type.
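The entropy weighting mentioned above can be illustrated with a minimal sketch: the per-pixel Shannon entropy of a segmentation model's softmax output serves as an attention weight, emphasizing uncertain pixels during adversarial alignment. The function name, the normalization to [0, 1], and the toy inputs below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def entropy_weight_map(prob, eps=1e-12):
    """Per-pixel normalized Shannon entropy of a softmax probability map.

    prob: array of shape (C, H, W) whose values sum to 1 over the class
    axis. Returns an (H, W) map in [0, 1]; high values mark uncertain
    pixels that an entropy-weighted adversarial loss would emphasize.
    (Hypothetical sketch; not the paper's exact attention module.)
    """
    num_classes = prob.shape[0]
    # Shannon entropy per pixel, in [0, log C]; eps guards log(0)
    ent = -np.sum(prob * np.log(prob + eps), axis=0)
    # Normalize by the maximum possible entropy, log C
    return ent / np.log(num_classes)

# Toy example: 3 classes on a 2x2 image
prob = np.zeros((3, 2, 2))
prob[:, 0, 0] = [1.0, 0.0, 0.0]   # confident pixel -> weight near 0
prob[:, 0, 1] = [1/3, 1/3, 1/3]   # maximally uncertain -> weight near 1
prob[:, 1, 0] = [0.8, 0.1, 0.1]   # moderately confident
prob[:, 1, 1] = [0.5, 0.3, 0.2]
weights = entropy_weight_map(prob)
```

In an adversarial setup, such a map would typically scale the discriminator loss pixel-wise so that domain alignment focuses on regions where the segmenter is unsure.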