A Multi-level and Multi-scale Spatio-temporal Fusion Method for Remote Sensing Images Based on DCGAN
Temporal-spatial fusion makes it possible to produce remote sensing images with both high temporal and high spatial resolution. Nevertheless, accurate prediction of such imagery becomes difficult when regions undergo rapid change or when prolonged cloud cover causes persistent interference. To address these challenges, this study presents MUSTFGAN, a multi-level and multi-scale temporal-spatial fusion model based on DCGAN. The model uses a generator for feature extraction and a discriminator for final discrimination, producing high-precision predicted imagery. The generator's multi-level and multi-scale feature extraction strengthens the model's ability to capture fine details of image regions and improves the detection and recognition of objects at different scales, increasing the effectiveness of feature extraction. A self-attention mechanism embedded in the discriminator enhances its discriminative capacity, improving model performance and robustness. Multiple loss functions enable precise image computation and support the reconstruction of high-quality remote sensing images with high spatial and temporal resolution, which also strengthens the model's feature learning and generalization ability. Experiments on two datasets compared the proposed method with four conventional temporal-spatial fusion methods using six standard evaluation metrics. The results show that MUSTFGAN's efficiency improved by 14.75% on the Yunnan Dianchi dataset; the LBP and Edge indicators increased by 20.78% and 14.18%, respectively, while the SAM metric decreased by 11%. The SSIM, RMSE, and MAE metrics reached 90.43%, 0.0215, and 0.0163, respectively. Even in areas affected by cloud interference, the model accurately predicted landform changes, further improving the precision of temporal-spatial fusion. This ability effectively reduces the impact of extensive cloud cover and contamination, demonstrating the feasibility and effectiveness of the proposed approach.
spatiotemporal fusion; DCGAN; multi-level and multi-scale module; self-attention mechanism; cloud interference
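The abstract names two architectural ingredients: multi-level, multi-scale feature extraction in the generator and a self-attention mechanism embedded in the discriminator. The following is a minimal PyTorch sketch of how such components are commonly built; the module names, channel sizes, kernel choices, and the SAGAN-style attention formulation are illustrative assumptions, not the authors' exact MUSTFGAN architecture.

```python
# Minimal sketch of (1) a multi-scale feature-extraction block and
# (2) a self-attention module, as assumed illustrative stand-ins for the
# generator and discriminator components described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleBlock(nn.Module):
    """Extracts features at several receptive-field scales and fuses them."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Parallel branches with different kernel sizes respond to
        # objects of different spatial scales in the scene.
        self.branch3 = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(in_ch, out_ch, kernel_size=5, padding=2)
        self.branch7 = nn.Conv2d(in_ch, out_ch, kernel_size=7, padding=3)
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat(
            [F.relu(self.branch3(x)),
             F.relu(self.branch5(x)),
             F.relu(self.branch7(x))],
            dim=1,
        )
        return F.relu(self.fuse(feats))


class SelfAttention(nn.Module):
    """SAGAN-style self-attention, one common way to embed attention in a discriminator."""

    def __init__(self, ch: int):
        super().__init__()
        self.query = nn.Conv2d(ch, ch // 8, kernel_size=1)
        self.key = nn.Conv2d(ch, ch // 8, kernel_size=1)
        self.value = nn.Conv2d(ch, ch, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned blending weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c/8)
        k = self.key(x).flatten(2)                     # (b, c/8, hw)
        attn = torch.softmax(q @ k, dim=-1)            # (b, hw, hw)
        v = self.value(x).flatten(2)                   # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                    # residual connection


if __name__ == "__main__":
    x = torch.randn(1, 6, 32, 32)        # e.g. a 6-band image patch
    feats = MultiScaleBlock(6, 32)(x)    # multi-scale features
    attended = SelfAttention(32)(feats)  # attention-refined features
    print(feats.shape, attended.shape)   # both torch.Size([1, 32, 32, 32])
```

The parallel convolution branches approximate a multi-scale receptive field before a 1x1 fusion layer, and the learned gamma lets the attention term be blended in gradually during training, which is a common choice for stabilizing GAN discriminators.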