
A Multi-level and Multi-scale Spatiotemporal Fusion Method for Remote Sensing Images Based on DCGAN

Spatiotemporal fusion can generate remote sensing imagery with both high spatial and high temporal resolution. However, when a region changes rapidly or is persistently obscured by cloud cover, existing fusion methods cannot predict imagery that is close to the true surface conditions. To address this problem, a multi-level, multi-scale spatiotemporal fusion model based on DCGAN (MUSTFGAN) is proposed, in which a generator extracts features and a discriminator judges the result, yielding high-accuracy predicted imagery. In the generator, multi-level, multi-scale feature extraction helps the model learn fine regional detail and recognize and detect objects at different scales, improving the quality of the extracted features. In the discriminator, a self-attention module strengthens discriminative ability and thus overall performance and robustness, and several loss functions are combined to measure image accuracy, so that high-quality imagery with high spatial and temporal resolution can be reconstructed; this improves feature learning and gives the model strong generalization. The method was tested on two datasets and compared with four classical spatiotemporal fusion methods using six common evaluation metrics. On the Yunnan Dianchi dataset, MUSTFGAN improved accuracy by 14.75%; the LBP and Edge indices rose by 20.78% and 14.18% respectively; the SAM index dropped by 11%; and SSIM, RMSE and MAE reached 90.43%, 0.0215 and 0.0163 respectively. Under cloud interference the model predicts land-cover changes well, further improving fusion accuracy, filling large cloud-occluded areas and reducing the impact of cloud contamination, which confirms the feasibility and effectiveness of the proposed method.
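The abstract describes the architecture only at a high level: a generator that extracts features at multiple levels and scales, and a discriminator augmented with self-attention, trained with a combination of loss terms. The PyTorch sketch below is a minimal illustration of those two ideas, assuming parallel convolution branches with different kernel sizes for the multi-scale part and a SAGAN-style attention layer in the discriminator; all module names, channel counts and layer depths are assumptions, not the authors' MUSTFGAN implementation.

```python
# Illustrative sketch only: a multi-scale generator and a self-attention
# discriminator in the spirit of the abstract. Channel sizes, kernel sizes
# and layer depths are assumptions, not the published MUSTFGAN code.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Extract features with several kernel sizes in parallel and fuse them."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (3, 5, 7)
        ])
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, 1)
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.act(self.fuse(feats))

class Generator(nn.Module):
    """Stack multi-scale blocks at two levels, then predict the fused image."""
    def __init__(self, in_ch=6, base=32, out_ch=3):
        super().__init__()
        self.level1 = MultiScaleBlock(in_ch, base)
        self.level2 = MultiScaleBlock(base, base * 2)
        self.head = nn.Conv2d(base * 2, out_ch, 3, padding=1)

    def forward(self, coarse_and_fine):
        x = self.level1(coarse_and_fine)
        x = self.level2(x)
        return torch.tanh(self.head(x))

class SelfAttention(nn.Module):
    """SAGAN-style self-attention over spatial positions."""
    def __init__(self, ch):
        super().__init__()
        self.q = nn.Conv2d(ch, ch // 8, 1)
        self.k = nn.Conv2d(ch, ch // 8, 1)
        self.v = nn.Conv2d(ch, ch, 1)
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)      # B x HW x C'
        k = self.k(x).flatten(2)                      # B x C' x HW
        attn = torch.softmax(q @ k, dim=-1)           # B x HW x HW
        v = self.v(x).flatten(2)                      # B x C x HW
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x

class Discriminator(nn.Module):
    """Patch-level real/fake scores with a self-attention block in the middle."""
    def __init__(self, in_ch=3, base=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, True),
            SelfAttention(base * 2),
            nn.Conv2d(base * 2, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)
```

In such a setup, the generator would typically take the coarse image for the prediction date stacked with a fine-resolution reference image as input, and the adversarial loss would be combined with pixel-wise terms such as L1; the exact loss composition used by MUSTFGAN is not specified in the abstract.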

spatiotemporal fusion; DCGAN; multi-level and multi-scale module; self-attention mechanism; cloud interference
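The evaluation reported in the abstract relies on standard full-reference image-quality metrics (SSIM, RMSE, MAE, SAM, plus LBP- and Edge-based indices). As a reference for how the first four are conventionally computed, here is a small NumPy/scikit-image sketch; it is a generic illustration under the usual definitions, not the paper's evaluation code, and the LBP and Edge indices are omitted because their exact formulation is not given in the abstract.

```python
# Generic computation of the image-quality metrics named in the abstract
# (SSIM, RMSE, MAE, SAM). Mirrors common definitions, not the paper's code.
import numpy as np
from skimage.metrics import structural_similarity as ssim  # scikit-image >= 0.19

def rmse(pred, ref):
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

def mae(pred, ref):
    return float(np.mean(np.abs(pred - ref)))

def sam(pred, ref, eps=1e-8):
    """Spectral angle mapper in degrees, averaged over pixels (inputs: H x W x bands)."""
    dot = np.sum(pred * ref, axis=-1)
    norms = np.linalg.norm(pred, axis=-1) * np.linalg.norm(ref, axis=-1)
    angles = np.arccos(np.clip(dot / (norms + eps), -1.0, 1.0))
    return float(np.degrees(angles).mean())

if __name__ == "__main__":
    # Stand-in images for demonstration; real use would load co-registered rasters.
    ref = np.random.rand(256, 256, 3).astype(np.float32)
    pred = np.clip(ref + 0.02 * np.random.randn(*ref.shape), 0, 1).astype(np.float32)
    print("SSIM:", ssim(ref, pred, channel_axis=-1, data_range=1.0))
    print("RMSE:", rmse(pred, ref))
    print("MAE :", mae(pred, ref))
    print("SAM :", sam(pred, ref), "degrees")
```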

刘昱岑、普运伟、聂聆聪、王飞、李奇


Faculty of Land Resource Engineering, Kunming University of Science and Technology, Kunming 650093

Computing Center, Kunming University of Science and Technology, Kunming 650500

Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650504


Funding: National Defense Science and Technology Innovation Special Zone Project (Grant No. 2016300TS00600113)

2024

Remote Sensing Information (遥感信息)
Sponsors: National Remote Sensing Center of China, Ministry of Science and Technology; Chinese Academy of Surveying and Mapping


Indexed in: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 0.712
ISSN:1000-3177
Year, Volume (Issue): 2024, 39(2)