
Impacts of different proportions of contextual information on the construction of sample sets of remote sensing scene images for damaged buildings

Deep learning-based scene analysis of remote sensing images serves as a critical means for post-earthquake damage assessment. Given the relative scarcity of images of damaged buildings, constructing high-quality sample sets of remote sensing scene images is crucial for improving the accuracy of scene recognition and classification. The proportion of contextual information in scene images, an important reference for remote sensing analysis, is a key factor affecting how well such sample sets can be constructed, yet existing construction methods have not explored what proportion is appropriate. Aiming to construct high-quality sample sets, this study designed a method for adjusting the proportion of contextual information in scene images, investigated how different proportions affect the construction of scene sample sets, and explored the optimal range for this proportion. Six sample sets of scene images with different proportions of contextual information were constructed and used to train and test five classic convolutional neural network (CNN) models, and the classification results were analyzed for each model and for each proportion of contextual information. The results indicate that the CNNs reached their best classification accuracy (92.22%) when the proportion of contextual information was 80%, and that this accuracy dropped to 89.03% when the proportion was 95%. Among all the CNN models, GoogLeNet performed best, with an average accuracy of 93.13%. These findings identify a reasonable range for the proportion of contextual information in scene sample sets, effectively improve the classification accuracy of remote sensing scene images, and provide guidance for constructing sample sets of remote sensing scene images of damaged buildings.
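The abstract does not spell out how the proportion of contextual information is adjusted, so the following is a minimal sketch under stated assumptions: the proportion is taken as the share of a scene chip not covered by the target building footprint, the footprint is given as an axis-aligned bounding box, and the chip is a square crop centred on that box. The function name and the example proportion values (other than the 80% and 95% reported above) are hypothetical.

```python
# Minimal sketch (not the authors' code): crop a square scene chip around a
# damaged-building bounding box so that the surrounding context occupies
# roughly a target fraction of the chip area.
import math
from PIL import Image

def crop_with_context(image: Image.Image, bbox, context_ratio: float) -> Image.Image:
    """bbox = (x_min, y_min, x_max, y_max) of the building in pixel coordinates;
    context_ratio = desired share of the chip occupied by context (e.g. 0.80)."""
    x0, y0, x1, y1 = bbox
    building_area = (x1 - x0) * (y1 - y0)
    # Chip area chosen so the building covers (1 - context_ratio) of it.
    chip_area = building_area / max(1.0 - context_ratio, 1e-6)
    side = math.sqrt(chip_area)
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    left = max(0.0, min(cx - side / 2.0, image.width - side))
    top = max(0.0, min(cy - side / 2.0, image.height - side))
    # Near the image border the achieved ratio will be slightly smaller.
    return image.crop((int(left), int(top), int(left + side), int(top + side)))

# Illustrative proportions for six sample sets (hypothetical except 0.80 and 0.95):
# proportions = [0.60, 0.70, 0.80, 0.85, 0.90, 0.95]
```

Applying such a crop to every annotated building at each chosen proportion would yield six sample sets of the kind described above; the actual proportion values and cropping rule used in the paper may differ.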
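Likewise, only GoogLeNet is named among the five classic CNNs, so the benchmarking loop below is only a sketch: the architecture list, folder layout, and training hyper-parameters are assumptions, with torchvision model constructors standing in for whatever implementations the authors used.

```python
# Sketch: train each candidate CNN on each sample set and record test accuracy,
# so results can be compared per model and per context proportion.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

ARCHS = {  # only GoogLeNet is named in the abstract; the rest are placeholders
    "googlenet": models.googlenet, "alexnet": models.alexnet, "vgg16": models.vgg16,
    "resnet50": models.resnet50, "densenet121": models.densenet121,
}

def run(sample_set_dir: str, arch: str, num_classes: int, epochs: int = 20) -> float:
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
    train_dl = DataLoader(datasets.ImageFolder(f"{sample_set_dir}/train", tf),
                          batch_size=32, shuffle=True)
    test_dl = DataLoader(datasets.ImageFolder(f"{sample_set_dir}/test", tf), batch_size=32)
    model = ARCHS[arch](weights=None, num_classes=num_classes).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        model.train()
        for x, y in train_dl:
            out = model(x.to(device))
            logits = out.logits if hasattr(out, "logits") else out  # GoogLeNet aux outputs
            opt.zero_grad()
            loss_fn(logits, y.to(device)).backward()
            opt.step()
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in test_dl:
            correct += (model(x.to(device)).argmax(1) == y.to(device)).sum().item()
            total += y.numel()
    return correct / total

# Hypothetical layout: one folder per context proportion, e.g. "sets/ctx80/{train,test}/<class>/".
# for arch in ARCHS:
#     for ctx in ("ctx60", "ctx70", "ctx80", "ctx85", "ctx90", "ctx95"):
#         print(arch, ctx, f"{run(f'sets/{ctx}', arch, num_classes=2):.4f}")
```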

scene analysis of remote sensing images; post-earthquake damage assessment; proportion of contextual information; sample set construction of scene images; damaged building

TAI Jiayi (邰佳怡), SHEN Li (慎利), QIAO Wenfan (乔文凡), ZHOU Wuzhen (周吾珍)


Faculty of Geosciences and Environmental Engineering, Southwest Jiaotong University, Chengdu 610097, China

Sichuan Institute of Land Science and Technology (Sichuan Center of Satellite Application Technology), Chengdu 610045, China


General Program of the National Natural Science Foundation of China (Grant Nos. 42071386, 41971330)

2024

Remote Sensing for Natural Resources
China Aero Geophysical Survey and Remote Sensing Center for Land and Resources


Indexed in: CSTPCD; Peking University Core Journals
Impact factor: 1.275
ISSN: 2097-034X
Year, volume (issue): 2024, 36(3)