

Fusion of visible and infrared images of ground targets by unmanned aerial vehicles based on knowledge distillation adaptive DenseNet
Visible and infrared image fusion aims to exploit the complementary information captured by two different sensors to achieve image enhancement. However, current deep learning-based fusion methods tend to prioritize evaluation metrics: the resulting models have high complexity, large weight parameters, low inference performance, and poor generalization, and are therefore difficult to deploy on UAV-borne edge computing platforms. To address these challenges, this paper proposes a novel approach for visible and infrared image fusion: an adaptive DenseNet trained with knowledge distillation from a pre-existing fusion model, which balances fusion effectiveness and model lightweighting through hyperparameters such as network width and depth. The proposed method is evaluated on a typical ground-target dataset. Experimental results show that the model occupies only 77 KB and infers in 0.95 ms, yielding an ultra-lightweight network structure with excellent image fusion quality and strong generalization in complex scenes.
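The abstract's training setup, in which a lightweight student network learns from a pre-existing teacher fusion model, can be sketched as a combined objective. The specific loss terms and weights below are assumptions for illustration (the paper's exact formulation is not given here): an MSE distillation term pulls the student's fused output toward the teacher's, and a fidelity term keeps the fused image close to the brighter of the two source pixels, a common intensity target in visible/infrared fusion.

```python
import numpy as np

def fusion_distillation_loss(student_out, teacher_out, visible, infrared,
                             alpha=0.5, beta=1.0):
    """Sketch of a distillation objective for image fusion (assumed form).

    student_out / teacher_out : fused images from the lightweight student
    and the pre-existing teacher model; visible / infrared : source images;
    alpha / beta : illustrative weights for the two terms.
    """
    # Distillation term: the student mimics the teacher's fused output.
    l_distill = np.mean((student_out - teacher_out) ** 2)
    # Fidelity term: the fused image should retain the brighter source pixel.
    target = np.maximum(visible, infrared)
    l_intensity = np.mean(np.abs(student_out - target))
    return beta * l_distill + alpha * l_intensity

# Toy 8x8 grayscale example with a stand-in teacher output.
rng = np.random.default_rng(0)
vis = rng.random((8, 8))
ir = rng.random((8, 8))
teacher = np.maximum(vis, ir)
student = teacher + 0.01 * rng.standard_normal((8, 8))
loss = fusion_distillation_loss(student, teacher, vis, ir)
print(float(loss))
```

In a real training loop this scalar would be backpropagated through the student only; the teacher's weights stay frozen, which is what lets the 77 KB student inherit the fusion behavior of a much larger model.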

Keywords: visible and infrared images; image fusion; knowledge distillation; adaptive; unmanned aerial vehicles

童小钟、赵宗庆、苏绍璟、左震、孙备


College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410072, China


2024

仪器仪表学报 (Chinese Journal of Scientific Instrument)
China Instrument and Control Society (中国仪器仪表学会)


Indexed in: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 2.372
ISSN:0254-3087
Year, volume (issue): 2024, 45(5)