
Research on Lightweight Models for On-Orbit Optical Object Detection

Widely used neural network models have complex structures and large parameter counts, and consume much of a satellite's limited on-board computing and storage resources. Targeting the on-orbit computing platforms of micro/nano-satellites, this paper proposes a depthwise separable convolutional neural network model. Combining the ideas of the inverted residual structure and channel attention, the model improves the network structure of the on-orbit recognition algorithm Yolov4: local module structures are redesigned to reduce the depth and complexity of the overall network, and separable convolution structures are used for spatial convolution to improve the SPP and PANet modules and reduce the number of model parameters. Merging convolution layers with Batch Normalization layers further accelerates forward inference. In addition, drawing on the idea of the Focal loss function, the detection loss function is improved to alleviate the imbalance between foreground and background samples. Compared with the original Yolov4 network, the parameter count is reduced by about 7-fold and FLOPs by about 30-fold while maintaining a recognition accuracy of 94.09%. Comparative experiments against Yolo-series, SSD, MobileNet, CenterNet, and other state-of-the-art network models further validate the algorithm's performance, providing theoretical support for on-orbit object recognition and the filtering of useless data.
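As a back-of-the-envelope illustration of why depthwise separable convolution shrinks the model (the abstract reports roughly a sevenfold parameter reduction overall), the following sketch compares parameter counts of a standard convolution and its depthwise separable replacement. The layer sizes are made up for illustration and are not taken from the paper.

```python
# Hypothetical layer sizes; only the counting formulas are standard.

def conv_params(c_in, c_out, k):
    """Weights of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """k x k depthwise convolution (one filter per input channel)
    followed by a 1x1 pointwise convolution."""
    return c_in * k * k + c_in * c_out

if __name__ == "__main__":
    c_in, c_out, k = 256, 256, 3
    std = conv_params(c_in, c_out, k)
    sep = depthwise_separable_params(c_in, c_out, k)
    # For this layer the standard convolution holds ~8.7x more weights.
    print(std, sep, round(std / sep, 1))
```

The per-layer saving compounds across the backbone, which is how replacements like this can cut whole-model parameters severalfold.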
Lightweight Model for On-Orbit Optical Object Detection
As aircraft are important transport carriers and military targets, aircraft detection in remote sensing images matters for rescue, early warning, and other applications. At present, widely used neural network models have complex structures and require large numbers of parameters, straining the limited computing and storage resources of on-orbit detection satellites; the efficiency and accuracy of in-orbit detection therefore demand an optimized computational structure. Lightweight neural network design can reduce computational cost and compress the overall framework. In this study, on the basis of depthwise separable convolution, the SwishBlock bottleneck module was established by drawing on the construction idea of the inverted residual structure, and the network was improved in three aspects. First, ResBlock_body was replaced following the overall design of the YOLO v4 main framework, and the channel attention mechanism of SENet was integrated into the network structure so that different weights are assigned to the extracted feature maps. Second, on the premise of maintaining channel separation, a separable convolution structure was used to improve the SPP and PANet structures, reducing both the number of model parameters and the memory footprint. Third, the convolution layer and the batch normalization layer were merged to further accelerate forward inference. Drawing on the focal loss function, the object detection loss function was also improved to mitigate the imbalance between foreground and background samples. The algorithm was then measured with objective evaluation indices from multiple angles: the public RSOD dataset and an internally produced dataset were used to compare high-performance network models, verification experiments assessed the rationality and processing speed of each improvement, and the trained model was deployed on an embedded platform to verify the detection speed of the improved YOLO v4 model for on-orbit object recognition. Compared with the original method, the number of parameters was reduced approximately sevenfold and the number of FLOPs approximately 30-fold at a recognition accuracy of 94.09%. The proposed algorithm also outperformed the YOLO series, SSD, MobileNet, CenterNet, and other cutting-edge network models in comparative experiments. The proposed on-orbit object detection model thus overcomes the limitations of computing and storage resources that traditionally cannot support high-precision complex models, and the experimental results on ground and embedded platforms show that it effectively detects remote sensing targets. Future research may expand the scale of remote sensing datasets and improve the universality of the model's application scenarios.
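The abstract's convolution/batch-normalization merging step can be sketched with the standard BN-folding identity: the per-channel BN scale is absorbed into the convolution weights and the shift into the bias, so one fused layer replaces two at inference time. Shapes and names below are illustrative, not the paper's implementation.

```python
import numpy as np

def fuse_conv_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """Return fused weights/bias so that BN(conv(x)) == conv_fused(x).

    w: (c_out, c_in, k, k) conv weights; b: (c_out,) conv bias;
    gamma, beta, mean, var: per-channel BN parameters (frozen statistics).
    """
    scale = gamma / np.sqrt(var + eps)        # per-output-channel BN factor
    w_fused = w * scale[:, None, None, None]  # scale each output filter
    b_fused = (b - mean) * scale + beta       # fold the shift into the bias
    return w_fused, b_fused
```

Because the fused layer is exactly equivalent on frozen statistics, this is a pure inference-speed optimization with no accuracy cost.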
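The focal-loss idea the abstract borrows down-weights easy (mostly background) samples so the many negatives do not swamp the few foreground objects. A minimal binary sketch follows; the alpha and gamma values are the common defaults from the focal loss literature, not necessarily the paper's settings.

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Focal loss for one binary sample.

    p: predicted foreground probability in (0, 1); y: label in {0, 1}.
    With gamma = 0 this reduces to alpha-weighted cross-entropy;
    larger gamma suppresses the loss of well-classified samples.
    """
    p_t = p if y == 1 else 1.0 - p            # probability of the true class
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

An easy positive (p = 0.9) contributes a tiny loss, while a hard positive (p = 0.1) still contributes strongly, which is exactly the rebalancing effect described above.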

remote sensing; convolutional neural network; Yolov4; optical remote object detection; inverted residual structure

LYU Xiaoning, XIA Yuli, ZHAO Junsuo, QIAO Peng


Key Laboratory of Space-Based Integrated Information System, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China

University of Chinese Academy of Sciences, Beijing 100049, China

remote sensing; convolutional neural network; Yolov4; on-orbit object detection; inverted residual structure

National Natural Science Foundation of China

62027801

2024

National Remote Sensing Bulletin (遥感学报)
Environmental Remote Sensing Branch of the Geographical Society of China; Institute of Remote Sensing Applications, Chinese Academy of Sciences


Indexed in: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 2.921
ISSN:1007-4619
Year, Volume (Issue): 2024, 28(4)