
Lightweight Network-Based End-to-End Pose Estimation for Noncooperative Targets

To address six-degree-of-freedom pose estimation for noncooperative space targets, a lightweight convolutional neural network named LSPENet is designed to perform end-to-end pose estimation without hand-crafted features. Its basic building block combines depthwise separable convolution with an efficient channel attention (ECA) mechanism, balancing network complexity and accuracy. Two branches estimate position and orientation separately: position is obtained by direct regression, while orientation estimation introduces soft-assignment coding. Experiments on the URSO dataset show that soft-assignment coding yields substantially smaller orientation errors than direct regression. Compared with other end-to-end pose estimation networks, the proposed network reduces the parameter count by 76.7% and the single-image inference time by 13.3%, while improving position estimation accuracy by 54.6% and orientation estimation accuracy by 57.8%. The proposed lightweight end-to-end network offers a new approach to spaceborne monocular visual pose estimation.
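The building block named in the abstract (depthwise separable convolution followed by ECA) can be sketched directly. The PyTorch snippet below is an illustrative re-creation under assumed layer choices (3x3 depthwise kernel, batch normalization, ReLU6, and the adaptive kernel-size rule from the ECA paper); it is not the authors' published LSPENet code.

```python
# Minimal sketch of a depthwise-separable convolution + ECA block.
# Layer sizes, normalization and activation choices are assumptions, not the paper's exact design.
import math
import torch
import torch.nn as nn


class ECA(nn.Module):
    """Efficient Channel Attention: channel weights from a 1-D convolution
    over the globally pooled channel descriptor (Wang et al., CVPR 2020)."""

    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        # Adaptive, odd kernel size derived from the channel count.
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) -> global average pooling gives a (N, C, 1, 1) descriptor.
        y = x.mean(dim=(2, 3), keepdim=True)
        # 1-D convolution across the channel dimension, then sigmoid gating.
        y = self.conv(y.squeeze(-1).transpose(1, 2)).transpose(1, 2).unsqueeze(-1)
        return x * torch.sigmoid(y)


class DSConvECA(nn.Module):
    """Depthwise-separable convolution followed by ECA (assumed block layout)."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False),  # depthwise
            nn.BatchNorm2d(in_ch),
            nn.ReLU6(inplace=True),
            nn.Conv2d(in_ch, out_ch, 1, bias=False),                           # pointwise
            nn.BatchNorm2d(out_ch),
            nn.ReLU6(inplace=True),
            ECA(out_ch),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)


if __name__ == "__main__":
    x = torch.randn(1, 32, 64, 64)
    print(DSConvECA(32, 64, stride=2)(x).shape)  # torch.Size([1, 64, 32, 32])
```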
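The orientation branch replaces direct quaternion regression with soft-assignment coding: the ground-truth orientation is encoded as a probability distribution over a fixed set of reference orientations, and the predicted distribution is decoded back to a quaternion at inference. The sketch below illustrates that idea with an assumed Gaussian kernel over geodesic distance and a simple weighted-average decoder; the reference-set size, kernel width, and decoding rule are placeholders, not the paper's settings.

```python
# Illustrative soft-assignment encoding/decoding for orientation labels (assumed details).
import numpy as np


def angular_distance(q, refs):
    """Geodesic angle (radians) between unit quaternion q and each reference quaternion."""
    dots = np.clip(np.abs(refs @ q), 0.0, 1.0)  # |<q, q_ref>| handles the quaternion sign ambiguity
    return 2.0 * np.arccos(dots)


def encode_soft(q, refs, sigma=0.2):
    """Soft label: Gaussian kernel of angular distance, normalized to sum to 1."""
    w = np.exp(-angular_distance(q, refs) ** 2 / (2.0 * sigma ** 2))
    return w / w.sum()


def decode_soft(p, refs):
    """Decode a predicted distribution to a quaternion by a weighted sum of references
    (signs aligned to the highest-weight reference), a simple quaternion-averaging approximation."""
    anchor = refs[np.argmax(p)]
    aligned = refs * np.sign(refs @ anchor)[:, None]
    q = (p[:, None] * aligned).sum(axis=0)
    return q / np.linalg.norm(q)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    refs = rng.normal(size=(512, 4))                   # assumed: 512 random unit reference orientations
    refs /= np.linalg.norm(refs, axis=1, keepdims=True)
    q_true = np.array([0.0, 0.0, 0.0, 1.0])
    p = encode_soft(q_true, refs)                      # soft training target (e.g. for a cross-entropy loss)
    q_hat = decode_soft(p, refs)                       # orientation recovered from the distribution
    err = 2.0 * np.degrees(np.arccos(np.clip(abs(q_true @ q_hat), 0.0, 1.0)))
    print(f"decode error: {err:.2f} deg")
```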

Keywords: image processing; convolutional neural network; pose estimation; noncooperative target; soft-assignment coding

刘佳辉 (Liu Jiahui), 张永合 (Zhang Yonghe), 张文秀 (Zhang Wenxiu)


Innovation Academy for Microsatellites, Chinese Academy of Sciences, Shanghai 201304, China

University of Chinese Academy of Sciences, Beijing 100049, China

Funding: National Key Research and Development Program of China (2021YFC2202600, 2021YFA0717100); Young Scientists Fund of the National Natural Science Foundation of China (42001408)

2024

Laser & Optoelectronics Progress (激光与光电子学进展)
Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences

Indexed in: CSTPCD; Peking University Core Journals
Impact factor: 1.153
ISSN: 1006-4125
Year, Volume (Issue): 2024, 61(14)