DMANet: Dense Multi-scale Attention Network for Space Non-cooperative Object Pose Estimation
Accurate pose estimation of space non-cooperative targets with a monocular camera is crucial to space debris removal, autonomous rendezvous, and other on-orbit services. However, monocular pose estimation methods lack depth information, resulting in a scale uncertainty issue that significantly reduces their accuracy and real-time performance. To address this problem, we first propose a multi-scale attention block (MAB) to extract complex high-dimensional semantic features from the input image. Second, based on the MAB module, we propose a dense multi-scale attention network (DMANet) for estimating the 6-degree-of-freedom (DoF) pose of space non-cooperative targets, which consists of planar position estimation, depth position estimation, and attitude estimation branches. By introducing an Euler angle-based soft classification method, we reformulate the pose regression problem as a classical classification problem. In addition, we design a space non-cooperative object model and construct a pose estimation dataset using CoppeliaSim. Finally, we thoroughly evaluate the proposed method on the SPEED+ and URSO datasets as well as our own dataset, and compare it with other state-of-the-art methods. Experimental results demonstrate that DMANet achieves excellent pose estimation accuracy.
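To illustrate the Euler angle-based soft classification idea mentioned in the abstract, the minimal sketch below discretizes one Euler angle into bins and builds a Gaussian-smoothed ("soft") label distribution around the ground-truth bin, which the attitude branch could then be trained against with a cross-entropy-style loss. The bin width, smoothing sigma, and function names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of Euler-angle soft classification (assumed details:
# 1-degree bins and a Gaussian smoothing sigma; not the authors' exact setup).
import numpy as np

def soft_label(angle_deg, num_bins=360, sigma=3.0):
    """Encode one Euler angle as a soft probability distribution over bins."""
    bin_centers = np.arange(num_bins) - 180.0          # bins cover [-180, 180) deg
    # Wrap the angular distance so that -179 deg and +179 deg are 2 deg apart.
    diff = (angle_deg - bin_centers + 180.0) % 360.0 - 180.0
    weights = np.exp(-0.5 * (diff / sigma) ** 2)       # Gaussian peak at the true bin
    return weights / weights.sum()                     # normalize to a distribution

def decode(probs):
    """Recover a continuous angle as the probability-weighted circular mean."""
    bin_centers = np.deg2rad(np.arange(len(probs)) - 180.0)
    s = np.sum(probs * np.sin(bin_centers))
    c = np.sum(probs * np.cos(bin_centers))
    return np.rad2deg(np.arctan2(s, c))

# Example: encode a yaw of 42.7 deg and decode it back.
p = soft_label(42.7)
print(round(decode(p), 2))   # ~42.7
```

Soft labels of this kind penalize predictions in bins near the true angle less than distant ones, which is one common motivation for recasting continuous angle regression as classification.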