
Object 6-DoF pose estimation using auxiliary learning

To accurately estimate the position and orientation of an object in the camera coordinate system under challenging conditions such as severe occlusion and scarce texture, while also improving network efficiency and simplifying the network architecture, this paper proposes a 6-DoF pose estimation method based on RGB-D data that employs auxiliary learning. The network takes the target object image patch, the corresponding depth map, and the CAD model as inputs. First, a dual-branch point cloud registration network produces predicted point clouds in both model space and camera space. Then, in the auxiliary learning branch, the target object image patch and the Depth-XYZ representation derived from the depth map are fed into a multi-modal feature extraction and fusion module, followed by coarse-to-fine pose estimation; the estimated pose is used as a prior to refine the loss computation. Finally, in the performance evaluation stage, the auxiliary learning branch is discarded, and only the outputs of the dual-branch point cloud registration network are used for 6-DoF pose estimation via point pair feature matching. Experimental results show that the proposed method achieves an AUC of 95.9% and ADD-S < 2 cm of 99.0% on the YCB-Video dataset, a mean ADD(-S) of 99.4% on the LineMOD dataset, and a mean ADD(-S) of 71.3% on the LM-O dataset. Compared with existing 6-DoF pose estimation methods, the auxiliary learning approach offers better model performance and substantially higher pose estimation accuracy.
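The ADD and ADD-S figures reported above follow the standard average-distance pose-error definitions (ADD for asymmetric objects, ADD-S with a nearest-neighbour match for symmetric ones). A minimal NumPy sketch of these metrics, with function names of our own choosing rather than from the paper:

```python
import numpy as np

def add_metric(R_est, t_est, R_gt, t_gt, model_pts):
    """ADD: mean distance between corresponding model points
    transformed by the estimated and ground-truth poses."""
    p_est = model_pts @ R_est.T + t_est
    p_gt = model_pts @ R_gt.T + t_gt
    return np.linalg.norm(p_est - p_gt, axis=1).mean()

def adds_metric(R_est, t_est, R_gt, t_gt, model_pts):
    """ADD-S: for symmetric objects, each ground-truth point is matched
    to its *closest* estimated point before averaging."""
    p_est = model_pts @ R_est.T + t_est
    p_gt = model_pts @ R_gt.T + t_gt
    # pairwise distances, then nearest-neighbour per ground-truth point
    d = np.linalg.norm(p_gt[:, None, :] - p_est[None, :, :], axis=2)
    return d.min(axis=1).mean()
```

The "ADD-S < 2 cm" score is then the fraction of test frames whose ADD-S falls below 2 cm, and the AUC integrates that accuracy curve over a range of thresholds.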

6-DoF pose estimation; auxiliary learning; RGB-D image; 3D point cloud
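The point pair feature matching used in the evaluation stage refers to the classic four-dimensional descriptor of Drost et al.: for two oriented points it encodes the point distance and three angles. A minimal sketch of the descriptor alone (the full pipeline — quantization, hash-table lookup, and pose voting — is omitted):

```python
import numpy as np

def point_pair_feature(p1, n1, p2, n2):
    """F(p1, p2) = (||d||, angle(n1, d), angle(n2, d), angle(n1, n2)),
    where d = p2 - p1 and n1, n2 are the surface normals."""
    d = p2 - p1

    def angle(a, b):
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        return np.arccos(np.clip(cos, -1.0, 1.0))

    return np.array([np.linalg.norm(d), angle(n1, d), angle(n2, d), angle(n1, n2)])
```

At matching time, features computed on the model are stored in a hash table; features from the scene point cloud index into it and vote for consistent pose hypotheses.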

CHEN Minjia, GAI Shaoyan, DA Feipeng, YU Jian


School of Automation, Southeast University, Nanjing 210096, China

Key Laboratory of Measurement and Control of Complex Systems of Engineering, Ministry of Education, Southeast University, Nanjing 210096, China

Key Laboratory of Space Photoelectric Detection and Perception, Ministry of Industry and Information Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China


National Natural Science Foundation of China; Jiangsu Province Excellent Postdoctoral Program; Jiangsu Frontier Leading Technology Basic Research Project; Priority Academic Program Development of Jiangsu Higher Education Institutions; Open Fund of the Key Laboratory of Space Photoelectric Detection and Perception (Ministry of Industry and Information Technology), Nanjing University of Aeronautics and Astronautics; Fundamental Research Funds for the Central Universities

62305055; 2022ZB118; BK20192004C; NJ2022025-1; NJ2022025

2024

Optics and Precision Engineering (光学精密工程)
Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences; China Instrument and Control Society


Indexed in: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 2.059
ISSN:1004-924X
Year, volume (issue): 2024, 32(6)