Unmanned aerial vehicle visual localization method based on deep feature orthorectification matching

Acquiring high-precision position information is the foundation for unmanned aerial vehicles (UAVs) to complete various tasks safely and reliably under satellite-denied conditions, but traditional image matching methods are difficult to support, exhibit poor positioning accuracy, and impose numerous matching constraints. A visual localization method based on deep feature orthorectification matching is therefore proposed: a deep learning network extracts deep features from orthorectified UAV aerial images and from commercial maps, the matching relationship between the two is established, and the high-precision position of the UAV is then computed. The influence of different factors on visual localization accuracy is analyzed with a visual measurement mechanism model, and offline experiments are conducted on a medium-altitude aerial image dataset. The experimental results show that, compared with the traditional template matching method based on histogram of oriented gradients (HOG) features, the proposed method improves localization accuracy by 25%, and the position root mean square error (RMSE) is better than 15 m + 0.5%H (for flight heights below 5000 m), which demonstrates practical engineering value.
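The pipeline summarized in the abstract lends itself to a compact illustration. The following Python sketch is a minimal, hypothetical rendering of the general approach, not the authors' implementation: it assumes a truncated off-the-shelf ResNet-18 backbone as a stand-in for the unspecified deep feature network, and it reduces matching to sliding the UAV feature map over the map-tile feature map with a cosine-similarity correlation, under the assumption that both inputs are already orthorectified and resampled to the same ground sampling distance.

```python
# Minimal, hypothetical sketch of the matching pipeline described in the
# abstract -- NOT the authors' implementation. A truncated ResNet-18 stands
# in for the unspecified deep feature network, and localization reduces to
# a cosine-similarity correlation of feature maps.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T

# Truncated backbone: conv1 .. layer2 of ResNet-18 (overall stride 8).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor = torch.nn.Sequential(*list(backbone.children())[:-4]).eval()
normalize = T.Normalize(mean=[0.485, 0.456, 0.406],
                        std=[0.229, 0.224, 0.225])

@torch.no_grad()
def deep_features(img: torch.Tensor) -> torch.Tensor:
    """img: (3, H, W) float tensor in [0, 1] -> (1, C, H/8, W/8) features."""
    return extractor(normalize(img).unsqueeze(0))

@torch.no_grad()
def match(uav_img: torch.Tensor, map_tile: torch.Tensor) -> tuple[int, int]:
    """Locate the UAV frame inside a larger georeferenced map tile.

    Assumes both images are orthorectified and resampled to the same
    ground sampling distance, so matching is a pure 2-D translation search.
    Returns the (row, col) pixel offset of the frame's top-left corner.
    """
    f_uav = deep_features(uav_img)    # template feature map
    f_map = deep_features(map_tile)   # search-region feature map
    # Channel-normalize both, then slide the template with conv2d so each
    # output cell is a sum of per-location cosine similarities.
    score = F.conv2d(F.normalize(f_map, dim=1), F.normalize(f_uav, dim=1))
    idx = int(torch.argmax(score))
    row, col = divmod(idx, score.shape[-1])
    stride = uav_img.shape[-1] // f_uav.shape[-1]   # feature-map stride (~8)
    return row * stride, col * stride

# Illustrative usage: a 256x256 UAV frame inside a 1024x1024 map tile.
uav_frame = torch.rand(3, 256, 256)
map_tile = torch.rand(3, 1024, 1024)
row, col = match(uav_frame, map_tile)
# With the tile's geotransform (origin + ground sampling distance), the
# pixel offset (row, col) converts directly to the UAV's geodetic position.
```

As a scale reference for the reported accuracy bound, which grows with flight height: at H = 2000 m, the figure of 15 m + 0.5%H evaluates to 15 m + 10 m = 25 m RMSE.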

visual localization; deep learning; matching navigation; unmanned aerial vehicle; satellite denial

SHANG Kejun, ZHAO Liang, ZHANG Weijian, MING Li, LIU Chongliang


Beijing Institute of Automatic Control Equipment, Beijing 100074, China

School of Automation, Beijing Institute of Technology, Beijing 100081, China


Young Elite Scientists Sponsorship Program by the China Association for Science and Technology (CAST)

2021QNTJ-003

2024

Journal of Chinese Inertial Technology
Chinese Society of Inertial Technology

Indexed in: CSTPCD; Peking University Core Journals
Impact factor: 0.792
ISSN:1005-6734
Year, volume (issue): 2024, 32(1)