Dynamic visual SLAM algorithm based on lightweight YOLOv8n
江祥奎 ¹, 杨刚 ¹, 杜遥遥 ¹
Author information
- 1. School of Automation, Xi'an University of Posts and Telecommunications, Xi'an 710121, Shaanxi, China
Abstract
To address the low positioning accuracy of simultaneous localization and mapping (SLAM) algorithms in dynamic scenes, a dynamic visual SLAM algorithm based on a lightweight You Only Look Once version (YOLOv) 8n model is proposed. A weighted bidirectional feature pyramid network (BiFPN) is used to make the YOLOv8n model lightweight and reduce its number of parameters. The lightweight YOLOv8n model is introduced into the visual SLAM algorithm and combined with the sparse optical flow method to form an object detection thread that removes dynamic feature points; the filtered feature points are then used for feature matching and pose estimation. Experimental results show that the parameter count of the improved YOLOv8n model is reduced by 36.7% and its weight file size by 33.3%, achieving the intended lightweighting of the YOLOv8n model. Compared with the ORB-SLAM3 algorithm, the positioning accuracy of the proposed algorithm in dynamic scenes is improved by 83.38%, effectively improving the accuracy of SLAM algorithms in dynamic scenes.
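The dynamic-feature-point removal step described in the abstract can be sketched as follows. This is a minimal illustration of only the detection-box masking stage (the sparse optical flow consistency check is omitted); the function name, box coordinates, class labels, and the `dynamic_classes` set are hypothetical, not taken from the paper:

```python
import numpy as np

def filter_dynamic_points(points, boxes, box_classes, dynamic_classes):
    """Discard feature points that fall inside detection boxes of dynamic objects.

    points:          (N, 2) array of pixel coordinates (x, y)
    boxes:           (M, 4) array of boxes (x1, y1, x2, y2)
    box_classes:     length-M sequence of class labels, one per box
    dynamic_classes: set of labels treated as dynamic, e.g. {"person", "car"}
    """
    keep = np.ones(len(points), dtype=bool)
    for (x1, y1, x2, y2), cls in zip(boxes, box_classes):
        if cls not in dynamic_classes:
            continue  # static object: its feature points may still be used
        inside = ((points[:, 0] >= x1) & (points[:, 0] <= x2) &
                  (points[:, 1] >= y1) & (points[:, 1] <= y2))
        keep &= ~inside  # drop points covered by a dynamic-object box
    return points[keep]

# Hypothetical example: one "person" detection covering the image centre.
pts = np.array([[10.0, 10.0], [60.0, 60.0], [200.0, 150.0]])
boxes = np.array([[40.0, 40.0, 100.0, 100.0]])
static_pts = filter_dynamic_points(pts, boxes, ["person"], {"person"})
# static_pts keeps only the points outside the person box
```

The surviving points would then feed feature matching and pose estimation, as the abstract describes.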
Key words
visual simultaneous localization and mapping / YOLOv8n / object detection / sparse optical flow method / dynamic feature point elimination
Funding
General project of the Key Research and Development Program of the Shaanxi Provincial Department of Science and Technology (2022NY-087)
Publication year
2024