Dynamic SLAM Algorithm Based on Lidar and Camera Fusion
鲍柏仲¹, 詹小斌¹, 喻蝶¹, 司言¹, 段暕¹, 史铁林¹
Abstract
Aiming at the degraded mapping and positioning accuracy of lidar-based SLAM systems in dynamic environments, caused by the movement and deformation of objects, a lidar-camera fusion SLAM algorithm was proposed. Deep learning was used to perform instance segmentation on images, and the segmentation results were fused into the lidar point cloud to eliminate dynamic objects. Based on the overall framework of the LIO-SAM algorithm, YOLOv5 was used to obtain image semantic information, and the point cloud was projected into the pixel coordinate system to obtain point-wise semantic labels. According to these semantics, the dynamic-object point clouds were removed, which effectively improved the positioning accuracy of the algorithm in dynamic scenes. The algorithm was experimentally verified on the open-source KITTI dataset: compared with LIO-SAM, the mean absolute pose error dropped by 3.48%, the median by 4.85%, and the root-mean-square error by 2.86%.
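The core step described above (projecting lidar points into the pixel coordinate system and discarding those that fall on segmented dynamic objects) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the pinhole-projection convention, and the assumption that the segmentation result is available as a per-pixel boolean mask are all hypothetical.

```python
import numpy as np

def remove_dynamic_points(points, T_cam_lidar, K, dynamic_mask):
    """Project lidar points into the image and drop those that land on
    pixels flagged as dynamic by the instance-segmentation mask.

    points       : (N, 3) lidar points in the lidar frame
    T_cam_lidar  : (4, 4) extrinsic transform, lidar -> camera
    K            : (3, 3) camera intrinsic matrix
    dynamic_mask : (H, W) bool array, True where a dynamic object was segmented
    """
    H, W = dynamic_mask.shape

    # Homogeneous transform of the points into the camera frame
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Only points in front of the camera can be labeled
    in_front = pts_cam[:, 2] > 0

    # Pinhole projection to pixel coordinates
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)

    in_image = in_front & (u >= 0) & (u < W) & (v >= 0) & (v < H)

    # A point is kept if it projects outside the image (no label available)
    # or its pixel is not covered by a dynamic-object mask.
    keep = np.ones(points.shape[0], dtype=bool)
    idx = np.where(in_image)[0]
    keep[idx] = ~dynamic_mask[v[idx], u[idx]]
    return points[keep]
```

In practice the extrinsic and intrinsic matrices would come from the sensor calibration (e.g. the KITTI calibration files), and the mask from the YOLOv5 segmentation output for the synchronized camera frame.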
Keywords
LIO-SAM / YOLOv5 / lidar SLAM / sensor fusion / dynamic scene / instance segmentation
Funding
Key Research and Development Program of Hubei Province (2021BAA196)
National Natural Science Foundation of China (52205103)
National Natural Science Foundation of China (52375097)
Year of publication
2024