
Dynamic SLAM Algorithm Based on Lidar and Camera Fusion

To address the degradation of mapping and localization accuracy that lidar-based SLAM systems suffer in dynamic environments, where moving and deforming objects corrupt the map, a lidar-camera fusion SLAM algorithm is proposed. Deep learning is used to perform instance segmentation on camera images, and the segmentation results are fused into the lidar point cloud to remove points belonging to dynamic objects. Built on the overall framework of the LIO-SAM algorithm, the method uses YOLOv5 to obtain image semantic information, projects the point cloud into the pixel coordinate system to assign each point a semantic label, and removes the dynamic-object points accordingly, which effectively improves localization accuracy in dynamic scenes. The algorithm was experimentally verified on the open-source KITTI dataset: compared with LIO-SAM, the mean absolute pose error decreased by 3.48%, the median by 4.85%, and the root-mean-square error by 2.86%.
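The core geometric step described above, projecting lidar points into the pixel coordinate system and discarding those that land on segmented dynamic objects, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the pinhole-projection details, and the boolean `dynamic_mask` standing in for the YOLOv5 instance-segmentation output are all assumptions.

```python
import numpy as np

def filter_dynamic_points(points_lidar, T_cam_lidar, K, dynamic_mask):
    """Remove lidar points that project onto dynamic-object pixels.

    points_lidar : (N, 3) points in the lidar frame.
    T_cam_lidar  : (4, 4) extrinsic transform, lidar frame -> camera frame.
    K            : (3, 3) camera intrinsic matrix.
    dynamic_mask : (H, W) bool array, True where instance segmentation
                   marked a dynamic object (e.g. car, pedestrian).
    Returns the static subset of points_lidar.
    """
    n = points_lidar.shape[0]
    # Homogeneous coordinates, then transform into the camera frame
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Only points in front of the camera can be checked against the mask
    in_front = pts_cam[:, 2] > 0.1

    # Pinhole projection to pixel coordinates (u, v)
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)

    h, w = dynamic_mask.shape
    in_image = (u >= 0) & (u < w) & (v >= 0) & (v < h)

    # A point is dynamic if its projection falls on a masked pixel;
    # points outside the camera view are kept as-is
    valid = in_front & in_image
    dynamic = np.zeros(n, dtype=bool)
    dynamic[valid] = dynamic_mask[v[valid], u[valid]]

    return points_lidar[~dynamic]
```

Points that project outside the image (or behind the camera) cannot be labeled and are conservatively kept, which matches the idea of only removing points with confirmed dynamic semantics.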

LIO-SAM; YOLOv5; lidar SLAM; sensor fusion; dynamic scene; instance segmentation

鲍柏仲, 詹小斌, 喻蝶, 司言, 段暕, 史铁林


School of Mechanical Science and Engineering, Huazhong University of Science and Technology


Funding: Hubei Provincial Key R&D Program (2021BAA196); National Natural Science Foundation of China (52205103, 52375097)

2024

Instrument Technique and Sensor (仪表技术与传感器)
Shenyang Academy of Instrumentation Science


Indexed in: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 0.585
ISSN: 1002-1841
Year, Volume (Issue): 2024, (7)