Dynamic SLAM Algorithm Based on Lidar and Camera Fusion
To address the degraded mapping and localization accuracy that moving and deforming objects cause in lidar-based SLAM systems operating in dynamic environments, a lidar-camera fusion SLAM algorithm was proposed. Deep learning was used to perform instance segmentation on camera images, and the segmentation results were fused with the lidar point cloud to eliminate dynamic objects. Built on the overall framework of the LIO-SAM algorithm, YOLOv5 was used to obtain image semantic information, and the point cloud was projected into the pixel coordinate system so that each point inherited the semantic label of the pixel it fell on. Point clouds belonging to dynamic objects were then removed according to these semantics, which effectively improved the pose accuracy of the algorithm in dynamic scenes. The algorithm was verified experimentally on the open-source KITTI dataset. Compared with LIO-SAM, the mean absolute pose error of the algorithm dropped by 3.48%, the median absolute pose error by 4.85%, and the root mean square error by 2.86%.
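The projection-and-filtering step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the intrinsic matrix values, the identity lidar-to-camera extrinsic, and the function name `filter_dynamic_points` are all placeholder assumptions, and a real system would use calibrated intrinsics/extrinsics and the per-frame instance-segmentation mask produced by YOLOv5.

```python
import numpy as np

# Placeholder calibration for illustration only (not from the paper).
K = np.array([[718.856, 0.0, 607.1928],
              [0.0, 718.856, 185.2157],
              [0.0, 0.0, 1.0]])          # camera intrinsic matrix (KITTI-like values)
T_cam_lidar = np.eye(4)                  # lidar -> camera extrinsic (identity placeholder)

def filter_dynamic_points(points_lidar, dynamic_mask):
    """Project lidar points into the pixel coordinate system and drop those
    landing on pixels the segmentation mask labels as dynamic objects."""
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])   # homogeneous coordinates
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]           # transform into camera frame
    in_front = pts_cam[:, 2] > 0                         # only points ahead of the camera
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                          # perspective division -> pixels
    u = uv[:, 0].astype(int)
    v = uv[:, 1].astype(int)
    h, w = dynamic_mask.shape
    in_image = (u >= 0) & (u < w) & (v >= 0) & (v < h) & in_front
    keep = np.ones(n, dtype=bool)                        # points outside the image are kept
    idx = np.where(in_image)[0]
    keep[idx] = ~dynamic_mask[v[idx], u[idx]]            # drop dynamic-labeled points
    return points_lidar[keep]
```

Points that project outside the image (or behind the camera) are kept, since the mask gives no evidence that they are dynamic; only points confirmed to fall inside a dynamic-object instance are eliminated before mapping.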