A SLAM algorithm based on the fusion of visual semantics and laser point clouds
Lidar, as one of the principal sensors for SLAM (simultaneous localization and mapping), has been widely studied and applied owing to its high precision and stable performance. However, the point cloud data it acquires is sparse and carries little feature information, which can cause mismatching and pose estimation errors and degrade the localization and mapping accuracy of SLAM. To address these problems, a SLAM algorithm (VSIL-SLAM) that fuses visual semantic information with laser point cloud data is proposed. First, following the projection idea, the clustered point cloud is mapped into the semantic detection boxes to generate semantic objects, alleviating the feature scarcity of the raw laser point cloud. Then, topological features are introduced alongside shape features to describe the semantic objects, and a matching-based topological similarity measurement method is proposed to resolve the mismatching caused by relying on a single feature and to improve matching accuracy. Finally, the front-end odometry is constructed from geometric features and semantic objects by adding point-to-point geometric constraints between semantic objects, and the back-end loop closure detection and pose graph optimization are designed. Experimental results demonstrate that the proposed algorithm improves the localization and mapping performance of laser SLAM.
Keywords: visual semantics; point cloud clustering; fusion algorithm; topological similarity measurement; front-end odometry; laser SLAM
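
The projection-and-association step described in the abstract can be illustrated with a minimal sketch: clustered LiDAR points are projected into the image plane and assigned to the 2D semantic detection box that contains most of them, yielding a labeled semantic object. This assumes a pinhole camera model and a known LiDAR-to-camera extrinsic calibration; the function name, the 0.5 overlap threshold, and the "largest fraction inside the box" association rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def project_cluster_to_boxes(cluster_xyz, K, T_cam_lidar, detection_boxes):
    """Assign one clustered LiDAR segment to a 2D semantic detection box (hypothetical sketch).

    cluster_xyz     : (N, 3) points of one LiDAR cluster, in the LiDAR frame
    K               : (3, 3) camera intrinsic matrix
    T_cam_lidar     : (4, 4) extrinsic transform, LiDAR frame -> camera frame
    detection_boxes : list of (label, (u_min, v_min, u_max, v_max)) from the 2D detector
    Returns the label whose box contains the largest fraction of projected points,
    or None if no box is a plausible match.
    """
    # Transform the cluster into the camera frame (homogeneous coordinates)
    pts_h = np.hstack([cluster_xyz, np.ones((cluster_xyz.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0]          # keep points in front of the camera
    if len(pts_cam) == 0:
        return None

    # Pinhole projection to pixel coordinates
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    # Score each detection box by the fraction of projected points it contains
    best_label, best_ratio = None, 0.0
    for label, (u0, v0, u1, v1) in detection_boxes:
        inside = ((uv[:, 0] >= u0) & (uv[:, 0] <= u1) &
                  (uv[:, 1] >= v0) & (uv[:, 1] <= v1))
        ratio = inside.mean()
        if ratio > best_ratio:
            best_label, best_ratio = label, ratio

    # Require a majority of the cluster to fall inside the box (assumed threshold)
    return best_label if best_ratio > 0.5 else None
```

In practice such an association would also need to handle occlusion, overlapping detection boxes, and clusters that straddle the image boundary; the sketch only conveys how sparse geometric clusters can be enriched with semantic labels before matching and constraint construction.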