LiDAR point-cloud-based 2D wall line detection experiment for indoor mobile robots
[Objective] The efficient navigation of indoor mobile robots depends crucially on the swift, accurate identification of wall line elements in their surroundings. Conventional approaches, such as Hough transform-based methods, have notable limitations, including a propensity to lose vital information and to generate an excessive number of redundant line segments. To overcome these challenges, a novel algorithm that leverages the spatial characteristics of Light Detection and Ranging (LiDAR) point cloud data is presented to extract 2D wall lines directly, markedly improving the accuracy and efficiency of environmental element identification. [Methods] First, the raw 3D point cloud acquired by the LiDAR undergoes pass-through filtering to remove ground and ceiling points, yielding a clean dataset free from interference that could skew subsequent processing. Second, the filtered point cloud is projected onto the 2D ground plane. This projection is a key step because it reduces the original 3D data to a simpler, more manageable 2D representation, lowering complexity and facilitating analysis and interpretation. Third, the projected 2D data are rasterized into a binary image, which simplifies the data further and enables faster, more accurate line detection; the EDLines algorithm is then applied to the binarized image to extract a preliminary set of straight-line segments. Finally, the wall line model of the indoor local map is built by a custom line-filtering and enhancement algorithm that post-processes the extracted segments. The algorithm comprises three sequential stages: merging nearby endpoints, absorbing short segments into longer ones, and combining multiple short segments into long lines. After this processing, the extracted segments are integrated and connected, redundant lines are removed, and segments are joined to close the wall outlines. This approach enables a more complete and precise extraction of the wall lines, preserves their shape characteristics, and yields a closer representation of the actual environment; a minimal code sketch of the pipeline follows the keywords below. [Results & Conclusions] Simulation experiments on the Robot Operating System (ROS) platform show that the new algorithm reduces the average number of extracted line segments by 50.8% across scenarios of varying complexity. In addition, the processing time stays consistently under 100 ms, meeting the demanding requirements of real-time environment perception and mapping during mobile robot navigation. Moreover, the approach can serve as a practical teaching experiment, enabling undergraduate students to integrate theoretical knowledge with hands-on practice and gain a deeper understanding of indoor map construction and environment-perception technology. By combining technical innovation with educational practice, this work paves the way for enhanced learning experiences and technological development in robotics.
indoor mobile robot; LiDAR data processing; wall line detection; real-time environment recognition; teaching experiment
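The pipeline described in the abstract can be illustrated with a minimal sketch. The code below assumes the raw scan is already available as an (N, 3) NumPy array of XYZ points and uses OpenCV's FastLineDetector (from the ximgproc contrib module, an EDLines-style detector) as a stand-in for EDLines; the function name extract_wall_lines, the z-range thresholds, and the 5 cm grid resolution are illustrative assumptions rather than values from the paper, and the custom three-stage segment-merging step is omitted.

```python
# Minimal sketch of the wall-line extraction pipeline (illustrative values only).
import numpy as np
import cv2


def extract_wall_lines(points, z_min=0.2, z_max=1.8, resolution=0.05):
    """Return detected 2D line segments as (x1, y1, x2, y2) in metric coordinates."""
    # 1) Pass-through filter along z: drop ground and ceiling returns.
    mask = (points[:, 2] > z_min) & (points[:, 2] < z_max)
    pts2d = points[mask, :2]                      # 2) project onto the ground plane

    # 3) Rasterize the projected points into a binary occupancy image.
    origin = pts2d.min(axis=0)
    idx = np.floor((pts2d - origin) / resolution).astype(int)
    h, w = idx[:, 1].max() + 1, idx[:, 0].max() + 1
    img = np.zeros((h, w), dtype=np.uint8)
    img[idx[:, 1], idx[:, 0]] = 255               # row = y index, column = x index

    # 4) Detect straight segments with an EDLines-style detector (default parameters).
    fld = cv2.ximgproc.createFastLineDetector()
    segments = fld.detect(img)                    # shape (M, 1, 4) in pixel coordinates
    if segments is None:
        return np.empty((0, 4))

    # Convert pixel endpoints back to metric coordinates on the ground plane.
    segs = segments.reshape(-1, 4).astype(float) * resolution
    segs[:, [0, 2]] += origin[0]
    segs[:, [1, 3]] += origin[1]
    return segs
```

In the full method, these raw segments would then pass through the three-stage merging, absorption, and joining procedure described above before forming the closed wall-line model of the indoor local map.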