LiDAR-Radar Fusion Object Detection Algorithm Based on BEV Occupancy Prediction
Beam attenuation and target occlusion in the operating environment of LiDAR cause the output point cloud to become sparse at long range, which degrades the detection accuracy of LiDAR-based 3D object detection algorithms as distance increases. To address this problem, a LiDAR-radar fusion object detection algorithm based on BEV occupancy prediction is proposed. First, a simplified bird's eye view (BEV) occupancy prediction sub-network is proposed to generate position-related radar features, which also alleviates the network convergence difficulty caused by the sparsity of radar data. Then, to achieve cross-modal feature fusion, a multi-scale LiDAR-radar fusion layer based on BEV-space feature correlation is designed. Experimental results on the nuScenes dataset show that the mean average precision (mAP) of the proposed radar branch network reaches 21.6%, with an inference time of 8.3 ms. After adding the fusion layer, the mAP of the multi-modal detection algorithm improves by 2.9% over the baseline CenterPoint, at an additional inference-time cost of only 8.6 ms. At a distance of 30 m from the sensor, the detection accuracy of the multi-modal algorithm on the 10 nuScenes categories increases by 2.1% to 16.0% compared with CenterPoint.
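To illustrate the idea of position-related radar features in BEV space, the following is a minimal sketch (an assumption for illustration, not the paper's implementation): sparse radar returns are rasterized into a BEV occupancy grid, and the resulting grid is fused with a LiDAR BEV feature map by channel concatenation. The grid resolution, detection range, and the `radar_to_bev_occupancy`/`fuse_bev` helper names are all hypothetical.

```python
import numpy as np

def radar_to_bev_occupancy(points, x_range=(0.0, 51.2), y_range=(-25.6, 25.6),
                           cell=0.8):
    """Mark each BEV cell containing at least one radar return as occupied.

    points: (N, 2) array of (x, y) radar positions in metres.
    Returns an (H, W) float grid: 1.0 where a return falls, 0.0 elsewhere.
    """
    h = int((y_range[1] - y_range[0]) / cell)
    w = int((x_range[1] - x_range[0]) / cell)
    grid = np.zeros((h, w), dtype=np.float32)
    # Convert metric coordinates to integer cell indices.
    xs = ((points[:, 0] - x_range[0]) / cell).astype(int)
    ys = ((points[:, 1] - y_range[0]) / cell).astype(int)
    keep = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    grid[ys[keep], xs[keep]] = 1.0
    return grid

def fuse_bev(lidar_feat, radar_feat):
    """Fuse LiDAR and radar BEV features along the channel axis (C, H, W)."""
    return np.concatenate([lidar_feat, radar_feat], axis=0)

# Example: two radar returns, one of them at the 30 m range discussed above.
radar_pts = np.array([[30.0, 0.0], [45.0, -5.0]])
occ = radar_to_bev_occupancy(radar_pts)            # (64, 64) occupancy grid
lidar_bev = np.zeros((64,) + occ.shape, np.float32)  # dummy 64-channel features
fused = fuse_bev(lidar_bev, occ[None])             # (65, 64, 64)
```

In the paper's design, a learned occupancy prediction sub-network replaces this hard rasterization, producing dense, position-aware radar features that are easier for the fusion layer to correlate with LiDAR features; the sketch only shows the underlying BEV alignment of the two modalities.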
3D object detection; LiDAR; Radar; Occupancy prediction; Bird's eye view; Feature fusion