Target Localization and Tracking Method Based on Camera and LiDAR Fusion

Environmental perception is a key technology for unmanned driving. However, cameras lack the depth information needed to locate detected targets, and target tracking accuracy is often poor; therefore, a target localization and tracking algorithm based on the fusion of camera and LiDAR is proposed. The algorithm obtains the localization of a detected target from the area proportion, in the pixel plane, of the LiDAR point-cloud cluster inside the image detection box. It then fuses the horizontal and vertical movement speeds of the target's contour point cloud in the pixel coordinate system with the center coordinates of the image detection box to improve tracking accuracy. Experimental results show that the proposed localization algorithm achieves an accuracy of 88.5417%, with an average processing time of only 0.03 s per frame, meeting real-time requirements; the average error of the detection-box center is 4.49 pixels in the horizontal coordinate and 1.80 pixels in the vertical coordinate, and the average region overlap rate is 87.42%.
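The localization step described above selects, among the LiDAR point-cloud clusters that project into an image detection box, the cluster occupying the largest share of the box's pixel area, and reads the target's position from that cluster. A minimal sketch of this idea, assuming the points are already clustered and projected into the image via camera-LiDAR calibration; the function and variable names are hypothetical, and the cluster's pixel-plane area is approximated here by its bounding rectangle rather than the paper's exact area measure:

```python
import numpy as np

def locate_target(clusters_px, clusters_xyz, box):
    """Pick the LiDAR cluster that best fills the detection box.

    clusters_px : list of (N_i, 2) arrays, projected pixel coords per cluster
    clusters_xyz: list of (N_i, 3) arrays, same points in the LiDAR frame
    box         : (x1, y1, x2, y2) image detection box
    Returns the median 3D position of the winning cluster, or None.
    """
    x1, y1, x2, y2 = box
    best_pos, best_ratio = None, 0.0
    for px, xyz in zip(clusters_px, clusters_xyz):
        inside = ((px[:, 0] >= x1) & (px[:, 0] <= x2) &
                  (px[:, 1] >= y1) & (px[:, 1] <= y2))
        if not inside.any():
            continue
        sel = px[inside]
        # Bounding-rectangle area of the in-box points, as a fraction
        # of the detection box's area.
        w = sel[:, 0].max() - sel[:, 0].min()
        h = sel[:, 1].max() - sel[:, 1].min()
        ratio = (w * h) / ((x2 - x1) * (y2 - y1))
        if ratio > best_ratio:
            best_ratio = ratio
            best_pos = np.median(xyz[inside], axis=0)
    return best_pos
```

Using the median of the winning cluster's 3D points keeps the estimate robust to stray LiDAR returns at the object's edges.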

sensor fusion; machine vision; 3D LiDAR; target localization; target tracking

Zhang Pu, Liu Jinqing, Xiao Jinchao, Xiong Junfeng, Feng Tianwei, Wang Zhongze


Key Laboratory of OptoElectronic Science and Technology for Medicine, Ministry of Education, Fujian Normal University, Fuzhou 350007, Fujian, China

Fujian Provincial Key Laboratory of Photonics Technology, Fuzhou 350007, Fujian, China

Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fuzhou 350007, Fujian, China

Guangzhou Institute of Industrial Intelligence, Guangzhou 511458, Guangdong, China



Funding: National Natural Science Foundation of China (General Program, Grant No. 62273332); Nansha District Key Field Science and Technology Project (Grant No. 2022ZD016)

2024

Laser & Optoelectronics Progress
Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences


Indexing: CSTPCD; Peking University Core Journals
Impact factor: 1.153
ISSN:1006-4125
Year, volume (issue): 2024, 61(8)