Radio Engineering, 2024, Vol. 54, Issue 2: 507-514. DOI: 10.3969/j.issn.1003-3106.2024.02.030

3D Target Detection Method of Autopilot Based on Improved Centerfusion

黄俊 刘家森

Author Information

  • 1. School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China


Abstract

To address the problems of missed and false detections of targets on the road in autonomous driving, a 3D object detection model for autonomous driving based on an improved Centerfusion is proposed. First, the model fuses camera information with radar features to form a multi-channel feature input, enhancing the robustness of the detection network and reducing missed detections. Second, an improved attention mechanism is introduced to strengthen the fusion of radar point clouds and visual information within the view-frustum grid, yielding more accurate and comprehensive 3D detection information. In addition, an improved loss function is employed to optimize the accuracy of bounding-box predictions. Finally, the model is validated and compared on the Nuscenes dataset. Experimental results show that, compared with the traditional Centerfusion model, the proposed model improves mean Average Precision (mAP) by 1.3% and the Nuscenes Detection Score (NDS) by 1.2%.
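The first fusion step the abstract outlines — stacking camera and radar feature maps into a multi-channel input and reweighting the result with channel attention — can be sketched roughly as follows. This is a minimal NumPy illustration under assumed shapes: the SE-style gate, the layer sizes, and the three radar channels (e.g., depth and two velocity components) are assumptions for illustration, not the paper's exact design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """SE-style channel attention on a (C, H, W) feature map (assumed design)."""
    # Squeeze: global average pooling over spatial dimensions -> (C,)
    s = feat.mean(axis=(1, 2))
    # Excitation: bottleneck MLP followed by a sigmoid gate -> per-channel weights
    z = sigmoid(w2 @ np.maximum(w1 @ s, 0.0))
    # Reweight each channel of the fused feature map
    return feat * z[:, None, None]

def fuse_camera_radar(cam_feat, radar_feat, w1, w2):
    # Multi-channel input: concatenate camera and radar features along channels
    fused = np.concatenate([cam_feat, radar_feat], axis=0)
    return channel_attention(fused, w1, w2)

rng = np.random.default_rng(0)
cam = rng.standard_normal((64, 32, 32))   # camera backbone features (C, H, W)
radar = rng.standard_normal((3, 32, 32))  # hypothetical radar channels
C = 64 + 3
w1 = rng.standard_normal((C // 4, C)) * 0.1  # bottleneck weights (assumed sizes)
w2 = rng.standard_normal((C, C // 4)) * 0.1
out = fuse_camera_radar(cam, radar, w1, w2)
print(out.shape)  # (67, 32, 32)
```

The gate leaves the fused tensor's shape unchanged, so the attention block can be dropped into the detection backbone without altering downstream layer sizes — one reason channel attention is a common choice for sensor-fusion inputs.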


Key words

sensor fusion / 3D object detection / attention mechanism / millimeter wave radar


Funding

National Natural Science Foundation of China (61771085)

Publication Year

2024
Radio Engineering
The 54th Research Institute of China Electronics Technology Group Corporation

Impact Factor: 0.667
ISSN: 1003-3106
References: 2