
Research on Object Detection Algorithm Based on Improved PointPillars

To address the coarse feature extraction of the PointPillars backbone and the resulting loss of small-object features, an improved detection algorithm named FOPointPillars is proposed. First, omni-dimensional dynamic convolution (ODConv) replaces the ordinary convolutions used to extract features from the pseudo-image, strengthening feature extraction. Second, a feature pyramid network (FPN) is introduced to fuse the extracted features across scales and obtain accurate semantic information for small objects. The network is then trained and tested on the KITTI public dataset and finally deployed on a self-developed cart. Experimental results show that FOPointPillars achieves mAP of 70.51%, 64.31%, and 71.64% in bird's eye view (BEV), 3D space, and average orientation similarity (AOS), respectively, improvements of 1.65%, 0.74%, and 2.18% over the original PointPillars. The obstacle detection provided by this method can assist environmental perception for driverless carts.
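The abstract describes two architectural changes to the PointPillars 2D backbone: dynamic convolution on the pillar pseudo-image and FPN-style multi-scale fusion before the detection head. Below is a minimal PyTorch sketch of that idea, assuming a simplified kernel-wise dynamic convolution standing in for full ODConv (which additionally attends over spatial, input-channel, and output-channel dimensions) and a generic top-down FPN neck. Module names such as SimpleODConv and FPNNeck, channel widths, and stage counts are illustrative assumptions, not the paper's released implementation.

```python
# Sketch of the two modifications named in the abstract, under the assumptions above.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleODConv(nn.Module):
    """Simplified dynamic conv: K candidate kernels are mixed per sample by an
    attention vector computed from global context (kernel-wise attention only)."""

    def __init__(self, in_ch, out_ch, k=3, num_kernels=4, stride=1):
        super().__init__()
        self.stride, self.pad = stride, k // 2
        # Candidate kernels: (K, out_ch, in_ch, k, k)
        self.weight = nn.Parameter(torch.randn(num_kernels, out_ch, in_ch, k, k) * 0.02)
        # Attention branch: one mixing weight per candidate kernel.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(in_ch, num_kernels))
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        b = x.shape[0]
        alpha = torch.softmax(self.attn(x), dim=1)            # (B, K)
        # Per-sample kernel = attention-weighted sum of candidate kernels.
        w = torch.einsum('bk,koihw->boihw', alpha, self.weight)
        # Grouped-conv trick: fold the batch into groups to apply per-sample kernels.
        x = x.reshape(1, -1, *x.shape[2:])
        w = w.reshape(-1, *w.shape[2:])
        out = F.conv2d(x, w, stride=self.stride, padding=self.pad, groups=b)
        out = out.reshape(b, -1, *out.shape[2:])
        return F.relu(self.bn(out))


class FPNNeck(nn.Module):
    """Top-down fusion of three backbone stages (strides 1x, 2x, 4x on the
    pseudo-image) into same-width maps for the detection head."""

    def __init__(self, in_chs=(64, 128, 256), out_ch=128):
        super().__init__()
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_ch, 1) for c in in_chs)
        self.smooth = nn.ModuleList(
            nn.Conv2d(out_ch, out_ch, 3, padding=1) for _ in in_chs)

    def forward(self, feats):                                  # fine-to-coarse order
        laterals = [l(f) for l, f in zip(self.lateral, feats)]
        for i in range(len(laterals) - 1, 0, -1):              # top-down pathway
            laterals[i - 1] = laterals[i - 1] + F.interpolate(
                laterals[i], size=laterals[i - 1].shape[-2:], mode='nearest')
        return [s(p) for s, p in zip(self.smooth, laterals)]


if __name__ == "__main__":
    # Pseudo-image from the pillar encoder: (B, 64, H, W); size here is arbitrary.
    pseudo_img = torch.randn(2, 64, 248, 216)
    stem = SimpleODConv(64, 64)
    down1 = SimpleODConv(64, 128, stride=2)
    down2 = SimpleODConv(128, 256, stride=2)
    c1 = stem(pseudo_img); c2 = down1(c1); c3 = down2(c2)
    fused = FPNNeck()([c1, c2, c3])
    print([f.shape for f in fused])   # three fused maps, each with 128 channels
```

In this sketch the dynamic convolutions replace the plain Conv-BN-ReLU blocks of the PointPillars backbone, and the FPN neck merges the three stage outputs so small-object detail from the highest-resolution map is retained before the anchor-based head.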

object detection; PointPillars; omni-dimensional dynamic convolution; feature pyramid network; driverless car

Zhang Qian, Che Hu, Liu Jun, Liu Ruijun


School of Information Engineering, Nanchang Hangkong University, Nanchang 330063, China

Jiangxi Dongrui Machinery Co., Ltd., Nanchang 330038, China


National Natural Science Foundation of China (62066031)

2024

Journal of Nanchang Hangkong University (Natural Science Edition)
Nanchang Hangkong University


Impact factor: 0.287
ISSN: 1001-4926
Year, volume (issue): 2024, 38(2)