Research on Detection Method for Driving Scenarios Based on Multi-stage Parameter Fusion Network

Deep-learning-based object detection methods struggle to satisfy both accuracy and speed requirements when deployed on intelligent vehicle controllers. Therefore, a multi-stage parameter fusion object detection method for driving scenarios is proposed, improving detection speed and accuracy simultaneously. First, a multi-stage branching structure is designed to build the model; to accelerate inference, a multi-stage parameter fusion method is introduced that equivalently transforms the multi-stage branches into a single convolution-batch normalization layer, greatly reducing the number of parameters while leaving the model's generalization ability unchanged. Second, to improve detection accuracy, an SSIoU (soft scaled intersection over union) bounding-box loss and a joint semi-anchor label-assignment algorithm are proposed, enhancing the model's adaptability to driving scenarios. Finally, experiments are conducted on the DAIR-V2X-V dataset; the results show that, compared with the state-of-the-art YOLO (you only look once) algorithm, the proposed multi-stage parameter fusion model improves mAP (mean average precision) by 9.89% and FPS (frames per second) by 51.89%.
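The core of the parameter fusion step, folding batch-normalization statistics into the weights of the preceding convolution so that inference runs through a single layer, can be sketched as follows. This is a minimal NumPy illustration for a 1×1 convolution (a per-pixel linear map); the function name is hypothetical, and the paper's full method also merges the parallel multi-stage branches, which is not shown here.

```python
import numpy as np

def fuse_conv_bn(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold batch normalization into the preceding 1x1 convolution.

    BN(conv(x)) = gamma * (W @ x + b - mean) / sqrt(var + eps) + beta
                = (scale * W) @ x + (scale * (b - mean) + beta)
    where scale = gamma / sqrt(var + eps) is a per-channel factor.
    """
    scale = gamma / np.sqrt(var + eps)   # per-output-channel scaling
    W_fused = scale[:, None] * W         # rescale each output row of W
    b_fused = scale * (b - mean) + beta  # absorb BN shift into the bias
    return W_fused, b_fused

# Sanity check: the fused single layer reproduces conv followed by BN.
rng = np.random.default_rng(0)
c_in, c_out = 4, 3
W = rng.standard_normal((c_out, c_in))
b = rng.standard_normal(c_out)
gamma, beta = rng.standard_normal(c_out), rng.standard_normal(c_out)
mean, var = rng.standard_normal(c_out), rng.random(c_out) + 0.1
x = rng.standard_normal(c_in)

y_two_layer = gamma * (W @ x + b - mean) / np.sqrt(var + 1e-5) + beta
W_fused, b_fused = fuse_conv_bn(W, b, gamma, beta, mean, var)
y_fused = W_fused @ x + b_fused
```

Because the fused weights are an exact algebraic re-parameterization, the single layer produces the same outputs as the original conv-BN pair, which is why the model's generalization behavior is unchanged while its parameter count and inference cost drop.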