
Multi-Feature Fusion for Road Panoramic Driving Detection Based on YOLOP-L

Traffic image detection from the driver's perspective has become an important research direction in the transportation field, and extracting multiple features such as vehicles, roads, and traffic signs has become an urgent task for helping drivers understand the diversity of road information. Previous studies have made considerable progress in feature extraction for single-class object detection; however, they cannot be applied jointly to detection tasks involving markedly different features, and fusion training loses accuracy on individual feature detection. To address the diverse and complex road information within the driver's field of view, this paper proposes YOLOP-L, a detection model based on multi-feature fusion training that can jointly train on traffic targets with multiple different features while preserving the accuracy of each individual detection task. First, to resolve the incomplete expression of semantic information during feature fusion, the SP-LNet module combines an FPN with a bidirectional feature network to achieve deeper fusion, making the extracted information more complete and improving the detection of small road targets. Second, a new segmentation head based on depthwise separable convolution fuses semantic information with local features, further improving both the accuracy and the speed of multi-feature fusion training. Third, the GDL-Focal multi-class hybrid loss function focuses on hard samples and addresses the imbalance of sample features. Finally, comparative experiments show that YOLOP-L runs faster than the original YOLOP network; recall increases by 2.2% in the vehicle detection task; in the lane line detection task, accuracy improves by 2.8%, and lane line IoU is 2.45% lower than that of HybridNets but 1.95% higher than that of YOLOP; and overall performance in the drivable area segmentation task improves by 1.1%. On the challenging BDD100K dataset, YOLOP-L effectively alleviates insufficient detection accuracy and missing segmentation in complex scenes, improving the accuracy and robustness of joint training for vehicle detection, lane line detection, and drivable area segmentation.
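The abstract describes a GDL-Focal multi-class hybrid loss that combines a generalized Dice term with a focal term so that training concentrates on hard samples and tolerates class imbalance. The page gives no formula, so the following PyTorch sketch is only a plausible reading of that combination; the function name gdl_focal_loss and the parameters gamma and dice_weight are illustrative assumptions rather than the authors' implementation.

    # Hypothetical GDL-Focal style hybrid loss: generalized Dice + focal term.
    # Weighting and exact form are assumptions, not the authors' implementation.
    import torch
    import torch.nn.functional as F

    def gdl_focal_loss(logits, target, gamma=2.0, dice_weight=0.5, eps=1e-6):
        # logits: (N, C, H, W) raw class scores; target: (N, H, W) integer labels.
        num_classes = logits.shape[1]
        probs = F.softmax(logits, dim=1)
        onehot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()

        # Generalized Dice term: each class is weighted by 1 / (class volume)^2,
        # so small classes (e.g. lane-line pixels) are not drowned out.
        dims = (0, 2, 3)
        w = 1.0 / (onehot.sum(dims) ** 2 + eps)
        intersection = (w * (probs * onehot).sum(dims)).sum()
        denominator = (w * (probs + onehot).sum(dims)).sum()
        gdl = 1.0 - 2.0 * intersection / (denominator + eps)

        # Focal term: down-weights easy pixels so training focuses on hard samples.
        ce = F.cross_entropy(logits, target, reduction="none")
        pt = torch.exp(-ce)
        focal = ((1.0 - pt) ** gamma * ce).mean()

        return dice_weight * gdl + (1.0 - dice_weight) * focal

    # Toy usage: 4 classes on a 2-image batch of 64x64 maps.
    logits = torch.randn(2, 4, 64, 64)
    target = torch.randint(0, 4, (2, 64, 64))
    loss = gdl_focal_loss(logits, target)

Weighting each class by the inverse square of its pixel count is what makes the Dice term "generalized": rare classes such as lane-line pixels contribute on the same footing as the background, while the (1 - pt)^gamma factor suppresses the loss from pixels the model already classifies confidently.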

Panoramic driving; Multi-feature fusion; Vehicle detection; Drivable area detection; Lane line detection; Bidirectional feature pyramid network

Lyu Jialu (吕嘉璐), Zhou Li (周力), Ju Yongfeng (巨永锋)


School of Electronic and Control Engineering, Chang'an University, Xi'an 710064, China


2024

Computer Science (计算机科学)
Chongqing Southwest Information Co., Ltd. (formerly the Southwest Information Center of the Ministry of Science and Technology)


Indexed in: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 0.944
ISSN: 1002-137X
Year, Volume (Issue): 2024, 51(Z1)