

Task Feature Decoupling Model for Autonomous Driving Visual Joint Perception
This paper proposes a task feature decoupling-based autonomous driving visual joint perception (TFDJP) model to address two problems of existing joint perception algorithms with coupled decoding networks, which ignore the distinct feature requirements of each subtask: internal competition within the detection task and rough segmentation edges. In the object detection decoder, a hierarchical semantic enhancement module and a spatial information refinement module are designed; they aggregate features at different semantic levels and encode the gradient flows of the classification and localization subtasks separately, reducing internal conflicts between the subtasks. An intersection-over-union (IoU)-aware prediction branch is also added to the localization head to strengthen the correlation between the subtasks and improve localization accuracy. In the drivable area segmentation and lane detection decoder, a dual-resolution decoupled branch network models the low-frequency body regions and high-frequency boundary pixels of the target separately; a boundary loss guides training from local to global, progressively optimizing the target body and its edges and improving overall performance. Experimental results on the BDD100K dataset show that, compared with YOLOP, TFDJP improves object detection mean average precision by 2.7 percentage points, drivable area segmentation mean intersection-over-union (mIoU) by 1.3 percentage points, and lane detection accuracy by 10.6 percentage points. Compared with other multi-task models, TFDJP achieves an effective balance between accuracy and real-time performance.
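Illustratively, the regression target of an IoU-aware branch like the one described above is the IoU between a predicted box and its matched ground-truth box. A minimal sketch of that target computation in plain Python (the function name and the (x1, y1, x2, y2) box convention are assumptions for illustration, not the paper's code):

```python
def box_iou(box_a, box_b):
    """IoU between two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

During inference, the predicted IoU score can be multiplied with the classification score so that ranking reflects localization quality as well as class confidence, which is the usual motivation for such a branch.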
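The dual-resolution decoder models the low-frequency body region and the high-frequency boundary pixels of a segmentation target separately. A minimal NumPy sketch of how a binary mask could be split into those two pixel sets (the 4-neighbourhood erosion used here is an assumption for illustration, not the paper's method):

```python
import numpy as np

def split_body_boundary(mask):
    """Split a binary mask into its low-frequency body and high-frequency
    boundary: a pixel is 'boundary' if it belongs to the mask but at least
    one of its 4-neighbours does not (i.e. it is removed by a 3x3-cross
    erosion); the remaining mask pixels form the body."""
    m = mask.astype(bool)
    padded = np.pad(m, 1, constant_values=False)
    # Erosion with a 4-neighbourhood structuring element.
    eroded = (padded[1:-1, 1:-1]
              & padded[:-2, 1:-1] & padded[2:, 1:-1]
              & padded[1:-1, :-2] & padded[1:-1, 2:])
    body = eroded
    boundary = m & ~eroded
    return body, boundary
```

A boundary loss can then weight the boundary pixels separately from the body pixels, so that training refines the coarse region first and the edges progressively, as the abstract describes.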

image processing; autonomous driving visual joint perception; task feature decoupling; semantic fusion; dual-resolution decoupling

Wang Yue (王越), Cao Jiale (曹家乐)


School of Electrical Automation and Information Engineering, Tianjin University, Tianjin 300072, China


2024

Laser & Optoelectronics Progress (激光与光电子学进展)
Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences

Indexed in: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 1.153
ISSN: 1006-4125
Year, Volume (Issue): 2024, 61(22)