
Semantic segmentation method for nighttime road scenes based on Trans-nightSeg

The semantic segmentation method Trans-nightSeg was proposed to address the low brightness of nighttime road scene images and the lack of annotated semantic segmentation datasets for nighttime road scenes. The annotated daytime road scene semantic segmentation dataset Cityscapes was converted into low-light road scene images by TransCartoonGAN; the converted images share the same semantic segmentation annotations as the originals, thereby enriching the nighttime road scene data. The converted images, together with the real road scene dataset, were used as the input of N-Refinenet. The N-Refinenet network introduced a low-light image adaptive enhancement network to improve the semantic segmentation performance on nighttime road scenes, and depthwise separable convolution was used instead of standard convolution to reduce the computational cost. Experimental results show that the mean intersection over union (mIoU) of the proposed algorithm reaches 56.0% on the Dark Zurich-test dataset and 56.6% on the Nighttime Driving-test dataset, outperforming other semantic segmentation algorithms for nighttime road scenes.
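The abstract states that N-Refinenet replaces standard convolution with depthwise separable convolution to reduce computation. The sketch below shows what such a block typically looks like in PyTorch; the class name, kernel size, and the BatchNorm/ReLU choices are illustrative assumptions, not the paper's actual N-Refinenet layers.

```python
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """Illustrative depthwise separable convolution block (not the paper's exact layer).

    A depthwise 3x3 convolution filters each channel independently,
    then a pointwise 1x1 convolution mixes channels to the output width.
    """

    def __init__(self, in_channels: int, out_channels: int, stride: int = 1):
        super().__init__()
        # Depthwise step: one 3x3 filter per input channel (groups=in_channels).
        self.depthwise = nn.Conv2d(
            in_channels, in_channels, kernel_size=3,
            stride=stride, padding=1, groups=in_channels, bias=False)
        # Pointwise step: 1x1 convolution combines channels.
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))


if __name__ == "__main__":
    x = torch.randn(1, 64, 128, 256)        # N, C, H, W feature map
    block = DepthwiseSeparableConv(64, 128)
    print(block(x).shape)                   # torch.Size([1, 128, 128, 256])
```

Factoring a k×k convolution into a depthwise and a pointwise step reduces the multiply-accumulate count by roughly a factor of 1/C_out + 1/k² relative to a standard convolution; for a 3×3 kernel and a large output channel count this approaches a 9× saving, which is the kind of computational reduction the abstract refers to.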

image enhancement; semantic segmentation; generative adversarial network (GAN); style transformation; road scene

李灿林、张文娇、邵志文、马利庄、王新玥


College of Computer and Communication Engineering, Zhengzhou University of Light Industry, Zhengzhou 450000, China

School of Computer Science and Technology, China University of Mining and Technology, Xuzhou 221116, China

Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China


National Natural Science Foundation of China (61972157, 62106268); Key Science and Technology Program of Henan Province (212102210097); Shanghai Science and Technology Innovation Action Plan, Artificial Intelligence Science and Technology Support Program (21511101200); Jiangsu Province "Shuangchuang Doctor" Talent Program (JSSCBS20211220)

2024

Journal of Zhejiang University (Engineering Science)
Zhejiang University

CSTPCD; Peking University Core Journals
Impact factor: 0.625
ISSN: 1008-973X
Year, Volume (Issue): 2024, 58(2)