A Lightweight Feature Fusion Network for Road Scene Segmentation
Semantic segmentation of road scenes faces a trade-off between real-time performance and accuracy. Integrating multi-level features and multi-scale context information can improve the performance of a segmentation model; however, complex feature fusion consumes substantial computing resources, and existing methods often ignore spatial location information during segmentation, which degrades segmentation quality. To address these problems, an efficient lightweight feature fusion network (LFFNet) is proposed for road scene segmentation. Specifically, a multi-level feature fusion module enhances semantic consistency by embedding spatial location information in the attention mechanism, retaining accurate location information while capturing long-range dependencies. Additionally, a lightweight semantic pyramid module extracts multi-scale contextual information through depthwise separable convolutions. Experimental results demonstrate that, compared with existing methods, LFFNet reduces FLOPs by a factor of 2.3 and increases inference speed by a factor of 1.7, achieving a balance between segmentation accuracy and computational efficiency.
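The lightweight semantic pyramid module relies on depthwise separable convolution, which factorizes a standard convolution into a per-channel (depthwise) convolution followed by a 1x1 (pointwise) convolution, reducing the parameter count from roughly C_in·C_out·k² to C_in·k² + C_in·C_out. Below is a minimal NumPy sketch of this factorization for illustration only; the function name, tensor shapes, and loop-based implementation are assumptions, not the paper's actual implementation:

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """Illustrative sketch of a depthwise separable convolution.

    x          : input feature map, shape (C_in, H, W)
    dw_kernels : one k x k kernel per input channel, shape (C_in, k, k)
    pw_weights : 1x1 pointwise weights mixing channels, shape (C_out, C_in)
    Returns an output feature map of shape (C_out, H, W) ("same" padding).
    """
    C, H, W = x.shape
    k = dw_kernels.shape[1]
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))

    # Depthwise step: each channel is convolved with its own kernel,
    # with no mixing across channels.
    dw = np.zeros_like(x)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                dw[c, i, j] = np.sum(xp[c, i:i + k, j:j + k] * dw_kernels[c])

    # Pointwise step: a 1x1 convolution mixes information across channels.
    return np.einsum('oc,chw->ohw', pw_weights, dw)

# Usage: with a delta depthwise kernel and identity pointwise weights,
# the operation reduces to the identity map.
x = np.random.default_rng(0).standard_normal((3, 5, 5))
dw_k = np.zeros((3, 3, 3))
dw_k[:, 1, 1] = 1.0          # delta kernel: depthwise step passes x through
out = depthwise_separable_conv(x, dw_k, np.eye(3))
```

In a deep-learning framework this corresponds to a grouped convolution with one group per channel followed by a 1x1 convolution (e.g., `groups=in_channels` in PyTorch's `nn.Conv2d`); the explicit loops here exist only to make the factorization visible.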