A multi-scale feature-enhanced method for green landscape segmentation in street views
To address the challenges posed by the complex and diverse landscapes in street view images, such as misclassification, blurred boundary segmentation, and loss of detail, we propose MFDNet, a multi-scale feature-enhanced urban green landscape segmentation network. In the encoding stage, an improved multi-scale residual network extracts contextual information and distinguishes between similar features, while a feature enhancement module strengthens the edge and detail information of target features. To capture rich contextual dependencies, a dual-attention mechanism is incorporated to model local features effectively. The feature enhancement module is also integrated into the decoder, where it fuses multi-level features to improve the recovery of target information and refine edge details. Ablation experiments on the Cityscapes dataset and our self-built StreetData dataset show that MFDNet achieves an average improvement of 2.96% in intersection ratio and 5.57% in merger ratio over the base network. Furthermore, comparison experiments on the two datasets show that MFDNet outperforms the comparison models, with average intersection ratios higher by 1.25%–5.29% and merger ratios higher by 1.52%–6.95%. These results confirm that MFDNet accurately identifies green landscapes in street views and extracts urban green landscape data with high precision.
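The abstract does not give implementation details for the dual-attention mechanism. As a rough illustrative sketch only (not the authors' implementation), the position-attention branch of a DANet-style dual-attention module lets every spatial location of a feature map attend to every other location, which is one common way to model the long-range contextual dependencies mentioned above. All shapes, the fixed `gamma`, and the use of plain NumPy here are assumptions for demonstration:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def position_attention(feat, gamma=0.1):
    """Position (spatial) attention over a C x H x W feature map.

    Each of the N = H*W spatial positions is re-weighted by its
    affinity with every other position, so context is aggregated
    regardless of distance. In a trained network `gamma` would be
    a learned scale; here it is fixed for illustration.
    """
    c, h, w = feat.shape
    n = h * w
    q = feat.reshape(c, n)            # queries: C x N
    k = feat.reshape(c, n)            # keys:    C x N
    v = feat.reshape(c, n)            # values:  C x N
    attn = softmax(q.T @ k, axis=-1)  # N x N affinity between positions
    out = v @ attn.T                  # context-weighted values: C x N
    # Residual connection keeps the original local features.
    return (gamma * out + feat.reshape(c, n)).reshape(c, h, w)

feat = np.random.rand(8, 4, 4).astype(np.float32)
enhanced = position_attention(feat)
print(enhanced.shape)  # (8, 4, 4)
```

A channel-attention branch would apply the same idea over the C x C channel affinity matrix, and the two branch outputs would be summed.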
Keywords: deep learning; street view; multi-scale feature enhancement; urban green space; semantic segmentation