Multi-directional text detection based on an enhanced feature extraction network and semantic feature fusion
A text detection method based on an enhanced feature extraction network and semantic feature fusion was proposed to address challenges of scene text such as variable length and oblique orientation. First, an enhanced dilated residual module (EDRM) was designed by combining deformable convolution with atrous convolution in the conv4_x and conv5_x layers of ResNet18, and it was used as the backbone network to strengthen feature extraction while increasing feature map resolution and reducing the loss of spatial information. Second, to address the inadequacy of existing algorithms in extracting textual semantic features, bi-directional long short-term memory (BiLSTM) was introduced into the feature fusion stage, improving the representation of scene text in the fused feature map, the correlation among feature sequences, and the text localization ability of the model. The model was evaluated on the multi-directional text dataset ICDAR2015 and the long-text dataset MSRA-TD500. The results show that, compared with the efficient DBNet algorithm, the F-measure of the proposed method increases by 1.8% and 3.3%, respectively, demonstrating strong competitiveness.
deformable convolution; atrous convolution; text detection; semantic feature; bi-directional long short-term memory
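The two architectural ideas named in the abstract could be sketched roughly as below: a residual block that pairs deformable convolution with dilated (atrous) convolution, and a BiLSTM pass over the fused feature map. This is a minimal sketch, not the authors' implementation; the channel widths, dilation rate, and row-wise sequence layout are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class EDRMBlock(nn.Module):
    """Residual block mixing deformable and dilated convolution (assumed layout)."""

    def __init__(self, channels: int, dilation: int = 2):
        super().__init__()
        # Offsets for a 3x3 deformable kernel: 2 * 3 * 3 = 18 channels.
        self.offset = nn.Conv2d(channels, 18, kernel_size=3, padding=1)
        self.deform = DeformConv2d(channels, channels, kernel_size=3, padding=1)
        # Dilated convolution enlarges the receptive field without downsampling.
        self.dilated = nn.Conv2d(channels, channels, kernel_size=3,
                                 padding=dilation, dilation=dilation)
        self.bn = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.deform(x, self.offset(x)))
        out = self.bn(self.dilated(out))
        return self.relu(out + x)  # residual connection


class BiLSTMFusion(nn.Module):
    """Run a BiLSTM over each row of the fused feature map (assumed scheme)."""

    def __init__(self, channels: int, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(channels, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Conv2d(2 * hidden, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        seq = x.permute(0, 2, 3, 1).reshape(n * h, w, c)   # rows as sequences
        seq, _ = self.lstm(seq)                            # (n*h, w, 2*hidden)
        seq = seq.reshape(n, h, w, -1).permute(0, 3, 1, 2)
        return self.proj(seq)


if __name__ == "__main__":
    feat = torch.randn(1, 256, 40, 40)
    feat = EDRMBlock(256)(feat)
    feat = BiLSTMFusion(256)(feat)
    print(feat.shape)  # torch.Size([1, 256, 40, 40])
```

Treating each row of the feature map as a sequence is one plausible way to let BiLSTM model left-to-right context along horizontal or near-horizontal text; the paper itself does not specify this layout in the abstract.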