Multi-scale Fusion Edge Detection Model with Spatial Co-location Rule Based on Dense Extreme Inception Network
Edge detection is the basis of many computer vision tasks. Current techniques rely mainly on deep learning, but most models improve the accuracy of predicted edges by applying Non-Maximum Suppression in the evaluation stage. These models focus only on the accuracy of predicted edges without considering their coarseness or fineness. To address this issue, this paper proposes a new feature fusion strategy based on the Dense Extreme Inception Network. The strategy incorporates top-down multi-scale fusion edge detection with a spatial co-location rule and retains the multi-network structure with side outputs used in traditional deep-learning edge detection. It better integrates the high-level semantic information of deep layers with the high-resolution texture information of shallow layers, thereby suppressing background pixels and lines that are incorrectly predicted as edges. In the feature connections, a Concat block replaces the single Concat operation to better fuse semantic information across scales. Finally, a simple attention fusion block fuses the outputs of the multiple networks, and the prediction maps at different scales are deeply supervised in combination with a tracing loss. The model is independent of Non-Maximum Suppression. By fully exploiting the multi-scale and multi-level information of the target image, it improves prediction accuracy while refining the predicted edges. Experimental results show that, without morphological Non-Maximum Suppression, the proposed model achieves ODS, OIS, and AP of 0.891, 0.895, and 0.900 on the BIPED data set; with morphological Non-Maximum Suppression, it achieves 0.894, 0.899, and 0.931, respectively, outperforming all comparison algorithms considered in this article. The model also achieves optimal results on the MDBD data set.
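The top-down fusion described above (upsampling a low-resolution, high-semantic feature map, concatenating it with a high-resolution, low-level map, and then mixing the channels with a learned block rather than a bare concatenation) can be sketched as follows. This is a minimal illustrative sketch in NumPy, not the paper's implementation: the function names, shapes, and the 1x1-convolution-style weight matrix are all hypothetical stand-ins for the paper's Concat block.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbor upsampling: (H, W, C) -> (2H, 2W, C),
    # aligning the deep feature map with the shallow one.
    return x.repeat(2, axis=0).repeat(2, axis=1)

def concat_block(low, high_up, w):
    # Concatenate along channels, then apply a per-pixel linear map
    # (the equivalent of a 1x1 convolution) followed by ReLU.
    # This stands in for the Concat block that replaces a bare concat.
    fused = np.concatenate([low, high_up], axis=-1)  # (H, W, C1 + C2)
    return np.maximum(fused @ w, 0.0)                # (H, W, C_out)

rng = np.random.default_rng(0)
low  = rng.standard_normal((8, 8, 16))   # high-resolution, low-level features
high = rng.standard_normal((4, 4, 32))   # low-resolution, high-semantic features
w = rng.standard_normal((48, 16)) * 0.1  # hypothetical fusion weights (16+32 -> 16)

out = concat_block(low, upsample2x(high), w)
print(out.shape)  # -> (8, 8, 16)
```

In the full model this fusion step would be repeated top-down across all scales, with each fused map also producing a deeply supervised side output.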