Semantic Difference and Perception Ability Difference Analysis Method for Power Multimodal Data Fusion
Multimodal data fusion is an important technique for realizing the digitalization of power systems. However, its development and application are restricted by semantic differences, caused by data form, physical meaning, and other factors, and by differences in perception ability, caused by data incompleteness, perception mechanisms, and other factors. Taking electrical structured time series and unstructured images as the fusion objects, we propose a highly fault-tolerant feature fusion method based on feature assimilation and a weight decision-making mechanism. First, different convolutional neural networks are used to extract features from each modality. Second, to address semantic differences, the angular difference is used to represent the multimodal semantic difference, on the basis of which a joint representation space is searched to achieve feature assimilation. Third, to address perception ability differences, the cross-entropy loss is used to characterize the target perception ability of each modality, on the basis of which a fusion weight decision mechanism is constructed to calculate the fusion weights. Finally, the multimodal features are concatenated to perceive the power target. The emergency repair scenario of power transmission and distribution lines is taken as an example, and the model is trained with a phased training strategy. The effectiveness of the proposed method is verified from three aspects: fusion-based perception, feature assimilation, and the weight decision-making mechanism.
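The pipeline above can be sketched numerically. The following is a minimal illustration, not the paper's implementation: the angular difference between two modality feature vectors stands in for the semantic difference, per-modality cross-entropy losses are mapped to fusion weights (a softmax over negated losses is an assumed form; the paper's exact decision mechanism may differ), and the weighted features are concatenated. All function names and values are hypothetical.

```python
import numpy as np

def angular_difference(f_a, f_b):
    """Angle (radians) between two modality feature vectors,
    used here as a proxy for their semantic difference."""
    cos = np.dot(f_a, f_b) / (np.linalg.norm(f_a) * np.linalg.norm(f_b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def fusion_weights(ce_losses):
    """Map per-modality cross-entropy losses to fusion weights:
    lower loss -> stronger perception ability -> larger weight.
    Softmax over negated losses is an assumed functional form."""
    s = np.exp(-np.asarray(ce_losses, dtype=float))
    return s / s.sum()

def fuse(features, weights):
    """Scale each modality's feature vector by its weight,
    then concatenate into a single fused representation."""
    return np.concatenate([w * f for w, f in zip(weights, features)])

# Toy feature vectors for a time-series branch and an image branch
f_ts = np.array([1.0, 0.0, 0.5])
f_img = np.array([0.8, 0.2, 0.6])

theta = angular_difference(f_ts, f_img)   # semantic-difference measure
w = fusion_weights([0.4, 0.9])            # per-modality CE losses
fused = fuse([f_ts, f_img], w)            # weighted concatenation
```

In this sketch the time-series branch, having the lower cross-entropy loss, receives the larger fusion weight, so its features dominate the concatenated representation.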