Multimodal Adaptive Fusion-Based Detection of Fake News in Short Videos
With the rapid development of the Internet and social media, the dissemination of news is no longer limited to traditional media channels. Semantically rich multimodal data has become the carrier of news, and fake news has spread widely alongside it. Because the proliferation of fake news can have unpredictable impacts on individuals and society, fake news detection has become a current research hotspot. Existing multimodal fake news detection methods focus only on text and image data; they not only fail to fully utilize the multimodal information in short videos but also ignore the consistency and difference features between different modalities, making it difficult for them to fully exploit the advantages of multimodal fusion. To solve this problem, a fake news detection model for short videos based on multimodal adaptive fusion is proposed. The model extracts features from the multimodal data in short videos, uses cross-modal alignment fusion to obtain consistency and complementarity features among different modalities, and then performs adaptive fusion according to the contribution of each modality's features to the final fusion result. Finally, a classifier is used to detect fake news. Experimental results on a publicly available short video dataset demonstrate that the accuracy, precision, recall, and F1-score of the proposed model are higher than those of state-of-the-art models.
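To illustrate the adaptive-fusion step described above, the following is a minimal sketch, assuming each modality (e.g., text, keyframes, audio) has already been encoded into a fixed-size feature vector; the gating network, feature dimensions, and modality names are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch of contribution-weighted (adaptive) fusion of modality features,
# followed by a binary real/fake classifier. Assumed, not the paper's code.
import torch
import torch.nn as nn


class AdaptiveFusion(nn.Module):
    """Weights each modality's features by a learned contribution score."""

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(dim, 1)        # one contribution score per modality
        self.classifier = nn.Linear(dim, 2)  # logits for real vs. fake

    def forward(self, modality_feats: list[torch.Tensor]) -> torch.Tensor:
        # modality_feats: list of (batch, dim) tensors, one per modality.
        stacked = torch.stack(modality_feats, dim=1)            # (batch, M, dim)
        scores = self.gate(stacked).squeeze(-1)                 # (batch, M)
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)   # (batch, M, 1)
        fused = (weights * stacked).sum(dim=1)                  # (batch, dim)
        return self.classifier(fused)                           # (batch, 2)


# Example: fuse text, keyframe, and audio features for a batch of 4 videos.
text_f, image_f, audio_f = (torch.randn(4, 256) for _ in range(3))
logits = AdaptiveFusion(dim=256)([text_f, image_f, audio_f])
print(logits.shape)  # torch.Size([4, 2])
```

The softmax over per-modality scores lets the contribution of each modality to the fused representation be learned rather than fixed, which is the core idea behind the adaptive fusion described in the abstract.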