Study on Fake News Detection Technology in Resource-constrained Environments
In recent years, social media has become fertile ground for the spread and proliferation of fake news due to its openness and convenience. Compared with unimodal fake news, multimodal fake news, which combines multiple forms of information such as text and images, produces more deceptive false content and has a more far-reaching impact. Existing methods for multimodal fake news detection predominantly rely on small models. The rapid development of multimodal large models, however, offers new perspectives for addressing this issue. These models are typically parameter-intensive and computationally demanding, making them difficult to deploy in environments with limited computational and energy resources. To address these challenges, this study proposes a multimodal fake news detection model based on the multimodal large model Long-CLIP, which is capable of processing long texts and attending to both coarse-grained and fine-grained details. In addition, an efficient coarse-to-fine layer-wise pruning method is applied to obtain a more lightweight multimodal fake news detection model suited to resource-constrained scenarios. Finally, on the Weibo dataset, the proposed model is compared with current popular multimodal large models, both before and after fine-tuning, as well as with other pruning methods, and its effectiveness is verified. The results show that the Long-CLIP-based multimodal fake news detection model significantly reduces model parameters and inference time compared with current popular multimodal large models while maintaining superior detection performance. After compression, the model achieves a 50% reduction in parameters and a 1.92 s decrease in inference time, with only a 0.01 drop in detection accuracy.
Fake news detection; Multimodal large models; Resource-constrained; Model compression; Pruning
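The abstract describes two components: a binary classifier built on Long-CLIP's text and image embeddings, and a coarse-to-fine layer-wise pruning step that removes whole encoder layers and then sparsifies the remaining weights. The sketch below is a minimal illustration of these two ideas, not the authors' implementation: the fusion head, dimensions, importance scores, and pruning criterion are all illustrative assumptions, and placeholder feature vectors stand in for Long-CLIP's actual encoder outputs.

```python
# Minimal, self-contained sketch (not the paper's code). It illustrates
# (1) a real/fake classifier over fused Long-CLIP text and image embeddings, and
# (2) a coarse-to-fine layer-wise pruning pass. Names, dimensions, and the
# importance criterion are illustrative assumptions.
import torch
import torch.nn as nn


class FusionClassifier(nn.Module):
    """Fuses text and image embeddings and predicts {real, fake}.

    text_feat / image_feat stand in for the outputs of Long-CLIP's
    text and image encoders; here they are just fixed-size vectors.
    """

    def __init__(self, embed_dim: int = 512, hidden_dim: int = 256):
        super().__init__()
        self.fusion = nn.Sequential(
            nn.Linear(2 * embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2),  # logits for {real, fake}
        )

    def forward(self, text_feat: torch.Tensor, image_feat: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([text_feat, image_feat], dim=-1)
        return self.fusion(fused)


def coarse_to_fine_prune(encoder_layers: nn.ModuleList,
                         layer_scores: torch.Tensor,
                         keep_ratio: float = 0.5,
                         weight_sparsity: float = 0.3) -> nn.ModuleList:
    """Coarse stage: keep only the highest-scoring transformer layers.
    Fine stage: zero out the smallest-magnitude weights in the kept layers.

    layer_scores would come from some importance estimate (e.g. the accuracy
    drop when a layer is skipped); here they are simply passed in.
    """
    # --- coarse: layer-wise selection --------------------------------------
    num_keep = max(1, int(len(encoder_layers) * keep_ratio))
    keep_idx = torch.topk(layer_scores, num_keep).indices.sort().values
    kept = nn.ModuleList(encoder_layers[i] for i in keep_idx.tolist())

    # --- fine: magnitude pruning inside the surviving layers ---------------
    for layer in kept:
        for name, param in layer.named_parameters():
            if "weight" in name and param.dim() == 2:
                k = int(param.numel() * weight_sparsity)
                if k > 0:
                    threshold = param.abs().flatten().kthvalue(k).values
                    param.data[param.abs() <= threshold] = 0.0
    return kept


if __name__ == "__main__":
    # Stand-in encoder: 12 transformer layers, as in a CLIP-B-sized backbone.
    layers = nn.ModuleList(
        nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
        for _ in range(12)
    )
    scores = torch.rand(12)                    # hypothetical importance scores
    pruned = coarse_to_fine_prune(layers, scores)
    print(f"layers kept: {len(pruned)} / 12")  # 50% of layers retained

    clf = FusionClassifier()
    text_feat, image_feat = torch.randn(4, 512), torch.randn(4, 512)
    print(clf(text_feat, image_feat).shape)    # torch.Size([4, 2])
```

With keep_ratio = 0.5, the coarse stage alone halves the number of encoder layers, which is consistent with the roughly 50% parameter reduction reported in the abstract; the fine stage is shown here as simple magnitude pruning for illustration only.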