Research on multimodal emotion characteristics based on short videos of rainstorm disasters
To improve the efficiency of disaster response, the "Hebei rainstorm" and "Heilongjiang rainstorm" were adopted as illustrative cross-regional research cases, and text-image-audio multimodal data were collected from short videos. To handle the massive volume of unstructured data, deep learning techniques were employed to extract multimodal emotional features, perform cross-modal fusion, and carry out intelligent sentiment classification of short videos. Drawing on spatiotemporal big data, the multimodal emotional characteristics of rainstorm-disaster short videos were mined and analyzed along both the spatial and temporal dimensions. The results indicate that the model's accuracy exceeds 85%, effectively fulfilling the objectives set for short-video analysis. From the temporal perspective, netizens' emotional fluctuations broadly align with the cycle of rainstorm disasters, providing a basis for assessing disaster severity and public-opinion trends. Furthermore, intervention by media and government entities plays a significant role in shaping the emotional evolution surrounding rainstorm disasters. In the spatial dimension, negative emotions exhibit a "low-high-low" trend as the disaster shifts location, and the resonance and diffusion of these emotions display distinct regional characteristics. It is therefore imperative to prioritize public-opinion guidance in disaster-stricken areas, as well as in some eastern regions of China and in non-disaster areas experiencing similar phenomena.
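The abstract does not specify how the text, image, and audio features are combined for sentiment classification. As a minimal illustrative sketch only, the snippet below shows late (decision-level) fusion, one common way to merge per-modality sentiment scores; all encoder names, logits, and fusion weights here are hypothetical placeholders, not values from the study.

```python
import math

# Hypothetical three-way sentiment labels; the paper's actual label set
# (e.g. finer-grained emotions) may differ.
LABELS = ["negative", "neutral", "positive"]

def softmax(logits):
    """Convert raw per-class logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def late_fusion(modality_logits, weights):
    """Weighted average of per-modality class probabilities (late fusion)."""
    probs = {mod: softmax(lg) for mod, lg in modality_logits.items()}
    total = sum(weights.values())
    fused = [
        sum(weights[mod] * probs[mod][i] for mod in probs) / total
        for i in range(len(LABELS))
    ]
    return LABELS[fused.index(max(fused))], fused

# Placeholder logits standing in for per-modality encoder outputs
# (e.g. a text encoder, a video-frame encoder, an audio encoder).
logits = {
    "text":  [2.1, 0.3, -1.0],
    "image": [1.5, 0.8, -0.2],
    "audio": [0.9, 1.1, -0.5],
}
weights = {"text": 0.5, "image": 0.3, "audio": 0.2}  # assumed weighting

label, fused = late_fusion(logits, weights)
print(label)  # fused prediction for this illustrative example
```

In this sketch the modality weights are fixed; a learned cross-modal fusion (e.g. attention over modality embeddings) would be a natural alternative, but the study's actual architecture is not stated in the abstract.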