A Transformer-based model for PM2.5 concentration prediction
In deep learning, recurrent neural networks (RNNs) are commonly used to predict PM2.5 concentration. However, traditional methods struggle to capture the spatiotemporal correlations among multi-site data. To address this issue, this work studies PM2.5 concentration prediction with a Transformer-based network model. The Transformer employs a multi-head self-attention mechanism that better captures the spatiotemporal dependencies of PM2.5 concentration indices across monitoring locations. The model's encoder extracts feature information, while the decoder models dependencies in the input features to output future PM2.5 concentrations. Experimental results on real datasets demonstrate that the Transformer network model achieves stronger predictive performance.
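The sketch below illustrates the kind of encoder-decoder Transformer the abstract describes: past multi-site observations are embedded, the encoder extracts features, and the decoder attends to them to emit a multi-step PM2.5 forecast. It is a minimal PyTorch sketch under stated assumptions; the hyperparameters, feature dimensions, learned-query decoding scheme, and all variable names are illustrative and not the authors' exact configuration.

```python
# Minimal sketch (assumption): a Transformer encoder-decoder for multi-step PM2.5
# forecasting from multi-site historical features. Hyperparameters, the learned
# decoder queries, and the data layout are illustrative, not the paper's setup.
import torch
import torch.nn as nn


class PM25Transformer(nn.Module):
    def __init__(self, n_features, d_model=64, n_heads=4,
                 num_encoder_layers=2, num_decoder_layers=2, horizon=24):
        super().__init__()
        # Project per-time-step, per-site features into the model dimension.
        self.input_proj = nn.Linear(n_features, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=n_heads,
            num_encoder_layers=num_encoder_layers,
            num_decoder_layers=num_decoder_layers,
            batch_first=True)
        # Learned queries, one per forecast step, fed to the decoder.
        self.queries = nn.Parameter(torch.randn(horizon, d_model) * 0.02)
        # Map each decoder output back to a scalar PM2.5 value.
        self.output_proj = nn.Linear(d_model, 1)

    def forward(self, x):
        # x: (batch, history_steps, n_features) — past observations across sites.
        src = self.input_proj(x)
        tgt = self.queries.unsqueeze(0).expand(x.size(0), -1, -1)
        # Encoder extracts features from the history; decoder attends to them.
        out = self.transformer(src, tgt)
        # Return (batch, horizon) future PM2.5 concentrations.
        return self.output_proj(out).squeeze(-1)


# Purely illustrative usage with random data.
model = PM25Transformer(n_features=8)
history = torch.randn(16, 48, 8)   # 16 samples, 48 past steps, 8 features each
pred = model(history)              # -> shape (16, 24)
```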