A Spatiotemporal Feature Fusion Super-Resolution Method Based on Deformable Attention
Video super-resolution aims to convert low-resolution videos into high-resolution ones. Existing feature alignment methods based on deformable convolution are limited by the size of the receptive field: they can only apply local offsets within the convolution window at fixed spatial positions, and they perform poorly when there is large-scale motion between frames. Therefore, an alignment method based on deformable-attention spatial transformation is proposed, which can sample from the entire feature map. First, learned offsets focus the sampling points on any position relevant to the location currently being processed. Second, the model uses a recurrent structure to propagate fused features globally and a Transformer to extract features and align frames locally. Then, the aligned features are fed into a spatiotemporal feature fusion module with channel attention to supplement the reconstruction information. Finally, the output of the fusion module is propagated bidirectionally through a recurrent network to supplement the temporal features of adjacent frames, and the high-resolution video is obtained through sub-pixel convolution with 4× upsampling. With BasicVSR as the baseline, experiments show that the network improves PSNR by 0.69 dB on the REDS4 dataset and 0.43 dB on the Vid4 dataset.
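The central idea, offset-guided attention that can sample anywhere on the feature map rather than within a fixed convolution window, can be illustrated with a short sketch. The following is a minimal single-head PyTorch sketch under assumed design choices (K sampling points per location, offsets predicted directly in normalized coordinates); the module and parameter names are illustrative and are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableAttentionAlign(nn.Module):
    """Align a neighboring frame's features to the reference frame by
    attending over K learned sampling points per spatial location.
    Illustrative sketch only; names and sizes are assumptions."""

    def __init__(self, channels: int, num_points: int = 4):
        super().__init__()
        self.num_points = num_points
        # Predict per-point 2D offsets and attention weights from the
        # concatenated reference/neighbor features.
        self.offset_conv = nn.Conv2d(2 * channels, 2 * num_points, 3, padding=1)
        self.weight_conv = nn.Conv2d(2 * channels, num_points, 3, padding=1)

    def forward(self, ref: torch.Tensor, nbr: torch.Tensor) -> torch.Tensor:
        # ref, nbr: (B, C, H, W) features of the reference and neighbor frames.
        b, c, h, w = ref.shape
        feat = torch.cat([ref, nbr], dim=1)
        # Offsets are predicted in normalized [-1, 1] coordinates and are
        # unconstrained, so a sampling point may land anywhere on the
        # feature map rather than in a fixed local window.
        offsets = self.offset_conv(feat)                        # (B, 2K, H, W)
        weights = torch.softmax(self.weight_conv(feat), dim=1)  # (B, K, H, W)

        # Base sampling grid in normalized coordinates (x first, then y,
        # as expected by grid_sample).
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=ref.device),
            torch.linspace(-1, 1, w, device=ref.device),
            indexing="ij",
        )
        base = torch.stack([xs, ys], dim=-1)                    # (H, W, 2)

        aligned = torch.zeros_like(nbr)
        for k in range(self.num_points):
            off = offsets[:, 2 * k : 2 * k + 2].permute(0, 2, 3, 1)  # (B, H, W, 2)
            grid = base.unsqueeze(0) + off
            sampled = F.grid_sample(nbr, grid, align_corners=True)
            aligned = aligned + weights[:, k : k + 1] * sampled
        return aligned
```

For example, `DeformableAttentionAlign(64)(ref, nbr)` with two `(1, 64, 64, 64)` inputs returns aligned features of the same shape. The attention-weighted sum over freely placed sampling points is what distinguishes this from deformable convolution, whose offsets stay near the kernel's local neighborhood.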
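The final 4× reconstruction step via sub-pixel convolution can likewise be sketched. Two stacked 2× PixelShuffle stages are a common choice in BasicVSR-style networks; the abstract does not specify the exact layer configuration, so the channel counts below are assumptions.

```python
import torch
import torch.nn as nn

# Assumed feature width of 64 channels; output is 3-channel RGB.
upsampler = nn.Sequential(
    nn.Conv2d(64, 64 * 4, 3, padding=1),
    nn.PixelShuffle(2),              # (B, 256, H, W) -> (B, 64, 2H, 2W)
    nn.Conv2d(64, 64 * 4, 3, padding=1),
    nn.PixelShuffle(2),              # (B, 256, 2H, 2W) -> (B, 64, 4H, 4W)
    nn.Conv2d(64, 3, 3, padding=1),  # map features to the RGB frame
)

hr = upsampler(torch.randn(1, 64, 64, 64))  # -> (1, 3, 256, 256)
```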