Facial Animation Algorithm Based on Improved Thin Plate Spline Motion Model
Facial animation plays a crucial role in achieving realistic and vivid emotional communication in applications such as movies, games, and virtual reality. When multiple factors such as facial shape, pose, and expression must be handled, good motion estimation results can be obtained through thin plate spline nonlinear transformation. However, this approach yields imprecise motion estimates for complex facial textures and mouth movements, and therefore requires stronger image restoration capabilities. To address this issue, this paper proposes a facial animation algorithm based on an improved Thin Plate Spline Motion Model (TPSMM). First, a Farneback optical flow pyramid algorithm is introduced into TPSMM, combining the thin plate spline and background affine transformations to enhance the accuracy of local facial motion estimation. Second, to accurately recover detailed texture information in missing areas, a multi-scale detail perception network is introduced. By embedding Efficient Channel Attention (ECA) modules in the encoder, this network reduces the loss of facial detail caused by multi-layer downsampling of the source image; in the decoder, a Coordinate Attention (CA) module captures important features at different positions in the motion estimation feature map, thereby improving the quality of the generated facial images. Experimental results show that, compared with the First Order Motion Model (FOMM), Motion Representations for Articulated Animation (MRAA), and TPSMM, the proposed algorithm achieves the best L1, Average Keypoint Distance (AKD), and Average Euclidean Distance (AED) values on the MUG, UvA-Nemo, and Oulu-CASIA datasets, with averages of 0.0129, 0.923, and 0.00099, respectively.
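The abstract does not give implementation details for the Farneback optical flow pyramid step, so the following is only a minimal sketch of how dense pyramidal Farneback flow between a source and a driving frame could be computed with OpenCV; the function name `farneback_pyramid_flow` and the chosen pyramid parameters are illustrative assumptions, and how this flow is fused with the TPS and background affine transformations is left to the paper itself.

```python
import cv2

def farneback_pyramid_flow(source_bgr, driving_bgr, levels=3):
    """Dense Farneback optical flow between a source and a driving frame.
    OpenCV builds the Gaussian image pyramid internally via `levels`
    and `pyr_scale`. Parameter values here are illustrative assumptions."""
    prev_gray = cv2.cvtColor(source_bgr, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(driving_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5,   # each pyramid level is half the previous resolution
        levels=levels,   # number of pyramid levels
        winsize=15,      # averaging window size
        iterations=3,    # iterations per pyramid level
        poly_n=5,        # pixel neighborhood for polynomial expansion
        poly_sigma=1.2,  # Gaussian std for the polynomial expansion
        flags=0)
    return flow          # (H, W, 2) per-pixel displacement in pixels
```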
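For the encoder-side channel attention, the ECA block referred to in the abstract is commonly realized as in ECA-Net (global average pooling followed by a 1-D convolution across channels). The sketch below shows that standard formulation in PyTorch; the class name `ECABlock` and the kernel size are assumptions, not the paper's exact configuration.

```python
import torch.nn as nn

class ECABlock(nn.Module):
    """Efficient Channel Attention: lightweight channel reweighting
    via global average pooling and a 1-D conv across channels."""
    def __init__(self, channels, k_size=3):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                      # x: (B, C, H, W)
        y = self.avg_pool(x)                   # (B, C, 1, 1)
        y = y.squeeze(-1).transpose(-1, -2)    # (B, 1, C)
        y = self.conv(y)                       # local cross-channel interaction
        y = y.transpose(-1, -2).unsqueeze(-1)  # (B, C, 1, 1)
        return x * self.sigmoid(y)             # channel-wise reweighting
```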
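Likewise, the decoder-side CA module can be sketched following the original Coordinate Attention design, which factorizes spatial attention into two 1-D encodings along height and width so that position-dependent features of the motion estimation feature map can be emphasized. The reduction ratio and the exact placement inside the decoder are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Coordinate Attention: directional pooling along H and W,
    a shared 1x1 bottleneck, then separate attention maps per axis."""
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # pool over width  -> (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # pool over height -> (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):                          # x: (B, C, H, W)
        b, c, h, w = x.shape
        x_h = self.pool_h(x)                       # (B, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)   # (B, C, W, 1)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                         # (B, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))     # (B, C, 1, W)
        return x * a_h * a_w
```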