A Spatially Adaptive Motion Style Transfer Method with a Temporal Convolutional Network
To address the heavy reliance of human motion generation methods on supervised learning and paired datasets, and inspired by image style transfer methods, a spatially adaptive motion style transfer model incorporating temporal convolutional networks is proposed. Given the motion of a virtual character as input, the model generates motion sequences in different styles. First, taking both temporal and spatial factors into account, a neural network framework with a temporal convolutional network as its backbone is designed to accurately extract the content features and style features of motion data from unpaired datasets. Then, building on the spatially adaptive normalization method, an adaptive normalization method suited to motion style transfer is proposed by improving it and adapting it to the motion decoder. Finally, to correct errors in the transferred motion, forward kinematics is introduced into the network to constrain the joint errors propagated along the kinematic chain, so that correct foot behavior is preserved. To verify the performance of the proposed method, experiments are conducted on the open-source CMU and Xia datasets and evaluated by principal component analysis, data clustering, and visualization. The results show that the proposed model effectively achieves style transfer across multiple unpaired datasets, produces natural and realistic animation, offers good interactivity and scalability, and can be widely applied to virtual human modeling in computer animation.
deep learning; motion style transfer; temporal convolutional network; spatially adaptive instance normalization
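To make the mechanisms summarized in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' implementation: a temporal-convolution encoder, an AdaIN-style adaptive instance normalization step standing in for the paper's spatially adaptive normalization, and a forward kinematics pass that could be used to penalize joint-position error accumulated along the kinematic chain. All module names, tensor shapes, and joint conventions are illustrative assumptions.

```python
# Minimal sketch of a TCN encoder, adaptive instance normalization, and a
# forward kinematics pass for motion data. Shapes and names are assumptions.

import torch
import torch.nn as nn


class TemporalEncoder(nn.Module):
    """1D temporal convolutions over motion features of shape (B, C, T)."""

    def __init__(self, in_channels: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def adaptive_instance_norm(content: torch.Tensor,
                           style: torch.Tensor,
                           eps: float = 1e-5) -> torch.Tensor:
    """Replace the per-channel statistics of the content features with those
    of the style features (AdaIN); both tensors are (B, C, T)."""
    c_mean = content.mean(dim=2, keepdim=True)
    c_std = content.std(dim=2, keepdim=True)
    s_mean = style.mean(dim=2, keepdim=True)
    s_std = style.std(dim=2, keepdim=True)
    normalized = (content - c_mean) / (c_std + eps)
    return normalized * s_std + s_mean


def forward_kinematics(local_rotations: torch.Tensor,
                       offsets: torch.Tensor,
                       parents: list) -> torch.Tensor:
    """Accumulate local joint rotations (B, J, 3, 3) and bone offsets (J, 3)
    along the kinematic chain to obtain global joint positions (B, J, 3).
    Root translation is omitted here for brevity."""
    batch = local_rotations.shape[0]
    num_joints = offsets.shape[0]
    global_rot = [None] * num_joints
    global_pos = [None] * num_joints
    for j in range(num_joints):
        if parents[j] == -1:  # root joint
            global_rot[j] = local_rotations[:, j]
            global_pos[j] = offsets[j].expand(batch, 3)
        else:
            p = parents[j]
            global_rot[j] = global_rot[p] @ local_rotations[:, j]
            global_pos[j] = global_pos[p] + global_rot[p] @ offsets[j]
    return torch.stack(global_pos, dim=1)


# Usage sketch (hypothetical tensors): penalize positional drift that
# accumulates along the chain after style transfer.
#   pred_pos = forward_kinematics(pred_rotations, offsets, parents)
#   ref_pos  = forward_kinematics(ref_rotations, offsets, parents)
#   fk_loss  = torch.nn.functional.mse_loss(pred_pos, ref_pos)
```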