By a News Reporter-Staff News Editor at Robotics & Machine Learning Daily News -- Current study results on virtual reality and intelligent hardware have been published. According to news originating from Hefei University of Technology by NewsRx editors, the research stated, "Considerable research has been conducted in the areas of audio-driven virtual character gestures and facial animation with some degree of success. However, few methods exist for generating full-body animations, and the portability of virtual character gestures and facial animations has not received sufficient attention."

The news journalists obtained a quote from the research from Hefei University of Technology: "Therefore, we propose a deep-learning-based audio-to-animation-and-blendshape (Audio2AB) network that generates gesture animations and ARKit's 52 facial expression parameter blendshape weights based on audio, audio-corresponding text, emotion labels, and semantic relevance labels to generate parametric data for full-body animations. This parameterization method can be used to drive full-body animations of virtual characters and improve their portability. In the experiment, we first downsampled the gesture and facial data to achieve the same temporal resolution for the input, output, and facial data. The Audio2AB network then encoded the audio, audio-corresponding text, emotion labels, and semantic relevance labels, and then fused the text, emotion labels, and semantic relevance labels into the audio to obtain better audio features. Finally, we established links between the body, gesture, and facial decoders and generated the corresponding animation sequences through our proposed GAN-GF loss function. By using audio, audio-corresponding text, and emotion and semantic relevance labels as input, the trained Audio2AB network could generate gesture animation data containing blendshape weights. Therefore, different 3D virtual character animations could be created through parameterization."
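The temporal alignment the researchers describe, downsampling the gesture and facial streams so that every modality shares one frame rate, can be illustrated with a short Python sketch. The function name, the choice of linear interpolation, and the 60 fps and 30 fps figures below are illustrative assumptions, not details taken from the study.

import numpy as np

def downsample_to(frames: np.ndarray, target_len: int) -> np.ndarray:
    """Linearly resample a (T, D) feature sequence to target_len frames."""
    src = np.linspace(0.0, 1.0, num=frames.shape[0])
    dst = np.linspace(0.0, 1.0, num=target_len)
    return np.stack(
        [np.interp(dst, src, frames[:, d]) for d in range(frames.shape[1])],
        axis=1,
    )

# Hypothetical usage: gesture data captured at 60 fps is resampled to match
# audio features extracted at 30 fps, so input and output sequences align.
gesture_60fps = np.random.randn(600, 96)            # 10 s of joint features (made-up shape)
gesture_30fps = downsample_to(gesture_60fps, 300)   # now frame-aligned with 30 fps audio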
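The encode-fuse-decode structure quoted above can likewise be sketched in PyTorch. Everything in the sketch is an assumption made for illustration: the layer types and sizes, the concatenation-based fusion, and the way each decoder's features feed the next are stand-ins for details the article does not give, and the proposed GAN-GF loss is omitted for the same reason. Only the three decoder streams, the fused label-and-text conditioning, and the 52 blendshape outputs come from the text.

import torch
import torch.nn as nn

class Audio2ABSketch(nn.Module):
    # Illustrative stand-in for the Audio2AB design described in the article:
    # encode audio plus text, emotion, and semantic relevance inputs, fuse the
    # label features into the audio features, then decode body, gesture, and
    # ARKit-style 52-dimensional blendshape streams. All sizes are assumed.
    def __init__(self, audio_dim=128, text_dim=64, label_dim=8, hidden=256,
                 body_dim=63, gesture_dim=96, blendshape_dim=52):
        super().__init__()
        self.audio_enc = nn.GRU(audio_dim, hidden, batch_first=True)
        self.text_enc = nn.GRU(text_dim, hidden, batch_first=True)
        self.label_enc = nn.Linear(2 * label_dim, hidden)  # emotion + semantic relevance
        self.fuse = nn.Linear(3 * hidden, hidden)          # concatenation fusion (assumed)
        self.body_dec = nn.GRU(hidden, hidden, batch_first=True)
        self.body_out = nn.Linear(hidden, body_dim)
        # Each later decoder also sees the previous decoder's features,
        # a guess at the "links" between decoders mentioned in the quote.
        self.gesture_dec = nn.GRU(2 * hidden, hidden, batch_first=True)
        self.gesture_out = nn.Linear(hidden, gesture_dim)
        self.face_dec = nn.GRU(2 * hidden, hidden, batch_first=True)
        self.face_out = nn.Linear(hidden, blendshape_dim)

    def forward(self, audio, text, emotion, semantic):
        # audio: (B, T, audio_dim); text: (B, T, text_dim);
        # emotion, semantic: (B, T, label_dim) each, already frame-aligned.
        a, _ = self.audio_enc(audio)
        t, _ = self.text_enc(text)
        l = self.label_enc(torch.cat([emotion, semantic], dim=-1))
        h = torch.tanh(self.fuse(torch.cat([a, t, l], dim=-1)))
        b, _ = self.body_dec(h)
        g, _ = self.gesture_dec(torch.cat([h, b], dim=-1))
        f, _ = self.face_dec(torch.cat([h, g], dim=-1))
        # Blendshape weights are squashed to [0, 1], matching ARKit's range.
        return self.body_out(b), self.gesture_out(g), torch.sigmoid(self.face_out(f))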