A Knowledge Distillation-Based Model Optimization Method for Speaker Verification
Pre-trained models trained on large-scale unsupervised data have excellent generalization ability and can be improved on downstream tasks by fine-tuning on small-scale annotated data. However, pipelines that combine a pre-trained model with upstream and downstream models often have a large computational load and slow inference speed, making them unsuitable for deployment on low-performance edge devices and difficult to apply in scenarios that require real-time processing. To address this, a knowledge distillation-based optimization method for speaker verification models is proposed, which makes the entire task pipeline lightweight by distilling the pre-trained model and the downstream model into a single student network.
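To make the distillation idea concrete, the following is a minimal sketch, assuming a PyTorch setup, of distilling a frozen teacher (standing in for the pre-trained front-end plus the downstream speaker model) into a small student network at the embedding level. The model sizes, the cosine-plus-MSE distillation loss, and all hyperparameters are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Teacher(nn.Module):
    """Placeholder for the large pre-trained model + downstream speaker model."""
    def __init__(self, feat_dim=80, emb_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, emb_dim),
        )
    def forward(self, x):                    # x: (batch, time, feat_dim)
        return self.encoder(x).mean(dim=1)   # utterance-level speaker embedding

class Student(nn.Module):
    """Lightweight network intended for low-performance edge devices."""
    def __init__(self, feat_dim=80, emb_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, emb_dim),
        )
    def forward(self, x):
        return self.encoder(x).mean(dim=1)

def distillation_loss(student_emb, teacher_emb):
    # Pull student embeddings toward the teacher's: the cosine term aligns
    # direction, the MSE term aligns magnitude (assumed loss combination).
    cos = 1.0 - F.cosine_similarity(student_emb, teacher_emb, dim=-1).mean()
    mse = F.mse_loss(student_emb, teacher_emb)
    return cos + mse

teacher, student = Teacher(), Student()
teacher.eval()                               # teacher stays frozen during distillation
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

features = torch.randn(8, 200, 80)           # dummy batch: 8 utterances of 200 frames
with torch.no_grad():
    t_emb = teacher(features)
s_emb = student(features)
loss = distillation_loss(s_emb, t_emb)
loss.backward()
optimizer.step()
print(f"distillation loss: {loss.item():.4f}")
```

In this sketch only the student is trained, so at inference time the pre-trained model and downstream model are no longer needed, which is what makes the end-to-end speaker verification pipeline lightweight.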