A Knowledge Distillation-Based Model Optimization Method for Speaker Verification

Pre-trained models trained on large-scale unsupervised data have excellent generalization ability; fine-tuning on a small amount of labeled data is enough to improve performance on the corresponding task. However, a pre-trained model combined with a downstream model typically has a large computational load and slow inference speed, making it unsuitable for deployment on low-performance edge devices and difficult to apply in scenarios that require real-time processing. To address this, a knowledge distillation-based lightweight speaker verification scheme is proposed, which makes the entire task pipeline lightweight by distilling both the pre-trained model and the downstream model into a single student network.
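
The distillation setup the abstract describes can be sketched as follows. This is a minimal, hypothetical PyTorch example, not the authors' implementation: a frozen teacher (a pre-trained encoder plus its downstream speaker-embedding head) supervises a small student network that learns to reproduce the teacher's speaker embeddings directly from raw waveforms. The StudentEncoder architecture, the distill_step helper, and the cosine-distance loss are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class StudentEncoder(nn.Module):
    """A small 1-D CNN mapping raw waveforms to fixed-size speaker embeddings."""
    def __init__(self, emb_dim: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=10, stride=5), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv1d(128, 256, kernel_size=4, stride=2), nn.ReLU(),
        )
        self.proj = nn.Linear(256, emb_dim)

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        h = self.conv(wav.unsqueeze(1))           # (batch, samples) -> (batch, 256, frames)
        h = h.mean(dim=-1)                        # temporal average pooling
        return F.normalize(self.proj(h), dim=-1)  # unit-length speaker embedding

def distill_step(teacher: nn.Module, student: StudentEncoder,
                 wav: torch.Tensor, optimizer: torch.optim.Optimizer) -> float:
    """One distillation step: pull the student's embedding toward the teacher's."""
    with torch.no_grad():                         # teacher (pre-trained + downstream) stays frozen
        t_emb = F.normalize(teacher(wav), dim=-1)
    s_emb = student(wav)
    # Cosine-distance distillation loss; the paper may combine this with other terms.
    loss = (1.0 - F.cosine_similarity(s_emb, t_emb, dim=-1)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

At inference time the student alone replaces the full teacher pipeline, which is where the reduction in computation and latency described in the abstract comes from.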

speaker verification; model lightweighting; knowledge distillation

钱建宇

School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China

2024

Audio Engineering (电声技术)
Television and Electro-acoustic Research Institute (The Third Research Institute of China Electronics Technology Group Corporation)

Impact factor: 0.259
ISSN: 1002-8684
Year, Volume (Issue): 2024, 48(7)