A lightweight transfer module was introduced to resolve the problem that current pre-trained language models (PLMs) cannot be deployed or trained on edge devices due to their excessive number of parameters. The deployment of the transfer module was separated from the large PLM, and an efficient cloud-edge collaborative transfer learning framework was implemented, which could transfer a PLM to downstream tasks with only a small number of parameters fine-tuned. Cross-domain cloud-edge collaborative deployment was also supported: downstream tasks in multiple domains can collaboratively share the same PLM, which effectively saves computing overhead, and tasks can be efficiently separated and deployed on different devices, realizing the separate deployment of multiple tasks and the sharing of the PLM. Experiments were conducted on four public natural language processing task datasets, and the results showed that the performance of this framework was over 95% of that of fully fine-tuned BERT.
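The abstract gives no implementation details of the transfer module; as a rough illustration of the general adapter-style idea it describes (a small bottleneck module trained beside a frozen PLM, with all names and dimensions here chosen purely for illustration), a minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

HIDDEN = 768      # hidden size of the frozen PLM (e.g. BERT-base); assumed value
BOTTLENECK = 64   # small adapter bottleneck -- the only trainable part

# Stand-in for frozen PLM output: a batch of 4 token representations.
h = rng.standard_normal((4, HIDDEN))

# Lightweight transfer module: down-project, nonlinearity, up-project,
# plus a residual connection so frozen PLM features pass through unchanged
# at initialization (W_up starts at zero).
W_down = rng.standard_normal((HIDDEN, BOTTLENECK)) * 0.01
W_up = np.zeros((BOTTLENECK, HIDDEN))

def adapter(h):
    z = np.maximum(h @ W_down, 0.0)  # ReLU bottleneck
    return h + z @ W_up              # residual: output == input at init

out = adapter(h)

# Only the adapter weights would be fine-tuned; the PLM stays frozen and
# can be shared by many downstream tasks on the cloud side.
trainable = W_down.size + W_up.size
print(out.shape, trainable)
```

Because only the bottleneck weights are updated, many edge-side tasks can share one cloud-side PLM, which is the computing-overhead saving the abstract claims.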
Key words
natural language processing/transfer learning/cloud-edge collaboration/computation efficiency/model deployment