The rapid development of Federated Learning (FL) enables collaborative training of gradient models using data from different end users. Its notable feature is that the training dataset never leaves the local device; only locally computed gradient model updates are shared, enabling edge servers to generate a global gradient model. However, heterogeneity among local devices can degrade training performance, and shared gradient model updates are exposed to privacy breaches and malicious tampering. This study proposes a verifiable privacy-preserving cross-domain FL scheme based on cloud-edge fusion. In the scheme, end users apply single-mask blinding to protect data privacy and use a vector-inner-product-based signature algorithm to sign gradient models, while edge servers aggregate the blinded private data and generate deblinded aggregated signatures. This ensures that the global gradient model is updated correctly and that the sharing process is tamper-proof. The scheme adopts multi-region weight forwarding to address the limited computing resources and high communication costs of devices in heterogeneous networks. System experiments and simulations are performed on four benchmark datasets: MNIST, SVHN, CIFAR-10, and CIFAR-100. The results demonstrate that the proposed scheme can be deployed safely and efficiently in heterogeneous networks; compared with the classical federated learning scheme, the convergence speed of the gradient model is improved by an average of 21.6% with comparable accuracy.
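The core idea of mask-based blinding followed by server-side deblinding can be illustrated with a minimal toy sketch. This is not the paper's actual protocol (the scheme's concrete mask generation, sharing channel, and signature construction are not specified here); it only shows, under the assumption that the deblinder knows the aggregate mask, how blinded updates can be summed and then deblinded to recover the exact gradient sum without exposing any individual gradient:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch (hypothetical, not the paper's protocol): each end user
# blinds its local gradient update with a random mask before sharing.
def blind(gradient, mask):
    return gradient + mask

gradients = [rng.normal(size=4) for _ in range(3)]  # local gradient updates
masks = [rng.normal(size=4) for _ in range(3)]      # per-user blinding masks

# The aggregator only ever sees blinded updates.
blinded = [blind(g, m) for g, m in zip(gradients, masks)]

# Assumed: the deblinding party learns only the SUM of the masks,
# not any individual mask or gradient.
total_mask = np.sum(masks, axis=0)

# Deblinded aggregate equals the true sum of local gradients.
aggregate = np.sum(blinded, axis=0) - total_mask
assert np.allclose(aggregate, np.sum(gradients, axis=0))
```

In a deployed scheme the masks would be derived cryptographically (e.g. from shared secrets) rather than sent in the clear, so that no single party other than the designated deblinder can remove them.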
Keywords: Federated Learning (FL); global gradient model; data privacy; verifiable privacy preserving; cross-domain training