A Privacy-Preserving and Verifiable Federated Learning Scheme Based on Homomorphic Encryption
Cross-silo federated learning enables clients to collaboratively train a machine learning model by aggregating local model updates without sharing raw data. However, studies have shown that the intermediate parameters transmitted during training can still leak the privacy of the raw data, and a curious central server may falsify or tamper with aggregation results for its own benefit. To address these issues, an anti-collusion, privacy-preserving, and verifiable cross-silo federated learning scheme was proposed. Specifically, the intermediate parameters of each client were encrypted to protect data privacy, and key management and collaborative decryption were achieved by combining secret sharing schemes to enhance system security. Furthermore, data integrity and authentication were achieved through aggregate signatures, and the verifiability of the central server's aggregated gradients was ensured using polynomial commitments. Security analysis shows that the proposed scheme not only protects the privacy of intermediate parameters and verifies data integrity, but also ensures the correctness of the aggregated gradients. Performance analysis shows that, compared with existing schemes, the proposed scheme significantly reduces communication overhead.
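To illustrate the core idea of aggregating encrypted updates, the following is a minimal sketch (not the paper's actual scheme) using Paillier as a representative additively homomorphic cryptosystem: the server multiplies ciphertexts to obtain an encryption of the sum of client gradients without seeing any individual value. The tiny primes and the single-key decryption are illustrative assumptions only; the proposed scheme instead distributes the decryption capability via secret sharing, and real deployments use large keys.

```python
import random
from math import gcd, lcm

def keygen(p, q):
    """Paillier key generation with g = n + 1 (textbook variant)."""
    n = p * q
    lam = lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)          # valid since gcd(lam, n) = 1 for distinct primes
    return (n, n + 1), (lam, mu)  # public key (n, g), secret key (lam, mu)

def encrypt(pk, m):
    """Enc(m) = g^m * r^n mod n^2 with random r coprime to n."""
    n, g = pk
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    """Dec(c) = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    n, _ = pk
    lam, mu = sk
    return ((pow(c, lam, n * n) - 1) // n * mu) % n

# Toy demo: three clients encrypt local gradient values (insecure 16-bit-scale primes).
pk, sk = keygen(61, 53)
gradients = [7, 11, 5]
ciphertexts = [encrypt(pk, g) for g in gradients]

# The server aggregates by multiplying ciphertexts modulo n^2.
agg = 1
for c in ciphertexts:
    agg = (agg * c) % (pk[0] ** 2)

print(decrypt(pk, sk, agg))  # 23, i.e. 7 + 11 + 5
```

The multiplicative homomorphism over ciphertexts corresponds to addition over plaintexts, which is exactly what gradient aggregation requires; the server never learns any client's individual gradient.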