Privacy protection algorithm based on dynamic learning rate bounds
Deep learning optimization algorithms were prone to privacy leakage when training on data, and convolutional neural networks incurred a huge memory overhead in privacy-preserving training because the gradient of every individual sample had to be computed and clipped. To address these problems, a differentially private dynamic learning rate bounding algorithm combined with mixed ghost clipping was proposed. Combining the AdaBound optimization algorithm with differential privacy alleviated the extreme learning rates and instability of the algorithm during training, and reduced the impact on the model's convergence speed caused by the noise added during backpropagation. Applying mixed ghost clipping to the convolutional layers avoided the cost of directly materializing per-sample gradients in each update, which made it possible to train differentially private models efficiently. Simulation experiments comparing the algorithm with other classical differential privacy algorithms showed that it achieved higher accuracy under the same privacy budget, with better model performance and stronger privacy protection.
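The core update described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a single flat parameter vector, standard DP-SGD per-sample clipping with Gaussian noise, and AdaBound-style dynamic bounds on the effective learning rate (both bounds converge toward a hypothetical `final_lr`); all hyperparameter names and values here are illustrative assumptions.

```python
import numpy as np

def dp_adabound_step(params, per_sample_grads, state, t,
                     lr=1e-3, final_lr=0.1, betas=(0.9, 0.999),
                     eps=1e-8, clip_norm=1.0, noise_mult=1.0,
                     rng=np.random.default_rng(0)):
    """One illustrative update: per-sample clipping plus Gaussian noise
    (DP-SGD style), followed by an AdaBound-style step whose effective
    learning rate is clipped into dynamic bounds converging to final_lr."""
    b1, b2 = betas
    n = per_sample_grads.shape[0]
    # Per-sample L2 clipping: scale each sample's gradient to norm <= clip_norm.
    norms = np.linalg.norm(per_sample_grads.reshape(n, -1), axis=1)
    factors = np.minimum(1.0, clip_norm / (norms + 1e-12))
    clipped = per_sample_grads * factors.reshape(-1, 1)
    # Noisy average gradient (Gaussian mechanism).
    noise = rng.normal(0.0, noise_mult * clip_norm, size=params.shape)
    g = (clipped.sum(axis=0) + noise) / n
    # Adam-style first and second moments with bias correction.
    state["m"] = b1 * state["m"] + (1 - b1) * g
    state["v"] = b2 * state["v"] + (1 - b2) * g * g
    m_hat = state["m"] / (1 - b1 ** t)
    v_hat = state["v"] / (1 - b2 ** t)
    # Dynamic learning-rate bounds (AdaBound): both tend to final_lr as t grows,
    # which tames the extreme per-coordinate rates that noisy gradients induce.
    lower = final_lr * (1.0 - 1.0 / ((1 - b2) * t + 1))
    upper = final_lr * (1.0 + 1.0 / ((1 - b2) * t))
    step_size = np.clip(lr / (np.sqrt(v_hat) + eps), lower, upper)
    return params - step_size * m_hat
```

Clipping the adaptive step size into a shrinking interval is what lets the method behave like Adam early in training and like SGD late in training, which is the mechanism the abstract credits for stabilizing convergence under added noise.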
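The memory saving from ghost clipping rests on a simple identity: for a linear (or unrolled convolutional) layer, the per-sample gradient of the weight matrix is the outer product of that sample's output gradient and input activation, so its Frobenius norm can be computed from the two vector norms without ever forming the per-sample gradient tensor. A minimal sketch of that norm computation, under the assumption of a plain fully connected layer:

```python
import numpy as np

def ghost_grad_norms(activations, backprops):
    """Per-sample weight-gradient norms for a linear layer, computed without
    materializing per-sample gradients. For input a_i and output-gradient b_i,
    grad_i = outer(b_i, a_i), hence ||grad_i||_F^2 = ||a_i||^2 * ||b_i||^2."""
    a_sq = np.sum(activations ** 2, axis=1)   # ||a_i||^2 per sample
    b_sq = np.sum(backprops ** 2, axis=1)     # ||b_i||^2 per sample
    return np.sqrt(a_sq * b_sq)
```

With these norms in hand, the clipping factors can be applied to the aggregated gradient directly; the "mixed" variant in the abstract presumably falls back to direct per-sample gradients for layers where the ghost trick is more expensive, e.g. convolutions with large spatial dimensions.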