Abstract
In intelligent networked transportation systems, federated learning improves the privacy protection of transportation data by distributing models to intelligent networked roadside infrastructures, which perform distributed local training and global aggregation under the scheduling of a central server; however, a risk of data privacy leakage remains. Given the model parameters shared by the roadside infrastructures, an attacker can launch a gradient leakage attack and reconstruct their training transportation data. To defend against gradient leakage attacks, this paper designs a granular gradient perturbation method based on differential privacy theory and information entropy theory: the method selects neurons with low Fisher information values and injects carefully designed Laplace noise into their gradients, disrupting the attacker's data reconstruction from the uploaded gradients. Theoretical analysis shows that the proposed defense satisfies differential privacy and preserves training convergence. Experimental results demonstrate that the granular gradient perturbation method effectively defends against the gradient leakage attack while keeping the federated training accuracy above 90%, outperforming both the overall gradient perturbation method and the random gradient perturbation method.
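The perturbation step summarized above can be illustrated with a minimal sketch. The snippet below assumes that per-neuron gradients and empirical Fisher information values are already available, and that a fraction frac of the neurons with the lowest Fisher information receives Laplace noise calibrated to a clipping bound clip_c and a privacy budget epsilon; the function name, the 2*C sensitivity, and the squared-gradient Fisher approximation are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def granular_gradient_perturbation(grads, fisher, epsilon, clip_c, frac=0.5, rng=None):
    """Sketch of granular gradient perturbation (assumed interface).

    grads  : 1-D array of per-neuron gradients to be uploaded
    fisher : 1-D array of (empirical) Fisher information values, one per neuron
    epsilon: differential-privacy budget for the perturbed coordinates
    clip_c : clipping bound C; each coordinate's L1 sensitivity is taken as 2*C
    frac   : fraction of neurons (lowest Fisher information) that receive noise
    """
    rng = np.random.default_rng() if rng is None else rng

    # Clip gradients so the sensitivity assumed for the Laplace scale holds.
    clipped = np.clip(grads, -clip_c, clip_c)

    # Select the neurons with the lowest Fisher information values.
    k = max(1, int(frac * len(fisher)))
    low_fisher_idx = np.argsort(fisher)[:k]

    # Laplace mechanism: scale b = sensitivity / epsilon.
    b = 2.0 * clip_c / epsilon
    noisy = clipped.copy()
    noisy[low_fisher_idx] += rng.laplace(loc=0.0, scale=b, size=k)
    return noisy

# Toy usage with hypothetical values (illustrative only).
grads  = np.array([0.8, -0.2, 0.05, 0.4])
fisher = grads ** 2  # empirical Fisher approximation
print(granular_gradient_perturbation(grads, fisher, epsilon=1.0, clip_c=1.0))
```

In this sketch, only the low-Fisher coordinates are noised, which is what distinguishes the granular scheme from perturbing the whole gradient or a random subset of coordinates.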