FedKRec: Privacy-preserving Federated Learning for Recommendation Based on Anonymity
A recommendation system based on federated learning distributes model training across multiple local devices without sharing user data with the server, thereby protecting the privacy of user data. Most existing methods broadcast the item feature matrix from the server to each user, who computes the loss locally and uploads the item gradients back to the server, which risks leaking the user's interests and preferences. To address this issue, this article proposes FedKRec, a federated learning recommendation algorithm based on anonymization that avoids such privacy breaches. Inspired by the idea of k-anonymity, FedKRec hides the gradient of the (private) positive samples among the gradients of K static negative samples when uploading gradient information to the server. Firstly, analysis of real datasets shows that the category distribution of positive-sample items can leak a user's interest preferences, so we propose an adaptive negative sampling method that takes item category balance into account. Secondly, because the gradient magnitudes of positive and negative samples differ significantly, positive samples are easily exposed; we therefore add a controlled amount of Gaussian noise to the gradients of both positive and negative samples, preventing an attacker from accurately identifying the positive samples. Finally, we theoretically prove that, in terms of the item category distribution, the noised set of positive and negative samples does not reveal user preferences. Experimental results on multiple public datasets show that the proposed FedKRec algorithm achieves recommendation performance comparable to traditional methods while effectively protecting user privacy.
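To make the abstract's upload mechanism concrete, the sketch below illustrates the general idea on the client side: sample K negative items with category balance in mind, then mix the positive-item gradient with the negative-item gradients and perturb all of them with Gaussian noise before uploading. This is a minimal illustration under assumed conventions, not the paper's implementation; the helpers `category_balanced_negatives` and `anonymized_item_gradients`, the embedding dimension, and the noise scale `sigma` are all hypothetical choices made for the example.

```python
import numpy as np

def category_balanced_negatives(pos_item, item_categories, k, rng):
    """Sample k negative item ids so that the category mix of the uploaded
    set does not mirror the user's (private) positive-item categories.
    Hypothetical helper for illustration only."""
    categories = sorted(set(item_categories.values()))
    negatives = []
    while len(negatives) < k:
        cat = rng.choice(categories)  # pick a category uniformly at random
        candidates = [i for i, c in item_categories.items()
                      if c == cat and i != pos_item and i not in negatives]
        if candidates:
            negatives.append(int(rng.choice(candidates)))
    return negatives

def anonymized_item_gradients(pos_item, pos_grad, neg_grads, sigma, rng):
    """Mix the positive-item gradient with the k negative-item gradients and
    add Gaussian noise to every entry, so the server cannot single out the
    positive sample by gradient magnitude."""
    uploads = {pos_item: pos_grad}
    uploads.update(neg_grads)  # dict: item_id -> gradient vector
    return {item: g + rng.normal(0.0, sigma, size=g.shape)
            for item, g in uploads.items()}

# Toy usage: one positive item, K = 4 negatives, 8-dimensional item embeddings.
rng = np.random.default_rng(0)
item_categories = {i: i % 3 for i in range(20)}            # 20 items, 3 categories
pos_item = 5
negs = category_balanced_negatives(pos_item, item_categories, k=4, rng=rng)
pos_grad = rng.normal(0.0, 1.0, size=8)                    # large positive gradient
neg_grads = {i: rng.normal(0.0, 0.1, size=8) for i in negs}  # small negative gradients
payload = anonymized_item_gradients(pos_item, pos_grad, neg_grads, sigma=0.5, rng=rng)
# `payload` is what a client would upload: K+1 noisy item gradients whose
# category mix and magnitudes no longer single out the positive item.
```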