Lagrangian Dual-based Privacy Protection and Fairness Constrained Method for Few-shot Learning
Few-shot learning aims to achieve strong model performance from only a small amount of training data, and it is an important approach for addressing the privacy and fairness issues raised by sensitive data in neural network models. In few-shot learning, training a neural network carries privacy and fairness risks, because small-sample datasets often contain sensitive data, and such sensitive data may be discriminatory. In addition, in many domains data is difficult or impossible to access for reasons such as privacy or security. Moreover, in differentially private models the introduction of noise not only reduces model utility but also causes an imbalance in model fairness. To address these challenges, this paper proposes a sample-level adaptive privacy filtering algorithm based on the Rényi differential privacy filter, which exploits Rényi differential privacy to compute the privacy loss more accurately. Furthermore, it proposes a Lagrangian dual-based privacy and fairness constraint algorithm, which adds a differential privacy constraint and a fairness constraint to the objective function via the Lagrangian method and introduces Lagrange multipliers to balance these constraints. The Lagrangian multiplier method transforms the constrained objective into a dual problem, so that privacy and fairness are optimised jointly and a balance between them is achieved through the Lagrangian function. It is shown that the proposed method improves model performance while ensuring the privacy and fairness of the model.
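As a rough sketch of the constrained objective described above (the notation here is illustrative and not taken from the paper), the training loss is minimised subject to a Rényi differential privacy budget and a fairness tolerance, and the Lagrangian relaxation turns this into a min-max (dual) problem over the multipliers:

\[
\min_{\theta}\ \max_{\lambda \ge 0,\, \mu \ge 0}\quad
\mathcal{L}(\theta)
\;+\; \lambda\,\bigl(\varepsilon_{\mathrm{RDP}}(\theta) - \varepsilon^{*}\bigr)
\;+\; \mu\,\bigl(F(\theta) - \tau\bigr),
\]

where \(\mathcal{L}(\theta)\) denotes the training loss, \(\varepsilon_{\mathrm{RDP}}(\theta)\) the cumulative Rényi differential privacy loss tracked by the sample-level filter, \(\varepsilon^{*}\) the privacy budget, \(F(\theta)\) a fairness-violation measure (for example, a demographic-parity gap), \(\tau\) its tolerance, and \(\lambda, \mu\) the Lagrange multipliers that balance the two constraints.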
Few-shot learning; Privacy and fairness; Rényi differential privacy; Fairness constraint; Lagrangian dual