
FedRDF: A Robust and Dynamic Aggregation Function Against Poisoning Attacks in Federated Learning

Federated Learning (FL) is a promising approach to addressing the privacy concerns typically associated with centralized Machine Learning (ML) deployments. Despite its well-known advantages, FL is vulnerable to security attacks such as Byzantine behaviors and poisoning attacks, which can significantly degrade model performance and hinder convergence. Existing defenses against complex attacks, such as the median, trimmed-mean, or Krum aggregation functions, have been shown to be effective only partially and only against specific attacks. Our study introduces a novel robust aggregation mechanism based on the Fourier Transform (FT) that can effectively handle sophisticated attacks without prior knowledge of the number of attackers. With this technique, the weights generated by FL clients are projected into the frequency domain to estimate their density function, and the value exhibiting the highest frequency is selected; consequently, the weights of malicious clients are excluded. Our proposed approach was tested against various model poisoning attacks, demonstrating superior performance over state-of-the-art aggregation methods.
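As a rough illustration of the mechanism described in the abstract, the Python sketch below treats each coordinate of the clients' weight vectors separately, smooths its histogram with a Gaussian kernel applied in the Fourier domain, and keeps the value at which the estimated density peaks, so outlying (poisoned) weights are ignored without knowing the number of attackers. The function names (`fft_kde_mode`, `aggregate`), the coordinate-wise treatment, the bandwidth rule, and the toy data are assumptions made for illustration only; this is not the FedRDF implementation from the paper.

```python
# Hypothetical sketch: FFT-smoothed density (mode) aggregation of client weights.
# All names and parameters here are illustrative assumptions, not the paper's code.
import numpy as np

def fft_kde_mode(values, grid_size=256, bandwidth=None):
    """Return the mode of 1-D `values`, estimated from a histogram that is
    smoothed by a Gaussian kernel applied in the frequency (Fourier) domain."""
    values = np.asarray(values, dtype=float)
    lo, hi = values.min(), values.max()
    if hi == lo:                       # every client reports the same value
        return lo
    if bandwidth is None:              # simple Silverman-style rule of thumb (assumed)
        bandwidth = 1.06 * values.std() * len(values) ** (-1 / 5)
    pad = 3.0 * bandwidth              # padding reduces circular-convolution wrap-around
    hist, edges = np.histogram(values, bins=grid_size, range=(lo - pad, hi + pad))
    centers = 0.5 * (edges[:-1] + edges[1:])
    step = centers[1] - centers[0]
    freqs = np.fft.rfftfreq(grid_size, d=step)
    kernel_ft = np.exp(-0.5 * (2.0 * np.pi * freqs * bandwidth) ** 2)  # FT of a Gaussian
    density = np.fft.irfft(np.fft.rfft(hist) * kernel_ft, n=grid_size)
    return centers[np.argmax(density)]  # value exhibiting the highest frequency (density)

def aggregate(client_weights):
    """Aggregate flattened client weight vectors coordinate-wise by keeping,
    for each coordinate, the densest value; outlying weights are thereby ignored."""
    stacked = np.stack(client_weights)             # shape: (n_clients, n_params)
    return np.array([fft_kde_mode(stacked[:, j]) for j in range(stacked.shape[1])])

# Toy example: 8 honest clients near 1.0 and 2 poisoned clients pushing towards 10.0.
rng = np.random.default_rng(0)
honest = [rng.normal(1.0, 0.05, size=4) for _ in range(8)]
poisoned = [np.full(4, 10.0) for _ in range(2)]
print(aggregate(honest + poisoned))                # stays close to the honest cluster
```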

Training; Servers; Data models; Computational modeling; Aggregates; Fourier transforms; Mathematical models; Frequency-domain analysis; Federated learning; Convergence

Enrique Mármol Campos, Aurora Gonzalez-Vidal, José L. Hernández-Ramos, Antonio Skarmeta


University of Murcia, Murcia, Spain

2025

IEEE Transactions on Emerging Topics in Computing

ISSN:
Year, Volume (Issue): 2025, 13(1)