
Introduction to the Special Issue on Performance Evaluation of Federated Learning Systems Part 1

Federated learning has recently emerged as a popular approach to training machine learning models on data that is scattered across multiple heterogeneous devices, often referred to as “clients” in a federated learning system. These clients iteratively compute updates to the machine learning models on their local datasets. These updates are periodically aggregated across clients, typically but not always with the help of a parameter server. The aggregated model then serves as the starting point for new rounds of client updates. In many real-world applications such as connected-and-autonomous vehicles (CAVs), the underlying distributed/decentralized systems on which federated learning algorithms execute suffer from a wide degree of heterogeneity, including but not limited to data distributions, computation speeds, and external local environments. Moreover, the clients are often resource-constrained edge or end devices and may compete for common resources such as communication bandwidth. Such heterogeneity raises significant research questions on how these systems will perform under different variants of federated learning algorithms.
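
To make the training loop described above concrete, the following is a minimal sketch of one federated averaging (FedAvg-style) round: each client performs local gradient steps on its own data, and the server aggregates the resulting models weighted by local dataset size. The function names (`local_update`, `federated_round`) and the toy linear-regression clients are illustrative assumptions, not code from any paper in this issue.

```python
import numpy as np

def local_update(model, data, lr=0.1, epochs=1):
    """One client's local training pass: gradient descent on a squared-error loss."""
    X, y = data
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ model - y) / len(y)  # gradient of the mean squared error
        model = model - lr * grad
    return model

def federated_round(global_model, client_datasets):
    """One FedAvg-style round: clients train locally, then the server averages
    their models weighted by local dataset size."""
    client_models, sizes = [], []
    for data in client_datasets:
        client_models.append(local_update(global_model.copy(), data))
        sizes.append(len(data[1]))
    weights = np.array(sizes) / sum(sizes)
    # The weighted average serves as the starting point for the next round.
    return sum(w * m for w, m in zip(weights, client_models))

# Toy example: three clients with heterogeneous data distributions and dataset sizes.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for n, shift in [(50, 0.0), (200, 1.0), (20, -1.0)]:
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = X @ true_w + rng.normal(0.0, 0.1, size=n)
    clients.append((X, y))

global_model = np.zeros(2)
for _ in range(20):
    global_model = federated_round(global_model, clients)
print(global_model)  # approaches true_w despite the clients' differing distributions
```

Even in this small sketch, the per-client data shifts and unequal dataset sizes hint at the heterogeneity issues (statistical and system-level) that the articles in this special issue evaluate at much larger scale.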

Carlee Joe-Wong, Lili Su


Carnegie Mellon University, Pittsburgh, United States

Northeastern University, Boston, United States

2025

ACM Transactions on Modeling and Performance Evaluation of Computing Systems