3260 papers • 126 benchmarks • 313 datasets
The federated learning setup presents numerous challenges, including data heterogeneity (differences in data distribution), device heterogeneity (in terms of computation capability, network connectivity, etc.), and communication efficiency. Data heterogeneity in particular makes it hard to learn a single shared global model that fits all clients. To overcome these issues, Personalized Federated Learning (PFL) aims to personalize the global model for each client in the federation.
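As a concrete baseline, the simplest form of personalization is to train a global model with FedAvg and then fine-tune it locally on each client. The sketch below illustrates this on synthetic linear-regression clients; all names, hyperparameters, and the toy data are illustrative and not taken from any particular paper or library.

```python
# A minimal sketch of the simplest personalization baseline: train a global
# model with FedAvg, then fine-tune it locally on each client.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic heterogeneous clients: each has its own ground-truth weights.
clients = []
for _ in range(5):
    w_true = rng.normal(size=3)
    X = rng.normal(size=(50, 3))
    y = X @ w_true + 0.1 * rng.normal(size=50)
    clients.append((X, y))

def grad(w, X, y):
    # Gradient of the mean-squared error of a linear model.
    return 2 * X.T @ (X @ w - y) / len(y)

w_global = np.zeros(3)
for _ in range(100):                        # communication rounds
    updates = []
    for X, y in clients:
        w = w_global.copy()
        for _ in range(5):                  # local gradient steps
            w -= 0.05 * grad(w, X, y)
        updates.append(w)
    w_global = np.mean(updates, axis=0)     # FedAvg aggregation

# Personalization: each client fine-tunes the global model on its own data.
personalized = []
for X, y in clients:
    w = w_global.copy()
    for _ in range(20):
        w -= 0.05 * grad(w, X, y)
    personalized.append(w)
```

Most of the methods summarized below can be read as more principled replacements for this naive fine-tuning step.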
From an information-theoretic perspective, it is proved that a mixture of local and global models can reduce the generalization error, and a communication-reduced bilevel optimization method is proposed that cuts the number of communication rounds to $O(\sqrt{T})$ while achieving a convergence rate of $O(1/T)$ up to some residual error.
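A minimal sketch of the local-global mixture idea (omitting the paper's bilevel and communication-reduction machinery): each client's personalized parameters are a convex combination $\alpha w_{\text{local}} + (1-\alpha) w_{\text{global}}$, and the gradient at the mixture is split between the two components by the chain rule. The mixing weight, step sizes, and data below are illustrative assumptions.

```python
# A toy version of the local-global mixture: each client's personalized model
# is alpha * w_local + (1 - alpha) * w_global; gradients at the mixture update
# both the private local component and the shared global component.
import numpy as np

rng = np.random.default_rng(1)
d, alpha = 3, 0.5

clients = []
for _ in range(4):
    w_true = rng.normal(size=d)
    X = rng.normal(size=(40, d))
    y = X @ w_true + 0.1 * rng.normal(size=40)
    clients.append({"X": X, "y": y, "w_local": np.zeros(d)})

def grad(w, X, y):
    return 2 * X.T @ (X @ w - y) / len(y)

w_global = np.zeros(d)
for _ in range(200):
    global_grads = []
    for c in clients:
        w_mix = alpha * c["w_local"] + (1 - alpha) * w_global
        g = grad(w_mix, c["X"], c["y"])
        c["w_local"] -= 0.05 * alpha * g        # local component (chain rule)
        global_grads.append((1 - alpha) * g)    # contribution to shared model
    w_global -= 0.05 * np.mean(global_grads, axis=0)
```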
This work proposes an algorithm for personalized FL (pFedMe) that uses Moreau envelopes as the clients' regularized loss functions, which helps decouple personalized-model optimization from global-model learning in a bilevel problem tailored to personalized FL.
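The bilevel structure can be sketched as follows: each client approximately solves the proximal (Moreau-envelope) subproblem $\hat\theta_i(w) = \arg\min_\theta f_i(\theta) + \frac{\lambda}{2}\|\theta - w\|^2$ around the current global model $w$, and the server moves $w$ using the envelope gradient $\lambda(w - \hat\theta_i(w))$. The code below is a toy reconstruction under those definitions, not the reference implementation; all constants are illustrative.

```python
# A minimal sketch of the pFedMe-style bilevel structure: each client solves a
# proximal subproblem around the global model w; the server steps w toward the
# resulting personalized models using the Moreau-envelope gradient.
import numpy as np

rng = np.random.default_rng(2)
d, lam = 3, 15.0

clients = []
for _ in range(4):
    w_true = rng.normal(size=d)
    X = rng.normal(size=(40, d))
    y = X @ w_true + 0.1 * rng.normal(size=40)
    clients.append((X, y))

def grad(w, X, y):
    return 2 * X.T @ (X @ w - y) / len(y)

w = np.zeros(d)
for _ in range(100):
    thetas = []
    for X, y in clients:
        theta = w.copy()
        for _ in range(10):   # approximately solve the proximal subproblem
            theta -= 0.02 * (grad(theta, X, y) + lam * (theta - w))
        thetas.append(theta)
    # Envelope gradient: lam * (w - theta_i(w)); average across clients, step.
    w -= 0.05 * lam * (w - np.mean(thetas, axis=0))
```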
This work identifies that robustness to data and model poisoning attacks and fairness, measured as the uniformity of performance across devices, are competing constraints in statistically heterogeneous networks, and proposes Ditto, a simple, general framework that can inherently provide both fairness and robustness benefits.
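Structurally, Ditto can be sketched as plain FedAvg for the global model plus, per client, a personal model trained on the local loss with a proximal term pulling it toward the global model; the regularization weight interpolates between a purely local model ($\lambda \to 0$) and the global model ($\lambda \to \infty$), which is where the fairness/robustness trade-off is controlled. A rough sketch with illustrative constants:

```python
# A hedged sketch of the Ditto-style objective: the global model w is trained
# with plain FedAvg, while each client k also keeps a personal model v_k
# trained on its own loss plus (lam / 2) * ||v_k - w||^2.
import numpy as np

rng = np.random.default_rng(2)
d, lam = 3, 1.0

clients = []
for _ in range(4):
    w_true = rng.normal(size=d)
    X = rng.normal(size=(40, d))
    y = X @ w_true + 0.1 * rng.normal(size=40)
    clients.append({"X": X, "y": y, "v": np.zeros(d)})

def grad(w, X, y):
    return 2 * X.T @ (X @ w - y) / len(y)

w = np.zeros(d)
for _ in range(100):
    updates = []
    for c in clients:
        # Global branch: ordinary local SGD starting from w (FedAvg).
        wk = w.copy()
        for _ in range(5):
            wk -= 0.05 * grad(wk, c["X"], c["y"])
        updates.append(wk)
        # Personal branch: SGD on the proximally regularized local objective.
        for _ in range(5):
            c["v"] -= 0.05 * (grad(c["v"], c["X"], c["y"]) + lam * (c["v"] - w))
    w = np.mean(updates, axis=0)
```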
This work proposes to study federated multi-task learning (MTL) under the flexible assumption that each local data distribution is a mixture of unknown underlying distributions, an assumption that encompasses most existing personalized FL approaches and leads to federated EM-like algorithms for both client-server and fully decentralized settings.
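An EM-like scheme under this mixture assumption might look like the following: the federation maintains $M$ shared component models; each client computes per-sample responsibilities over the components (E-step) and contributes responsibility-weighted updates (M-step). This is a hedged toy version on synthetic linear data, not the paper's algorithm.

```python
# A toy federated EM-like loop: M shared component models; clients compute
# per-sample responsibilities (E-step) and responsibility-weighted gradient
# proposals (M-step) that the server averages.
import numpy as np

rng = np.random.default_rng(3)
d, M = 3, 2

base = [rng.normal(size=d) for _ in range(M)]   # underlying distributions
clients = []
for _ in range(4):
    pi = rng.dirichlet(np.ones(M))              # client-specific mixture weights
    comp = rng.choice(M, size=60, p=pi)
    X = rng.normal(size=(60, d))
    y = np.sum(X * np.array(base)[comp], axis=1) + 0.1 * rng.normal(size=60)
    clients.append((X, y))

models = [rng.normal(size=d) * 0.3 for _ in range(M)]   # break symmetry
for _ in range(150):
    proposals = [[] for _ in range(M)]
    for X, y in clients:
        # E-step: responsibilities from per-component squared residuals.
        res = np.stack([(y - X @ m) ** 2 for m in models], axis=1)  # (n, M)
        res -= res.min(axis=1, keepdims=True)                       # stabilize
        q = np.exp(-res)
        q /= q.sum(axis=1, keepdims=True)
        # M-step: one responsibility-weighted gradient step per component.
        for m in range(M):
            g = 2 * X.T @ (q[:, m] * (X @ models[m] - y)) / len(y)
            proposals[m].append(models[m] - 0.1 * g)
    models = [np.mean(p, axis=0) for p in proposals]
```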
It is proved that this shared-representation method obtains linear convergence to the ground-truth representation with near-optimal sample complexity in a linear setting, demonstrating that it can efficiently reduce the problem dimension for each client.
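In the linear setting referred to above, a common formalization is $y \approx X B h_i$ with a representation matrix $B$ shared across all clients and a low-dimensional head $h_i$ per client, learned by alternating minimization with server-side averaging of $B$. The sketch below follows that assumed formalization; step sizes, the QR retraction, and the data are illustrative.

```python
# A toy alternating-minimization loop for a shared linear representation B:
# clients fit their own low-dimensional heads exactly, take a gradient step
# on B, and the server averages and re-orthonormalizes B.
import numpy as np

rng = np.random.default_rng(4)
d, k = 10, 2
B_true = np.linalg.qr(rng.normal(size=(d, k)))[0]

clients = []
for _ in range(6):
    head = rng.normal(size=k)
    X = rng.normal(size=(80, d))
    y = X @ B_true @ head + 0.05 * rng.normal(size=80)
    clients.append({"X": X, "y": y, "head": np.zeros(k)})

B = np.linalg.qr(rng.normal(size=(d, k)))[0]
for _ in range(300):
    B_updates = []
    for c in clients:
        X, y = c["X"], c["y"]
        Z = X @ B
        # Local step: exact least squares for the client head, B frozen.
        c["head"] = np.linalg.lstsq(Z, y, rcond=None)[0]
        # Representation step: gradient of the squared error w.r.t. B.
        r = Z @ c["head"] - y
        gB = 2 * X.T @ np.outer(r, c["head"]) / len(y)
        B_updates.append(B - 0.05 * gB)
    B = np.mean(B_updates, axis=0)      # server averages the representation
    B = np.linalg.qr(B)[0]              # keep columns orthonormal (retraction)
```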
This work efficiently calculates optimal weighted combinations of models for each client, based on estimating how much a client can benefit from another client's model, to achieve personalization in federated learning.
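One way to instantiate "how much a client can benefit from another's model" is to evaluate each other client's model on one's own data and weight it by the clipped loss improvement per unit of parameter distance. The sketch below uses that heuristic; the exact weighting rule and the toy setup are assumptions for illustration, not the paper's precise formula.

```python
# A toy version of benefit-based weighting: each client moves toward other
# clients' models in proportion to how much they reduce its own loss.
import numpy as np

rng = np.random.default_rng(5)
d = 3

clients = []
for _ in range(4):
    w_true = rng.normal(size=d)
    X = rng.normal(size=(40, d))
    y = X @ w_true + 0.1 * rng.normal(size=40)
    clients.append({"X": X, "y": y, "w": 0.1 * rng.normal(size=d)})

def loss(w, c):
    return np.mean((c["X"] @ w - c["y"]) ** 2)

def grad(w, c):
    return 2 * c["X"].T @ (c["X"] @ w - c["y"]) / len(c["y"])

for _ in range(50):
    for c in clients:                           # local training step
        c["w"] -= 0.05 * grad(c["w"], c)
    snaps = [c["w"].copy() for c in clients]    # models exchanged this round
    for i, c in enumerate(clients):
        base = loss(snaps[i], c)
        weights, deltas = [], []
        for j, wj in enumerate(snaps):
            if j == i:
                continue
            gain = base - loss(wj, c)           # does j's model help client i?
            dist = np.linalg.norm(wj - snaps[i]) + 1e-8
            weights.append(max(gain, 0.0) / dist)
            deltas.append(wj - snaps[i])
        total = sum(weights)
        if total > 0:                           # normalized combination step
            c["w"] = snaps[i] + sum(
                a / total * dlt for a, dlt in zip(weights, deltas)
            )
```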
The Federated Conditional Policy (FedCP) method is proposed, which generates a conditional policy for each sample to separate the global information and personalized information in its features, and then processes them with a global head and a personalized head, respectively.
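A forward-pass-only sketch of this routing idea: a small policy module produces a per-sample gate that splits a feature vector into a "global" part sent to a shared head and a "personalized" part sent to a client-specific head. The sigmoid gate, the linear heads, and all shapes below are illustrative assumptions, not FedCP's actual architecture.

```python
# A forward-pass sketch of per-sample feature routing: a policy module gates
# each feature vector into a global part and a personalized part, which are
# processed by a shared head and a client-specific head, respectively.
import numpy as np

rng = np.random.default_rng(6)
feat_dim, n_cls = 8, 3

W_policy = rng.normal(size=(feat_dim, feat_dim)) * 0.1   # policy parameters
W_global = rng.normal(size=(feat_dim, n_cls)) * 0.1      # shared head
W_personal = rng.normal(size=(feat_dim, n_cls)) * 0.1    # client-specific head

def forward(h):
    s = 1 / (1 + np.exp(-(h @ W_policy)))   # per-sample gate in (0, 1)
    logits_g = (s * h) @ W_global           # global information -> shared head
    logits_p = ((1 - s) * h) @ W_personal   # personalized information -> local head
    return logits_g + logits_p

h = rng.normal(size=(5, feat_dim))          # a batch of feature vectors
print(forward(h).shape)                     # (5, 3)
```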
This work proposes a new pFL method, named GPFL, that simultaneously learns global and personalized feature information on each client, and shows the superiority of GPFL over ten state-of-the-art methods in terms of effectiveness, scalability, fairness, stability, and privacy.
This work conducts explicit local-global feature alignment by leveraging global semantic knowledge to learn a better representation for each client, quantifies each client's benefit as a function of the combining weights, and derives an optimization problem for estimating the optimal weights.
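A minimal sketch of one plausible alignment penalty: the server aggregates per-class feature centroids, and each client adds a term pulling its own class-conditional feature means toward those global centroids. The function below is an illustrative assumption about what such a penalty could look like, not the paper's exact loss.

```python
# A toy local-global feature alignment penalty: squared distance between each
# client's per-class feature means and server-aggregated global centroids.
import numpy as np

def alignment_loss(features, labels, global_centroids):
    # features: (n, d) local representations; labels: (n,) class ids;
    # global_centroids: (C, d) per-class centroids aggregated on the server.
    loss = 0.0
    for c in np.unique(labels):
        local_mean = features[labels == c].mean(axis=0)
        loss += np.sum((local_mean - global_centroids[c]) ** 2)
    return loss

rng = np.random.default_rng(7)
feats = rng.normal(size=(20, 4))
labs = rng.integers(0, 3, size=20)
cents = rng.normal(size=(3, 4))
print(alignment_loss(feats, labs, cents))
```

In training, a client would add this penalty (scaled by a coefficient) to its supervised loss so that local representations stay semantically consistent across the federation.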