The goal of Distributed Optimization is to optimize an objective defined over millions or even billions of data points that are distributed across many machines, by exploiting the computational power of those machines. Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
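As a concrete illustration of the setting, here is a minimal sketch (plain NumPy, with hypothetical names such as `local_gradient` and `distributed_gd`) of synchronous distributed gradient descent: each machine computes a gradient on its own data shard and a coordinator averages the results. It is not tied to any particular paper listed below.

```python
# Minimal sketch of synchronous distributed gradient descent (illustrative only).
# Each "worker" holds a shard of the data; a coordinator averages their gradients.
import numpy as np

def local_gradient(w, X, y):
    # Least-squares gradient 0.5*||Xw - y||^2 / n computed on one machine's shard
    return X.T @ (X @ w - y) / len(y)

def distributed_gd(shards, dim, lr=0.1, rounds=100):
    w = np.zeros(dim)
    for _ in range(rounds):
        # In a real deployment each gradient is computed on a separate machine
        grads = [local_gradient(w, X, y) for X, y in shards]
        w -= lr * np.mean(grads, axis=0)   # coordinator averages and applies the update
    return w

# Toy usage: split a synthetic regression problem across 4 "machines"
rng = np.random.default_rng(0)
X, y = rng.normal(size=(400, 5)), rng.normal(size=400)
shards = list(zip(np.split(X, 4), np.split(y, 4)))
w_star = distributed_gd(shards, dim=5)
```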
This work obtains tight convergence rates for FedAvg and proves that it suffers from "client drift" when the data is heterogeneous (non-iid), resulting in unstable and slow convergence; it proposes a new algorithm, SCAFFOLD, which uses control variates (variance reduction) to correct for client drift in its local updates.
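A minimal sketch of the control-variate idea behind SCAFFOLD, under simplifying assumptions (full client participation, server step size 1, and the "option II"-style control-variate refresh); the function and variable names are illustrative, not the authors' code.

```python
# Sketch of one SCAFFOLD-style communication round with control variates.
import numpy as np

def scaffold_round(w_server, c_server, clients, lr=0.1, local_steps=10):
    """clients: list of dicts {"grad": stochastic gradient fn, "c": control variate}."""
    deltas_w, deltas_c = [], []
    for client in clients:
        w, c_i = w_server.copy(), client["c"]
        for _ in range(local_steps):
            # Local step corrected by the control variates: g_i(w) - c_i + c
            w -= lr * (client["grad"](w) - c_i + c_server)
        # Refresh the client control variate from the local progress
        c_i_new = c_i - c_server + (w_server - w) / (local_steps * lr)
        deltas_w.append(w - w_server)
        deltas_c.append(c_i_new - c_i)
        client["c"] = c_i_new                  # client keeps its updated control variate
    # Server aggregates model and control-variate deltas
    return w_server + np.mean(deltas_w, axis=0), c_server + np.mean(deltas_c, axis=0)

# Toy usage: two heterogeneous quadratic clients f_i(w) = 0.5*||w - t_i||^2
targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
clients = [{"grad": (lambda w, t=t: w - t), "c": np.zeros(2)} for t in targets]
w, c = np.zeros(2), np.zeros(2)
for _ in range(50):
    w, c = scaffold_round(w, c, clients)
print(w)   # approaches the average of the targets, [0.5, 0.5]
```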
This work introduces FedProx, a framework for tackling heterogeneity in federated networks, and provides convergence guarantees when learning over data from non-identical distributions (statistical heterogeneity) while adhering to device-level systems constraints by allowing each participating device to perform a variable amount of work.
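A minimal sketch of a FedProx-style local solver, assuming a simple gradient-descent inner loop: each device approximately minimizes its local loss plus a proximal term (mu/2)*||w - w_global||^2 and may stop early, which models the variable amount of local work; the constants and stopping rule here are illustrative choices.

```python
import numpy as np

def fedprox_local_update(w_global, grad_fn, mu=0.1, lr=0.05, max_steps=20):
    """Approximately minimize F_i(w) + (mu/2)*||w - w_global||^2 on one device."""
    w = w_global.copy()
    for _ in range(max_steps):
        g = grad_fn(w) + mu * (w - w_global)   # local gradient + proximal-term gradient
        w -= lr * g
        if np.linalg.norm(g) < 1e-3:           # device may stop early (inexact solve)
            break
    return w

def fedprox_round(w_global, client_grad_fns, **kwargs):
    # One communication round: average whatever the participating devices return
    updates = [fedprox_local_update(w_global, g, **kwargs) for g in client_grad_fns]
    return np.mean(updates, axis=0)
```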
The ZOOpt toolbox provides efficient derivative-free solvers that are designed to be easy to use; it focuses in particular on optimization problems in machine learning, addressing high-dimensional, noisy, and large-scale problems.
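A short usage example in the style of ZOOpt's documented quick start (the class and method names Dimension, Objective, Parameter, and Opt.min follow the project's README, but exact signatures may differ across versions), minimizing a noisy high-dimensional objective without gradients:

```python
from zoopt import Dimension, Objective, Parameter, Opt
import numpy as np

def noisy_sphere(solution):
    # The objective receives a ZOOpt Solution; no gradients are ever computed
    x = np.array(solution.get_x())
    return float(np.sum(x ** 2) + 0.01 * np.random.randn())

dim_size = 100                                                       # high-dimensional search space
dim = Dimension(dim_size, [[-1, 1]] * dim_size, [True] * dim_size)   # continuous box constraints
solution = Opt.min(Objective(noisy_sphere, dim), Parameter(budget=100 * dim_size))
print(solution.get_value())
```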
This work proposes a novel protocol for secure (Byzantine-tolerant) decentralized training that emphasizes communication efficiency; a rigorous analysis of the protocol provides theoretical bounds on its resistance to Byzantine and Sybil attacks and shows that it incurs only marginal communication overhead.
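The protocol itself is not reproduced here; as a generic illustration of why robust aggregation matters in the Byzantine setting (not the paper's method), a coordinate-wise median of peer updates tolerates a minority of arbitrarily corrupted contributions where a plain mean does not:

```python
import numpy as np

def robust_aggregate(updates):
    """Coordinate-wise median of parameter vectors received from peers."""
    return np.median(np.stack(updates), axis=0)

honest = [np.full(4, 0.1) for _ in range(5)]
byzantine = [np.full(4, 1e6) for _ in range(2)]       # a minority of malicious peers
print(robust_aggregate(honest + byzantine))           # stays at 0.1 per coordinate
print(np.mean(np.stack(honest + byzantine), axis=0))  # a plain mean is ruined
```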
This research presents a novel probabilistic approach to estimating the response of the immune system to laser-spot assisted, 3D image analysis of central nervous system injury.
This work presents CoCoA, a general-purpose framework for distributed computing environments with an efficient communication scheme that is applicable to a wide variety of problems in machine learning and signal processing; the framework is extended to cover general non-strongly-convex regularizers, including L1-regularized problems such as the lasso.
This work derives a procedure that allows for learning from all available sources, yet automatically suppresses irrelevant or corrupted data, and shows that this method provides significant improvements over alternative approaches from robust statistics and distributed optimization.
This paper shows that, for loss functions satisfying the Polyak-Łojasiewicz condition, $O((pT)^{1/3})$ rounds of communication suffice to achieve a linear speedup, that is, an error of $O(1/pT)$, where $p$ is the number of workers and $T$ is the total number of model updates at each worker.
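For reference, the Polyak-Łojasiewicz (PL) condition in its standard form (stated generically, not as the paper's exact assumptions): a differentiable function $f$ with minimum value $f^*$ satisfies the PL condition with constant $\mu > 0$ if
$$\tfrac{1}{2}\,\|\nabla f(w)\|^2 \;\ge\; \mu\,\bigl(f(w) - f^*\bigr) \quad \text{for all } w.$$
Under such a condition, a linear speedup with $p$ workers and $T$ updates per worker means the optimization error decays as $O(1/pT)$.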
This work proposes FedDANE, an optimization method that is adapted from DANE, a method for classical distributed optimization, to handle the practical constraints of federated learning, and provides convergence guarantees for this method when learning over both convex and non-convex functions.
A new relay-style execution technique called L2L (layer-to-layer) is presented, in which, at any given moment, device memory is populated primarily with the footprint of the executing layer(s); the approach also introduces a new form of mixed precision for faster throughput and convergence.
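A hedged sketch of the layer-to-layer memory pattern (PyTorch, inference-only, with an arbitrary stack of linear layers): only the layer currently executing resides on the accelerator, and it is evicted before the next layer is loaded. The relay schedule, the backward pass, and the paper's mixed-precision scheme are not modeled here.

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
layers = nn.ModuleList([nn.Linear(1024, 1024) for _ in range(48)])  # full model lives on the host

@torch.no_grad()
def l2l_forward(x):
    x = x.to(device)
    for layer in layers:
        layer.to(device)           # device memory holds (roughly) one layer's footprint
        x = torch.relu(layer(x))
        layer.to("cpu")            # evict before the next layer is streamed in
    return x

out = l2l_forward(torch.randn(8, 1024))
```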