Linear Mode Connectivity refers to the phenomenon where two trained neural networks (for example, two SGD solutions trained from the same initialization, or two independently trained networks after aligning their hidden units) can be connected by a straight line in weight space along which the loss stays low. Two solutions are said to be linearly mode connected when the loss barrier — the maximum increase in loss along the linear interpolation between them, relative to the endpoints — is close to zero. Studying linear mode connectivity sheds light on the structure of neural network loss landscapes and underpins work on the lottery ticket hypothesis, model merging, distributed training, and ensembling.
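A minimal sketch of how the loss barrier is typically measured, assuming two state dicts from models with identical architectures and a user-supplied `evaluate(model)` function (hypothetical name) that returns the loss on held-out data:

```python
def interpolate_state(sd_a, sd_b, alpha):
    """Linear interpolation of two state dicts: (1 - alpha) * A + alpha * B.

    Assumes all entries are floating-point tensors (integer buffers such as
    BatchNorm's num_batches_tracked would need to be skipped)."""
    return {k: (1 - alpha) * sd_a[k] + alpha * sd_b[k] for k in sd_a}

def loss_barrier(model, sd_a, sd_b, evaluate, steps=11):
    """Maximum loss along the linear path, minus the mean endpoint loss.

    A barrier near zero means the two solutions are linearly mode connected."""
    losses = []
    for i in range(steps):
        alpha = i / (steps - 1)
        model.load_state_dict(interpolate_state(sd_a, sd_b, alpha))
        losses.append(evaluate(model))
    return max(losses) - 0.5 * (losses[0] + losses[-1])
```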
These leaderboards are used to track progress in Linear Mode Connectivity.
No benchmarks available.
Use these libraries to find Linear Mode Connectivity models and implementations.
No datasets available.
No subtasks available.
This work finds that standard vision models become stable to SGD noise early in training, in the sense that two copies trained onward from the same checkpoint with different data orders remain linearly connected with a low loss barrier, and uses this criterion to study iterative magnitude pruning (IMP), the procedure used in lottery ticket hypothesis work to identify subnetworks that could have trained in isolation to full accuracy.
This work argues that neural network loss landscapes often contain (nearly) a single basin once all permutation symmetries of hidden units are accounted for, and introduces three algorithms that permute the units of one model into alignment with a reference model so that the two can be merged in weight space.
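The alignment step can be illustrated with a small weight-matching sketch. The Hungarian assignment below stands in for the matching algorithms described above, and `match_units` / `apply_permutation` are hypothetical helper names:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_units(w_ref, w_other):
    """Find the permutation of w_other's hidden units that best matches w_ref.

    Both arguments are (hidden, in) weight matrices of one layer."""
    similarity = w_ref @ w_other.T              # (hidden, hidden) unit similarities
    _, perm = linear_sum_assignment(-similarity)  # maximize total similarity
    return perm                                  # w_other[perm[i]] matches w_ref unit i

def apply_permutation(w_in, b_in, w_out, perm):
    """Permute a layer's units and the next layer's input columns so the
    network computes exactly the same function."""
    return w_in[perm], b_in[perm], w_out[:, perm]
```

After aligning every hidden layer of one model this way, the two weight vectors can be averaged directly, and the quality of the merge can be checked with a loss-barrier measurement like the sketch above.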
This work empirically finds that different minima of the same task are typically connected by very simple low-error curves, and exploits this finding to propose an effective algorithm that constrains sequentially learned minima to behave like the multitask solution.
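One common parameterization of such low-error paths in the curve-finding literature (illustrative here, not necessarily this paper's method) is a quadratic Bezier curve between two fixed endpoints with a trainable midpoint:

```python
def bezier_point(theta_a, theta_m, theta_b, t):
    """Point on the quadratic Bezier curve at t in [0, 1], treating the
    weights as flat tensors. theta_a and theta_b are the two trained
    solutions; theta_m is the trainable midpoint."""
    return (1 - t) ** 2 * theta_a + 2 * t * (1 - t) * theta_m + t ** 2 * theta_b

# Training idea: sample t uniformly from [0, 1], load bezier_point(...)
# into the model, and take a gradient step on theta_m only, so the
# entire curve is pushed into a low-loss region.
```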
This work proposes a unified mathematical framework for neural network (NN) model fusion and uses it to reveal new insights about the linear mode connectivity of SGD solutions and to provide new empirical evidence for recent conjectures.
If the permutation invariance of neural networks is taken into account, SGD solutions will likely have no barrier along the linear interpolation between them, which has implications for the lottery ticket hypothesis, distributed training, and ensemble methods.
Based on convergence rates of empirical measures in Wasserstein distance, this work shows that, with high probability, two sufficiently wide two-layer neural networks trained with stochastic gradient descent are linearly connected.
This work proposes a Sinkhorn re-basin network that can obtain the transportation plan best suited to a given objective, and compares the benefit of the method against similar approaches from the literature under several conditions, for both optimal transport and linear mode connectivity.
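At the core of such approaches is the Sinkhorn iteration, which relaxes a hard permutation into a soft, doubly stochastic matching. The sketch below is a generic illustration (the temperature `eps` and function names are assumptions, not the paper's exact procedure):

```python
import numpy as np

def sinkhorn(similarity, eps=0.1, iters=50):
    """Soft matching from a square unit-similarity matrix.

    Returns an approximately doubly stochastic matrix; annealing eps
    toward zero pushes it toward a hard permutation."""
    K = np.exp(similarity / eps)             # positive kernel
    for _ in range(iters):
        K /= K.sum(axis=1, keepdims=True)    # normalize rows
        K /= K.sum(axis=0, keepdims=True)    # normalize columns
    return K
```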
This work introduces a novel signal-to-noise ratio (SNR) iterative pruning procedure that extracts evolvable sub-networks by incorporating loss-curvature information into the pruning step, and finds that the resulting initializations encode an inductive bias that transfers across different evolution strategies, related tasks, and even gradient-descent-based training.
Adding a benchmark result helps the community track progress.