The Domain Adaptive Low Rank (DALR) method significantly outperforms existing low-rank compression techniques; by taking the target domain into account, it can remove redundancy in the weights more effectively.
We present a novel global compression framework for deep neural networks that automatically analyzes each layer to identify the optimal per-layer compression ratio while simultaneously achieving the desired overall compression. Our algorithm hinges on the idea of compressing each convolutional (or fully-connected) layer by slicing its channels into multiple groups and decomposing each group via low-rank decomposition. At the core of our algorithm is the derivation of layer-wise error bounds from the Eckart–Young–Mirsky theorem. We then leverage these bounds to frame the compression problem as an optimization problem in which we minimize the maximum compression error across layers, and we propose an efficient algorithm to solve it. Our experiments indicate that our method outperforms existing low-rank compression approaches across a wide range of networks and datasets. We believe that our results open up new avenues for future research into the global performance-size trade-offs of modern neural networks. Our code is available at https://github.com/lucaslie/torchprune.
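The building block behind all of these approaches can be made concrete with a small sketch. The snippet below (an illustrative helper, not the grouped, globally optimized scheme from the paper above) factors a weight matrix into two thin matrices via truncated SVD; by the Eckart–Young–Mirsky theorem this is the best rank-r approximation, and the Frobenius error is exactly the norm of the discarded singular values.

```python
import numpy as np

def low_rank_compress(W, rank):
    """Compress a weight matrix into two thin factors via truncated SVD.

    By the Eckart-Young-Mirsky theorem the truncated SVD is the best
    rank-r approximation of W, and the Frobenius-norm error equals the
    norm of the discarded singular values.
    """
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]      # (m, r) factor, singular values folded in
    B = Vt[:rank]                   # (r, n) factor
    err = np.sqrt(np.sum(S[rank:] ** 2))  # exact Frobenius-norm error
    return A, B, err

# A rank-16 factorization stores (64 + 128) * 16 numbers instead of 64 * 128,
# and the original dense layer W @ x becomes two cheaper products A @ (B @ x).
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))
A, B, err = low_rank_compress(W, 16)
```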
A novel method, Decomposable-Net (a network decomposable to any size), which allows flexible changes to model size without retraining and introduces a simple criterion for rank selection that effectively suppresses approximation error.
A software framework that lets a user compress a neural network or other machine learning model with different compression schemes with minimal effort; the compressed models are competitive with other algorithms in both prediction accuracy and compression ratio.
Neural net compression can be achieved by approximating each layer's weight matrix by a low-rank matrix. The real difficulty in doing this is not in training the resulting neural net (made up of one low-rank matrix per layer), but in determining the optimal rank of each layer—effectively, an architecture search problem with one hyperparameter per layer. We show that, with a suitable formulation, this problem is amenable to a mixed discrete-continuous optimization jointly over the ranks and over the matrix elements, and give a corresponding algorithm. We show that this can indeed select ranks much better than existing approaches, making low-rank compression much more attractive than previously thought. For example, we can make a VGG network faster than a ResNet and with nearly the same classification error.
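To make the rank-selection problem tangible, here is a deliberately simple greedy baseline, not the paper's mixed discrete-continuous algorithm: each rank unit granted to layer i costs m_i + n_i parameters (one column of each factor), so the heuristic repeatedly gives one extra rank unit to the layer whose next singular value removes the most squared error per parameter, until a global parameter budget is exhausted. The function name and budget interface are hypothetical, chosen only for illustration.

```python
import numpy as np

def select_ranks(weights, param_budget):
    """Greedy per-layer rank selection under a global parameter budget.

    A simple baseline heuristic (not the paper's joint optimization):
    repeatedly grant one extra rank unit to the layer whose next singular
    value reduces the most squared error per added parameter.
    """
    svals = [np.linalg.svd(W, compute_uv=False) for W in weights]
    cost = [sum(W.shape) for W in weights]  # params per rank unit: m + n
    ranks = [0] * len(weights)
    used = 0
    while True:
        best, gain = None, 0.0
        for i, s in enumerate(svals):
            if ranks[i] < len(s) and used + cost[i] <= param_budget:
                g = s[ranks[i]] ** 2 / cost[i]  # error removed per parameter
                if g > gain:
                    best, gain = i, g
        if best is None:  # budget exhausted or no error left to remove
            break
        ranks[best] += 1
        used += cost[best]
    return ranks
```

For example, with a rank-1 4x4 layer and a diagonal 3x3 layer under a 20-parameter budget, the heuristic spends one rank unit on the first layer and two on the second.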
Experiments with deep neural nets show that 1) significantly better models can be found in the error-compression space, indicating that different compression types have complementary benefits, and 2) the best combination of types depends strongly on the type of neural net.
A new training method, low-rank projection with energy transfer (LRPET), which trains low-rank compressed networks from scratch and achieves competitive performance; combining LRPET with quantization and hashing methods yields even better compression than any single method alone.
This paper introduces Tensor Train Neural Fields (TT-NF), a novel low-rank representation that parameterizes a neural field in tensor-train (TT) form and trains it with backpropagation to minimize a non-convex objective, enabling neural fields to be learned on dense regular grids, together with efficient methods for sampling from them.
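A tensor-train representation stores a d-dimensional tensor as a chain of small 3-way cores. The sketch below is the generic TT-SVD construction (sequential SVDs over matricizations), shown only to illustrate the format; TT-NF instead learns the cores directly by backpropagation rather than decomposing a dense tensor.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a dense tensor into tensor-train (TT) cores.

    Generic TT-SVD sketch: sweep over modes, at each step taking a
    truncated SVD of the current matricization. Core k has shape
    (r_{k-1}, n_k, r_k), so storage is sum_k r_{k-1} * n_k * r_k
    instead of prod_k n_k.
    """
    shape = tensor.shape
    d = len(shape)
    cores, r_prev = [], 1
    mat = tensor.reshape(shape[0], -1)
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(S))
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        # Fold singular values into the remainder and expose the next mode.
        mat = (S[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract TT cores back into the full dense tensor."""
    out = cores[0].reshape(-1, cores[0].shape[-1])
    for core in cores[1:]:
        out = (out @ core.reshape(core.shape[0], -1)).reshape(-1, core.shape[-1])
    return out.reshape([core.shape[1] for core in cores])
```

If the tensor's TT ranks are at most `max_rank`, the reconstruction is exact; otherwise each SVD truncation introduces a controlled approximation error.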