Low-rank tensor approximation is a promising approach to compressing deep neural networks. We propose a simple and efficient iterative method that alternates low-rank factorization with smart rank selection and fine-tuning. We demonstrate the efficiency of our method compared to non-iterative approaches: it improves the compression rate while maintaining accuracy on a variety of tasks.
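The abstract describes a loop that alternates factorization, rank selection, and fine-tuning. As a rough illustration only (not the authors' exact algorithm), the idea for a single weight matrix can be sketched with truncated SVD and an energy-based rank heuristic; the `fine_tune` hook and the `energy` threshold below are illustrative assumptions:

```python
import numpy as np

def select_rank(s, energy=0.9):
    # Heuristic rank selection (an assumption, not the paper's criterion):
    # keep the smallest rank preserving `energy` of the squared singular values.
    cum = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(cum, energy) + 1)

def low_rank_step(W, energy=0.9):
    # Factorize W ~= A @ B via truncated SVD at the selected rank.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    r = select_rank(s, energy)
    A = U[:, :r] * s[:r]   # shape (m, r)
    B = Vt[:r]             # shape (r, n)
    return A, B, r

def iterative_compress(W, n_iters=3, energy=0.9, fine_tune=None):
    # Alternate factorization and (optional) fine-tuning, so the rank can
    # shrink gradually over iterations instead of in one aggressive step.
    for _ in range(n_iters):
        A, B, r = low_rank_step(W, energy)
        W = A @ B
        if fine_tune is not None:
            W = fine_tune(W)  # in a real pipeline: a few training steps
    return A, B, r
```

In an actual network, each convolutional or fully connected layer would be replaced by its factorized counterpart and the whole model fine-tuned between compression steps.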
Maksym Kholiavchenko
E. Ponomarev
A. Cichocki