[1] RGP: Neural Network Pruning Through Regular Graph With Edges Swapping
[2] Exploiting Sparse Self-Representation and Particle Swarm Optimization for CNN Compression
[3] Not All Patches are What You Need: Expediting Vision Transformers via Token Reorganizations
[4] Vision Transformer for Small-Size Datasets
[5] Fire Together Wire Together: A Dynamic Pruning Approach with Self-Supervised Mask Prediction
[6] DECORE: Deep Compression with Reinforcement Learning
[7] Network Pruning via Performance Maximization
[8] Towards Compact CNNs via Collaborative Compression
[9] DPFPS: Dynamic and Progressive Filter Pruning for Compressing Convolutional Neural Networks from Scratch
[10] Network Quantization with Element-wise Gradient Scaling
[11] Evolutionary Shallowing Deep Neural Networks at Block Levels
[12] Towards Accurate and Compact Architectures via Neural Architecture Transformer
[13] Pruning of Convolutional Neural Networks Using Ising Energy Model
[14] Network Pruning Using Adaptive Exemplar Filters
[15] EDP: An Efficient Decomposition and Pruning Scheme for Convolutional Neural Network Compression
[16] An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
[17] Towards Optimal Filter Pruning with Balanced Performance and Pruning Speed
[18] T-Basis: a Compact Representation for Neural Networks
[19] Slimming Neural Networks Using Adaptive Connectivity Scores
[20] EDropout: Energy-Based Dropout and Pruning of Deep Neural Networks
[21] Low-Rank Compression of Neural Nets: Learning the Rank of Each Layer
[22] Learning Filter Pruning Criteria for Deep Convolutional Neural Networks Acceleration
[23] TRP: Trained Rank Pruning for Efficient Deep Neural Networks
[24] Dynamical Channel Pruning by Conditional Accuracy Change for Deep Neural Networks
[25] Controllable Orthogonalization in Training DNNs
[26] Learning Low-rank Deep Neural Networks via Singular Vector Orthogonality Regularization and Singular Value Sparsification
[27] Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression
[28] HRank: Filter Pruning Using High-Rank Feature Map
[29] Filter Sketch for Network Pruning
[30] Channel Pruning via Automatic Structure Search
[31] Discrimination-Aware Network Pruning for Deep Model Compression
[32] PyTorch: An Imperative Style, High-Performance Deep Learning Library
[33] Holistic CNN Compression via Low-Rank Decomposition with Knowledge Transfer
[34] GhostNet: More Features From Cheap Operations
[35] Structured Multi-Hashing for Model Compression
[36] ThiNet: Pruning CNN Filters for a Thinner Net
[37] Effective Training of Convolutional Neural Networks With Low-Bitwidth Weights and Activations
[38] PA-GD: On the Convergence of Perturbed Alternating Gradient Descent to Second-Order Stationary Points for Structured Nonconvex Optimization
[39] Towards Optimal Structured CNN Pruning via Generative Adversarial Learning
[40] Learned Step Size Quantization
[41] FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search
[42] Efficient Neural Network Compression
[43] Rethinking the Value of Network Pruning
[44] Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks
[45] CLIP-Q: Deep Network Compression Learning by In-parallel Pruning-Quantization
[46] GoDec+: Fast and Robust Low-Rank Matrix Decomposition Based on Maximum Correntropy
[47] Model compression via distillation and quantization
[48] NISP: Pruning Networks Using Neuron Importance Score Propagation
[49] Compression-aware Training of Deep Networks
[50] Learning Efficient Convolutional Networks through Network Slimming
[51] On Compressing Deep Models by Low Rank and Sparse Decomposition
[52] Channel Pruning for Accelerating Very Deep Neural Networks
[53] Weighted-Entropy-Based Quantization for Deep Neural Networks
[54] Coordinating Filters for Faster Deep Neural Networks
[55] Pyramid Scene Parsing Network
[56] Improving Training of Deep Neural Networks via Singular Value Bounding
[57] Designing Energy-Efficient Convolutional Neural Networks Using Energy-Aware Pruning
[58] Efficient Mobile Implementation of A CNN-based Object Recognition System
[59] Pruning Filters for Efficient ConvNets
[60] DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
[61] Quantized Convolutional Neural Networks for Mobile Devices
[62] Deep Residual Learning for Image Recognition
[63] Convolutional neural networks with low-rank regularization
[64] Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization and Huffman Coding
[65] Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
[66] Accelerating Very Deep Convolutional Networks for Classification and Detection
[67] Distilling the Knowledge in a Neural Network
[68] Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
[69] Speeding-up Convolutional Neural Networks Using Fine-tuned CP-Decomposition
[70] Going deeper with convolutions
[71] Very Deep Convolutional Networks for Large-Scale Image Recognition
[72] ImageNet Large Scale Visual Recognition Challenge
[73] The Pascal Visual Object Classes Challenge: A Retrospective
[74] Speeding up Convolutional Neural Networks with Low Rank Expansions
[75] Visualizing and Understanding Convolutional Networks
[76] Computer Organization and Design, Fifth Edition: The Hardware/Software Interface
[77] ImageNet classification with deep convolutional neural networks
[78] SVD Based Image Processing Applications: State of The Art, Contributions and Research Challenges
[79] Optimal exact least squares rank minimization
[80] Sparse Subspace Clustering: Algorithm, Theory, and Applications
[81] Semantic contours from inverse detectors
[82] Robust sparse coding for face recognition
[83] Robust Principal Component Analysis Based on Maximum Correntropy Criterion
[84] Robust Recovery of Subspace Structures by Low-Rank Representation
[85] Robust Face Recognition via Sparse Representation
[86] Clustering by Passing Messages Between Data Points
[87]
[88] Bayesian Optimization in High Dimensions via Random Embeddings
[89] Low Rank Approximation: Algorithms, Implementation, Applications, vol. 906. Cham, Switzerland: Springer
[90] Learning Multiple Layers of Features from Tiny Images
[91] Energy transfer is critical for training efficiency