1
Invertible Tabular GANs: Killing Two Birds with OneStone for Tabular Data Synthesis
2
Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples
3
Dual-Teacher Class-Incremental Learning With Data-Free Generative Replay
4
AutoReCon: Neural Architecture Search-based Reconstruction for Data-free Compression
5
Class-Incremental Learning with Generative Classifiers
6
Zero-shot Adversarial Quantization
7
Diversifying Sample Generation for Accurate Data-Free Quantization
8
Catastrophic Fisher Explosion: Early Phase Fisher Matrix Impacts Generalization
9
Data-Free Network Quantization With Adversarial Knowledge Distillation
10
Generative Feature Replay For Class-Incremental Learning
11
Generative Low-bitwidth Data Free Quantization
12
The Break-Even Point on Optimization Trajectories of Deep Neural Networks
13
ZeroQ: A Novel Zero Shot Quantization Framework
14
Dreaming to Distill: Data-Free Knowledge Transfer via DeepInversion
15
PyHessian: Neural Networks Through the Lens of the Hessian
16
The Knowledge Within: Methods for Data-Free Model Compression
17
Loss aware post-training quantization
18
On the Efficacy of Knowledge Distillation
19
Data-Free Quantization Through Weight Equalization and Bias Correction
20
Improving Neural Network Quantization without Retraining using Outlier Channel Splitting
21
Post training 4-bit quantization of convolutional networks for rapid-deployment
22
Adapting Auxiliary Losses Using Gradient Similarity
23
Learning to Quantize Deep Networks by Optimizing Quantization Intervals With Task Loss
24
Bi-Real Net: Enhancing the Performance of 1-bit CNNs With Improved Representational Capability and Advanced Training Algorithm
25
LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks
26
On the Relation Between the Sharpest Directions of DNN Loss and the SGD Step Length
27
Energy-Efficient Neural Network Accelerator Based on Outlier-Aware Low-Precision Computation
28
SYQ: Learning Symmetric Quantization for Efficient Deep Neural Networks
29
Data Synthesis based on Generative Adversarial Networks
30
SqueezeNext: Hardware-Aware Neural Network Design
31
PACT: Parameterized Clipping Activation for Quantized Neural Networks
32
MobileNetV2: Inverted Residuals and Linear Bottlenecks
33
Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference
34
Bit Fusion: Bit-Level Dynamically Composable Architecture for Accelerating Deep Neural Network
35
Critical Learning Periods in Deep Neural Networks
36
Three Factors Influencing Minima in SGD
37
Automatic differentiation in PyTorch
38
Large Batch Training of Convolutional Networks
39
ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
40
Weighted-Entropy-Based Quantization for Deep Neural Networks
41
In-datacenter performance analysis of a tensor processing unit
42
Early Stopping without a Validation Set
43
Deep Learning with Low Precision by Half-Wave Gaussian Quantization
44
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
45
Entropy-SGD: biasing gradient descent into wide valleys
46
Towards the Limit of Network Quantization
47
Conditional Image Synthesis with Auxiliary Classifier GANs
48
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
49
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
50
XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks
51
Deep Residual Learning for Image Recognition
52
Rethinking the Inception Architecture for Computer Vision
53
Fixed Point Quantization of Deep Convolutional Networks
54
BinaryConnect: Training Deep Neural Networks with binary weights during propagations
55
Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization and Huffman Coding
56
Inverting Visual Representations with Convolutional Networks
57
You Only Look Once: Unified, Real-Time Object Detection
58
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
59
U-Net: Convolutional Networks for Biomedical Image Segmentation
60
Distilling the Knowledge in a Neural Network
61
Adam: A Method for Stochastic Optimization
62
Understanding deep image representations by inverting them
63
Fully convolutional networks for semantic segmentation
64
Conditional Generative Adversarial Nets
65
Fixed-point feedforward deep neural network design using weights +1, 0, and −1
66
ImageNet classification with deep convolutional neural networks
67
Few-Shot Class Incremental Learning with Generative Feature Replay
68
Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning
69
Learning Multiple Layers of Features from Tiny Images
70
A method for solving the convex programming problem with convergence rate O(1/k^2)
71
An iterative method for the solution of the eigenvalue problem of linear differential and integral
72
Early stopping by gradient disparity
73
Computer vision models on PyTorch