The Information Plane (IP) of a deep neural network shows the trajectories of the hidden layers during training in a 2D plane whose coordinate axes are the mutual information between the input and the hidden layer, and the mutual information between the output and the hidden layer.
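In practice, each IP point must be estimated from data. Below is a minimal sketch using a simple histogram (binning) plug-in estimator of mutual information; the equal-width binning, bin count, and toy data are illustrative assumptions, not a prescribed method:

```python
import numpy as np

def discrete_mi(a, b):
    """Plug-in estimate of mutual information I(A;B), in bits, for discrete arrays."""
    joint = np.zeros((a.max() + 1, b.max() + 1))
    for ai, bi in zip(a, b):
        joint[ai, bi] += 1
    joint /= joint.sum()
    pa = joint.sum(axis=1, keepdims=True)   # marginal of A
    pb = joint.sum(axis=0, keepdims=True)   # marginal of B
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])).sum())

def ip_point(x_ids, y_labels, hidden, n_bins=30):
    """One Information-Plane point (I(X;T), I(T;Y)) for a hidden layer T.

    Equal-width binning of the continuous activations is an assumed
    discretization (common in IP studies, not the only option). With
    unique x_ids per sample, I(X;T) reduces to the entropy H(T).
    """
    edges = np.linspace(hidden.min(), hidden.max(), n_bins + 1)
    binned = np.digitize(hidden, edges[1:-1])          # bin each hidden unit
    _, t_ids = np.unique(binned, axis=0, return_inverse=True)
    t_ids = t_ids.ravel()                              # one discrete state per sample
    return discrete_mi(x_ids, t_ids), discrete_mi(t_ids, y_labels)

# Toy usage on random activations (illustrative only).
rng = np.random.default_rng(0)
hidden = rng.standard_normal((256, 10))               # 256 samples, 10 hidden units
labels = rng.integers(0, 2, 256)                      # binary task labels
print(ip_point(np.arange(256), labels, hidden))
```

Tracking this pair of values for every layer over training epochs traces out the layer trajectories that make up the Information Plane.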
This work demonstrates the effectiveness of the Information-Plane visualization of DNNs and shows that training time is dramatically reduced when more hidden layers are added; the main advantage of the hidden layers is thus computational.
This work studies the information bottleneck (IB) theory of deep learning, and finds that there is no evident causal connection between compression and generalization: networks that do not compress are still capable of generalization, and vice versa.
EDGE is the first non-parametric MI estimator that can achieve parametric MSE rates with linear time complexity, and the utility of EDGE is illustrated for the analysis of the information plane (IP) in deep learning.
This work derives a theoretical convergence result for the IP of autoencoders based on the information-theoretic concept of mutual information (MI), and proposes a new rule to adjust its parameters that compensates for scale and dimensionality effects.
A theoretical analysis of the dualIB framework is provided, solving for the structure of its solutions, unraveling its superiority in optimizing the mean prediction error exponent and demonstrating its ability to preserve exponential forms of the original distribution.
The results show that, since mutual information remains invariant under homeomorphisms, only feature engineering methods that alter the entropy of the dataset will change the outcome of the neural network; they also suggest that neural networks that can exploit the convolution theorem are as accurate as standard convolutional neural networks and can be more computationally efficient.
With augmented variables, it is shown that the IB objective can be solved with the alternating direction method of multipliers (ADMM), and it is proved that the proposed algorithm is consistently convergent, regardless of the value of β.
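For reference, the objective that β parameterizes here is the standard IB Lagrangian, reproduced below as a generic reminder rather than the paper's augmented ADMM formulation:

```latex
% Standard Information Bottleneck Lagrangian: compress X into the
% representation T while retaining information about the target Y;
% beta sets the compression/relevance trade-off.
\min_{p(t \mid x)} \; \mathcal{L}_{\mathrm{IB}} \;=\; I(X;T) \;-\; \beta \, I(T;Y)
```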
This study examines how the information flows are shaped by the network parameters, such as depth, sparsity, weight constraints, and hidden representations, and adopts autoencoders as models of deep learning.
The proposed HRel pruning method outperforms recent state-of-the-art filter pruning methods, and the Information Plane dynamics of Information Bottleneck theory are analyzed for various Convolutional Neural Network architectures under the effect of pruning.
The results of the normalized HSIC value analysis reveal the ability of end-to-end (E2E) training to exhibit different information dynamics across layers, in addition to efficient information propagation, and suggest the need to consider the cooperative interactions between layers, not just the final layer, when analyzing the information bottleneck of deep learning.
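A minimal sketch of the normalized HSIC statistic referred to above, computed between two layers' representations; the biased estimator and linear kernels are simplifying assumptions for illustration, not necessarily the study's exact setup:

```python
import numpy as np

def hsic(K, L):
    """Biased HSIC estimator trace(K H L H) / (n-1)^2 for kernel matrices K, L."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def normalized_hsic(X, Y):
    """Normalized HSIC (linear-kernel CKA) between two representation matrices.

    X, Y are (n_samples, n_features) activation matrices from two layers;
    linear kernels K = X X^T are an assumed choice for simplicity.
    """
    K, L = X @ X.T, Y @ Y.T
    return hsic(K, L) / np.sqrt(hsic(K, K) * hsic(L, L))

# Toy usage: compare two 'layers' over 100 samples.
rng = np.random.default_rng(0)
a = rng.standard_normal((100, 64))
b = a @ rng.standard_normal((64, 32))  # b is a linear function of a
print(normalized_hsic(a, b))           # typically high, since b depends on a
```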