Feature compression compresses data for machine consumption, so that downstream tasks can be performed directly on the compressed representation, rather than for human perception.
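The core idea above can be illustrated with a minimal sketch: an intermediate feature map is uniformly quantized before "transmission" and dequantized on the receiver side, where the downstream task consumes the reconstruction instead of the original input. The shapes and the quantization step below are illustrative assumptions, not any specific paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
feature = rng.standard_normal((8, 16)).astype(np.float32)  # intermediate features

step = 0.1  # quantization step size: the rate/accuracy trade-off knob
codes = np.round(feature / step).astype(np.int16)          # compressed integer codes
reconstructed = codes.astype(np.float32) * step            # receiver-side dequantization

# The downstream task head consumes `reconstructed`, never the raw input.
max_err = float(np.abs(feature - reconstructed).max())     # bounded by step / 2
```

The integer codes are what would actually be entropy-coded and sent over the link; the reconstruction error is bounded by half the step size, which is why the step directly trades bitrate against task accuracy.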
This paper adopts ideas from knowledge distillation and neural image compression to compress intermediate feature representations more efficiently and shows that the learned feature representations can be tuned to serve multiple downstream tasks.
This work proposes VIMI, a novel 3D object detection framework for Vehicle-Infrastructure Multi-view Intermediate fusion, with a Multi-scale Cross Attention (MCA) module that fuses infrastructure and vehicle features at selected scales to correct the calibration noise introduced by camera asynchrony.
A flexible variable-rate feature compression method is presented that can operate on a range of rates by introducing a rate control parameter as an input to the neural network model.
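The variable-rate idea above can be sketched as follows: a scalar rate-control parameter scales the features before uniform quantization, so a single model can operate at several rate points. The scaling rule here is an illustrative assumption, not the paper's exact conditioning mechanism.

```python
import numpy as np

def compress(feature, rate_param):
    """Higher rate_param -> finer quantization -> more bits, less distortion."""
    return np.round(feature * rate_param).astype(np.int32)

def decompress(codes, rate_param):
    return codes.astype(np.float32) / rate_param

rng = np.random.default_rng(0)
f = rng.standard_normal(64).astype(np.float32)

distortions = []
for rate_param in (1.0, 4.0, 16.0):
    rec = decompress(compress(f, rate_param), rate_param)
    distortions.append(float(np.mean((f - rec) ** 2)))
# distortion shrinks as the rate parameter (and hence the bitrate) grows
```

In a learned compressor the same knob would be fed to the network as an input so the encoder can adapt its representation, rather than applied as a fixed scaling.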
A new context-aware correlation-filter-based tracking framework that achieves both high computational speed and state-of-the-art performance among real-time trackers, introducing extrinsic denoising processes and a new orthogonality loss term for pre-training and fine-tuning the expert autoencoders.
An end-to-end architecture is presented that consists of an encoder, a non-trainable channel layer, and a decoder for more efficient feature compression and transmission; it achieves a much higher compression ratio than existing methods.
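The encoder / non-trainable channel / decoder split described above can be sketched with linear maps and an additive-noise channel. The dimensions, the linear encoder/decoder, and the noise level are illustrative assumptions; in the actual architecture the encoder and decoder are trained end-to-end through the fixed channel.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_code = 32, 8                    # 4x compression of the feature dimension

# Stand-ins for trainable encoder/decoder weights.
W_enc = rng.standard_normal((d_in, d_code)) / np.sqrt(d_in)
W_dec = rng.standard_normal((d_code, d_in)) / np.sqrt(d_code)

def channel(code, noise_std=0.01):
    """Non-trainable channel layer: additive Gaussian noise, no parameters."""
    return code + noise_std * rng.standard_normal(code.shape)

x = rng.standard_normal((4, d_in))
code = x @ W_enc              # encoder: 32 -> 8 values per sample
received = channel(code)      # noisy "transmission"
x_hat = received @ W_dec      # decoder reconstructs for the downstream task
```

Because the channel layer has no parameters, gradients flow straight through it during training, letting the encoder learn codes that are robust to the channel's noise.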
This paper describes the bit-rate required to ensure high performance on all predictive tasks that are invariant under a set of transformations, such as data augmentations, and designs unsupervised objectives for training neural compressors.
Experimental results show that the proposed pipeline training framework not only significantly speeds up training, but also incurs little accuracy loss or additional memory/energy overhead, delivering a practical and efficient solution to edge-cloud model training.
This study introduces supervised compression for split computing (SC2), proposes new evaluation criteria (minimizing computation on the mobile device, minimizing transmitted data size, and maximizing model accuracy), and releases sc2bench, a Python package for future research on SC2.
A lightweight autoencoder-based method is proposed to compress the large intermediate features of a DNN, together with a multi-agent hybrid proximal policy optimization (MAHPPO) algorithm that solves the resulting optimization problem with a hybrid action space.
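The autoencoder-bottleneck idea above can be sketched with a linear autoencoder fitted via SVD, which is the optimal linear bottleneck: a lightweight encoder shrinks a large intermediate feature before offloading, and a decoder restores it on the server. This PCA-style stand-in is an assumption for illustration, not the paper's architecture, and the MAHPPO part is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.standard_normal((100, 64)).astype(np.float32)  # large DNN features

# "Train" a linear autoencoder via SVD of the centered features.
mean = features.mean(axis=0)
_, _, vt = np.linalg.svd(features - mean, full_matrices=False)
k = 8                                      # bottleneck width: 8x smaller payload

def encode(f):
    return (f - mean) @ vt[:k].T           # what actually gets transmitted

def decode(z):
    return z @ vt[:k] + mean               # server-side restoration

z = encode(features)
restored = decode(z)
err = float(np.mean((features - restored) ** 2))
```

A learned nonlinear autoencoder plays the same role but can reach much lower distortion at the same bottleneck width.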
This article revisits the classical Dropout regularization and its variant Nested Dropout in the context of learning with noisy labels, and introduces a compression inductive bias into network architectures to alleviate the resulting overfitting problem.