3260 papers • 126 benchmarks • 313 datasets
These leaderboards are used to track progress in Computational Efficiency.
This work explores ways to scale up networks that utilize the added computation as efficiently as possible, through suitably factorized convolutions and aggressive regularization.
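A minimal PyTorch sketch of the factorization idea: two stacked 3x3 convolutions cover the same receptive field as one 5x5 convolution with fewer parameters (18C^2 versus 25C^2 weights for C input and output channels). The module below is an illustration of the principle, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class Factorized5x5(nn.Module):
    """Emulates a 5x5 receptive field with two cheaper 3x3 convolutions."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.conv2(self.act(self.conv1(x))))

x = torch.randn(1, 64, 32, 32)
print(Factorized5x5(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```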
A novel attention gate (AG) model for medical imaging is proposed that automatically learns to focus on target structures of varying shapes and sizes, eliminating the need for the explicit external tissue/organ localisation modules of cascaded convolutional neural networks (CNNs).
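A minimal PyTorch sketch of an additive attention gate in this spirit: a gating signal and skip features are projected to a common space, combined, and squashed into a per-pixel coefficient that rescales the skip features. The channel sizes and the assumption that g and x share a spatial resolution are illustrative simplifications, not the published configuration.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate: gating signal g modulates skip features x."""
    def __init__(self, f_g: int, f_l: int, f_int: int):
        super().__init__()
        self.w_g = nn.Conv2d(f_g, f_int, kernel_size=1)  # gating-signal path
        self.w_x = nn.Conv2d(f_l, f_int, kernel_size=1)  # skip-feature path
        self.psi = nn.Conv2d(f_int, 1, kernel_size=1)    # scalar attention map
        self.relu = nn.ReLU(inplace=True)

    def forward(self, g: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # alpha in [0, 1] softly gates the skip features per pixel.
        alpha = torch.sigmoid(self.psi(self.relu(self.w_g(g) + self.w_x(x))))
        return x * alpha

g = torch.randn(1, 256, 16, 16)  # coarse gating features
x = torch.randn(1, 128, 16, 16)  # skip features at the same resolution
print(AttentionGate(256, 128, 64)(g, x).shape)  # torch.Size([1, 128, 16, 16])
```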
This work proposes a lightweight feature extractor that processes an image in less than a millisecond on a modern GPU, together with a training loss that hinders the student from imitating the teacher feature extractor beyond the normal images, drastically reducing the computational cost of the student–teacher model.
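One way to read that loss, sketched below under assumed tensor shapes: match the teacher on normal images while penalizing the student's response on unrelated out-of-distribution images, so the imitation does not generalize past the normal data. The quadratic penalty term is an assumption for illustration, not the paper's exact formulation.

```python
import torch

def student_teacher_loss(s_normal: torch.Tensor,
                         t_normal: torch.Tensor,
                         s_ood: torch.Tensor) -> torch.Tensor:
    """Distillation on normal images plus a penalty on the student's
    output for out-of-distribution images (illustrative sketch only)."""
    distill = ((s_normal - t_normal) ** 2).mean()  # imitate on normal data
    penalty = (s_ood ** 2).mean()                  # suppress response elsewhere
    return distill + penalty
```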
This work introduces a random search method for training static, linear policies for continuous control problems, matching state-of-the-art sample efficiency on the benchmark MuJoCo locomotion tasks.
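A sketch of one update of basic random search over the weights of a linear policy a = theta @ s, assuming a hypothetical `rollout` callable that returns an episode's total reward; refinements such as reward normalization and keeping only the top-performing directions are omitted for brevity.

```python
import numpy as np

def brs_step(theta, rollout, n_dirs=8, step=0.02, noise=0.03):
    """One basic-random-search update: probe random perturbation
    directions in both signs and move along the reward differences."""
    update = np.zeros_like(theta)
    for _ in range(n_dirs):
        delta = np.random.randn(*theta.shape)
        r_plus = rollout(theta + noise * delta)   # perturb in + direction
        r_minus = rollout(theta - noise * delta)  # perturb in - direction
        update += (r_plus - r_minus) * delta
    return theta + step / (n_dirs * noise) * update
```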
This paper introduces CROWN, a general framework that certifies the robustness of neural networks with general activation functions for given input data points and facilitates the search for a tighter certified lower bound by adaptively selecting appropriate surrogates for each neuron activation.
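CROWN's adaptive linear relaxations are more involved than can be shown briefly, but the underlying bound-propagation idea can be illustrated with plain interval arithmetic; the sketch below propagates elementwise input bounds through one affine layer and is much looser than what CROWN computes.

```python
import numpy as np

def affine_interval_bounds(W, b, lower, upper):
    """Propagate elementwise interval bounds through y = W @ x + b.
    Plain interval arithmetic, far looser than CROWN's relaxations,
    but it shows the bound-propagation idea such frameworks build on."""
    center = (upper + lower) / 2.0
    radius = (upper - lower) / 2.0
    y_center = W @ center + b
    y_radius = np.abs(W) @ radius  # worst-case spread through the layer
    return y_center - y_radius, y_center + y_radius
```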
The flexibility, computational efficiency, robustness, and accuracy afforded by Kimera will build a solid basis for future metric-semantic SLAM and perception research, and will allow researchers across multiple areas to benchmark and prototype their own efforts without having to start from scratch.
This paper proposes PP-OCR, a practical ultra-lightweight OCR system with an overall model size of only 3.5M, and introduces a bag of strategies to either enhance the model's ability or reduce the model size.
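A minimal usage sketch, assuming the `paddleocr` Python package that distributes the PP-OCR models; argument names and the result layout vary across package versions, so treat this as an outline rather than the exact API.

```python
# Assumes: pip install paddleocr paddlepaddle (API details vary by version).
from paddleocr import PaddleOCR

ocr = PaddleOCR(lang="en")       # downloads the lightweight PP-OCR models
result = ocr.ocr("receipt.png")  # text detection + recognition in one call
for box, (text, confidence) in result[0]:
    print(text, confidence)
```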
It is argued that PointNet itself can be thought of as a learnable "imaging" function, and classical vision algorithms for image alignment, namely the Lucas & Kanade (LK) algorithm, can be brought to bear on the problem.
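A translation-only sketch of that idea: treat a point-cloud feature function `phi` as the "image", linearize it around the current estimate with finite differences, and take damped Gauss-Newton steps on the feature residual. The function names, the finite-difference Jacobian, and the omission of rotations are all illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def transform(points, t):
    # Translation-only motion model; rotations are omitted for brevity.
    return points + t

def lk_align(phi, src, tgt, n_iter=20, eps=1e-3, damping=1e-6):
    """Estimate the translation aligning src to tgt by Gauss-Newton
    on the residual of a learned global feature phi (LK-style)."""
    t = np.zeros(3)
    f_tgt = phi(tgt)
    for _ in range(n_iter):
        f_src = phi(transform(src, t))
        # Finite-difference Jacobian of the feature w.r.t. the translation.
        J = np.stack(
            [(phi(transform(src, t + eps * e)) - f_src) / eps
             for e in np.eye(3)],
            axis=1,
        )  # shape: (feature_dim, 3)
        r = f_tgt - f_src  # feature-space residual
        dt = np.linalg.solve(J.T @ J + damping * np.eye(3), J.T @ r)
        t = t + dt
    return t

phi = lambda pts: pts.mean(axis=0)  # toy stand-in for a learned feature
src = np.random.randn(100, 3)
true_t = np.array([0.3, -0.1, 0.2])
print(lk_align(phi, src, transform(src, true_t)))  # approximately true_t
```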
This paper shows that careful design of deep RNNs using standard signal propagation arguments can recover the impressive performance of deep SSMs on long-range reasoning tasks, while also introducing an RNN block called the Linear Recurrent Unit that matches both their performance on the Long Range Arena benchmark and their computational efficiency.
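A sketch of the diagonal linear recurrence at the heart of such a block: x_t = lam * x_{t-1} + B u_t with a complex diagonal lam of modulus below one for stability, read out as y_t = Re(C x_t). Initialization schemes, normalization, and the nonlinear layers surrounding the recurrence in the actual block are omitted.

```python
import numpy as np

def linear_recurrence(lam, B, C, u):
    """Diagonal linear RNN: x_t = lam * x_{t-1} + B @ u_t, y_t = Re(C @ x_t).
    lam is a complex vector (the diagonal); |lam| < 1 keeps it stable."""
    x = np.zeros(lam.shape, dtype=complex)
    ys = []
    for u_t in u:                 # u has shape (seq_len, in_dim)
        x = lam * x + B @ u_t     # elementwise diagonal recurrence
        ys.append((C @ x).real)
    return np.stack(ys)

d, din, dout, T = 4, 2, 3, 16
rng = np.random.default_rng(0)
lam = 0.9 * np.exp(1j * rng.uniform(0, np.pi, d))  # stable: |lam| < 1
B = rng.standard_normal((d, din))
C = rng.standard_normal((dout, d))
print(linear_recurrence(lam, B, C, rng.standard_normal((T, din))).shape)
```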