A novel gradient harmonizing mechanism (GHM) is proposed as a hedge against the gradient disharmonies of single-stage detectors; without any bells and whistles, the model achieves 41.6 mAP on the COCO test-dev set, surpassing the state-of-the-art method, Focal Loss (FL) + $SL_1$, by 0.8.
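The core idea behind gradient harmonizing is to histogram per-example gradient norms and down-weight examples that fall in crowded bins, so the many easy examples do not dominate training. A minimal sketch of that density-based reweighting, under the assumption of a uniform binning over [0, 1]; the function name and binning scheme are illustrative, not the paper's implementation:

```python
import numpy as np

def ghm_weights(grad_norms, bins=10):
    """Down-weight examples whose gradient norm lies in a densely
    populated bin (illustrative sketch of gradient harmonizing)."""
    # Uniform bins over [0, 1]; assign each example to a bin.
    edges = np.linspace(0.0, 1.0, bins + 1)
    idx = np.clip(np.digitize(grad_norms, edges) - 1, 0, bins - 1)
    # Gradient density of a bin ~ (count in bin) / (bin width) = count * bins.
    counts = np.bincount(idx, minlength=bins)
    n = len(grad_norms)
    # Weight is inversely proportional to the density of the example's bin.
    return n / (counts[idx] * bins)

# Eight easy examples (small gradients) and two hard ones (large gradients):
w = ghm_weights(np.array([0.05] * 8 + [0.95] * 2))
# The crowded easy bin gets a small weight, the sparse hard bin a larger one.
```

With this weighting, the total contribution of a bin no longer grows with how many examples land in it, which is the harmonizing effect the abstract refers to.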
The paper gives a brief history of the library, an overview of its basic philosophy, a summary of its architecture, and a description of how the Pylearn2 community functions socially.
Distiller is a library of DNN compression algorithm implementations, with tools, tutorials, and sample applications for various learning tasks; its rich content is complemented by a design for extensibility that facilitates new research.
This study introduces VanillaNet, a neural network architecture that embraces elegance in design and delivers performance on par with renowned deep neural networks and vision transformers, showcasing the power of minimalism in deep learning.
This paper identifies and rectifies several causes of uneven and ineffective training in the popular ADM diffusion model architecture without altering its high-level structure, and presents a method for setting the exponential moving average (EMA) parameters post hoc, i.e., after completing the training run.
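The post-hoc method builds on the standard exponential moving average of model weights kept alongside training. A minimal sketch of that baseline EMA update, under the usual decay-factor formulation; the names and the decay value are illustrative assumptions, not the paper's post-hoc technique itself:

```python
def ema_update(ema_params, params, beta=0.999):
    """Blend the current parameters into the running average:
    ema <- beta * ema + (1 - beta) * params."""
    return [beta * e + (1 - beta) * p for e, p in zip(ema_params, params)]

# Toy example: two scalar "parameters" held fixed across three steps.
params = [1.0, 2.0]
ema = [0.0, 0.0]
for _ in range(3):
    ema = ema_update(ema, params, beta=0.5)
# After k steps with constant params, ema = params * (1 - beta**k),
# so here ema == [0.875, 1.75].
```

The paper's contribution is that the effective decay profile of this average can be chosen after training, rather than fixed as a hyperparameter like `beta` before the run begins.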
This paper introduces MIOpen and details the internal workings of the library and its supported features, including its use of kernel fusion to reduce memory bandwidth and GPU launch overheads, and its implementation of different algorithms to optimize convolutions for different filter and input sizes.
A comprehensive empirical study evaluates several realizations of ATHENA under four threat models (zero-knowledge, black-box, gray-box, and white-box) and explains why diversity matters, the generality of the defense framework, and the overhead costs incurred.
A simple and compact ViT architecture called UViT is proposed that achieves strong performance on COCO object detection and instance segmentation tasks, together with a scaling rule to optimize the model's trade-off between accuracy and computation cost / model size.