3260 papers • 126 benchmarks • 313 datasets
Model optimization aims to improve already-existing models for training and inference tasks.
The results indicate that shifting the focus from the quantity to the quality of data could yield more robust models and improved out-of-distribution generalization; the work also introduces a model-based tool for characterizing and diagnosing datasets.
A Differentiable Binarization (DB) module is proposed that integrates the binarization process, one of the most important steps in the post-processing procedure, into a segmentation network, together with an efficient Adaptive Scale Fusion (ASF) module that improves scale robustness by adaptively fusing features of different scales.
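The core idea of DB is to replace the hard thresholding step with a steep sigmoid so that gradients can flow through binarization during training. A minimal NumPy sketch of that approximate binarization, using the form B = 1 / (1 + exp(-k(P - T))) with amplifying factor k:

```python
import numpy as np

def differentiable_binarization(prob_map, thresh_map, k=50):
    """Approximate binarization B = 1 / (1 + exp(-k * (P - T))).

    A steep sigmoid stands in for the hard step function so that
    gradients can flow through the binarization step during training.
    `k` is an amplifying factor; k = 50 is used here for illustration.
    """
    return 1.0 / (1.0 + np.exp(-k * (prob_map - thresh_map)))

# Pixels whose probability clearly exceeds the learned threshold map
# to ~1, those clearly below map to ~0, with a smooth transition between.
probs = np.array([0.9, 0.5, 0.1])
thresh = np.array([0.3, 0.5, 0.3])
binary = differentiable_binarization(probs, thresh)
```

At inference time the sigmoid can simply be replaced by a hard threshold, since the two agree away from the boundary.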
This work proposes an algorithm for personalized FL (pFedMe) using Moreau envelopes as clients' regularized loss functions, which help decouple personalized model optimization from global model learning in a bi-level problem stylized for personalized FL.
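The Moreau-envelope formulation means each client personalizes by approximately solving a proximal problem around the global model, theta* = argmin_theta f_i(theta) + (lambda/2) ||theta - w||^2. A minimal sketch of that inner step with gradient descent (function names, learning rate, and step counts here are illustrative, not the paper's exact hyperparameters):

```python
import numpy as np

def moreau_personalized_step(w_global, grad_fn, lam=15.0, lr=0.01, inner_steps=200):
    """Approximately solve the pFedMe-style inner problem
    argmin_theta f_i(theta) + (lam / 2) * ||theta - w_global||^2
    by gradient descent. `grad_fn` returns the gradient of the
    client's local loss f_i at theta.
    """
    theta = w_global.copy()
    for _ in range(inner_steps):
        # gradient of the regularized (proximal) client objective
        g = grad_fn(theta) + lam * (theta - w_global)
        theta -= lr * g
    return theta

# Toy client loss f_i(theta) = ||theta - c||^2, whose unregularized
# minimizer is c; the proximal term pulls the solution back toward w.
c = np.array([1.0, -2.0])
grad_fn = lambda th: 2.0 * (th - c)
w = np.zeros(2)
theta_star = moreau_personalized_step(w, grad_fn)
```

For this quadratic toy loss the proximal problem has the closed-form solution 2c / (2 + lam), which the iteration converges to, illustrating how lambda trades off personalization against staying close to the global model.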
This work presents a novel active learning algorithm, termed as iterative surrogate model optimization (ISMO), for robust and efficient numerical approximation of PDE constrained optimization problems, based on deep neural networks.
This work efficiently calculates optimal weighted model combinations for each client, based on estimating how much a client can benefit from another's model, to achieve personalization in federated learning.
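The combination step itself reduces to a per-client weighted average of model parameters. A hypothetical sketch, which takes the benefit scores as given and leaves the scoring rule abstract (the function name and normalization choice are illustrative assumptions, not the paper's method):

```python
import numpy as np

def combine_models(client_weights, benefit_scores):
    """Form one client's personalized model as a weighted average of
    all clients' parameter vectors, weighted by estimated benefit
    scores normalized to sum to 1.
    """
    scores = np.asarray(benefit_scores, dtype=float)
    alphas = scores / scores.sum()       # normalize combination weights
    stacked = np.stack(client_weights)   # shape: (num_clients, num_params)
    return alphas @ stacked              # weighted average of parameters

models = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([2.0, 2.0])]
personalized = combine_models(models, benefit_scores=[0.5, 0.25, 0.25])
```

Each client gets its own weight vector, so clients with similar data can lean heavily on each other while down-weighting dissimilar clients.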
A total of 30 advanced MLLMs are evaluated on MME, the first comprehensive MLLM evaluation benchmark; the results suggest that existing MLLMs still have considerable room for improvement, and also reveal promising directions for subsequent model optimization.
PIXOR is proposed, a proposal-free, single-stage detector that outputs oriented 3D object estimates decoded from pixel-wise neural network predictions; it surpasses other state-of-the-art methods notably in terms of Average Precision (AP), while still running at 10 FPS.
This study creates a benchmark of 40 top-rated Kaggle models spanning 5 tasks, evaluates their fairness with a comprehensive set of fairness metrics, applies 7 mitigation techniques, and analyzes the resulting fairness, the mitigation outcomes, and the impacts on performance.
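As one illustrative example of the kind of metric such a fairness benchmark relies on (not necessarily the study's exact metric set), the demographic parity difference measures the gap in positive prediction rates between two groups:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two
    demographic groups (0 and 1). A value of 0 means both groups
    receive positive predictions at the same rate.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

preds = [1, 1, 0, 1, 0, 0, 0, 1]   # binary model predictions
groups = [0, 0, 0, 0, 1, 1, 1, 1]  # protected-attribute group per sample
gap = demographic_parity_difference(preds, groups)
```

Mitigation techniques are then judged by how much they shrink such gaps relative to how much predictive performance they sacrifice.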