These leaderboards are used to track progress in Defocus Blur Detection.
Use these libraries to find Defocus Blur Detection models and implementations.
Taking inspiration from the pre-training-then-prompt-tuning paradigm widely used in NLP, this paper proposes a new visual prompting model, named Explicit Visual Prompting (EVP), which freezes a pre-trained model and then learns task-specific knowledge using only a few extra parameters.
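The freeze-then-prompt idea can be sketched in a few lines. This is a minimal toy illustration, not EVP's actual architecture: a frozen "pre-trained" weight matrix `W` stands in for the backbone, and only a small prompt vector `p` added to the input is trained.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))   # frozen pre-trained weights (stand-in backbone)
p = np.zeros(4)                   # small task-specific prompt (the only trainable part)

def forward(x, p):
    return W @ (x + p)            # prompt is injected at the input

x = rng.standard_normal(4)
target = rng.standard_normal(4)
W_before = W.copy()
for _ in range(200):
    err = forward(x, p) - target
    p -= 0.01 * (W.T @ err)       # gradient step on the prompt only; W is never updated
```

The point of the sketch is the parameter budget: the backbone stays byte-identical while the loss is driven down through the prompt alone.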
This work considers the depth information as the approximate soft label of DBD and proposes a joint learning framework inspired by knowledge distillation that learns the defocus blur from ground truth and the depth distilled from a well-trained depth estimation network at the same time.
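The joint supervision described above can be summarized as a two-term objective. The sketch below is illustrative (the weighting and exact loss forms are our assumptions, not the paper's): the DBD prediction is trained against the binary ground-truth mask, plus a distillation term that regresses it toward the depth map used as an approximate soft label.

```python
import numpy as np

def joint_loss(pred, gt_mask, depth_soft, lam=0.5):
    """Toy joint objective: hard supervision from the GT mask plus a
    depth-distillation term with weight lam (values are illustrative)."""
    bce = -np.mean(gt_mask * np.log(pred) + (1 - gt_mask) * np.log(1 - pred))
    distill = np.mean((pred - depth_soft) ** 2)  # depth map as soft label
    return bce + lam * distill
```

Setting `lam=0` recovers plain fully-supervised training; the depth term only adds signal on top.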
Although existing fully-supervised defocus blur detection (DBD) models significantly improve performance, training such deep models requires abundant pixel-level manual annotation, which is highly time-consuming and error-prone. To address this issue, this paper trains a deep DBD model without any pixel-level annotation. The core insight is that a defocus-blurred region or a focused clear area can be arbitrarily pasted into a given realistic fully blurred or fully clear image without affecting the judgment that the result is fully blurred or fully clear. Specifically, we train a generator G in an adversarial manner against dual discriminators Dc and Db. G learns to produce a DBD mask from which a composite clear image and a composite blurred image are generated by copying the focused area and the unfocused region of the source image into another fully clear image and fully blurred image, respectively. When Dc and Db cannot distinguish these composites from realistic fully clear and fully blurred images, DBD masks are self-generated, defining what a defocus-blurred area is in an implicit manner. Besides, we propose a bilateral triplet-excavating constraint to avoid the degenerate problem caused by one discriminator defeating the other. Comprehensive experiments on two widely used DBD datasets demonstrate the superiority of the proposed approach. Source code is available at: https://github.com/shangcai1/SG.
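The copy-paste compositing step at the heart of this scheme can be sketched directly (a minimal sketch with illustrative variable names; the adversarial training of G, Dc, and Db is omitted). The mask is 1 where a pixel is defocus-blurred and 0 where it is focused; a correct mask makes both composites look globally consistent.

```python
import numpy as np

def paste_composites(src, full_clear, full_blur, mask):
    """Build the two composites from a predicted DBD mask.
    mask: 1.0 = defocus-blurred pixel, 0.0 = focused pixel."""
    # paste the focused region of src onto a fully clear image
    comp_clear = (1 - mask) * src + mask * full_clear
    # paste the blurred region of src onto a fully blurred image
    comp_blur = mask * src + (1 - mask) * full_blur
    return comp_clear, comp_blur
```

If the mask is correct, `comp_clear` contains only sharp content and `comp_blur` only blurred content, so neither discriminator can tell them apart from real fully clear/blurred images.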
Taking inspiration from the pre-training-then-prompt-tuning paradigm widely used in NLP, this work proposes a new visual prompting model, named Explicit Visual Prompting (EVP), which significantly outperforms other parameter-efficient tuning protocols with the same number of tunable parameters.
Inspired by the laws of depth, depth of field (DOF), and defocus, an approach called D-DFFNet is proposed that incorporates depth and DOF cues in an implicit manner, allowing the model to understand the defocus phenomenon in a more natural way.
A mixture-of-experts (MoE) scheme is proposed for detecting five notable artifacts in WSIs: damaged tissue, blur, folded tissue, air bubbles, and histologically irrelevant blood; experiments show that the best-performing pipeline for artifact detection is MoE with DCNNs.
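A mixture of experts combines per-expert scores through a learned gate. The toy sketch below is illustrative only (the actual pipeline uses DCNN experts; the gate weights and expert functions here are made up): a softmax gate, conditioned on the input, weights each expert's score into one prediction.

```python
import numpy as np

def softmax(z):
    z = z - z.max()               # numerical stability
    e = np.exp(z)
    return e / e.sum()

def moe_predict(x, experts, gate_w):
    """Gate-weighted combination of expert scores (toy MoE)."""
    gate = softmax(gate_w @ x)                   # one weight per expert, sums to 1
    scores = np.array([f(x) for f in experts])   # each expert scores the input
    return float(gate @ scores)
```

In the artifact-detection setting, each expert would specialize in one artifact type (blur, folds, bubbles, ...), and the gate routes an input patch to the experts most relevant to it.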