Image Shadow Removal
This paper proposes a robust graph learning scheme that learns reliable graphs from real-world noisy data by adaptively removing noise and errors from the raw data, and shows that the proposed model outperforms previous state-of-the-art methods.
Removing shadows in document images enhances both the visual quality and readability of digital copies of documents. Most existing shadow removal algorithms for document images use hand-crafted heuristics and are often not robust to documents with different characteristics. This paper proposes the Background Estimation Document Shadow Removal Network (BEDSR-Net), the first deep network specifically designed for document image shadow removal. To take advantage of specific properties of document images, a background estimation module is designed to extract the global background color of the document. While estimating the background color, the module also learns information about the spatial distribution of background and non-background pixels; we encode this information into an attention map. With the estimated global background color and attention map, the shadow removal network can better recover the shadow-free image. We also show that the model trained on synthetic images remains effective for real photos, and provide a large set of synthetic shadow images of documents along with their corresponding shadow-free images and shadow masks. Extensive quantitative and qualitative experiments on several benchmarks show that BEDSR-Net outperforms existing methods in enhancing both the visual quality and readability of document images.
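BEDSR-Net learns these three steps end to end, but the pipeline it describes (estimate a background color, derive a map of background-like pixels, use both to relight shadows) can be sketched with classical stand-ins. Everything below is illustrative and assumed, not the paper's method: the function names are hypothetical, a per-block maximum substitutes for the learned background estimation module, and a high percentile substitutes for the learned global paper color.

```python
import numpy as np

def estimate_local_background(img, block=16):
    """Per-block maximum as a crude stand-in for a learned background
    estimation module: on a document, the brightest pixels in each
    neighborhood are usually paper."""
    h, w, _ = img.shape
    bg = np.empty_like(img)
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = img[y:y + block, x:x + block]
            bg[y:y + block, x:x + block] = patch.reshape(-1, 3).max(axis=0)
    return bg

def remove_document_shadow(img, block=16):
    """Hypothetical classical analog of the BEDSR-Net pipeline."""
    img = img.astype(np.float64)
    local_bg = estimate_local_background(img, block)
    # Global paper color: a high percentile over all pixels.
    global_bg = np.percentile(img.reshape(-1, 3), 95, axis=0)
    # Crude analog of the attention map: how background-like each
    # pixel is, i.e. its brightness relative to the local background.
    attention = img.mean(-1) / np.maximum(local_bg.mean(-1), 1.0)
    # Relight: divide out the local (possibly shadowed) background and
    # map it onto the global paper color; text stays dark because its
    # ratio to the local background is small.
    out = img / np.maximum(local_bg, 1.0) * global_bg
    return np.clip(out, 0, 255).astype(np.uint8), attention
```

On a synthetic page (bright paper, a darkened half, a few dark text pixels), the shadowed paper is mapped back to the global paper color while text keeps a low brightness ratio and stays legible.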
This paper proposes a shadow-aware FusionNet that takes the shadow image as input and generates fusion weight maps across all the over-exposed images, and a boundary-aware RefineNet to further eliminate remaining shadow traces.
This work proposes a new shadow illumination model that ensures an identity mapping among unshaded regions and adaptively performs fine-grained spatial mapping between shadow regions and their references, reformulating the shadow removal task as a variational optimization problem.
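The variational formulation can be illustrated with a toy energy (a sketch under stated assumptions, not the paper's actual model): assume a grayscale image, a known shadow mask, and a reference brightness taken simply as the mean of the unshaded region, then solve for a smooth per-pixel gain that is constrained to the identity outside the shadow.

```python
import numpy as np

def variational_relight(img, mask, lam=2.0, iters=500):
    """Toy variational shadow removal. We minimize
        E(g) = sum_shadow (g*I - ref)^2 + lam * |grad g|^2
    for a per-pixel gain g, with the identity constraint g = 1
    enforced on unshaded pixels, using plain Jacobi iterations."""
    ref = img[~mask].mean()                      # reference brightness
    g = np.ones_like(img, dtype=np.float64)
    for _ in range(iters):
        gp = np.pad(g, 1, mode='edge')           # replicate borders
        nb = (gp[:-2, 1:-1] + gp[2:, 1:-1] +
              gp[1:-1, :-2] + gp[1:-1, 2:]) / 4  # 4-neighbor average
        # Pointwise minimizer with neighbors held fixed:
        # d/dg [(g*I - ref)^2 + 4*lam*(g - nb)^2] = 0
        g_new = (img * ref + 4 * lam * nb) / (img ** 2 + 4 * lam)
        g = np.where(mask, g_new, 1.0)           # identity off-shadow
    return img * g
```

The data term pulls shadowed pixels toward the reference, while the smoothness term spreads the gain coherently across the shadow region, which is the essence of reformulating removal as energy minimization.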
This work tackles high-resolution document shadow removal directly with a larger-scale real-world dataset and a carefully designed frequency-aware network, achieving clearly better performance than previous methods in both visual quality and numerical results.
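The frequency-aware idea can be sketched as follows: shadows are smooth, so they live almost entirely in a low-resolution band that is cheap to correct even for a high-resolution scan, while high-frequency detail (text strokes) is carried over unchanged. The block-mean band split and the percentile paper estimate below are illustrative stand-ins for the paper's learned network.

```python
import numpy as np

def relight_lowband(img, block=8):
    """Illustrative frequency-aware relighting for a grayscale image
    whose sides are multiples of 'block'."""
    h, w = img.shape
    img = img.astype(np.float64)
    # Low band: block-mean downsample. This is the only resolution at
    # which any correction happens, which is what keeps the approach
    # cheap for high-resolution scans.
    low = img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    # "Correct" the low band: flatten illumination to the paper level
    # (a high percentile of the low band stands in for the paper color).
    paper = np.percentile(low, 95)
    up = lambda a: np.repeat(np.repeat(a, block, axis=0), block, axis=1)
    # Multiplicative detail transfer: scale every pixel by the ratio
    # between the corrected and original low bands, so text keeps its
    # contrast against the local background.
    out = img * (paper / np.maximum(up(low), 1.0))
    return np.clip(out, 0, 255)
```

The multiplicative transfer matters: shading is a gain, not an offset, so dividing by the local low band keeps text dark relative to the relit paper.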
This paper proposes an unsupervised domain-classifier guided shadow removal network, DC-ShadowNet, designed to integrate a shadow/shadow-free domain classifier into a generator and its discriminator, enabling them to focus on shadow regions.
This paper presents DeS3, a method that removes hard, soft and self shadows based on adaptive attention and ViT similarity, and outperforms state-of-the-art methods on the SRD, AISTD, LRSS, USR and UIUC datasets.
A Transformer-based model for document shadow removal is proposed that utilizes shadow context encoding and decoding in both shadow and shadow-free regions and is competitive with state-of-the-art methods.
A unified diffusion framework is proposed that integrates both image and degradation priors for highly effective shadow removal; it progressively refines the estimated shadow mask as an auxiliary task of the diffusion generator, leading to more accurate and robust shadow-free image generation.