3260 papers • 126 benchmarks • 313 datasets
Reflection removal aims to eliminate unwanted reflections (e.g., from glass or mirror surfaces) and recover a clean picture.
A novel deep convolutional encoder-decoder method that removes unwanted reflections by learning a mapping between image pairs with and without reflections; it significantly outperforms the other tested state-of-the-art techniques.
A deep neural network structure that exploits edge information for representative low-level vision tasks such as layer separation and image filtering: it estimates edges and reconstructs images using only cascaded convolutional layers, so no handcrafted or application-specific image-processing components are required.
The approach uses a fully convolutional network trained end-to-end with losses that exploit both low-level and high-level image information, including a novel exclusion loss that enforces pixel-level separation of the transmission and reflection layers.
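The summary above does not spell out the exclusion loss, but its core idea can be sketched: penalize pixels where the transmission layer and the reflection layer both have strong gradients, since a real edge should belong to only one layer. The following is a minimal single-scale, pure-Python illustration of that idea; the `lam_t`/`lam_r` weights and the `tanh` normalization are illustrative assumptions, not the paper's exact formulation.

```python
import math

def grad_mag(img):
    """Forward-difference gradient magnitude of a 2D image (list of rows)."""
    h, w = len(img), len(img[0])
    g = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = img[y][x + 1] - img[y][x] if x + 1 < w else 0.0
            gy = img[y + 1][x] - img[y][x] if y + 1 < h else 0.0
            g[y][x] = math.sqrt(gx * gx + gy * gy)
    return g

def exclusion_loss(T, R, lam_t=1.0, lam_r=1.0):
    """Penalize pixels where BOTH layers have large gradients, so each
    edge is pushed into only one of the two layers."""
    gt, gr = grad_mag(T), grad_mag(R)
    total = 0.0
    for row_t, row_r in zip(gt, gr):
        for a, b in zip(row_t, row_r):
            total += (math.tanh(lam_t * a) * math.tanh(lam_r * b)) ** 2
    return math.sqrt(total)
```

As a sanity check, two layers sharing the same edge score a higher loss than two layers whose edges lie in different places, and a constant reflection layer scores zero.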
This work proposes an Iterative Boost Convolutional LSTM Network (IBCLN) that performs cascaded prediction for reflection removal, and creates a dataset of real-world images with reflections and ground-truth transmission layers to mitigate the shortage of training data.
This paper proposes the Concurrent Reflection Removal Network (CRRN), which integrates image appearance information and multi-scale gradient information with a human-perception-inspired loss function, and is trained on a new dataset of 3250 reflection images taken in diverse real-world scenes.
Experimental results collectively show that the method outperforms the state-of-the-art with aligned data, and that significant improvements are possible when using additional misaligned data.
This work argues that, to remove reflections well, a method should estimate the reflection and use it to estimate the background image, and proposes a cascaded deep neural network that estimates both the background image and the reflection.
This work explains the mathematical background, motivates why the presented setups can be transformed and solved very efficiently in the Fourier domain, and shows how to use these solutions in practice by providing the corresponding implementations.
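The summary leaves the setups abstract, but the characteristic trick can be illustrated on a 1-D toy problem (this is a generic sketch, not the paper's actual formulation): a regularized deconvolution energy $\|k * x - y\|^2 + \lambda\|x\|^2$ with circular convolution diagonalizes under the DFT, so it is solved in closed form per frequency as $X = \bar{K}Y / (|K|^2 + \lambda)$. A naive $O(n^2)$ DFT is used here to keep the snippet dependency-free; a real implementation would use an FFT.

```python
import cmath

def dft(a):
    """Naive forward DFT of a sequence (O(n^2), for illustration only)."""
    n = len(a)
    return [sum(a[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(A):
    """Naive inverse DFT."""
    n = len(A)
    return [sum(A[k] * cmath.exp(2j * cmath.pi * k * j / n) for k in range(n)) / n
            for j in range(n)]

def circ_conv(k, x):
    """Circular convolution of kernel k with signal x."""
    n = len(x)
    return [sum(k[m] * x[(j - m) % n] for m in range(len(k))) for j in range(n)]

def wiener_solve(y, k, lam=1e-3):
    """Minimize ||k*x - y||^2 + lam*||x||^2 in closed form: the circular
    convolution diagonalizes in the Fourier basis, so each frequency is
    solved independently as X = conj(K) * Y / (|K|^2 + lam)."""
    n = len(y)
    kp = list(k) + [0.0] * (n - len(k))  # zero-pad kernel to signal length
    K, Y = dft(kp), dft(y)
    X = [Kc.conjugate() * Yc / (abs(Kc) ** 2 + lam)
         for Kc, Yc in zip(K, Y)]
    return [z.real for z in idft(X)]
```

With a kernel whose spectrum has no zeros (e.g. `[0.75, 0.25]`) and a small `lam`, blurring a signal and running `wiener_solve` recovers it almost exactly; this per-frequency closed form is what makes Fourier-domain solvers so efficient.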
Recently, deep learning-based single-image reflection separation methods have been widely explored. To support the learning, large numbers of training image pairs (i.e., with and without reflections) have been synthesized in various ways, yet these syntheses stray from physical plausibility. In this paper, physically based rendering is used to faithfully synthesize the required training images, and a corresponding network structure and loss term are proposed. We utilize existing RGBD/RGB images to estimate meshes, then physically simulate the light transport between meshes, glass, and lens with path tracing to synthesize training data, which successfully reproduces the spatially variant, anisotropic visual effects of glass reflection. To guide the separation better, we additionally introduce a backtrack network (BT-net) that backtracks the reflections, removing the complicated ghosting, attenuation, blurring, and defocus effects of the glass and lens; this yields a priori information about the reflection before the distortion occurs. The proposed method, combining this additional a priori information with physically simulated training data, is validated on various real reflection images and shows visually pleasing results and numerical advantages over state-of-the-art techniques.
A multi-task end-to-end deep learning method with a semantic guidance component that solves reflection removal and semantic segmentation jointly, showing a significant performance gain when high-level object-oriented information is used.