3260 papers • 126 benchmarks • 313 datasets
When a camera is pointed at a strong light source, the resulting photograph may contain lens flare artifacts. Flares appear in a wide variety of patterns (halos, streaks, color bleeding, haze, etc.) and this diversity in appearance makes flare removal challenging.
(Image credit: Papersgraph)
These leaderboards are used to track progress in Flare Removal
Use these libraries to find Flare Removal models and implementations
No subtasks available.
Experiments show that the data synthesis approach is critical for accurate flare removal, and that models trained with the technique generalize well to real lens flares across different scenes, lighting conditions, and cameras.
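The core of such synthesis approaches is compositing a flare-only image onto a clean scene so that paired training data can be generated at scale. Below is a minimal sketch of additive compositing in linear space; the gamma value, clipping, and function name are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def synthesize_flare_pair(scene, flare, gamma=2.2):
    """Composite a flare image onto a clean scene to form a training pair.

    scene, flare: float arrays in [0, 1], gamma-encoded.
    Returns (flare_corrupted_input, clean_target).
    Sketch only: real pipelines also model noise, exposure, and color.
    """
    # Undo gamma so that light adds linearly, as it does on the sensor.
    scene_lin = np.power(scene, gamma)
    flare_lin = np.power(flare, gamma)
    combined = np.clip(scene_lin + flare_lin, 0.0, 1.0)
    # Re-encode for use as network input.
    corrupted = np.power(combined, 1.0 / gamma)
    return corrupted, scene

# Toy example: a flat gray scene with a bright flare pixel at (1, 1).
scene = np.full((4, 4, 3), 0.5)
flare = np.zeros((4, 4, 3))
flare[1, 1] = 0.9
corrupted, target = synthesize_flare_pair(scene, flare)
```

Pixels without flare are unchanged, while flare pixels brighten toward saturation; the clean scene itself serves as the supervision target.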
Casually taken images often suffer from flare artifacts due to unintended reflections and scattering of light inside the camera. However, as flares may appear in a variety of shapes, positions, and colors, detecting and removing them entirely from an image is very challenging. Existing methods rely on predefined intensity and geometry priors of flares, and may fail to distinguish between light sources and flare artifacts. We observe that the conditions of the light source in the image play an important role in the resulting flares. In this paper, we present a deep framework with light source aware guidance for single-image flare removal (SIFR). In particular, we first detect the light source regions and the flare regions separately, and then remove the flare artifacts based on the light source aware guidance. By learning the underlying relationships between the two types of regions, our approach can remove different kinds of flares from the image. In addition, instead of using paired training data, which are difficult to collect, we propose the first unpaired flare removal dataset and new cycle-consistency constraints to obtain more diverse examples and avoid manual annotations. Extensive experiments demonstrate that our method outperforms the baselines qualitatively and quantitatively. We also show that our model can be applied to flare effect manipulation (e.g., adding or changing image flares).
This paper proposes a robust computational method to automatically detect and remove flare spot artifacts, defines a new confidence measure to select flare spots among the candidates, and presents a method to accurately determine the flare region.
Flare7K is introduced, the first nighttime flare removal dataset, which is generated based on the observation and statistics of real-world nighttime lens flares, and offers 5,000 scattering and 2,000 reflective flare images, consisting of 25 types of scattering flares and 10 types of reflective flares.
This paper proposes a solution to improve the performance of lens flare removal by revisiting the ISP, remodeling the principle of automatic exposure in the synthesis pipeline, and designing a more reliable light source recovery strategy.
This work proposes an optical center symmetry prior, which suggests that the reflective flare and light source are always symmetrical around the lens's optical center, and creates the first reflective flare removal dataset called BracketFlare, which contains diverse and realistic reflective flare patterns.
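The optical center symmetry prior is simple to state geometrically: the reflective flare sits at the point symmetric to the light source about the optical center. A minimal sketch of that point reflection (function name and coordinate convention are assumptions):

```python
def reflect_about_center(point, optical_center):
    """Predict the reflective-flare location as the point symmetric to
    the light source about the lens's optical center.

    point, optical_center: (x, y) pixel coordinates.
    Sketch of the geometric prior only, not the full removal pipeline.
    """
    px, py = point
    cx, cy = optical_center
    # Point reflection: center + (center - point) on each axis.
    return (2 * cx - px, 2 * cy - py)

# Light source at (100, 40) in a 640x480 image whose optical center
# is assumed to be the image center (320, 240):
flare_xy = reflect_about_center((100, 40), (320, 240))  # → (540, 440)
```

In practice the optical center must be calibrated or assumed to coincide with the image center, which is why the prior pairs naturally with a dataset like BracketFlare that controls capture geometry.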
Flare7K++ is introduced, the first comprehensive nighttime flare removal dataset, consisting of 962 real-captured flare images (Flare-R) and 7,000 synthetic flares (Flare7K), and a new end-to-end pipeline to preserve the light source while removing lens flares.
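A common way to preserve the light source while removing flares is to paste the original light-source region back into the network's flare-free prediction using a mask. The sketch below illustrates that mask-based blending; the helper name and the binary mask are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def blend_light_source(input_img, flare_free, src_mask):
    """Keep the light source from the input, take everything else from
    the flare-free prediction.

    input_img, flare_free: float arrays of shape (H, W, 3).
    src_mask: (H, W) mask, 1 inside the light-source region, 0 elsewhere.
    """
    m = src_mask[..., None].astype(float)
    # Inside the mask use the original pixels; outside, the prediction.
    return input_img * m + flare_free * (1.0 - m)

# Toy example: bright input, all-zero prediction, source at pixel (0, 0).
img = np.ones((2, 2, 3))
pred = np.zeros((2, 2, 3))
mask = np.array([[1, 0], [0, 0]])
out = blend_light_source(img, pred, mask)
```

Real pipelines typically derive the mask from a luminance threshold and feather its boundary so the blend is seamless.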
This survey provides a comprehensive overview of the multifaceted domain of lens flare, encompassing its underlying physics, influencing factors, types, and characteristics, and extensively covers the wide range of methods proposed for flare removal.
Image flare is a common problem that occurs when a camera lens is pointed at a strong light source. It can manifest as ghosting, blooming, or other artifacts that degrade image quality. We propose a novel deep learning approach for flare removal that uses a combination of depth estimation and image restoration. We use a Dense Vision Transformer to estimate the depth of the scene. This depth map is then concatenated to the input image, which is fed into a Uformer, a general U-shaped transformer for image restoration. Our method achieves state-of-the-art performance on the Flare7K++ test dataset, demonstrating its effectiveness in removing flare artifacts from images. Our approach also generalizes robustly to real-world images with various types of flare. We believe that our work opens up new possibilities for using depth information for image restoration. The code is available on GitHub.
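The input construction described above amounts to stacking a single-channel depth map onto the RGB image along the channel axis before the restoration network. A minimal sketch with NumPy (the normalization and function name are assumptions; the actual models are DPT for depth and Uformer for restoration):

```python
import numpy as np

def build_network_input(image, depth):
    """Concatenate a depth map to an RGB image as a fourth channel.

    image: (H, W, 3) float array; depth: (H, W) float array.
    Returns an (H, W, 4) array for the restoration network.
    Sketch only; the real pipeline works on batched tensors.
    """
    assert image.shape[:2] == depth.shape, "depth must match image size"
    # Normalize depth to [0, 1] so its scale matches the image channels.
    depth_norm = (depth - depth.min()) / (np.ptp(depth) + 1e-8)
    return np.concatenate([image, depth_norm[..., None]], axis=-1)

rgb = np.random.rand(8, 8, 3)
d = np.random.rand(8, 8)
x = build_network_input(rgb, d)  # shape (8, 8, 4)
```

The restoration network's first convolution or embedding layer then simply accepts 4 input channels instead of 3.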