Multi-Exposure Image Fusion
This paper presents a novel automatic exposure correction method that robustly produces high-quality results for images captured under various exposure conditions, and demonstrates the effectiveness of the proposed approach and its superiority over state-of-the-art methods and popular automatic exposure correction tools.
Three deep convolutional sparse coding networks for three kinds of image fusion tasks (i.e., infrared and visible image fusion, multi-exposure image fusion and multi-modal image fusion) are presented.
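Convolutional sparse coding represents an image as a sum of learned filters convolved with sparse feature maps, so fusion can operate on the feature maps instead of pixels. The sketch below is a rough illustration only: it infers the maps with plain ISTA and fuses two images by keeping the more active code at each position. The pre-learned filter bank `filters`, the step size, and the max-magnitude fusion rule are assumptions, not the paper's actual design.

```python
import torch
import torch.nn.functional as F

def soft_threshold(x, t):
    # proximal operator of the L1 norm
    return torch.sign(x) * torch.clamp(x.abs() - t, min=0.0)

def csc_codes(img, filters, lam=0.05, step=0.1, n_iter=50):
    # img: (1,1,H,W); filters: (K,1,k,k) with odd k, assumed pre-learned
    pad = filters.shape[-1] // 2
    z = torch.zeros(1, filters.shape[0], img.shape[2], img.shape[3])
    for _ in range(n_iter):  # ISTA: gradient step on the data term, then shrink
        resid = img - F.conv_transpose2d(z, filters, padding=pad)  # synthesis residual
        z = soft_threshold(z + step * F.conv2d(resid, filters, padding=pad), step * lam)
    return z

def fuse_csc(img_a, img_b, filters):
    za, zb = csc_codes(img_a, filters), csc_codes(img_b, filters)
    z = torch.where(za.abs() >= zb.abs(), za, zb)  # keep the more active code
    return F.conv_transpose2d(z, filters, padding=filters.shape[-1] // 2)
```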
This paper proposes TransMEF, a transformer-based multi-exposure image fusion framework using self-supervised multi-task learning, built on an encoder-decoder network that can be trained on large natural-image datasets and does not require ground-truth fusion images.
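The toy sketch below only illustrates the general encoder-decoder fusion pattern behind such frameworks: train the network on self-reconstruction, then at test time encode both exposures, merge the features, and decode. The tiny CNN and the additive fusion rule are placeholders, not TransMEF's actual CNN-Transformer architecture or fusion strategy.

```python
import torch
import torch.nn as nn

class EncDec(nn.Module):
    # placeholder encoder-decoder; TransMEF itself uses CNN + Transformer branches
    def __init__(self, ch=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x):  # training: plain self-reconstruction, no fusion labels
        return self.dec(self.enc(x))

@torch.no_grad()
def fuse(model, under, over):
    # inference: encode each exposure, merge features, decode the merged code
    return model.dec(model.enc(under) + model.enc(over))  # additive rule (assumption)
```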
This study proposes a gamma correction module specifically designed to fully leverage latent information embedded within source images, and a novel color enhancement algorithm is presented to augment image saturation while preserving intricate details.
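The paper's actual module and parameter settings are not reproduced here; as a generic illustration, gamma correction and an HSV saturation boost can be sketched as follows (the `gamma` and `factor` values are arbitrary defaults):

```python
import numpy as np
import cv2

def gamma_correct(img, gamma=0.6):
    # img: float32 RGB in [0, 1]; gamma < 1 brightens shadows, gamma > 1 darkens
    return np.clip(img, 0.0, 1.0) ** gamma

def boost_saturation(img, factor=1.2):
    # scale the S channel in HSV space, leaving hue and value untouched
    hsv = cv2.cvtColor(img.astype(np.float32), cv2.COLOR_RGB2HSV)
    hsv[..., 1] = np.clip(hsv[..., 1] * factor, 0.0, 1.0)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)
```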
Experimental results prove the superiority of the proposed technique over existing state-of-the-art methods in terms of both subjective and objective evaluation.
A novel real-time visualization tool, named FuseVis, is presented, with which the end-user can compute per-pixel saliency maps that examine the influence of the input image pixels on each pixel of the fused image.
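In gradient terms, such a per-pixel saliency map is one row of the Jacobian of the fused image with respect to the inputs, which autograd can evaluate directly. The sketch below assumes some differentiable two-input fusion network `fusion_net` and is not FuseVis's actual implementation:

```python
import torch

def pixel_saliency(fusion_net, img_a, img_b, y, x):
    # |d fused[y, x] / d input|: how strongly each input pixel drives one fused pixel
    img_a = img_a.clone().requires_grad_(True)
    img_b = img_b.clone().requires_grad_(True)
    fused = fusion_net(img_a, img_b)  # expected shape (1, 1, H, W)
    fused[0, 0, y, x].backward()
    return img_a.grad.abs(), img_b.grad.abs()
```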
This study proposes an efficient multi-exposure fusion (MEF) approach with a simple yet effective weight extraction method relying on principal component analysis, adaptive well-exposedness and saliency maps.
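As a stripped-down illustration of weight-map MEF (omitting the paper's PCA step and the multiscale blending that real methods need to avoid seams), one can combine a well-exposedness cue with a simple Laplacian saliency cue:

```python
import numpy as np
from scipy.ndimage import laplace

def well_exposedness(img, sigma=0.2):
    # Gaussian curve centered at mid-gray; pixels near 0 or 1 get low weight
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def fuse_stack(stack):
    # stack: np.ndarray of shape (N, H, W), grayscale exposures scaled to [0, 1]
    weights = np.stack([well_exposedness(im) * (np.abs(laplace(im)) + 1e-3)
                        for im in stack])          # exposedness x saliency cue
    weights /= weights.sum(axis=0, keepdims=True)  # per-pixel normalization
    return (weights * stack).sum(axis=0)
```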
A novel cross-attention-guided image fusion network is proposed, which is a unified and unsupervised framework for multi-modal image fusion, multi-exposure image fusion, and multi-focus image fusion.
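A single-head cross-attention block of the kind such frameworks build on can be sketched as follows; this is a generic block, not the paper's architecture, where queries from one image's patch features attend to the other image's:

```python
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    # queries from image A attend to keys/values from image B
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.scale = dim ** -0.5

    def forward(self, feat_a, feat_b):
        # feat_a, feat_b: (B, N, C) flattened patch features
        q = self.q(feat_a)
        k, v = self.kv(feat_b).chunk(2, dim=-1)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return feat_a + attn @ v  # residual connection around the attended values
```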
A perceptual multi-exposure fusion method that not only preserves fine shadow/highlight details but also has lower complexity than detail-enhanced methods, and can bring further gains to current image enhancement techniques by retaining fine detail in brightly lit regions.
This paper proposes a search-based paradigm, involving self-alignment and detail repletion modules for robust multi-exposure image fusion, and introduces neural architecture search to discover compact and efficient networks, investigating effective feature representation for fusion.