These leaderboards are used to track progress in HDR Reconstruction
No benchmarks available.
Use these libraries to find HDR Reconstruction models and implementations
This paper addresses the problem of predicting information that has been lost in saturated image areas, in order to enable HDR reconstruction from a single exposure, and proposes a deep convolutional neural network (CNN) designed specifically around the challenges of predicting HDR values.
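A first step in this kind of single-exposure reconstruction is identifying which pixels are saturated and therefore carry no recoverable information. The sketch below is a hypothetical helper, not the paper's actual method; the threshold value and the any-channel criterion are assumptions.

```python
import numpy as np

def saturation_mask(ldr, threshold=0.95):
    """Flag pixels where any channel is at or near the sensor maximum.
    These are the saturated regions whose true HDR values are lost and
    must be predicted by the network. The threshold is an assumption;
    the paper's exact criterion may differ."""
    return (ldr.max(axis=-1) >= threshold).astype(np.float32)

# Toy 1x2 RGB image: one clipped pixel, one well-exposed pixel.
ldr = np.array([[[0.2, 0.3, 1.0], [0.1, 0.1, 0.1]]])
mask = saturation_mask(ldr)
```

A network would then predict HDR values only inside the masked regions, keeping the (linearized) input pixels elsewhere.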
One of the most challenging problems in reconstructing a high dynamic range (HDR) image from multiple low dynamic range (LDR) inputs is the ghosting artifacts caused by object motion across the inputs. When the object motion is slight, most existing methods can suppress ghosting well by aligning the LDR inputs with optical flow or by detecting anomalies among them. However, they often fail to produce satisfactory results in practice, since real object motion can be very large. In this study, we present a novel deep framework, termed NHDRRnet, which takes an alternative direction and removes ghosting artifacts by exploiting the non-local correlation in the inputs. In NHDRRnet, we first adopt a U-Net architecture to fuse all inputs and map the fusion result into a low-dimensional deep feature space. Then, we feed the resulting features into a novel global non-local module that reconstructs each pixel as a weighted average of all the other pixels, with weights determined by their correspondences. By doing this, NHDRRnet can adaptively select the useful information in the whole deep feature space (i.e., information not corrupted by large motions or adverse lighting conditions) to accurately reconstruct each pixel. In addition, we incorporate a triple-pass residual module to capture more powerful local features, which proves effective in further boosting performance. Extensive experiments on three benchmark datasets demonstrate the superiority of the proposed NHDRRnet in suppressing ghosting artifacts in HDR reconstruction, especially when objects have large motions.
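The core of the global non-local idea can be sketched in a few lines. This is a minimal embedded-dot-product variant, assuming softmax-normalized pairwise similarities as the correspondence weights; NHDRRnet's actual module uses learned projections and operates inside the network, so treat this as an illustration only.

```python
import numpy as np

def global_nonlocal(feat):
    """Minimal sketch of a global non-local operation: each pixel's feature
    is reconstructed as a softmax-weighted average of all pixels' features,
    with weights given by pairwise dot-product similarity."""
    h, w, c = feat.shape
    x = feat.reshape(h * w, c)                     # flatten spatial dims
    sim = x @ x.T                                  # pairwise correspondence scores
    sim -= sim.max(axis=1, keepdims=True)          # numerical stability
    weights = np.exp(sim)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over all pixels
    out = weights @ x                              # weighted average of all pixels
    return out.reshape(h, w, c)

feat = np.random.rand(4, 4, 8).astype(np.float32)
out = global_nonlocal(feat)
```

Because each output is a convex combination of input features, corrupted pixels can be down-weighted in favor of well-exposed ones anywhere in the image, not just in a local neighborhood.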
This work models the HDR-to-LDR image formation pipeline as dynamic range clipping, a non-linear mapping from a camera response function, and quantization, and proposes to learn three specialized CNNs to reverse these steps.
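The three-step forward model can be written down directly. In this sketch a simple gamma curve stands in for the true camera response function, and 8-bit quantization is assumed; the learned CNNs in the paper invert each of these stages in turn.

```python
import numpy as np

def hdr_to_ldr(hdr, gamma=2.2, bits=8):
    """Toy forward model of the HDR-to-LDR pipeline: clipping, a non-linear
    camera response (a gamma curve is assumed here), and quantization."""
    clipped = np.clip(hdr, 0.0, 1.0)                # 1. dynamic range clipping
    mapped = clipped ** (1.0 / gamma)               # 2. non-linear CRF
    levels = 2 ** bits - 1
    quantized = np.round(mapped * levels) / levels  # 3. quantization
    return quantized

hdr_vals = np.array([0.0, 0.2, 1.5, 3.0])  # linear radiance; values > 1 saturate
ldr_vals = hdr_to_ldr(hdr_vals)
```

Note that both 1.5 and 3.0 map to the same clipped output, which is exactly the information loss the dequantization/CRF-inversion/hallucination networks must undo.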
This work proposes a novel GAN-based model, HDR-GAN, for synthesizing HDR images from multi-exposed LDR images, and achieves state-of-the-art reconstruction performance over the prior HDR methods on diverse scenes.
This work introduces a coarse-to-fine deep learning framework for HDR video reconstruction, which performs more sophisticated alignment and temporal fusion in the feature space of the coarse HDR video to produce a better reconstruction, outperforming previous state-of-the-art methods.
This work proposes a novel learning-based approach using a spatially dynamic encoder-decoder network, HDRUNet, to learn an end-to-end mapping for single image HDR reconstruction with denoising and dequantization, which achieves state-of-the-art performance in quantitative comparisons and visual quality.
This paper reviews the first challenge on high-dynamic range (HDR) imaging that was part of the New Trends in Image Restoration and Enhancement (NTIRE) workshop, held in conjunction with CVPR 2021.
It is found that methods that do not even reconstruct HDR information can compete with state-of-the-art deep learning methods, and that SI-HDR reconstruction needs better evaluation protocols.
Extensive experiments show that the proposed approach LANet can reconstruct visually convincing HDR images and demonstrate its superiority over state-of-the-art approaches in terms of all metrics in inverse tone mapping.
This work compared six recent single image HDR reconstruction methods in a subjective image quality experiment on an HDR display and found that only two methods produced results that are, on average, more preferred than the unprocessed single exposure images.