3260 papers • 126 benchmarks • 313 datasets
This work presents a technique to “unprocess” images by inverting each step of an image processing pipeline, thereby allowing us to synthesize realistic raw sensor measurements from commonly available Internet photos.
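The core idea of unprocessing is to invert each stage of a camera pipeline in reverse order. A minimal sketch, inverting only two of the stages (the sRGB gamma curve and assumed per-channel white-balance gains); the paper inverts several more steps, and the `gains` values here are illustrative placeholders, not the paper's calibrated parameters:

```python
import numpy as np

def unprocess(srgb, gains=(2.0, 1.0, 1.7)):
    """Roughly invert a simple ISP: map sRGB back to linear intensities,
    then undo assumed white-balance gains to approximate raw sensor values.
    Illustrative sketch only, not the paper's full pipeline inversion."""
    srgb = np.clip(srgb, 0.0, 1.0)
    # Invert the piecewise sRGB gamma curve (IEC 61966-2-1).
    linear = np.where(srgb <= 0.04045,
                      srgb / 12.92,
                      ((srgb + 0.055) / 1.055) ** 2.4)
    # Undo the per-channel white-balance gains (hypothetical values).
    raw = linear / np.asarray(gains)
    return np.clip(raw, 0.0, 1.0)
```

Running this on an image moves mid-tones down (gamma inversion darkens) and re-introduces the color cast that white balancing had removed, which is exactly what a raw measurement looks like relative to the finished photo.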
Qualitative and quantitative comparisons demonstrate that the proposed method can outperform existing LDR-to-HDR works, albeit by a marginal difference, and can reconstruct plausible HDR images without visual artefacts.
This work proposes an end-to-end differentiable architecture that jointly performs demosaicking, denoising, deblurring, tone-mapping, and classification and shows that state-of-the-art ISPs discard information that is essential in corner cases, where conventional imaging and perception stacks fail.
The proposed method is the first framework to create high dynamic range images from an estimated multi-exposure stack using a conditional generative adversarial network structure, and it produces results significantly closer to the ground truth than other state-of-the-art algorithms.
This paper presents a method for generating HDR content from LDR content based on deep Convolutional Neural Networks (CNNs) termed ExpandNet, which accepts LDR images as input and generates images with an expanded range in an end-to-end fashion.
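For contrast, the classical non-learned baseline that methods like ExpandNet improve on simply re-linearizes and rescales the LDR signal. A minimal sketch of that naive inverse tone mapping (the `gamma` and `peak` values are common assumptions, not taken from the paper):

```python
import numpy as np

def naive_inverse_tone_map(ldr, gamma=2.2, peak=1000.0):
    """Naive inverse tone mapping: undo an assumed display gamma and
    scale to an assumed peak luminance in nits. Unlike a learned model,
    this cannot recover detail lost in over- or under-exposed regions."""
    ldr = np.clip(ldr, 0.0, 1.0)
    linear = ldr ** gamma   # undo display gamma
    return linear * peak    # rescale to an HDR luminance range
```

The limitation is visible immediately: clipped highlights (value 1.0) all map to the same peak luminance, whereas a CNN can hallucinate plausible structure there.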
This paper proposes a joint super-resolution (SR) and inverse tone-mapping (ITM) framework, called Deep SR-ITM, which learns the direct mapping from LR SDR video to their HR HDR version, and shows good subjective quality with increased contrast and details, outperforming the previous joint SR and ITM method.
This work proposes a new Side Window Filtering (SWF) technique which aligns the window's side or corner with the pixel being processed and demonstrates that implementing the SWF principle can effectively prevent artifacts such as color leakage associated with the conventional implementation.
This paper takes a divide-and-conquer approach in designing a novel GAN-based joint SR-ITM network, called JSI-GAN, which is composed of three task-specific subnets: an image reconstruction subnet, a detail restoration subnet and a local contrast enhancement (LCE) subnet.
Experimental results on two public datasets show that the novel multiscale bandpass convolutional neural network (MBCNN) outperforms state-of-the-art methods by a large margin.
Image and video enhancement such as color constancy, low-light enhancement, and tone mapping on smartphones is challenging, because high-quality images must be achieved efficiently within a limited resource budget. Unlike prior works that used either very deep CNNs or large Transformer models, we propose a structure-aware lightweight Transformer, termed STAR, for real-time image enhancement. STAR is formulated to capture long-range dependencies between image patches, which naturally and implicitly captures the structural relationships of different regions in an image. STAR is a general architecture that can be easily adapted to different image enhancement tasks. Extensive experiments show that STAR can effectively boost the quality and efficiency of many tasks such as illumination enhancement, auto white balance, and photo retouching, which are indispensable components of image processing on smartphones. For example, STAR reduces model complexity and improves image quality compared to the recent state-of-the-art [19] on the MIT-Adobe FiveK dataset [7] (i.e., a 1.8 dB PSNR improvement with 25% of the parameters and 13% of the float operations).
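The mechanism that lets a Transformer like STAR relate distant image regions is self-attention over patch tokens. A minimal single-head sketch with identity query/key/value projections (STAR's actual architecture adds learned projections, structure-aware token design, and multiple layers; everything below is illustrative):

```python
import numpy as np

def patch_self_attention(img, patch=8):
    """Single-head self-attention over non-overlapping image patches.
    Each patch becomes one token; every output patch is a softmax-
    weighted mixture of all patches, i.e. a long-range dependency.
    Identity projections; sketch only, not STAR's architecture."""
    H, W, C = img.shape
    ph, pw = H // patch, W // patch
    # Crop to a multiple of the patch size and flatten patches to tokens.
    tokens = (img[:ph * patch, :pw * patch]
              .reshape(ph, patch, pw, patch, C)
              .transpose(0, 2, 1, 3, 4)
              .reshape(ph * pw, patch * patch * C))
    d = tokens.shape[1]
    scores = tokens @ tokens.T / np.sqrt(d)        # scaled query.key
    scores -= scores.max(axis=1, keepdims=True)    # numerically stable
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)        # softmax rows
    out = attn @ tokens                            # mix all patches
    # Fold tokens back into an image.
    out = out.reshape(ph, pw, patch, patch, C).transpose(0, 2, 1, 3, 4)
    return out.reshape(ph * patch, pw * patch, C)
```

Because every row of the attention matrix spans all patches, a bright region on one side of the frame can influence the enhancement of the opposite side in a single step, which is the long-range property the abstract refers to.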