Bokeh Effect Rendering
An end-to-end deep learning framework is proposed to generate a high-quality bokeh effect from images, aided by a monocular depth estimation network.
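The core idea above — use an estimated depth map to decide how strongly to blur each pixel — can be illustrated with a minimal sketch. This is not the paper's network; it is a toy depth-guided blend (box blur standing in for lens blur, depth assumed given and normalized to [0, 1]):

```python
import numpy as np

def box_blur(img, k):
    """Toy stand-in for a lens blur: k x k box filter with edge padding."""
    pad = k // 2
    p = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def render_bokeh(image, depth, focus_depth, k=7):
    """Blend the sharp image with a blurred copy, weighted per pixel by
    distance from the focal plane (depth and focus_depth in [0, 1])."""
    blurred = box_blur(image, k)
    w = np.clip(np.abs(depth - focus_depth), 0.0, 1.0)[..., None]
    return (1.0 - w) * image + w * blurred
```

Pixels at the focal depth keep the sharp image exactly; pixels far from it receive the fully blurred version, which is the qualitative behavior a learned renderer must reproduce.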
A novel generator called Glass-Net is proposed, which generates bokeh images without relying on complex hardware; a GAN-based method and a perceptual loss are combined to render a realistic bokeh effect during model fine-tuning.
An end-to-end Deep Multi-Scale Hierarchical Network (DMSHN) model is used for direct bokeh effect rendering of images captured by a monocular camera, with around 6x less runtime than the current state-of-the-art model when processing HD-quality images.
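The multi-scale hierarchical idea behind such models can be sketched without the network details (which the summary does not give): render at a coarse scale first, then refine progressively finer scales. The `refine` callable below is a hypothetical per-scale renderer, not DMSHN's actual module:

```python
import numpy as np

def build_pyramid(img, levels=3):
    """Image pyramid: downsample by 2 at each level (coarsest last)."""
    pyramid = [img]
    for _ in range(levels - 1):
        pyramid.append(pyramid[-1][::2, ::2])
    return pyramid

def upsample2(img):
    """Nearest-neighbor 2x upsampling."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def coarse_to_fine(pyramid, refine):
    """Run a per-scale renderer coarse-to-fine, feeding each level the
    upsampled result of the previous (coarser) level as a prior."""
    out = refine(pyramid[-1], pyramid[-1])
    for level in reversed(pyramid[:-1]):
        prior = upsample2(out)[:level.shape[0], :level.shape[1]]
        out = refine(level, prior)
    return out
```

The appeal of this structure for runtime is that most of the heavy processing happens at the small coarse scales, with only light refinement at full resolution.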
This work enhances the diffusion model in several aspects, including network architecture, noise level, denoising steps, training image size, and optimizer/scheduler, and shows that tuning these hyperparameters allows the model to achieve better performance on both distortion and perceptual scores.
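The tuning axes named above can be collected into a single configuration object. The values and key names here are purely illustrative placeholders, not the paper's settings:

```python
# Hypothetical sweep configuration over the axes the summary names.
diffusion_config = {
    "network": "unet-variant",   # network architecture choice
    "noise_level": 0.5,          # initial noise strength
    "denoising_steps": 50,       # number of reverse-diffusion steps
    "train_image_size": 512,     # training crop resolution
    "optimizer": "adamw",
    "lr_scheduler": "cosine",
}
```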
Many advancements of mobile cameras aim to reach the visual quality of professional DSLR cameras. Great progress has been shown over the last years in optimizing the sharp regions of an image and in creating virtual portrait effects with artificially blurred backgrounds. Bokeh is the aesthetic quality of the blur in out-of-focus areas of an image. This is a popular technique among professional photographers, and for this reason, a new goal in computational photography is to optimize the Bokeh effect itself. This paper introduces EBokehNet, an efficient state-of-the-art solution for Bokeh effect transformation and rendering. Our method can render Bokeh from an all-in-focus image, or transform the Bokeh of one lens to the effect of another lens without harming the sharp foreground regions in the image. Moreover, we can control the shape and strength of the effect by feeding the lens properties, i.e., type (Sony or Canon) and aperture, into the neural network as an additional input. Our method is a winning solution at the NTIRE 2023 Lens-to-Lens Bokeh Effect Transformation Challenge, and state-of-the-art on the EBB benchmark.
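Feeding lens properties into the network as an extra input implies some encoding of type and aperture. The abstract does not specify one, so the scheme below is an assumption: a one-hot lens type plus an inverse f-number (wider aperture, i.e., smaller f-number, gives a larger value and a stronger bokeh):

```python
import numpy as np

LENS_TYPES = ("Sony", "Canon")  # the two lens types the paper mentions

def lens_condition(lens_type, f_number):
    """Hypothetical conditioning vector: one-hot lens type concatenated
    with the inverse f-number, to be appended to the network's input."""
    onehot = [1.0 if lens_type == t else 0.0 for t in LENS_TYPES]
    return np.array(onehot + [1.0 / f_number])
```

Such a vector would typically be broadcast spatially and concatenated to the image channels, letting one network serve multiple lens/aperture targets.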
We present the new Bokeh Effect Transformation Dataset (BETD), and review the proposed solutions for this novel task at the NTIRE 2023 Bokeh Effect Transformation Challenge. Recent advancements of mobile photography aim to reach the visual quality of full-frame cameras. Now, a goal in computational photography is to optimize the Bokeh effect itself, which is the aesthetic quality of the blur in out-of-focus areas of an image. Photographers create this aesthetic effect by benefiting from the lens's optical properties. The aim of this work is to design a neural network capable of converting the Bokeh effect of one lens to the effect of another lens without harming the sharp foreground regions in the image. For a given input image, knowing the target lens type, we render or transform the Bokeh effect according to the lens properties. We build the BETD using two full-frame Sony cameras and diverse lens setups. To the best of our knowledge, this is the first attempt to solve this novel task, and we provide the first BETD dataset and benchmark for it. The challenge had 99 registered participants. The submitted methods gauge the state of the art in Bokeh effect rendering and transformation.
This paper proposes an effective controllable bokeh rendering method, contributes a Variable Aperture Bokeh Dataset (VABD), and demonstrates that the customized focal plane together with the aperture prompt can bootstrap the model to simulate realistic bokeh effects.