JPEG compression artifact reduction is the task of removing the blocking, ringing, and banding artifacts introduced by lossy JPEG compression, recovering an image closer to the uncompressed original. The summaries below highlight representative approaches.
It is shown that a randomly-initialized neural network can be used as a handcrafted prior with excellent results in standard inverse problems such as denoising, super-resolution, and inpainting.
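A minimal sketch of this idea (often called the deep image prior) applied to denoising, assuming PyTorch; the small convolutional net, its width, and the step count are placeholder assumptions standing in for the hourglass architecture and hyperparameters used in practice:

```python
import torch
import torch.nn as nn

def deep_image_prior_denoise(noisy, steps=1800, lr=0.01):
    """noisy: (1, C, H, W) tensor in [0, 1]."""
    c = noisy.shape[1]
    # Tiny conv net as a stand-in for the encoder-decoder used in the paper.
    net = nn.Sequential(
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, c, 3, padding=1), nn.Sigmoid(),
    )
    z = torch.randn(1, 32, *noisy.shape[-2:])  # fixed random input code
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    # The network fits the signal before the noise, so stopping early
    # (a fixed step budget here) is what makes the random net act as a prior.
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(z) - noisy) ** 2).mean()
        loss.backward()
        opt.step()
    return net(z).detach()
```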
A strong baseline model, SwinIR, is proposed for image restoration based on the Swin Transformer; it outperforms state-of-the-art methods on different tasks by up to 0.14∼0.45 dB, while the total number of parameters can be reduced by up to 67%.
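To illustrate the Swin-style window self-attention that SwinIR builds on, here is a hedged PyTorch sketch; it omits the shifted windows, relative position bias, and convolutional reconstruction stages of the full model, and all dimensions are assumptions:

```python
import torch
import torch.nn as nn

class WindowAttention(nn.Module):
    def __init__(self, dim=96, window=8, heads=6):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):  # x: (B, H, W, C); H, W divisible by window size
        B, H, W, C = x.shape
        w = self.window
        # Partition into non-overlapping w x w windows -> (B*nW, w*w, C).
        x = x.view(B, H // w, w, W // w, w, C).permute(0, 1, 3, 2, 4, 5)
        x = x.reshape(-1, w * w, C)
        x, _ = self.attn(x, x, x)  # self-attention restricted to each window
        # Merge the windows back into a (B, H, W, C) feature map.
        x = x.view(B, H // w, W // w, w, w, C).permute(0, 1, 3, 2, 4, 5)
        return x.reshape(B, H, W, C)
```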
The Attention Retractable Transformer (ART) is proposed for image restoration; it combines dense and sparse attention modules in the network, which greatly enhances the representation ability of the Transformer while providing retractable attention on the input image.
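The dense/sparse distinction can be made concrete with a small token-grouping sketch, assuming PyTorch; these grouping functions are illustrative stand-ins, not ART's actual implementation. Dense attention groups adjacent tokens (a tight receptive field), while sparse attention groups tokens sampled at a fixed stride, so each group spans the whole feature map:

```python
import torch

def dense_groups(x, w):   # x: (B, H, W, C); H, W divisible by w
    B, H, W, C = x.shape
    x = x.view(B, H // w, w, W // w, w, C).permute(0, 1, 3, 2, 4, 5)
    return x.reshape(-1, w * w, C)  # neighbouring tokens attend to each other

def sparse_groups(x, s):  # s: sampling interval; H, W divisible by s
    B, H, W, C = x.shape
    x = x.view(B, H // s, s, W // s, s, C).permute(0, 2, 4, 1, 3, 5)
    # Each group holds tokens at stride s, covering the entire image.
    return x.reshape(-1, (H // s) * (W // s), C)
```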
This paper proposes the intra-inter Transformer (iiTransformer), which explicitly models long-range dependencies at both the pixel and patch levels, since there are benefits to considering both local and non-local feature correlations, and it provides a boundary-artifact-free solution that supports images of arbitrary size.
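A rough PyTorch sketch of attention at the two levels; the mean-pooled patch descriptor and the additive fusion below are assumptions made for brevity, not the paper's exact design:

```python
import torch
import torch.nn as nn

class IntraInterBlock(nn.Module):
    def __init__(self, dim=64, patch=8, heads=4):
        super().__init__()
        self.p = patch
        self.intra = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.inter = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):  # x: (B, H, W, C); H, W divisible by patch size
        B, H, W, C = x.shape
        p = self.p
        t = x.view(B, H // p, p, W // p, p, C).permute(0, 1, 3, 2, 4, 5)
        t = t.reshape(B * (H // p) * (W // p), p * p, C)
        t, _ = self.intra(t, t, t)            # pixel-level: within each patch
        g = t.mean(dim=1).view(B, -1, C)      # one descriptor token per patch
        g, _ = self.inter(g, g, g)            # patch-level: across all patches
        t = t + g.reshape(-1, 1, C)           # broadcast patch context back
        t = t.view(B, H // p, W // p, p, p, C).permute(0, 1, 3, 2, 4, 5)
        return t.reshape(B, H, W, C)
```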
Experimental results show that the CUR transformer significantly outperforms state-of-the-art methods on four low-level vision tasks: real and synthetic image denoising, JPEG compression artifact reduction, and low-light image enhancement.