Reference-based Super-Resolution (RefSR) aims to recover high-resolution images by leveraging external reference images with similar content to transfer rich textures.
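As an illustrative sketch (not taken from any of the papers below), the core operation shared by many RefSR methods is a search for the reference patch most similar to each low-resolution patch; the function names here are hypothetical:

```python
import numpy as np

def extract_patches(img, size):
    """Slide a size x size window over a 2-D array and stack the flattened patches."""
    h, w = img.shape
    patches = [
        img[i:i + size, j:j + size].ravel()
        for i in range(h - size + 1)
        for j in range(w - size + 1)
    ]
    return np.stack(patches)

def best_reference_patch(lr_patch, ref_img, size):
    """Return the reference patch most similar (by cosine similarity) to the LR patch."""
    cands = extract_patches(ref_img, size)
    q = lr_patch.ravel()
    # Cosine similarity between the query patch and every candidate reference patch.
    sims = cands @ q / (np.linalg.norm(cands, axis=1) * np.linalg.norm(q) + 1e-8)
    idx = int(np.argmax(sims))
    return cands[idx].reshape(size, size), float(sims[idx])
```

This brute-force search is exactly what several of the methods below accelerate or replace with learned, end-to-end matching.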
An end-to-end deep model is designed that enriches HR details by adaptively transferring texture from reference (Ref) images according to their textural similarity. The design facilitates multi-scale neural transfer, allowing the model to benefit more from semantically related Ref patches while gracefully degrading to SISR performance on the least relevant Ref inputs.
This work proposes a self-supervised domain adaptation strategy for real-world images that generalizes standard patch-based feature matching with spatial alignment operations, and further explores dual-camera super-resolution, a promising application of RefSR.
This paper presents a novel self-supervised learning approach for real-world image SR from observations at dual camera zooms (SelfDZSR). It takes the telephoto image, rather than an additional high-resolution image, as the supervision signal, and selects a center patch from it as the reference to super-resolve the corresponding short-focus image patch.
Using cross-scale warping, the CrossNet network performs spatial alignment at the pixel level in an end-to-end fashion, improving on existing schemes in both precision and efficiency.
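The pixel-level alignment described above boils down to warping the reference by a per-pixel flow field with differentiable bilinear sampling. A minimal numpy sketch of that warping step (not CrossNet's actual implementation, which operates on multi-scale features):

```python
import numpy as np

def bilinear_warp(img, flow):
    """Warp a 2-D image by a per-pixel flow field (pixel-level alignment).

    img:  (H, W) array
    flow: (H, W, 2) array of (dy, dx) sampling offsets
    """
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Sampling coordinates, clamped to the image border.
    sy = np.clip(ys + flow[..., 0], 0, h - 1)
    sx = np.clip(xs + flow[..., 1], 0, w - 1)
    y0, x0 = np.floor(sy).astype(int), np.floor(sx).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = sy - y0, sx - x0
    # Bilinear interpolation of the four neighbouring pixels.
    top = img[y0, x0] * (1 - wx) + img[y0, x1] * wx
    bot = img[y1, x0] * (1 - wx) + img[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

Because the interpolation weights are differentiable in the flow, the flow estimator can be trained end-to-end through this operation.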
In this paper, we propose a novel and efficient reference feature extraction module referred to as the Similarity Search and Extraction Network (SSEN) for reference-based super-resolution (RefSR) tasks. The proposed module extracts aligned relevant features from a reference image to increase the performance over single image super-resolution (SISR) methods. In contrast to conventional algorithms which utilize brute-force searches or optical flow estimations, the proposed algorithm is end-to-end trainable without any additional supervision or heavy computation, predicting the best match with a single network forward operation. Moreover, the proposed module is aware of not only the best matching position but also the relevancy of the best match. This makes our algorithm substantially robust when irrelevant reference images are given, overcoming the major cause of the performance degradation when using existing RefSR methods. Furthermore, our module can be utilized for self-similarity SR if no reference image is available. Experimental results demonstrate the superior performance of the proposed algorithm compared to previous works both quantitatively and qualitatively.
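The relevancy-aware matching described above can be sketched as softmax attention over candidate reference features: the peak attention weight serves as a confidence score, so an irrelevant reference yields a low score and can be down-weighted. This is an illustrative toy version, not SSEN's actual module:

```python
import numpy as np

def soft_match(query, keys, values, temperature=0.1):
    """Differentiable best-match retrieval: softmax attention over
    candidate reference features returns a blended value plus the
    peak attention weight as a relevancy score."""
    sims = keys @ query / (
        np.linalg.norm(keys, axis=1) * np.linalg.norm(query) + 1e-8)
    w = np.exp(sims / temperature)
    w /= w.sum()
    return w @ values, float(w.max())
```

With a low temperature the blend concentrates on the single best match, recovering hard nearest-neighbour search while remaining trainable with a single forward pass.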
The proposed C2-Matching significantly outperforms the state of the art by over 1dB on the standard CUFED5 benchmark, shows strong generalizability on the WR-SR dataset, and is robust to large-scale and rotation transformations.
An Accelerated Multi-scale Aggregation network (AMSA) for reference-based super-resolution is proposed, comprising a Coarse-to-Fine Embedded PatchMatch (CFE-PatchMatch) and a Multi-Scale Dynamic Aggregation (MSDA) module. Experimental results show that the proposed AMSA achieves superior performance over state-of-the-art approaches in both quantitative and qualitative evaluations.
A novel self-supervised learning approach for real-world RefSR from observations at dual and multiple camera zooms is proposed, including patch-based optical flow alignment, auxiliary-LR guided alignment, and a local overlapped sliced Wasserstein loss.
A deformable attention Transformer, DATSR, is proposed with multiple scales, each consisting of a texture feature encoder (TFE) module, a reference-based deformable attention (RDA) module, and a residual feature aggregation (RFA) module.
This paper proposes an end-to-end trainable deep network that performs optical flow estimation and frame reconstruction by combining inputs from both video feeds and provides significant improvement over existing video frame interpolation and RefSR techniques in terms of objective PSNR and SSIM metrics.
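For reference, the PSNR metric used in these comparisons (e.g. the "over 1dB" margin quoted above) is log-scaled mean squared error; a minimal implementation:

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB between a reference image
    and a reconstruction; higher is better."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Because the scale is logarithmic, a 1dB gap corresponds to roughly a 21% reduction in mean squared error.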