When multiple images of the same view are captured from slightly different positions, and perhaps also at different times, they collectively contain more information than any single image on its own. Multi-frame super-resolution (MFSR) fuses these low-resolution inputs into a composite high-resolution image that can reveal detail that cannot be recovered from any one low-resolution frame alone. (Credit: HighRes-net)
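To make the fusion idea concrete, the following is a minimal shift-and-add sketch, a classical MFSR baseline rather than any specific paper's method; the frame list, the integer sub-pixel phases, and the scale factor are hypothetical inputs, and the phases are assumed to be known exactly.

```python
# Minimal shift-and-add fusion (illustrative baseline, assumptions as above).
import numpy as np

def shift_and_add(frames, phases, scale=2):
    """frames: list of (H, W) low-res arrays of the same scene;
    phases: per-frame (dy, dx) integer offsets on the HR grid, 0 <= d < scale."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, phases):
        # Each LR frame fills the HR-grid positions matching its phase.
        acc[dy::scale, dx::scale] += frame
        cnt[dy::scale, dx::scale] += 1
    return acc / np.maximum(cnt, 1)  # unobserved HR pixels stay zero
```

With four frames at phases (0,0), (0,1), (1,0), (1,1) and scale=2, every high-resolution pixel is observed exactly once, which is the idealized case the description above alludes to.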
This report demonstrates that, with the same parameters and computational budget, models with wider features before the ReLU activation have significantly better performance for single image super-resolution (SISR), and introduces linear low-rank convolutions into SR networks to achieve even better accuracy-efficiency tradeoffs.
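A hedged sketch of the wide-activation idea in PyTorch: features are expanded before the ReLU and projected back afterwards, so the nonlinearity operates on a wider representation at a similar parameter budget. The channel count and expansion factor below are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class WideActivationBlock(nn.Module):
    """Residual block whose ReLU sees an expanded feature space."""
    def __init__(self, channels=32, expansion=4):
        super().__init__()
        wide = channels * expansion
        self.body = nn.Sequential(
            nn.Conv2d(channels, wide, 3, padding=1),   # widen before ReLU
            nn.ReLU(inplace=True),
            nn.Conv2d(wide, channels, 3, padding=1),   # project back down
        )

    def forward(self, x):
        return x + self.body(x)
```

Roughly, the low-rank variant mentioned in the summary factorizes the post-activation convolution into a cheap 1x1 projection followed by a smaller 3x3 convolution; that refinement is omitted here.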
A multi-frame super-resolution algorithm that creates a complete RGB image directly from a burst of CFA (color filter array) raw images; it is the basis of the Super-Res Zoom feature, as well as the default merge method in Night Sight mode, on Google's flagship phone.
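The sketch below illustrates only the core bookkeeping behind merging raw frames straight to RGB: because aligned frames sample shifted phases of the Bayer mosaic, different frames supply different color channels at each output pixel, so demosaicing falls out of the merge. The actual algorithm's anisotropic kernel regression and robustness model are omitted, and `merge_bayer_burst`, its integer `offsets`, and `weights` are hypothetical.

```python
import numpy as np

def merge_bayer_burst(frames, offsets, weights):
    """frames: (N, H, W) raw RGGB mosaics, pre-aligned up to the given
    integer offsets; weights: (N, H, W) hypothetical merge confidences."""
    n, h, w = frames.shape
    # Channel index (0=R, 1=G, 2=B) at each pixel of an RGGB mosaic.
    base = np.zeros((h, w), dtype=int)
    base[0::2, 1::2] = 1
    base[1::2, 0::2] = 1
    base[1::2, 1::2] = 2
    rgb = np.zeros((h, w, 3))
    wsum = np.full((h, w, 3), 1e-8)
    for k in range(n):
        dy, dx = offsets[k]
        # Frame k samples a shifted phase of the mosaic, so it contributes
        # a different channel than the reference at many output pixels.
        chan = np.roll(np.roll(base, dy, axis=0), dx, axis=1)
        for c in range(3):
            m = chan == c
            rgb[..., c][m] += weights[k][m] * frames[k][m]
            wsum[..., c][m] += weights[k][m]
    return rgb / wsum
```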
A novel architecture for the burst super-resolution task, which takes multiple noisy RAW images as input, and generates a denoised, super-resolved RGB image as output by explicitly aligning deep embeddings of the input frames using pixel-wise optical flow.
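A sketch of the alignment step this summary describes: warping each frame's deep embedding toward the reference with pixel-wise optical flow via backward warping. The flow estimator and the rest of the architecture are omitted, and `warp_features` and its arguments are illustrative names.

```python
import torch
import torch.nn.functional as F

def warp_features(feat, flow):
    """feat: (B, C, H, W) embedding of one burst frame;
    flow: (B, 2, H, W) flow from reference coordinates into this frame."""
    b, _, h, w = feat.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=feat.device, dtype=feat.dtype),
        torch.arange(w, device=feat.device, dtype=feat.dtype),
        indexing="ij",
    )
    # Sample each reference pixel at its flow-displaced location.
    x = xs.unsqueeze(0) + flow[:, 0]
    y = ys.unsqueeze(0) + flow[:, 1]
    # Normalize coordinates to [-1, 1] as grid_sample expects.
    grid = torch.stack((2 * x / (w - 1) - 1, 2 * y / (h - 1) - 1), dim=-1)
    return F.grid_sample(feat, grid, align_corners=True)
```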
HighRes-net is presented as the first deep learning approach to MFSR that learns its sub-tasks in an end-to-end fashion; by learning deep representations of multiple views, it can super-resolve low-resolution signals and enhance Earth observation data at scale.
A novel residual attention model (RAMS) is proposed that efficiently tackles the Multi-image Super-resolution task, simultaneously exploiting spatial and temporal correlations to combine multiple images.
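A minimal sketch of a residual block that, as this summary describes, lets spatial and temporal correlations interact: 3D convolutions mix information across frames while a squeeze-and-excitation style gate re-weights feature channels. Dimensions and the reduction factor are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ResidualAttention3D(nn.Module):
    """Residual 3D-conv block with channel attention over (B, C, T, H, W)."""
    def __init__(self, channels=32, reduction=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=1),
        )
        # Global pooling over time and space, then a bottleneck gate.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        y = self.conv(x)
        return x + y * self.attn(y)
```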
We propose a novel architecture to handle the problem of multi-frame super-resolution (MFSR). The proposed framework, Enhanced Burst Super-Resolution (EBSR), divides the MFSR problem into three parts: alignment, fusion, and reconstruction. We propose a Feature Enhanced Pyramid Cascading and Deformable convolution (FEPCD) module to align multiple low-resolution burst images at the feature level. The aligned features are then fused by a Cross Non-Local Fusion (CNLF) module. Finally, the SR image is reconstructed by the Long Range Concatenation Network (LRCN). In addition, we build a cascading residual pathway (CR) structure to improve performance. We conduct several experiments to analyze and demonstrate these modules. Our EBSR model won first place in the real track and second place in the synthetic track of the NTIRE21 Burst Super-Resolution Challenge.
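The three-stage split in the abstract can be expressed as a simple module composition. The skeleton below is only structural: `align`, `fuse`, and `reconstruct` are placeholders standing in for FEPCD, CNLF, and LRCN, whose internals the abstract does not spell out.

```python
import torch
import torch.nn as nn

class BurstSRPipeline(nn.Module):
    """Alignment -> fusion -> reconstruction skeleton (placeholder modules)."""
    def __init__(self, align, fuse, reconstruct):
        super().__init__()
        self.align = align              # stands in for FEPCD
        self.fuse = fuse                # stands in for CNLF
        self.reconstruct = reconstruct  # stands in for LRCN

    def forward(self, burst):           # burst: (B, N, C, H, W)
        ref = burst[:, 0]               # first frame as the reference
        aligned = torch.stack(
            [self.align(burst[:, i], ref) for i in range(burst.shape[1])],
            dim=1,
        )                               # per-frame features aligned to ref
        fused = self.fuse(aligned)      # merge the burst into one feature map
        return self.reconstruct(fused)  # decode the super-resolved image
```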
A deep reparametrization of the maximum a posteriori (MAP) formulation commonly employed in multi-frame image restoration tasks is proposed; by introducing a learned error metric and a latent representation of the target image, it transforms the MAP objective into a deep feature space.
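Schematically, and with notation chosen here rather than taken from the paper, the transformation can be written as follows.

```latex
% Classical MAP estimate from low-res frames x_i, per-frame degradation
% operators A_i (warp, blur, downsample), and an image prior R:
\hat{y} = \arg\min_{y} \sum_{i} \left\| x_i - A_i y \right\|^2 + \lambda R(y)

% Deep reparametrization (schematic): a learned encoder E_\theta and error
% metric \rho_\theta move the data term into feature space, the target is
% represented by a latent z, and a decoder D_\theta produces the image:
\hat{z} = \arg\min_{z} \sum_{i} \rho_\theta\big( E_\theta(x_i) - A_i z \big) + R_\theta(z),
\qquad \hat{y} = D_\theta(\hat{z})
```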
A novel CNN-based technique that exploits both spatial and temporal correlations to combine multiple images and is the winner of the PROBA-V SR challenge issued by the European Space Agency.
This article shows that building a model that is fully invariant to temporal permutation significantly improves performance and data efficiency, and studies how to quantify the uncertainty of the super-resolved image so that the end user is informed about the local quality of the product.
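Temporal-permutation invariance amounts to encoding each frame with shared weights and combining the results with an order-independent reduction, so shuffling the input frames cannot change the output. The mean reduction and the names below are illustrative, not the article's exact design.

```python
import torch
import torch.nn as nn

class PermutationInvariantFusion(nn.Module):
    """Shared per-frame encoder followed by symmetric pooling over frames."""
    def __init__(self, encoder: nn.Module):
        super().__init__()
        self.encoder = encoder  # same weights applied to every frame

    def forward(self, frames):  # frames: (B, T, C, H, W)
        b, t, c, h, w = frames.shape
        feats = self.encoder(frames.reshape(b * t, c, h, w))
        feats = feats.reshape(b, t, *feats.shape[1:])
        return feats.mean(dim=1)  # symmetric over the frame axis
```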