These leaderboards are used to track progress in spectral-reconstruction-5
This paper develops an enhanced deep super-resolution network (EDSR) whose performance exceeds that of current state-of-the-art SR methods, and proposes a new multi-scale deep super-resolution system (MDSR) and training method that can reconstruct high-resolution images at different upscaling factors within a single model.
This paper presents a novel architecture, named MIRNet, with the collective goals of maintaining spatially-precise high-resolution representations through the entire network and receiving strong contextual information from the low-resolution representations.
This work proposes an efficient Transformer model by introducing several key designs in the building blocks (multi-head attention and feed-forward network) so that it can capture long-range pixel interactions while remaining applicable to large images.
This paper proposes a novel synergistic design that optimally balances the competing goals of image restoration by progressively learning restoration functions for the degraded inputs, thereby breaking the overall recovery process into more manageable steps.
A novel framework, Mask-guided Spectral-wise Transformer (MST), that treats each spectral feature as a token and calculates self-attention along the spectral dimension, and significantly outperforms state-of-the-art (SOTA) methods on simulated and real HSI datasets while requiring dramatically lower computational and memory costs.
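The core idea in the MST summary above — treating each spectral band as one token and attending along the spectral dimension rather than the spatial one — can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the projection matrices here are random placeholders standing in for learned weights, and the single-head form omits MST's mask guidance and multi-head split.

```python
import numpy as np

def spectral_wise_self_attention(x):
    """Sketch of spectral-wise self-attention: each of the C spectral
    bands in x (shape (C, H, W)) becomes one token of length H*W, so the
    attention map is (C, C) -- quadratic in bands, not in pixels, which
    is why this stays cheap on large images."""
    C, H, W = x.shape
    tokens = x.reshape(C, H * W)                  # C tokens, one per band
    d = tokens.shape[1]
    rng = np.random.default_rng(0)                # placeholder "weights"
    Wq = rng.standard_normal((d, d)) / np.sqrt(d)
    Wk = rng.standard_normal((d, d)) / np.sqrt(d)
    Wv = rng.standard_normal((d, d)) / np.sqrt(d)
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = q @ k.T / np.sqrt(d)                 # (C, C) band-to-band scores
    scores = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn = scores / scores.sum(axis=-1, keepdims=True)
    out = attn @ v                                # mix bands, keep pixels
    return out.reshape(C, H, W)
```

The (C, C) attention map is the key contrast with spatial self-attention, whose (HW, HW) map grows quadratically with image size.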
This work proposes a novel Transformer-based method, Multi-stage Spectral-wise Transformer (MST++), for efficient spectral reconstruction that significantly outperforms other state-of-the-art methods.
StyleMelGAN is a lightweight neural vocoder allowing synthesis of high-fidelity speech with low computational complexity, and MUSHRA and P.800 listening tests show that StyleMelGAN outperforms prior neural vocoders in copy-synthesis and Text-to-Speech scenarios.
A novel block is presented, the Half Instance Normalization Block (HIN Block), to boost the performance of image restoration networks, and a simple yet powerful multi-stage network named HINet is designed, which surpasses the state-of-the-art (SOTA) on various image restoration tasks.
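The "half" in the HIN Block summary above refers to normalizing only part of the feature channels. A minimal NumPy sketch of that idea, assuming the common split of instance-normalizing the first half of the channels and passing the second half through unchanged (the real block also wraps this in convolutions and learned affine parameters, omitted here):

```python
import numpy as np

def half_instance_norm(x, eps=1e-5):
    """Sketch of half instance normalization.
    x: (N, C, H, W) feature maps with C even. The first C//2 channels are
    instance-normalized (per-sample, per-channel spatial statistics); the
    remaining channels keep their original statistics, and the two halves
    are concatenated back together."""
    N, C, H, W = x.shape
    half = C // 2
    a, b = x[:, :half], x[:, half:]
    mean = a.mean(axis=(2, 3), keepdims=True)   # stats over H, W only
    var = a.var(axis=(2, 3), keepdims=True)
    a_norm = (a - mean) / np.sqrt(var + eps)
    return np.concatenate([a_norm, b], axis=1)  # back to (N, C, H, W)
```

Keeping half the channels un-normalized preserves absolute intensity information that full instance normalization would discard, which matters for restoration tasks.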
A high-resolution dual-domain learning network (HDNet) for HSI reconstruction with HR pixel-level attention and frequency-level refinement that achieves SOTA performance on simulated and real HSI datasets.