3260 papers • 126 benchmarks • 313 datasets
Underwater image restoration aims to correct color distortion and recover the true colors of the underwater scene.
(Image credit: Papersgraph)
These leaderboards are used to track progress in Underwater Image Restoration.
This work proposes a multi-color-space encoder network, which enriches the diversity of feature representations by embedding the characteristics of different color spaces into a unified structure, and designs a medium-transmission-guided decoder network that strengthens the network's response to quality-degraded regions.
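As a rough illustration of the multi-color-space idea, the encoder can be fed several representations of the same frame. A minimal PyTorch sketch, assuming kornia for the conversions; the stacking strategy and normalization constants are illustrative, not the paper's architecture.

```python
import torch
import kornia.color as kc  # differentiable color-space conversions

def multi_color_space_input(rgb: torch.Tensor) -> torch.Tensor:
    # Stack RGB, HSV, and LAB views: (B, 3, H, W) -> (B, 9, H, W),
    # roughly normalizing each space to a comparable range first.
    hsv = kc.rgb_to_hsv(rgb)  # H in [0, 2*pi], S and V in [0, 1]
    hsv = hsv / torch.tensor([6.2832, 1.0, 1.0], device=rgb.device).view(1, 3, 1, 1)
    lab = kc.rgb_to_lab(rgb)  # L in [0, 100], a and b roughly in [-128, 127]
    lab = lab / torch.tensor([100.0, 128.0, 128.0], device=rgb.device).view(1, 3, 1, 1)
    return torch.cat([rgb, hsv, lab], dim=1)
```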
The proposed conditional generative adversarial network-based model is suitable for real-time preprocessing in the autonomy pipeline of visually guided underwater robots and improves the performance of standard models for underwater object detection, human pose estimation, and saliency prediction.
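The adversarial training behind such models typically pairs an adversarial term with a content term. Below is a pix2pix-style generator objective as a sketch; the L1 weighting of 100 is a common default, not necessarily these papers' setting.

```python
import torch
import torch.nn.functional as F

def cgan_generator_loss(disc_fake_logits, fake, target, lambda_l1=100.0):
    # Adversarial term: push the discriminator to label the enhanced image real.
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    # Content term: keep the enhanced image close to the reference.
    return adv + lambda_l1 * F.l1_loss(fake, target)
```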
A method is proposed to improve the quality of visual underwater scenes using Generative Adversarial Networks (GANs), with the goal of improving the input to vision-driven behaviors further down the autonomy pipeline.
A large-scale underwater image (LSUI) dataset is built, which covers more abundant underwater scenes and offers higher-quality reference images than existing underwater datasets, and a novel loss function combining the RGB, LAB, and LCH color spaces is designed following human vision principles.
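A loss over several color spaces can be sketched by converting the prediction and reference into each space and summing per-space distances. A minimal PyTorch version, assuming kornia for RGB-to-LAB; the L1 metric and equal weights are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F
import kornia.color as kc

def lab_to_lch(lab):
    # LCH is a cylindrical form of LAB: chroma C and hue angle H.
    L, a, b = lab.unbind(dim=1)
    C = torch.sqrt(a ** 2 + b ** 2 + 1e-8)
    H = torch.atan2(b, a)
    return torch.stack([L, C, H], dim=1)

def multi_color_space_l1(pred_rgb, ref_rgb, w=(1.0, 1.0, 1.0)):
    # Accumulate L1 distances in RGB, LAB, and LCH; weights are illustrative.
    loss = w[0] * F.l1_loss(pred_rgb, ref_rgb)
    pred_lab, ref_lab = kc.rgb_to_lab(pred_rgb), kc.rgb_to_lab(ref_rgb)
    loss = loss + w[1] * F.l1_loss(pred_lab, ref_lab)
    loss = loss + w[2] * F.l1_loss(lab_to_lch(pred_lab), lab_to_lch(ref_lab))
    return loss
```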
This paper constructs an Underwater Image Enhancement Benchmark (UIEB) of 950 real-world underwater images, 890 of which have corresponding reference images, and proposes an underwater image enhancement network (called Water-Net) trained on this benchmark as a baseline, demonstrating that the proposed UIEB generalizes well for training Convolutional Neural Networks (CNNs).
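Water-Net fuses several conventionally enhanced versions of the input. A sketch of the three derived inputs using OpenCV (requires opencv-contrib-python for cv2.xphoto); the gamma value and white-balance method here are assumptions, not the paper's exact preprocessing.

```python
import cv2
import numpy as np

def waternet_style_inputs(bgr: np.ndarray):
    # Three derived inputs in the spirit of Water-Net: white balance,
    # per-channel histogram equalization, and gamma correction.
    wb = cv2.xphoto.createSimpleWB().balanceWhite(bgr)
    he = cv2.merge([cv2.equalizeHist(c) for c in cv2.split(bgr)])
    gc = np.clip(((bgr / 255.0) ** 0.7) * 255.0, 0, 255).astype(np.uint8)
    return wb, he, gc  # the network learns confidence maps to fuse these
```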
A novel method that achieves state-of-the-art results for underwater image restoration is presented, built on the unsupervised image-to-image translation framework and leveraging contrastive learning and generative adversarial networks to maximize mutual information between raw and restored images.
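Maximizing mutual information between raw and restored images is commonly implemented with an InfoNCE loss over patch embeddings; whether this matches the paper's exact formulation is an assumption. A self-contained sketch:

```python
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_restored, feat_raw, temperature=0.07):
    # InfoNCE over N patch embeddings of shape (N, D): the embedding of a
    # restored patch should match the embedding of the *same* raw patch
    # (positive) and differ from all other patches (negatives), which
    # maximizes a lower bound on their mutual information.
    q = F.normalize(feat_restored, dim=1)
    k = F.normalize(feat_raw, dim=1)
    logits = q @ k.t() / temperature  # (N, N) cosine similarities
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, targets)
```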
It is demonstrated that a two-stage approach consisting of the CS step followed by PEOF preserves the image structure much more accurately and improves the (visual as well as numerical) video quality compared to the PEOF stage alone.
It is shown that assigning the right receptive field size (context) based on the traversal range of each color channel may lead to a substantial performance gain for the task of UIR, and an attentive skip mechanism is incorporated to adaptively refine the learned multi-contextual features.
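One way to realize channel-dependent receptive fields is to give each color channel its own dilation rate before fusing. An illustrative PyTorch module; the layer names and dilation rates are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class ChannelContextBlock(nn.Module):
    """Per-channel receptive fields via dilation, reflecting how far
    red/green/blue light travels underwater (illustrative sketch)."""

    def __init__(self, ch: int = 16):
        super().__init__()
        # Red attenuates fastest underwater -> smallest context;
        # green/blue travel farther -> progressively larger dilation.
        self.r = nn.Conv2d(1, ch, 3, padding=1, dilation=1)
        self.g = nn.Conv2d(1, ch, 3, padding=2, dilation=2)
        self.b = nn.Conv2d(1, ch, 3, padding=3, dilation=3)
        self.fuse = nn.Conv2d(3 * ch, ch, kernel_size=1)

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        r, g, b = rgb[:, 0:1], rgb[:, 1:2], rgb[:, 2:3]
        return self.fuse(torch.cat([self.r(r), self.g(g), self.b(b)], dim=1))
```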
This work constructed a large-scale real underwater image dataset, dubbed the Heron Island Coral Reef Dataset (HICRD), for benchmarking existing methods and supporting the development of new deep-learning-based methods.
A model-based deep learning method is presented for restoring clean images under various underwater scenarios; it exhibits good interpretability and generalization ability and outperforms state-of-the-art underwater image restoration methods.
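Model-based methods typically estimate the parameters of the simplified underwater image formation model, I = J·t + B·(1 − t), and invert it. A minimal sketch assuming the transmission t and background light B come from sub-networks:

```python
import torch

def invert_formation_model(I, t, B, t_min=0.1):
    # Formation model: I = J * t + B * (1 - t), where t is the medium
    # transmission and B the background light. Solve for the clean image J.
    t = t.clamp(min=t_min)  # avoid amplifying noise where t -> 0
    J = (I - B * (1.0 - t)) / t
    return J.clamp(0.0, 1.0)
```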