3260 papers • 126 benchmarks • 313 datasets
Depth map super-resolution is the task of upsampling a low-resolution depth map to a higher resolution, often with the guidance of an aligned high-resolution color image. (Image credit: A Joint Intensity and Depth Co-Sparse Analysis Model for Depth Map Super-Resolution)
These leaderboards are used to track progress in Depth Map Super-Resolution.
No benchmarks available.
Use these libraries to find Depth Map Super-Resolution models and implementations.
No subtasks available.
Experimental results show that a joint convolutional neural pyramid with large receptive fields for joint depth map super-resolution outperforms existing state-of-the-art algorithms, not only on RGB/depth image pairs but also on other data pairs such as color/saliency and color-scribbles/colorized images.
A novel Discrete Cosine Transform Network (DCTNet) is proposed that employs an edge attention mechanism to highlight the contours informative for guided upsampling, and outperforms previous state-of-the-art methods with relatively few parameters.
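DCTNet learns its attention from data; as a rough illustration of the idea only, an edge-attention map can be hand-crafted as a normalized gradient magnitude of the guidance image (the function name and the use of `np.gradient` are this sketch's assumptions, not the paper's method):

```python
import numpy as np

def edge_attention(gray):
    """Toy edge-attention map: gradient magnitude squashed into [0, 1].

    gray: (H, W) grayscale guidance image. High values mark contours
    that a guided-upsampling network would want to emphasize.
    """
    gy, gx = np.gradient(gray.astype(float))   # finite-difference gradients
    mag = np.hypot(gx, gy)                     # edge strength per pixel
    return mag / (mag.max() + 1e-8)            # normalize to [0, 1]
```

A flat image yields an all-zero map, so only genuine intensity edges receive attention.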
This work constructs a large-scale dataset named "RGB-D-D" and provides a fast depth map super-resolution (FDSR) baseline in which the high-frequency component adaptively decomposed from the RGB image guides the depth map SR.
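FDSR decomposes the high-frequency component adaptively with learned filters; the simplest stand-in for the decomposition itself is a fixed box blur, sketched below (the function name and kernel size are illustrative assumptions):

```python
import numpy as np

def split_frequencies(img, k=5):
    """Split an image into low- and high-frequency parts with a k x k box blur.

    Returns (low, high) with low + high == img exactly; the high-frequency
    part is what a guidance branch like FDSR's would feed to the depth SR.
    """
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    low = np.zeros(img.shape, dtype=float)
    for dy in range(k):                          # accumulate the box window
        for dx in range(k):
            low += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    low /= k * k                                 # box-filter average
    return low, img - low                        # low-pass, residual high-pass
```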
A novel approach to depth map super-resolution using multi-view uncalibrated photometric stereo with a nonconvex variational formulation; because the problem is posed as an end-to-end joint optimization, no calibration of lighting or camera motion is required.
An unpaired learning method for depth super-resolution is proposed, based on a learnable degradation model and a dedicated enhancement component that integrates surface quality measures to produce more accurate depth images.
This work proposes a novel continuous depth representation for DSR that exploits a distance field modulated by an arbitrarily upsampled target grid, through which geometric information is explicitly introduced into feature aggregation and target generation.
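The paper's distance-field representation is learned; the underlying arbitrary-scale idea, querying a discrete depth map at continuous target coordinates, can be sketched with plain bilinear interpolation (this stand-in is an assumption of the sketch, not the paper's formulation):

```python
import numpy as np

def query_depth(depth, ys, xs):
    """Query a discrete depth map at continuous (y, x) coordinates.

    depth: (H, W) depth map; ys, xs: arrays of fractional coordinates.
    Bilinear weights come from the fractional part of each coordinate,
    so any target grid density can be sampled from the same source map.
    """
    y0 = np.clip(np.floor(ys).astype(int), 0, depth.shape[0] - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, depth.shape[1] - 2)
    wy, wx = ys - y0, xs - x0                    # fractional offsets
    return ((1 - wy) * (1 - wx) * depth[y0, x0]
            + (1 - wy) * wx * depth[y0, x0 + 1]
            + wy * (1 - wx) * depth[y0 + 1, x0]
            + wy * wx * depth[y0 + 1, x0 + 1])
```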
A comprehensive survey of recent progress in guided depth map super-resolution (GDSR), which aims to reconstruct a high-resolution depth map from a low-resolution observation with the help of a paired high-resolution color image, is presented.
A CNN architecture and its efficient implementation, the deformable kernel network (DKN), are proposed: the network adaptively outputs a set of neighbors and the corresponding weights for each pixel, and outperforms the state of the art by a significant margin in all cases.
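The aggregation step that such per-pixel weights feed into can be sketched as follows; here the neighbor offsets are fixed for simplicity, whereas DKN predicts both offsets and weights per pixel (function name and shapes are this sketch's assumptions):

```python
import numpy as np

def adaptive_filter(depth, offsets, weights):
    """Aggregate k sparse neighbors per pixel with per-pixel weights.

    depth:   (H, W) input depth map
    offsets: list of k (dy, dx) integer neighbor offsets, |dy|,|dx| <= 2
    weights: (H, W, k) per-pixel weights (assumed already normalized)
    """
    H, W = depth.shape
    padded = np.pad(depth, 2, mode="edge")       # room for the offsets
    out = np.zeros_like(depth, dtype=float)
    for i, (dy, dx) in enumerate(offsets):
        # shift the whole map by one neighbor offset, then weight per pixel
        shifted = padded[2 + dy : 2 + dy + H, 2 + dx : 2 + dx + W]
        out += weights[..., i] * shifted
    return out
```

With a single (0, 0) offset and unit weights this reduces to the identity, which is a convenient sanity check.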
To effectively extract and combine relevant information from the LR depth and the HR guidance, a multi-modal attention-based fusion (MMAF) strategy for hierarchical convolutional layers is proposed, including a feature enhancement block that selects valuable features and a feature recalibration block that unifies the similarity metrics of modalities with different appearance characteristics.
This work proposes an attentional kernel learning module that generates dual sets of filter kernels from the guidance and the target images, then adaptively combines them by modeling the pixelwise dependency between the two.
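The combination step can be sketched as a per-pixel convex blend of the two kernel sets, with a softmax keeping each pixel's weights normalized; the attention map `alpha` is treated as given here, whereas the paper models it from the pixelwise cross-image dependency (names and shapes are this sketch's assumptions):

```python
import numpy as np

def normalize_kernels(k):
    """Softmax over the kernel axis so each pixel's k weights sum to 1."""
    e = np.exp(k - k.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fuse_kernels(k_guide, k_target, alpha):
    """Pixelwise blend of guidance- and target-derived filter kernels.

    k_guide, k_target: (H, W, k) kernel sets from the two branches
    alpha:             (H, W, 1) attention in [0, 1]; alpha = 1 trusts
                       the guidance branch fully, alpha = 0 the target.
    """
    return alpha * k_guide + (1.0 - alpha) * k_target
```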
Adding a benchmark result helps the community track progress.