Recent research on transparent objects spans matting, keypoint-based pose estimation, segmentation, depth completion, and 3D shape reconstruction.
A deep learning framework called TOM-Net is proposed for learning the refractive flow in transparent object matting. It comprises two parts: a multi-scale encoder-decoder network that produces a coarse prediction, and a residual network that refines it.
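The coarse-then-refine pattern behind TOM-Net can be sketched in miniature. This is not the paper's model: the two stand-in functions below (a box blur for the encoder-decoder, a simple difference for the residual network) are hypothetical placeholders that only illustrate how a coarse prediction and a predicted residual are composed.

```python
def coarse_stage(signal):
    """Stand-in for the multi-scale encoder-decoder: a 3-tap box blur."""
    n = len(signal)
    return [(signal[max(i - 1, 0)] + signal[i] + signal[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

def residual_stage(signal, coarse):
    """Stand-in for the residual network: predict a correction term."""
    return [s - c for s, c in zip(signal, coarse)]

def coarse_to_fine(signal):
    """Final output = coarse prediction + predicted residual."""
    coarse = coarse_stage(signal)
    residual = residual_stage(signal, coarse)
    return [c + r for c, r in zip(coarse, residual)]
```

The design point is that the second stage only has to learn a small correction on top of an already-plausible coarse estimate, which is generally easier than predicting the full output from scratch.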
This paper establishes an easy method for capturing and labeling 3D keypoints on desktop objects with an RGB camera, and develops a deep neural network, called KeyPose, that learns to accurately predict object poses from stereo input using 3D keypoints, and works even for transparent objects.
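KeyPose works from stereo input because a keypoint matched in both views yields depth by triangulation. A minimal sketch of that geometric step, assuming a rectified stereo pair (the function name and parameters are illustrative, not the paper's API):

```python
def keypoint_depth_from_stereo(u_left, u_right, focal_px, baseline_m):
    """Triangulate the depth of a keypoint matched across a rectified
    stereo pair: disparity = u_left - u_right (pixels), and
    depth = focal * baseline / disparity."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("matched keypoint must have positive disparity")
    return focal_px * baseline_m / disparity
```

For example, a 12-pixel disparity with a 600-pixel focal length and a 6 cm baseline gives a depth of 3 m. This geometry is what lets a network regress metric 3D keypoints without relying on a depth sensor, which commonly fails on transparent surfaces.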
This work develops a novel end-to-end approach for natural image matting with a guided contextual attention module, which is specifically designed for image matting and can simultaneously mimic the information flow of affinity-based methods and utilize the rich features learned by deep neural networks.
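The "information flow of affinity-based methods" amounts to propagating values between locations in proportion to their feature affinity. A toy sketch of that mechanism, using plain dot-product affinities with a softmax (a simplification; the paper's guided contextual attention operates on image patches with learned features):

```python
import math

def attention_propagate(query, keys, values):
    """Propagate values toward a query location, weighted by
    softmax-normalized affinities (dot products) between the query
    feature and each key feature."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    m = max(scores)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(values[0])
    return [sum(w * v[d] for w, v in zip(weights, values)) for d in range(dim)]
```

When the query strongly matches one key, that key's value dominates the output; when affinities are uniform, the output is an average, mirroring how affinity-based matting spreads alpha information from confident regions into uncertain ones.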
A novel boundary-aware segmentation method, termed TransLab, is proposed; it exploits the boundary as a clue to improve segmentation of transparent objects and significantly outperforms 20 recent deep-learning-based segmentation methods.
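The boundary supervision that TransLab relies on can be derived from a segmentation mask. A minimal sketch of extracting a boundary map from a binary mask (one simple way to build such a target; the paper's training setup may differ):

```python
def mask_boundary(mask):
    """Mark foreground pixels that touch background (or the image edge)
    via 4-connectivity: a simple boundary map for a binary mask."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or not mask[ny][nx]:
                    out[y][x] = 1  # foreground pixel on the boundary
                    break
    return out
```

Boundaries are a natural clue for transparent objects because their interiors look like the background behind them, while their edges still show refraction and reflection cues.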
During training, this work pastes boundaries of transparent objects from other samples into the current image, which enlarges the data space and improves model generalization; it also presents AdaptiveASPP, an enhanced version of ASPP that captures multi-scale and cross-modality features dynamically.
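ASPP, which AdaptiveASPP extends, runs parallel convolution branches at different dilation rates and fuses them to capture multi-scale context. A 1-D sketch of that idea (here the branches are fused by plain averaging; the paper's contribution is precisely to weight the branches dynamically instead):

```python
def dilated_conv1d(signal, kernel, rate):
    """1-D dilated convolution, zero-padded to 'same' length: taps are
    spaced `rate` samples apart, enlarging the receptive field."""
    n, k = len(signal), len(kernel)
    half = k // 2
    out = []
    for i in range(n):
        acc = 0.0
        for j in range(k):
            idx = i + (j - half) * rate
            if 0 <= idx < n:
                acc += signal[idx] * kernel[j]
        out.append(acc)
    return out

def aspp_like(signal, kernel, rates=(1, 2, 4)):
    """Run parallel dilated branches and fuse them by averaging."""
    branches = [dilated_conv1d(signal, kernel, r) for r in rates]
    return [sum(vals) / len(rates) for vals in zip(*branches)]
```

Larger rates see wider context with the same kernel size and no extra parameters, which is why the pyramid covers multiple scales cheaply.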
A new approach is presented for depth completion of transparent objects from a single RGB-D image, using a local implicit neural representation built on ray-voxel pairs; this allows the method to generalize to unseen objects and achieve fast inference.
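The core idea of an implicit representation, reduced to one dimension: values stored on a discrete grid can be decoded at any continuous query coordinate from the local cells around it. In the paper this happens in 3D with a learned decoder over ray-voxel pairs; in this sketch, plain linear interpolation stands in for that decoder.

```python
def query_field(grid, x):
    """Decode a value at continuous coordinate x from a 1-D grid by
    linear interpolation between the two surrounding grid samples."""
    i = max(0, min(int(x), len(grid) - 2))  # clamp to a valid cell
    t = x - i                               # fractional position in the cell
    return (1.0 - t) * grid[i] + t * grid[i + 1]
```

Because the query is continuous, the representation is not tied to the grid resolution, and because decoding uses only local features, it can transfer to object shapes never seen during training.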
This work proposes a physically-based network that recovers the 3D shape of transparent objects from a few images captured with a mobile phone camera under a known but arbitrary environment map, and renders a synthetic dataset to encourage the model to learn refractive light transport across different views.
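Refractive light transport through a transparent object is governed by Snell's law at each surface interaction. The standard vector form of that law is small enough to write out directly (a physics sketch, not the paper's network):

```python
import math

def refract(incident, normal, eta):
    """Refract a unit incident direction at a surface with unit normal,
    where eta = n1 / n2 is the ratio of refractive indices.
    Returns None on total internal reflection."""
    cos_i = -sum(i * n for i, n in zip(incident, normal))
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)   # Snell: sin_t = eta * sin_i
    if sin2_t > 1.0:
        return None  # total internal reflection: no transmitted ray
    cos_t = math.sqrt(1.0 - sin2_t)
    return [eta * i + (eta * cos_i - cos_t) * n for i, n in zip(incident, normal)]
```

A physically-based network bakes this relation into its rendering or supervision, so the model is constrained to shapes whose refractions actually reproduce the observed images.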