These leaderboards are used to track progress in 3d-architecture-10
This work adopts a recursively trained architecture: a first network generates a preliminary boundary map, which is provided together with the original image to a second network that produces the final boundary map; the networks are also much deeper than those previously employed for boundary detection.
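The two-stage recursive idea above can be sketched in a few lines. This is a minimal illustration, not the paper's actual networks: each `stage` here is a single linear layer with a sigmoid standing in for a deep CNN, and all shapes and names are invented for the example.

```python
import numpy as np

def stage(x, weights):
    # stand-in for a deep boundary-detection network:
    # a linear map followed by a sigmoid, producing values in (0, 1)
    return 1.0 / (1.0 + np.exp(-(x @ weights)))

rng = np.random.default_rng(0)
image = rng.random((8, 4))   # 8 pixels, 4 features each (illustrative)
w1 = rng.random((4, 1))      # first-stage weights
w2 = rng.random((5, 1))      # second stage sees 4 features + 1 preliminary map

prelim = stage(image, w1)                        # preliminary boundary map
final = stage(np.hstack([image, prelim]), w2)    # image + prelim -> final map
```

The key structural point is the second call: the preliminary map is concatenated with the original input, so the second network can refine rather than recompute the boundaries.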
This work provided a freely available dataset of 12 annotated two-photon vasculature microscopy stacks and demonstrated a deep learning framework (ConvNet) consisting of both 2D and 3D convolutional filters that produced promising segmentation results.
3DMV is presented, a novel method for 3D semantic scene segmentation of RGB-D scans in indoor environments using a joint 3D-multi-view prediction network that achieves significantly better results than existing baselines.
A 2D deep residual U-Net with 104 convolutional layers (DR-Unet104) for lesion segmentation in brain MRIs is presented as a state-of-the-art 2D lesion segmentation architecture that can run on lower-power computers than a 3D architecture requires.
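The residual connections that make a 104-layer network trainable can be sketched as follows. This is a generic residual-block illustration in plain numpy, not the actual DR-Unet104 block; the weight shapes and activation are assumptions for the example.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    # output = input + learned residual; the identity skip path lets
    # gradients flow through very deep stacks of such blocks
    return x + w2 @ relu(w1 @ x)

rng = np.random.default_rng(1)
x = rng.random(8)
w1 = rng.random((8, 8)) * 0.1
w2 = rng.random((8, 8)) * 0.1
y = residual_block(x, w1, w2)
```

Because the block adds a (here non-negative) residual to the identity path, each output stays close to its input early in training, which is what stabilizes optimization at 100+ layers.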
A new 3D backbone network, called VoV3D, is proposed that consists of a temporal one-shot aggregation (T-OSA) module and a depthwise factorized component, D(2+1)D, which decomposes a 3D depthwise convolution into separate spatial and temporal depthwise convolutions for an efficient architecture.
ULIP is introduced to learn a unified representation of image, text, and 3D point cloud by pre-training with object triplets from the three modalities, using a pre-trained vision-language model that has already learned a common visual and textual space by training with massive image-text pairs.
Multi-resolution 3D U-Nets address current shortcomings in bone segmentation from upper-body CT scans by capturing a larger field of view while avoiding the cubic growth of input voxels and intermediate computations that quickly exceeds computational capacity in 3D.
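The multi-resolution input idea above can be sketched as patch extraction at two scales: a fine patch at native resolution and a coarse patch covering twice the field of view but downsampled so both have the same voxel count. This is an assumed illustration of the general technique, not the paper's exact sampling scheme.

```python
import numpy as np

def multires_patches(volume, center, size):
    z, y, x = center
    h = size // 2
    # fine patch: native resolution around the center
    fine = volume[z - h:z + h, y - h:y + h, x - h:x + h]
    # coarse patch: twice the field of view, downsampled by stride 2,
    # so it has the same number of voxels as the fine patch
    coarse = volume[z - 2*h:z + 2*h:2, y - 2*h:y + 2*h:2, x - 2*h:x + 2*h:2]
    return fine, coarse

vol = np.zeros((64, 64, 64))
fine, coarse = multires_patches(vol, (32, 32, 32), 16)
```

Doubling the field of view at full resolution would multiply the voxel count by 8; downsampling the wider patch keeps memory and compute flat while still providing spatial context.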
PAniC-3D significantly outperforms baseline methods, provides data to establish the task of stylized reconstruction from portrait illustrations, and represents sophisticated geometries with a volumetric radiance field.