3260 papers • 126 benchmarks • 313 datasets
These leaderboards are used to track progress in 2D Semantic Segmentation.
Use these libraries to find 2D Semantic Segmentation models and implementations.
xBD provides pre- and post-event multi-band satellite imagery from a variety of disaster events, with building polygons, classification labels for damage types, ordinal labels of damage level, and corresponding satellite metadata, making it the largest building damage assessment dataset to date.
An end-to-end real-time global attention neural network (RGANet) is proposed for the challenging task of semantic segmentation, along with an improved evaluation metric, MGRID, which alleviates the negative effect of non-convex, widely scattered ground-truth areas.
This research proposes an encoder-decoder architecture with a unique efficient residual network, Efficient-ResNet, augmented with attention-fusion networks (AfNs) inspired by AbM to improve the efficiency of the one-to-one conversion of semantic information.
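The attention-fusion idea above can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification, not the paper's AfN: it assumes the fusion is a per-pixel sigmoid gate computed from decoder features that re-weights encoder features before a residual-style addition.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_fusion(encoder_feat, decoder_feat):
    """Fuse encoder and decoder feature maps with a sigmoid attention gate.

    Illustrative assumption (not the published AfN design): the decoder
    features produce a per-pixel gate in (0, 1) that re-weights the
    encoder features, which are then added back residually.
    """
    gate = sigmoid(decoder_feat)
    return encoder_feat * gate + decoder_feat

# Toy feature maps laid out as (channels, height, width).
enc = np.ones((4, 8, 8))
dec = np.zeros((4, 8, 8))
fused = attention_fusion(enc, dec)
# gate = sigmoid(0) = 0.5, so every fused value is 1 * 0.5 + 0 = 0.5
```

Gating like this lets the decoder suppress encoder activations that are irrelevant at a given pixel while keeping the skip-connection pathway differentiable.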
Experimental results show that, regardless of whether the input is a single depth map or RGB-D, the proposed disentangled framework generates high-quality semantic scene completions and outperforms state-of-the-art approaches on both synthetic and real datasets.
3D-MiniNet is presented, a novel approach for LiDAR semantic segmentation that combines 3D and 2D learning layers: it first learns a 2D representation from the raw points through a novel projection that extracts local and global information from the 3D data.
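Projecting a LiDAR point cloud into a 2D image is a common preprocessing step for this family of methods. The projection in 3D-MiniNet is learned, so the following is only an illustrative sketch of the classic fixed spherical (range-image) projection, with assumed sensor field-of-view parameters:

```python
import numpy as np

def spherical_projection(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """Project 3D LiDAR points of shape (N, 3) onto a 2D range image (h, w).

    fov_up / fov_down are the sensor's vertical field of view in degrees
    (values here are illustrative assumptions). Each point's depth is
    written into the pixel given by its azimuth (column) and elevation (row).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                # azimuth in [-pi, pi]
    pitch = np.arcsin(z / depth)          # elevation angle
    fov = np.radians(fov_up - fov_down)   # total vertical FOV in radians
    # Map angles to pixel coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * w
    v = (1.0 - (pitch - np.radians(fov_down)) / fov) * h
    u = np.clip(np.floor(u), 0, w - 1).astype(int)
    v = np.clip(np.floor(v), 0, h - 1).astype(int)
    image = np.zeros((h, w))
    image[v, u] = depth                   # store range per pixel
    return image

# Two toy points 10 m away: one straight ahead, one 90 degrees to the left.
pts = np.array([[10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
img = spherical_projection(pts)
```

A learned projection, as in 3D-MiniNet, replaces this fixed mapping with layers that aggregate local and global point features before producing the 2D representation.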
Findings on problem framing, data processing, and training procedures that are specifically helpful for the task of building damage assessment using the newly released xBD dataset are reported.
A SofGAN image generator is proposed to decouple the latent space of portraits into two subspaces, a geometry space and a texture space, and it is shown that the system can generate high-quality portrait images with independently controllable geometry and texture attributes.
This paper revisits the classic multiview representation of 3D meshes, studies several techniques that make it effective for 3D semantic segmentation of meshes, and shows that the virtual views enable more effective training of 2D semantic segmentation networks than previous multiview approaches.