A graph-based method that uses self-supervised transformer features to discover an object in an image via spectral clustering with generalized eigendecomposition. The second smallest eigenvector provides a cutting solution, since its absolute value indicates the likelihood that a token belongs to a foreground object.
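The spectral cut described above can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the random `feats` array stands in for transformer token features, and the generalized eigenproblem L v = λ D v is solved via its symmetrically normalized equivalent so only numpy is needed.

```python
import numpy as np

# Toy stand-in for self-supervised transformer token features (tokens x dim).
rng = np.random.default_rng(0)
feats = rng.normal(size=(16, 8))

# Cosine-similarity affinity matrix, clipped so edge weights stay non-negative.
f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
W = np.clip(f @ f.T, 0.0, None)

# Generalized eigenproblem L v = lambda D v with L = D - W (normalized cut),
# solved via the symmetric form D^{-1/2} L D^{-1/2} u = lambda u, v = D^{-1/2} u.
d = W.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L_sym = D_inv_sqrt @ (np.diag(d) - W) @ D_inv_sqrt
eigvals, eigvecs = np.linalg.eigh(L_sym)  # eigenvalues in ascending order

# The second smallest eigenvector gives the cut; the magnitude of each entry
# indicates how likely that token is to belong to the foreground object.
fiedler = D_inv_sqrt @ eigvecs[:, 1]
foreground = np.abs(fiedler) > np.abs(fiedler).mean()
```

The smallest eigenvalue is zero (the constant vector), which is why the second smallest eigenvector carries the first non-trivial partition of the token graph.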
This paper proposes a simple but effective winner-takes-all voting mechanism for selecting the salient masks, leveraging object priors based on framing and distinctiveness, and trains a salient object detector, termed SELF-MASK, which outperforms prior approaches on three unsupervised SOD benchmarks.
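A winner-takes-all vote over candidate masks might look like the sketch below. The two scoring functions are hypothetical stand-ins for the framing prior (salient objects rarely touch the image border) and the distinctiveness prior (foreground pixels differ from background); the image and candidate masks are toy data, not the paper's pipeline.

```python
import numpy as np

# Toy image with a bright square "object" in the middle.
rng = np.random.default_rng(1)
img = rng.normal(scale=0.1, size=(8, 8))
img[2:6, 2:6] += 3.0

def framing_score(mask):
    # Framing prior: reward masks that avoid the image border.
    if not mask.any():
        return 0.0
    border = np.zeros_like(mask, dtype=bool)
    border[0, :] = border[-1, :] = border[:, 0] = border[:, -1] = True
    return 1.0 - (mask & border).sum() / mask.sum()

def distinctiveness_score(mask, image):
    # Distinctiveness prior: contrast between inside and outside the mask.
    if mask.all() or not mask.any():
        return 0.0
    return abs(image[mask].mean() - image[~mask].mean())

# Three candidate masks: empty, full, and one covering the object.
good = np.zeros((8, 8), dtype=bool)
good[2:6, 2:6] = True
candidates = [np.zeros((8, 8), dtype=bool), np.ones((8, 8), dtype=bool), good]

# Winner-takes-all: each prior casts one vote for its top-scoring mask.
votes = np.zeros(len(candidates))
for score in (framing_score, lambda m: distinctiveness_score(m, img)):
    votes[int(np.argmax([score(m) for m in candidates]))] += 1
winner = candidates[int(np.argmax(votes))]
```

Here both priors vote for the mask covering the bright square, so it wins; with more priors, ties would be broken by vote count rather than by averaging scores.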
We introduce MOVE, a novel method to segment objects without any form of supervision. MOVE exploits the fact that foreground objects can be shifted locally relative to their initial position while still producing realistic (undistorted) images. This property allows us to train a segmentation model on a dataset of images without annotations and to achieve state-of-the-art (SotA) performance on several evaluation datasets for unsupervised salient object detection and segmentation. In unsupervised single object discovery, MOVE gives an average CorLoc improvement of 7.2% over the SotA, and in unsupervised class-agnostic object detection it gives a relative AP improvement of 53% on average. Our approach is built on top of self-supervised features (e.g. from DINO or MAE), an inpainting network (based on the Masked AutoEncoder) and adversarial training.
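The property MOVE exploits can be illustrated with a minimal shift-and-composite operation. This is a sketch under invented toy data: the zero background stands in for the output of the inpainting network, and the mask stands in for the segmentation the model would predict; a real pipeline would score the composite's realism with an adversarial discriminator.

```python
import numpy as np

# Toy image: a 2x2 "foreground object" on an empty background.
img = np.zeros((6, 6))
img[1:3, 1:3] = 1.0
mask = img > 0                   # predicted segmentation mask (stand-in)
background = np.zeros_like(img)  # stand-in for an inpainted background

# Shift the masked foreground by (dy, dx) and paste it over the background.
# If the mask is accurate, the composite still looks like a plausible image.
dy, dx = 2, 3
shifted_mask = np.roll(mask, (dy, dx), axis=(0, 1))
shifted_fg = np.roll(img * mask, (dy, dx), axis=(0, 1))
composite = np.where(shifted_mask, shifted_fg, background)
```

The training signal comes from this construction: only a mask that tightly covers the object yields undistorted composites, so realism pressure pushes the predicted masks toward true object boundaries.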
This work proposes FOUND, a simple model made of a single 1x1 convolution initialized with coarse background masks extracted from self-supervised patch-based representations, which reaches state-of-the-art results on unsupervised saliency detection and object discovery benchmarks.
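A 1x1 convolution over a grid of patch features is just a per-patch linear projection, which is why such a head is so lightweight. The sketch below is an assumption-laden illustration: feature values and weights are random stand-ins, and the feature dimension 384 is merely a plausible choice for self-supervised patch embeddings.

```python
import numpy as np

# Toy grid of self-supervised patch features: H x W patches, C channels.
rng = np.random.default_rng(2)
patch_feats = rng.normal(size=(14, 14, 384))

# A 1x1 conv with a single output channel reduces to weight (C,) plus a bias,
# applied independently at every patch location.
w = rng.normal(size=384) * 0.01
b = 0.0
logits = patch_feats @ w + b                # shape (14, 14)
saliency = 1.0 / (1.0 + np.exp(-logits))    # per-patch foreground probability
pred_mask = saliency > 0.5                  # coarse binary saliency mask
```

In FOUND's setting, the supervision for this tiny head comes from the coarse background masks mentioned above rather than from any human annotation.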