The Segment Anything (SA) project introduces a new task, model (SAM), and dataset for image segmentation; SAM's zero-shot performance is impressive, often competitive with or even superior to prior fully supervised results.
We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M licensed and privacy-respecting images. The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks. We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive – often competitive with or even superior to prior fully supervised results. We are releasing the Segment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and 11M images at segment-anything.com to foster research into foundation models for computer vision. We recommend reading the full paper at arxiv.org/abs/2304.02643.
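To make the promptable interface concrete, here is a minimal sketch using the publicly released `segment-anything` Python package: the image is embedded once, and a single foreground point prompt then yields candidate masks with predicted quality scores. The checkpoint filename and image path are placeholders; `sam_model_registry` and `SamPredictor` are names from the public repository.

```python
# Minimal sketch of prompting SAM with one foreground point, using the
# released segment-anything package. Checkpoint and image paths are
# placeholders.
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load a pretrained SAM checkpoint (ViT-H shown; smaller "vit_l" and
# "vit_b" variants are also released).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# Embed the image once; different prompts can then be tried cheaply.
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Prompt: a single foreground point (label 1) at pixel (x=500, y=375).
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,  # return several candidates for an ambiguous prompt
)
print(masks.shape, scores)  # (3, H, W) boolean masks with predicted IoU scores
```

Because prompts are decoded against a precomputed image embedding, varying the point, box, or mask prompt is fast, which is what makes the interactive, zero-shot usage described above practical.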
Authors: Alexander Kirillov, Eric Mintun, Hanzi Mao, Chloé Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Piotr Dollár