Synthetic image attribution is the task of determining the source of a generated image, such as identifying the model or tool used to create it. This information can be useful for detecting copyright infringement or investigating digital crimes.
This paper presents a large-scale dataset named ArtiFact, comprising diverse generators, object categories, and real-world challenges, and proposes a multi-class classification scheme that addresses social platform impairments and effectively detects synthetic images from both seen and unseen generators.
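The abstract does not specify how social platform impairments are handled; a common approach is to simulate them as training-time augmentations. The sketch below is purely illustrative (the function name and parameters are assumptions, not from ArtiFact): it mimics platform processing with downscaling and coarse quantization as a crude stand-in for resizing and lossy compression.

```python
import numpy as np

def simulate_platform_impairments(image, scale=2, levels=32):
    """Hypothetical augmentation mimicking social-platform processing:
    block-average downscaling followed by coarse intensity quantization
    (a rough stand-in for resizing and lossy compression).
    `image` is a 2-D array of pixel values in [0, 1]."""
    h, w = image.shape
    h2, w2 = h - h % scale, w - w % scale  # crop so blocks divide evenly
    small = (image[:h2, :w2]
             .reshape(h2 // scale, scale, w2 // scale, scale)
             .mean(axis=(1, 3)))           # average each scale x scale block
    # Quantize to `levels` discrete intensity values.
    return np.round(small * (levels - 1)) / (levels - 1)
```

Training a detector on both clean and impaired copies of each image is one way to keep attribution accuracy from collapsing on re-shared content.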
This work proposes a verification framework that relies on a Siamese network to address open-set attribution of synthetic images to the architecture that generated them, and demonstrates its ability to operate in both closed-set and open-set scenarios.
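The core idea of Siamese verification can be sketched in a few lines: one shared embedding function processes both inputs, and the decision comes from the distance between embeddings; in the open-set case, a query whose distance to every known reference exceeds a threshold is attributed to an unknown generator. The sketch below is a minimal illustration under stated assumptions: `embed` is a random projection standing in for a trained backbone, and all names and thresholds are hypothetical, not from the paper.

```python
import numpy as np

def embed(image, rng_seed=0):
    # Placeholder for a shared, trained CNN backbone: a fixed random
    # projection of the flattened pixels (illustrative only).
    rng = np.random.default_rng(rng_seed)
    w = rng.standard_normal((image.size, 64))
    v = image.reshape(-1) @ w
    return v / (np.linalg.norm(v) + 1e-12)  # L2-normalize the embedding

def verify(img_a, img_b, threshold=0.5):
    """Siamese-style verification: embed both inputs with the same
    network and decide from the distance between the embeddings."""
    d = np.linalg.norm(embed(img_a) - embed(img_b))
    return d < threshold

def attribute_open_set(query, references, threshold=0.5):
    """Open-set attribution: compare the query against one reference
    image per known architecture; if no distance falls below the
    threshold, attribute the query to an unknown generator."""
    best_name, best_d = None, np.inf
    for name, ref in references.items():
        d = np.linalg.norm(embed(query) - embed(ref))
        if d < best_d:
            best_name, best_d = name, d
    return best_name if best_d < threshold else "unknown"
```

Because the decision is a pairwise comparison rather than a fixed softmax over known classes, the same model covers both closed-set attribution (pick the nearest reference) and open-set rejection (no reference is close enough).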
Although the selected models have similar ImageNet accuracies and compute requirements, they are found to differ in many other respects: types of mistakes, output calibration, transferability, and feature invariance, among others.