3260 papers • 126 benchmarks • 313 datasets
Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with the backdoor trigger as an adversary-chosen target class.
(Image credit: Papersgraph)
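The definition above can be sketched in code. The following is a minimal, hypothetical poisoning routine (function names, the corner-patch trigger, and the 10% poison rate are illustrative assumptions, not a specific published attack): a small fraction of training samples is stamped with a fixed trigger patch and relabeled to the target class.

```python
import numpy as np

def apply_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger patch into the bottom-right corner."""
    patched = image.copy()
    patched[-patch_size:, -patch_size:] = patch_value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Illustrative poisoning: patch a random fraction of samples with the
    trigger and relabel them as the adversary-chosen target class."""
    images, labels = images.copy(), labels.copy()
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx

# Example: poison 10% of a toy grayscale dataset toward class 7.
clean_images = np.zeros((20, 28, 28))
clean_labels = np.zeros(20, dtype=int)
poisoned_images, poisoned_labels, poisoned_idx = poison_dataset(
    clean_images, clean_labels, target_class=7, poison_rate=0.1
)
```

A model trained on the poisoned set behaves normally on clean inputs but, because the trigger pattern is perfectly correlated with the target label, learns to output the target class whenever the patch is present at test time.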