3260 papers • 126 benchmarks • 313 datasets
This work extends randomized smoothing to parameterized transformations, certifying robustness in the transformation's parameter space, and shows how to efficiently compute the inverse of an image transformation, enabling individual guarantees in the online setting.
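The core smoothing idea can be sketched for a single transformation parameter: sample Gaussian noise on the parameter, take a majority vote, and convert the vote fraction into a certified radius. This is a minimal toy sketch of standard randomized smoothing, not the paper's inverse-transformation method; the classifier and threshold below are hypothetical.

```python
import numpy as np
from statistics import NormalDist

def smoothed_predict(classify, x, theta, sigma, n=1000, rng=None):
    """Majority-vote prediction of `classify` smoothed over a 1-D
    transformation parameter theta with Gaussian noise of scale sigma."""
    rng = np.random.default_rng(0) if rng is None else rng
    votes = {}
    for eps in rng.normal(0.0, sigma, size=n):
        c = classify(x, theta + eps)
        votes[c] = votes.get(c, 0) + 1
    top, count = max(votes.items(), key=lambda kv: kv[1])
    # Clamp the empirical vote fraction away from 1 so the inverse CDF is defined.
    p_hat = min(count / n, 1.0 - 1e-6)
    # Certified radius in parameter space: r = sigma * Phi^{-1}(p_hat).
    radius = sigma * NormalDist().inv_cdf(p_hat) if p_hat > 0.5 else 0.0
    return top, radius

# Toy "classifier" whose label depends only on a thresholded rotation angle.
classify = lambda x, angle: int(angle < 10.0)
label, r = smoothed_predict(classify, x=None, theta=2.0, sigma=1.0)
```

In practice the vote fraction is replaced by a high-confidence lower bound (e.g. Clopper-Pearson) before computing the radius; the clamp here stands in for that step.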
This work proposes a robust attack strategy called Adversarial Patch Attack with Momentum (APAM) to systematically evaluate the robustness of crowd counting models: the attacker crafts an adversarial patch that severely degrades counting performance, which could lead to public-safety accidents.
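A momentum-accumulated patch update can be sketched in the style of MI-FGSM: accumulate a normalized gradient into a velocity term, then step the patch by the sign of that velocity. This is an illustrative sketch, not the paper's APAM implementation; `grad_fn`, the step size, and the toy surrogate loss are all assumptions.

```python
import numpy as np

def momentum_patch_attack(grad_fn, patch, alpha=0.1, mu=0.9, steps=20):
    """Momentum-style patch update: grad_fn(patch) returns the gradient
    of the attacker's counting loss with respect to the patch pixels."""
    g = np.zeros_like(patch)
    for _ in range(steps):
        grad = grad_fn(patch)
        # L1-normalize the gradient, then accumulate it into the momentum term.
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        # Signed ascent step, keeping the patch in the valid pixel range [0, 1].
        patch = np.clip(patch + alpha * np.sign(g), 0.0, 1.0)
    return patch

# Toy linear surrogate: loss = w . patch, so its gradient is the constant w.
w = np.array([1.0, -1.0, 0.5])
adv = momentum_patch_attack(lambda p: w, np.zeros(3))  # -> [1.0, 0.0, 1.0]
```

The momentum term stabilizes the update direction across steps, which is what makes this family of attacks transfer better than plain sign-gradient ascent.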
A novel algebraic perspective unifies various types of 1-Lipschitz neural networks, including methods based on orthogonality and spectral normalization, and shows that the resulting SDP-based Lipschitz Layers (SLLs) outperform previous approaches on certified robust accuracy.
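The simplest member of this family of constructions is spectral normalization: estimate a weight matrix's largest singular value by power iteration and rescale so the linear map is 1-Lipschitz. A minimal sketch of that baseline follows; it is not the SLL construction itself, and the matrix is a made-up example.

```python
import numpy as np

def spectrally_normalize(W, iters=50, rng=None):
    """Estimate the spectral norm of W by power iteration and rescale W
    so that its spectral norm is at most 1 (a 1-Lipschitz linear map)."""
    rng = np.random.default_rng(0) if rng is None else rng
    v = rng.normal(size=W.shape[1])
    for _ in range(iters):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    sigma = u @ W @ v          # converged estimate of the top singular value
    return W / max(sigma, 1.0)  # only shrink; never inflate a contractive map

W = np.array([[3.0, 0.0],
              [0.0, 0.5]])
W1 = spectrally_normalize(W)   # spectral norm of W1 is now at most ~1
```

Stacking such layers with 1-Lipschitz activations bounds the network's global Lipschitz constant, which is what turns a margin at the logits into a certified robustness radius.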