3260 papers • 126 benchmarks • 313 datasets
The Attacking Distance-aware Attack (ADA) enhances poisoning attacks by finding an optimized target class in the latent feature space. It deduces the pair-wise distances between classes in that space from the shared model parameters alone, using backward error analysis, and then selects the class closest to the source class as the poisoning target.
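As an informal illustration of the distance-aware target selection, the sketch below picks the class whose latent centroid lies nearest to the source class. It is a simplification under a stated assumption: it computes distances directly from latent features, whereas ADA deduces them from shared model parameters via backward error analysis. The function and variable names are hypothetical.

```python
import numpy as np

def choose_ada_target(features, labels, source_class):
    """Illustrative target selection: return the class whose latent
    centroid is closest to the source class's centroid.
    NOTE: a simplification -- ADA itself infers these distances from
    shared model parameters, not from raw features."""
    classes = np.unique(labels)
    # Per-class centroids in the latent feature space.
    centroids = {c: features[labels == c].mean(axis=0) for c in classes}
    src = centroids[source_class]
    # Pair-wise distances from the source class to every other class.
    dists = {c: np.linalg.norm(centroids[c] - src)
             for c in classes if c != source_class}
    # The nearest class is the easiest semi-targeted poisoning target.
    return min(dists, key=dists.get)

# Toy data: class 1 sits close to the source class 0, class 2 far away.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0.0, 0.1, (20, 8)),   # class 0 (source)
                   rng.normal(0.5, 0.1, (20, 8)),   # class 1 (near)
                   rng.normal(5.0, 0.1, (20, 8))])  # class 2 (far)
labels = np.repeat([0, 1, 2], 20)
print(choose_ada_target(feats, labels, source_class=0))  # → 1
```

The nearest-class heuristic reflects the intuition that mislabeling toward a nearby class in feature space requires a smaller perturbation of the model than toward a distant one.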