This work explains how deep learning is applied in security, explores the basic methods of exploitation, and examines the offensive capabilities that deep-learning-enabled tools provide.
The Attacking Distance-aware Attack (ADA) is proposed to strengthen poisoning attacks by selecting an optimized target class in the feature space: pairwise distances between classes in the latent feature space are deduced from shared model parameters using backward error analysis.
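The core intuition of distance-aware target selection can be illustrated with a minimal sketch. This is not the paper's actual algorithm (ADA deduces distances from shared model parameters via backward error analysis); instead, it shows the simpler idea it builds on: choosing as the poisoning target the class whose latent-feature centroid lies closest to the source class. The function name and toy data are assumptions for illustration.

```python
import numpy as np

def select_target_class(features, labels, source_class):
    """Illustrative sketch only: pick the poisoning target class whose
    centroid is nearest to the source class in latent feature space."""
    classes = np.unique(labels)
    # class centroids in the latent feature space
    centroids = {c: features[labels == c].mean(axis=0) for c in classes}
    src = centroids[source_class]
    # pairwise distances from the source class to every other class
    dists = {c: np.linalg.norm(src - centroids[c])
             for c in classes if c != source_class}
    # the nearest class is, intuitively, the easiest target to flip into
    return min(dists, key=dists.get)

# toy example: classes 0 and 1 cluster near each other, class 2 is far away
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0.0, 0.1, (10, 2)),
                   rng.normal(0.5, 0.1, (10, 2)),
                   rng.normal(5.0, 0.1, (10, 2))])
labs = np.array([0] * 10 + [1] * 10 + [2] * 10)
print(select_target_class(feats, labs, 0))  # → 1 (the nearby class)
```

In a real attack this selection would operate on features deduced from the shared model parameters rather than on raw data, but the distance-based choice of target class is the same.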