This work quantitatively investigates how machine learning models leak information about the individual data records on which they were trained, and empirically evaluates the proposed membership inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon.
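As a rough illustration of this attack surface, the sketch below trains a shadow model on data from the same distribution as the target, labels its confidence vectors as member or non-member, and fits an attack classifier on them. The synthetic dataset, the single shadow model, and all names here are illustrative simplifications, not the paper's pipeline.

```python
# Minimal sketch of a shadow-model membership inference attack (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, n_classes=2, random_state=0)

# Disjoint data for the target model and for the attacker's shadow model.
X_target, X_shadow, y_target, y_shadow = train_test_split(X, y, test_size=0.5, random_state=0)

def confidences_with_membership(X_pool, y_pool):
    """Train a model on half the pool; return its confidence vectors on
    training members (label 1) and held-out non-members (label 0)."""
    X_in, X_out, y_in, y_out = train_test_split(X_pool, y_pool, test_size=0.5, random_state=1)
    model = RandomForestClassifier(random_state=0).fit(X_in, y_in)
    conf = np.vstack([model.predict_proba(X_in), model.predict_proba(X_out)])
    member = np.concatenate([np.ones(len(X_in)), np.zeros(len(X_out))])
    return conf, member

# Shadow side: membership labels are known, so they can train the attacker.
shadow_conf, shadow_member = confidences_with_membership(X_shadow, y_shadow)
attacker = LogisticRegression(max_iter=1000).fit(shadow_conf, shadow_member)

# Target side: measure how well the attacker separates members from non-members.
target_conf, target_member = confidences_with_membership(X_target, y_target)
print("attack accuracy on the target model:", attacker.score(target_conf, target_member))
```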
This work presents the most comprehensive study so far of this emerging threat, using eight diverse datasets to show the viability of the proposed attacks across domains, and proposes the first effective defense mechanisms against this broader class of membership inference attacks that maintain a high level of utility of the ML model.
For the first time, it is demonstrated both quantitatively and qualitatively that a GAN architecture can successfully generate time-series signals that are not only structurally similar to the training sets but also diverse across generated samples.
This work proposes MemGuard, the first defense with formal utility-loss guarantees against black-box membership inference attacks, and is the first to show that adversarial examples can be used as a defensive mechanism against membership inference attacks.
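The defense idea can be sketched as follows: add a carefully chosen, label-preserving noise vector to the confidence scores so that a membership classifier is pushed toward an uninformative output. The sketch below assumes a known linear (logistic-regression-style) attacker and takes plain gradient steps in logit space; it is a simplification for illustration, not MemGuard's constrained optimization with formal utility-loss guarantees.

```python
# Simplified sketch of a MemGuard-style defense (illustrative, not the paper's method).
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def defend(conf, attack_w, attack_b, steps=100, lr=0.5):
    """conf: original confidence vector; attack_w/attack_b: hypothetical linear
    membership classifier (score > 0 means 'member'). Returns a perturbed,
    still-valid confidence vector with the same predicted label."""
    z = np.log(conf + 1e-12)      # work in logit space so the output stays a distribution
    label = conf.argmax()
    best = conf
    for _ in range(steps):
        p = softmax(z)
        score = attack_w @ p + attack_b
        # gradient of the attack score w.r.t. the logits, via the softmax Jacobian
        grad = (np.diag(p) - np.outer(p, p)) @ attack_w
        z = z - lr * np.sign(score) * grad   # push the attacker's score toward 0
        p = softmax(z)
        if p.argmax() == label:              # only accept label-preserving noise
            best = p
    return best

# Toy example: a 3-class confidence vector and a hypothetical attack classifier.
conf = np.array([0.85, 0.10, 0.05])
attack_w, attack_b = np.array([2.0, -1.0, -1.0]), -0.5
print(defend(conf, attack_w, attack_b))
```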
This paper establishes necessary and sufficient conditions for MIAs to be prevented, both on average and for population subgroups, derives connections between disparate vulnerability, algorithmic fairness, and differential privacy, and determines which attacks are suitable for estimating disparate vulnerability.
An identifiability bound is derived that relates the adversary assumed in differential privacy to previous work on membership inference adversaries, and it is shown that the bound can be tight in practice.
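For reference, the hypothesis-testing view of $(\varepsilon, \delta)$-differential privacy already implies a bound of this flavour on any membership inference adversary (this is the generic DP bound, not necessarily the paper's identifiability bound): the true- and false-positive rates must satisfy $\mathrm{TPR} \le e^{\varepsilon}\,\mathrm{FPR} + \delta$ and $\mathrm{FPR} \le e^{\varepsilon}\,\mathrm{TPR} + \delta$, so the membership advantage is bounded by $\mathrm{TPR} - \mathrm{FPR} \le (e^{\varepsilon} - 1)\,\mathrm{FPR} + \delta \le e^{\varepsilon} - 1 + \delta$.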
This article provides taxonomies of both attacks and defenses based on their characteristics, discusses their pros and cons, and points out several promising future research directions to inspire researchers who wish to work in this area.
It is shown that gradients encode a surprisingly large amount of information, such that all the individual images can be recovered with high fidelity via GradInversion, even for complex datasets, deep networks, and large batch sizes.
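The underlying recipe is gradient matching: start from a dummy input and optimize it until the gradient it induces matches the one observed from the victim. The minimal sketch below uses a toy linear model, a batch size of one, and a known label; these are simplifying assumptions for illustration, not the GradInversion setup, which targets deep networks and large batches with additional image priors.

```python
# Minimal sketch of gradient inversion by gradient matching (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))
params = list(model.parameters())
loss_fn = nn.CrossEntropyLoss()

# "Victim" batch whose shared gradient the attacker observes.
x_true = torch.rand(1, 3, 32, 32)
y_true = torch.tensor([3])
true_grads = torch.autograd.grad(loss_fn(model(x_true), y_true), params)

# The attacker optimizes a dummy image (using the known label here for
# simplicity) so that its gradient matches the observed one.
x_dummy = torch.rand(1, 3, 32, 32, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.1)

for step in range(300):
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(
        loss_fn(model(x_dummy), y_true), params, create_graph=True)
    grad_loss = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    grad_loss.backward()
    opt.step()

print("mean reconstruction error:", (x_dummy.detach() - x_true).abs().mean().item())
```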
This work proposes a formal definition of distribution inference attacks that is general enough to describe a broad class of attacks distinguishing between possible training distributions, and introduces a metric that quantifies observed leakage by relating it to the leakage that would occur if samples from the training distribution were provided directly to the adversary.
A Likelihood Ratio Attack is developed that is $10\times$ more powerful than prior attacks at low false-positive rates, and also strictly dominates them on existing metrics.
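The core of the attack is a per-example likelihood-ratio test: model the target example's loss under shadow models trained with and without it, then compare the two likelihoods at the loss observed on the target model. The sketch below uses synthetic shadow-model losses and a simple Gaussian fit; the numbers and names are illustrative, not the paper's implementation.

```python
# Minimal sketch of a per-example likelihood-ratio membership test (illustrative only).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Losses of one target example under shadow models that did ("in") or did not
# ("out") train on it; here these are synthetic stand-ins.
losses_in = rng.normal(loc=0.2, scale=0.1, size=64)
losses_out = rng.normal(loc=0.9, scale=0.3, size=64)

def likelihood_ratio(observed_loss):
    """Ratio of the 'member' to the 'non-member' likelihood of the observed loss."""
    mu_in, sd_in = losses_in.mean(), losses_in.std() + 1e-8
    mu_out, sd_out = losses_out.mean(), losses_out.std() + 1e-8
    return norm.pdf(observed_loss, mu_in, sd_in) / norm.pdf(observed_loss, mu_out, sd_out)

# Score the target model's loss on this example; a ratio above a threshold
# calibrated for a chosen false-positive rate predicts "member".
print(likelihood_ratio(0.25), likelihood_ratio(1.0))
```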