Membership inference attacks aim to determine whether a specific data record was used to train a machine learning model, typically by exploiting differences in the model's behavior on its training members versus unseen non-members. Such attacks quantify the privacy leakage of trained models, and defending against them is an active area of research.
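The simplest baseline attack thresholds the model's confidence on a record's true label. Below is a minimal sketch of that idea, assuming a scikit-learn-style classifier `model` and an illustrative threshold `tau` (both hypothetical names, not from any particular paper):

```python
import numpy as np

def confidence_threshold_mia(model, x, y_true, tau=0.9):
    """Guess membership from the model's confidence on the true label.

    A record is predicted to be a training member when the model assigns
    its true class a probability above `tau`. `model`, `x`, `y_true`,
    and `tau` are illustrative placeholders.
    """
    probs = model.predict_proba(x)                    # shape: (n_samples, n_classes)
    conf_on_true = probs[np.arange(len(y_true)), y_true]
    return conf_on_true > tau                         # True => guessed "member"
```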
This work presents the most comprehensive study so far on this emerging and developing threat, using eight diverse datasets to show the viability of the proposed attacks across domains, and proposes the first effective defense mechanisms against this broader class of membership inference attacks that maintain a high level of utility of the ML model.
For the first time, it is demonstrated both quantitatively and qualitatively that a GAN architecture can successfully generate time-series signals that are not only structurally similar to the training sets but also diverse across generated samples.
This article provides taxonomies for both attacks and defenses based on their characteristics, discusses their pros and cons, and points out several promising future research directions to inspire researchers who wish to follow this area.
This paper establishes necessary and sufficient conditions for MIAs to be prevented, both on average and for population subgroups, derives connections between disparate vulnerability, algorithmic fairness, and differential privacy, and establishes which attacks are suitable for estimating disparate vulnerability.
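One common formalization in this line of work (a hedged sketch following the membership-advantage definition of Yeom et al., not necessarily the paper's exact notation) measures an attack $A$'s advantage and its spread across population subgroups $S, S'$:

```latex
% Membership advantage of an attack A (Yeom et al.):
\mathrm{Adv}(A) = \Pr[A(x)=1 \mid x \in \text{train}] - \Pr[A(x)=1 \mid x \notin \text{train}]

% Disparate vulnerability: the spread of the advantage across subgroups S, S':
\mathrm{DispVuln}(A) = \max_{S,\,S'} \bigl| \mathrm{Adv}_S(A) - \mathrm{Adv}_{S'}(A) \bigr|
```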
It is demonstrated that even a well-generalized model contains instances vulnerable to a new generalized MIA (GMIA), using novel techniques for selecting vulnerable instances and detecting their subtle influence, which is ignored by overfitting metrics.
It is shown that the min-max strategy can mitigate the risks of membership inference attacks (reducing them to near random guessing), and can achieve this with a negligible drop in the model's prediction accuracy (less than 4%).
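A minimal sketch of one adversarial min-max training step of this flavor, assuming PyTorch modules `classifier` and `attacker`, their optimizers, and a trade-off weight `lam` (all illustrative names, not the paper's code):

```python
import torch
import torch.nn.functional as F

def minmax_step(classifier, attacker, opt_c, opt_a, x_mem, y_mem, x_non, lam=1.0):
    """One illustrative min-max step: the attacker learns to tell member
    from non-member outputs; the classifier learns to minimize task loss
    plus a penalty for being distinguishable. All names are placeholders."""
    # Inner max: update the attack model on frozen classifier outputs.
    with torch.no_grad():
        p_mem = F.softmax(classifier(x_mem), dim=1)
        p_non = F.softmax(classifier(x_non), dim=1)
    a_mem, a_non = attacker(p_mem), attacker(p_non)   # logits for P(member)
    attack_loss = (F.binary_cross_entropy_with_logits(a_mem, torch.ones_like(a_mem))
                   + F.binary_cross_entropy_with_logits(a_non, torch.zeros_like(a_non)))
    opt_a.zero_grad(); attack_loss.backward(); opt_a.step()

    # Outer min: update the classifier to do its task while fooling the attacker.
    logits = classifier(x_mem)
    task_loss = F.cross_entropy(logits, y_mem)
    a_out = attacker(F.softmax(logits, dim=1))
    privacy_penalty = F.binary_cross_entropy_with_logits(a_out, torch.zeros_like(a_out))
    loss = task_loss + lam * privacy_penalty
    opt_c.zero_grad(); loss.backward(); opt_c.step()
    return task_loss.item(), attack_loss.item()
```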
A concrete safe model-compression mechanism called MIA-SafeCompress is proposed; inspired by the test-driven development (TDD) paradigm in software engineering, it automatically compresses a big model into a small one following the dynamic sparse training paradigm.
This work proposes MemGuard, the first defense with formal utility-loss guarantees against black-box membership inference attacks, and is the first to show that adversarial examples can be used as a defensive mechanism against membership inference attacks.
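MemGuard itself solves a constrained optimization to craft the noise added to confidence scores; the rough idea can be conveyed with a much cruder random search (NOT the paper's method; `attack_score` is a hypothetical function returning the attack classifier's estimated P(member)):

```python
import numpy as np

def memguard_like(conf, attack_score, eps=0.05, trials=200, seed=0):
    """Crude, MemGuard-inspired sketch: search for a small perturbation of
    the confidence vector that pushes the attack classifier's output toward
    0.5 (a random guess) while preserving the predicted label and remaining
    a valid probability distribution. All names are placeholders."""
    rng = np.random.default_rng(seed)
    best = conf
    best_gap = abs(attack_score(conf) - 0.5)
    for _ in range(trials):
        cand = np.clip(conf + rng.uniform(-eps, eps, size=conf.shape), 1e-6, None)
        cand = cand / cand.sum()                  # renormalize to a distribution
        if cand.argmax() != conf.argmax():        # keep the predicted label
            continue
        gap = abs(attack_score(cand) - 0.5)
        if gap < best_gap:
            best, best_gap = cand, gap
    return best
```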
A Likelihood Ratio Attack is developed that is $10\times$ more powerful at low false-positive rates, and also strictly dominates prior attacks on existing metrics.
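A minimal sketch of the likelihood-ratio scoring at the core of this attack, under the assumption (motivated in the paper) that logit-scaled confidences from shadow models trained with and without the target point are approximately Gaussian; all argument names are illustrative:

```python
import numpy as np
from scipy.stats import norm

def lira_score(target_conf, shadow_in_confs, shadow_out_confs):
    """Log likelihood ratio for membership; larger => more likely a member.

    target_conf      : model's confidence on the true label for the target point
    shadow_in_confs  : confidences from shadow models trained WITH the point
    shadow_out_confs : confidences from shadow models trained WITHOUT it
    """
    def logit(p, eps=1e-6):
        p = np.clip(p, eps, 1 - eps)
        return np.log(p) - np.log(1 - p)

    z = logit(np.asarray(target_conf))
    z_in = logit(np.asarray(shadow_in_confs))
    z_out = logit(np.asarray(shadow_out_confs))
    # Fit one Gaussian per hypothesis and compare log-densities.
    ll_in = norm.logpdf(z, loc=z_in.mean(), scale=z_in.std() + 1e-6)
    ll_out = norm.logpdf(z, loc=z_out.mean(), scale=z_out.std() + 1e-6)
    return ll_in - ll_out
```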
This work quantitatively investigates how machine learning models leak information about the individual data records on which they were trained and empirically evaluates the inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon.
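The shadow-model pipeline behind this style of attack can be condensed into a short sketch (the original work trains one attack model per class; a single model is used here for brevity, and `shadow_models` / `shadow_splits` are assumed placeholders):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_attack_model(shadow_models, shadow_splits):
    """Fit a binary attack classifier on shadow-model confidence vectors.

    Each shadow model's confidence vectors on its own training data are
    labeled "member" (1); on held-out data, "non-member" (0).
    shadow_splits[i] = (x_train_i, x_holdout_i) used for shadow model i.
    """
    feats, labels = [], []
    for model, (x_train, x_hold) in zip(shadow_models, shadow_splits):
        feats.append(model.predict_proba(x_train))
        labels.append(np.ones(len(x_train)))      # members
        feats.append(model.predict_proba(x_hold))
        labels.append(np.zeros(len(x_hold)))      # non-members
    attack = RandomForestClassifier(n_estimators=100)
    attack.fit(np.vstack(feats), np.concatenate(labels))
    return attack  # attack.predict_proba(conf_vec) estimates P(member)
```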