OF SIMPLE MODELS

We use multiple ML models in our experiments. All models are implemented with sklearn version 0.22, except for the logistic regression
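As a minimal sketch, the simple sklearn models could be instantiated as below. The specific model set and hyperparameters are assumptions for illustration, not the paper's exact configuration (the text only names the logistic regression explicitly):

```python
# Hypothetical sketch: instantiate a set of simple scikit-learn models.
# Model choices and hyperparameters are illustrative assumptions.
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier


def build_models(random_state=0):
    """Return a name -> estimator mapping for the simple models."""
    return {
        "logistic_regression": LogisticRegression(max_iter=1000),
        "decision_tree": DecisionTreeClassifier(random_state=random_state),
        "random_forest": RandomForestClassifier(
            n_estimators=100, random_state=random_state
        ),
    }


models = build_models()
```

Each estimator exposes the standard sklearn `fit`/`predict` interface, so the same training and evaluation loop can be reused across all of them.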
This dataset contains around 3M samples. We filter out attributes with too many missing values and obtain 30 valid features, including temperature, humidity, and pressure.
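The filtering step above can be sketched in plain Python. This is a hypothetical helper (the paper does not give its preprocessing code or the exact missing-value threshold, which is assumed here to be 50%):

```python
# Hypothetical sketch of the preprocessing step: keep only attributes
# whose missing-value ratio does not exceed a threshold.
def filter_attributes(rows, max_missing_ratio=0.5):
    """rows: list of dicts mapping attribute name -> value (None = missing).

    Returns the sorted list of attribute names that are missing in at most
    `max_missing_ratio` of the rows.
    """
    if not rows:
        return []
    attrs = set().union(*(r.keys() for r in rows))
    n = len(rows)
    kept = []
    for attr in sorted(attrs):
        n_missing = sum(1 for r in rows if r.get(attr) is None)
        if n_missing / n <= max_missing_ratio:
            kept.append(attr)
    return kept
```

Applied to the full dataset, a step like this would reduce the raw attribute set to the 30 valid features mentioned above.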
STL10 is a 10-class image dataset, with each class containing 1,300 images. The classes are airplane, bird, car, cat, deer, dog, horse, monkey, ship, and truck.

MNIST is an image dataset widely used for classification.

CIFAR10 is a benchmark dataset used to evaluate image recognition algorithms.