2
Practical Membership Inference Attack Against Collaborative Inference in Industrial IoT
3
Generative Adversarial Networks
4
Adapting Membership Inference Attacks to GNN for Graph Classification: Approaches and Implications
5
Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture
6
Foundations of Machine Learning
7
Digestive neural networks: A novel defense strategy against inference attacks in federated learning
8
Membership Inference Attacks Against Recommender Systems
9
Source Inference Attacks in Federated Learning
10
Evaluating the Vulnerability of End-to-End Automatic Speech Recognition Models to Membership Inference Attacks
11
EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning
12
PAR-GAN: Improving the Generalization of Generative Adversarial Networks Against Membership Inference Attacks
13
Defending Privacy Against More Knowledgeable Membership Inference Attackers
14
Membership Inference Attacks on Lottery Ticket Networks
15
EAR: An Enhanced Adversarial Regularization Approach against Membership Inference Attacks
16
This Person (Probably) Exists. Identity Membership Attacks Against GAN Generated Faces
17
Trustworthy AI: A Computational Perspective
18
A Comprehensive Survey of Privacy-preserving Federated Learning
20
Membership Inference on Word Embedding and Beyond
21
A Survey of Unsupervised Generative Models for Exploratory Data Analysis and Representation Learning
22
On the Difficulty of Membership Inference Attacks
23
Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain
24
Membership Privacy for Machine Learning Models Through Knowledge Transfer
25
Membership Inference Attacks on Deep Regression Models for Neuroimaging
26
privGAN: Protecting GANs from membership inference attacks at low cost to utility
27
Membership Inference Attack Susceptibility of Clinical Language Models
28
Membership Inference Attacks on Knowledge Graphs
29
On the (In)Feasibility of Attribute Inference Attacks on Machine Learning Models
30
On the privacy-utility trade-off in differentially private hierarchical text classification
31
Defending Medical Image Diagnostics against Privacy Attacks using Generative Methods
32
A Taxonomy of Attacks on Federated Learning
33
Understanding deep learning (still) requires rethinking generalization
34
Node-Level Membership Inference Attacks Against Graph Neural Networks
35
ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models
36
Membership Inference Attack on Graph Neural Networks
37
Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning
38
Practical Blind Membership Inference Attack via Differential Comparisons
39
Membership Inference Attack with Multi-Grade Service Models in Edge Intelligence
40
Extracting Training Data from Large Language Models
41
When Machine Learning Meets Privacy
42
Privacy-Preserving in Defending against Membership Inference Attacks
43
On the Privacy Risks of Algorithmic Fairness
44
FaceLeaks: Inference Attacks against Transfer Learning Models via Black-box Queries
45
Differentially Private Learning Does Not Bound Membership Inference
46
Quantifying Membership Privacy via Information Leakage
47
HeteroFL: Computation and Communication Efficient Federated Learning for Heterogeneous Clients
48
Quantifying Privacy Leakage in Graph Embedding
49
GECKO: Reconciling Privacy, Accuracy and Efficiency in Embedded Deep Learning
50
An Extension of Fano's Inequality for Characterizing Model Susceptibility to Membership Inference Attacks
51
Quantifying Membership Inference Vulnerability via Generalization Gap and Other Model Metrics
52
Privacy Analysis of Deep Learning in the Wild: Membership Inference Attacks against Transfer Learning
53
Toward Robustness and Privacy in Federated Learning: Experimenting with Local and Central Differential Privacy
54
Investigating the Impact of Pre-trained Word Embeddings on Memorization in Neural Networks
55
A Comprehensive Analysis of Information Leakage in Deep Transfer Learning
56
Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries
57
A Pragmatic Approach to Membership Inferences on Machine Learning Models
58
Against Membership Inference Attack: Pruning is All You Need
59
Differential Privacy Protection Against Membership Inference Attack on Machine Learning for Genomic Data
60
Beyond Model-Level Membership Privacy Leakage: an Adversarial Approach in Federated Learning
61
Membership Leakage in Label-Only Exposures
62
Label-Leaks: Membership Inference Attack with Label
63
Label-Only Membership Inference Attacks
64
How Does Data Augmentation Affect Privacy in Machine Learning?
65
ML Privacy Meter: Aiding Regulatory Compliance by Quantifying the Privacy Risks of Machine Learning
66
A Survey of Privacy Attacks in Machine Learning
67
Auditing Differentially Private Machine Learning: How Private is Private SGD?
68
Adversarial Examples on Object Recognition
69
On the Effectiveness of Regularization Against Membership Inference Attacks
70
GAN Enhanced Membership Inference: A Passive Local Attack in Federated Learning
71
Revisiting Membership Inference Under Realistic Assumptions
72
An Overview of Privacy in Machine Learning
73
A Secure Federated Learning Framework for 5G Networks
74
Defending Model Inversion and Membership Inference Attacks via Prediction Purification
75
When Machine Unlearning Jeopardizes Privacy
76
Diabetic Retinopathy Detection
77
Privacy in Deep Learning: A Survey
78
Meta-Learning in Neural Networks: A Survey
79
Racism and discrimination in COVID-19 responses
80
Information Leakage in Embedding Models
81
Systematic Evaluation of Privacy Risks of Machine Learning Models
82
Improved Baselines with Momentum Contrastive Learning
83
Threats to Federated Learning: A Survey
84
Membership Inference Attacks and Defenses in Classification Models
85
Membership Inference Attacks and Defenses in Supervised Learning via Generalization Gap
86
Data and Model Dependencies of Membership Inference Attack
87
Modelling and Quantifying Membership Information Leakage in Machine Learning
88
Privacy for All: Demystify Vulnerability Disparity of Differential Privacy against Membership Inference Attack
89
privGAN: Protecting GANs from membership inference attacks at low cost
90
Cronus: Robust and Heterogeneous Collaborative Learning with Black-Box Knowledge Transfer
91
Segmentations-Leak: Membership Inference Attacks and Defenses in Semantic Image Segmentation
94
Effects of Differential Privacy and Data Skewness on Membership Inference Vulnerability
95
Momentum Contrast for Unsupervised Visual Representation Learning
96
Demystifying the Membership Inference Attack
97
A taxonomy and terminology of adversarial machine learning
98
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
99
FedMD: Heterogenous Federated Learning via Model Distillation
100
Characterizing Membership Privacy in Stochastic Gradient Langevin Dynamics
101
Alleviating Privacy Attacks via Causal Learning
102
MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples
103
Accident Risk Prediction based on Heterogeneous Sparse Data: New Dataset and Insights
104
GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models
105
On Inferring Training Data Attributes in Machine Learning Models
106
Federated Learning: Challenges, Methods, and Future Directions
107
Generalization in Generative Adversarial Networks: A Novel Perspective from Privacy Protection
108
Invariant Risk Minimization
109
On the Privacy Risks of Model Explanations
110
Stolen Memories: Leveraging Model Memorization for Calibrated White-Box Membership Inference
111
Generating Private Data Surrogates for Vision Related Tasks
112
Monte Carlo and Reconstruction Membership Inference Attacks against Generative Models
113
SocInf: Membership Inference Attacks on Social Media Health Data With Machine Learning
114
Disparate Vulnerability: on the Unfairness of Privacy Attacks Against Machine Learning
115
ML Defense: Against Prediction API Threats in Cloud-Based Machine Learning Service
116
White-box vs Black-box: Bayes Optimal Strategies for Membership Inference
117
Privacy Risks of Securing Machine Learning Models against Adversarial Examples
118
Membership Inference Attacks Against Adversarially Robust Deep Learning Models
119
The Audio Auditor: User-Level Membership Inference in Internet of Things Voice Services
120
Location Embeddings for Next Trip Recommendation
121
Membership Inference Attacks on Sequence-to-Sequence Models: Is My Data In Your Machine Translation System?
122
Evaluating Differentially Private Machine Learning in Practice
123
Measuring Membership Privacy on Aggregate Location Time-Series
124
GANobfuscator: Mitigating Information Leakage Under GAN via Differential Privacy
125
Demystifying Membership Inference Attacks in Machine Learning as a Service
126
Adversarial Attack and Defense on Graph Data: A Survey
127
Differentially Private Data Generative Models
128
Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning
129
Deep learning for healthcare: review, opportunities and challenges
130
Auditing Data Provenance in Text-Generation Models
131
Findings of the 2018 Conference on Machine Translation (WMT18)
132
Property Inference Attacks on Fully Connected Neural Networks using Permutation Invariant Representations
133
MLCapsule: Guarded Offline Deployment of Machine Learning as a Service
134
General Data Protection Regulation
135
Algorithms that remember: model inversion attacks and data protection law
136
Privacy-preserving Machine Learning through Data Obfuscation
137
Killing Four Birds with one Gaussian Process: The Relation between different Test-Time Attacks
138
ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models
139
Performing Co-membership Attacks Against Deep Generative Models
140
BDD100K: A Diverse Driving Video Database with Scalable Annotation Tooling
141
Exploiting Unintended Feature Leakage in Collaborative Learning
142
Extreme Adaptation for Personalized Neural Machine Translation
143
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
144
Generating Artificial Data for Private Deep Learning
145
The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks
146
Differentially Private Generative Adversarial Network
147
Understanding Membership Inferences on Well-Generalized Learning Models
148
Certified Robustness to Adversarial Examples with Differential Privacy
149
Machine Learning with Membership Privacy using Adversarial Regularization
150
Differentially Private Releasing via Deep Generative Model
151
Towards Measuring Membership Privacy
152
Differentially Private Federated Learning: A Client Level Perspective
153
Moonshine: Distilling with Cheap Convolutions
154
Progressive Growing of GANs for Improved Quality, Stability, and Variation
155
mixup: Beyond Empirical Risk Minimization
156
Learning Differentially Private Recurrent Language Models
157
The Mapillary Vistas Dataset for Semantic Understanding of Street Scenes
158
Machine Learning Models that Remember Too Much
159
Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting
160
walk2friends: Inferring Social Links from Mobility Profiles
161
Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
162
Knock Knock, Who's There? Membership Inference on Aggregate Location Data
163
WASSA-2017 Shared Task on Emotion Intensity
164
Privacy-Preserving Generative Deep Neural Networks Support Clinical Data Sharing
165
Towards Deep Learning Models Resistant to Adversarial Attacks
166
Attention is All you Need
167
LOGAN: Membership Inference Attacks Against Generative Models
168
ChestX-Ray8: Hospital-Scale Chest X-Ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases
169
Neural Collaborative Filtering
170
Improved Training of Wasserstein GANs
171
BEGAN: Boundary Equilibrium Generative Adversarial Networks
172
Generating Multi-label Discrete Patient Records using Generative Adversarial Networks
173
Age Progression/Regression by Conditional Adversarial Autoencoder
174
Rényi Differential Privacy
175
A survey on deep learning in medical image analysis
176
Towards the Science of Security and Privacy in Machine Learning
177
Understanding deep learning requires rethinking generalization
178
Adversarial Machine Learning at Scale
179
Membership Inference Attacks Against Machine Learning Models
180
Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data
181
Semi-Supervised Classification with Graph Convolutional Networks
182
Stealing Machine Learning Models via Prediction APIs
183
Enriching Word Vectors with Subword Information
184
node2vec: Scalable Feature Learning for Networks
185
Deep Learning with Differential Privacy
186
Multi-class texture analysis in colorectal cancer histology
187
Smart Reply: Automated Response Suggestion for Email
188
Concentrated Differential Privacy: Simplifications, Extensions, and Lower Bounds
189
MIMIC-III, a freely accessible critical care database
190
The Cityscapes Dataset for Semantic Urban Scene Understanding
191
Communication-Efficient Learning of Deep Networks from Decentralized Data
192
Participatory Cultural Mapping Based on Collective Behavior Data in Location-Based Social Networks
193
Deep learning for digital pathology image analysis: A comprehensive tutorial with selected use cases
194
Autoencoding beyond pixels using a learned similarity metric
195
Deep Residual Learning for Image Recognition
196
Rethinking the Inception Architecture for Computer Vision
197
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
198
Federated Optimization: Distributed Optimization Beyond the Datacenter
199
Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures
200
Train faster, generalize better: Stability of stochastic gradient descent
201
What Is Machine Learning
202
Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books
203
Distilling the Knowledge in a Neural Network
204
The human splicing code reveals new insights into the genetic determinants of disease
205
Explaining and Harnessing Adversarial Examples
206
Deep Learning Face Attributes in the Wild
207
GloVe: Global Vectors for Word Representation
208
A data-driven approach to cleaning large face datasets
209
Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing
210
CLiPS Stylometry Investigation (CSI) corpus: A Dutch corpus for the detection of age, gender, personality, sentiment and deception in text
211
Impact of HbA1c Measurement on Hospital Readmission Rates: Analysis of 70,000 Clinical Database Patient Records
212
Auto-Encoding Variational Bayes
213
Do Deep Nets Really Need to be Deep?
214
Intriguing properties of neural networks
215
Distributed Representations of Words and Phrases and their Compositionality
216
Hidden factors and hidden topics: understanding rating dimensions with review text
217
Hacking smart machines with smarter ones: How to extract meaningful data from machine learning classifiers
218
Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition
219
Bayesian Learning via Stochastic Gradient Langevin Dynamics
220
Chameleons in Imagined Conversations: A New Approach to Understanding Coordination of Linguistic Style in Dialogs
221
Caltech-UCSD Birds 200
222
On the Difficulties of Disclosure Prevention in Statistical Databases or The Case for Differential Privacy
223
ImageNet: A large-scale hierarchical image database
224
Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments
225
Collective Classification in Network Data
226
Resolving Individuals Contributing Trace Amounts of DNA to Highly Complex Mixtures Using High-Density SNP Genotyping Microarrays
227
Differential Privacy: A Survey of Results
228
Calibrating Noise to Sensitivity in Private Data Analysis
229
Acquiring linear subspaces for face recognition under variable lighting
230
RCV1: A New Benchmark Collection for Text Categorization Research
231
Unsupervised Learning: Foundations of Neural Computation
232
Monte Carlo Statistical Methods
233
NewsWeeder: Learning to Filter Netnews
234
Principles of Risk Minimization for Learning Theory
235
When Does Data Augmentation Help With Membership Inference Attacks?
236
Accuracy-Privacy Trade-off in Deep Ensembles
237
Comparing Local and Central Differential Privacy Using Membership Inference Attacks
238
Resisting membership inference attacks through knowledge distillation
239
Reconstruction-Based Membership Inference Attacks are Easier on Difficult Problems
240
GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models. In Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security (Virtual Event, USA) (CCS '20)
241
Exploiting Transparency Measures for Membership Inference: a Cautionary Tale
242
Towards the Infeasibility of Membership Inference on Deep Models
243
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
244
Membership Inference Attack against Differentially Private Deep Learning Model
245
Differentially Private Releasing via Deep Generative Model (Technical Report). arXiv:1801.01594v2 [cs.CR], 25 Mar 2018
246
Philipp Koehn, and Christof Monz
247
Reddit comments dataset. https://bigquery.cloud.google.com/dataset/fh-bigquery:reddit_comments
250
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)
251
Dropout: a simple way to prevent neural networks from overfitting
252
Acquire Valued Shoppers Challenge
253
Information Theory and Statistics
254
Reading Digits in Natural Images with Unsupervised Feature Learning
255
Large text compression benchmark
256
Learning Multiple Layers of Features from Tiny Images
257
Texas Hospital Inpatient Discharge Public Use Data File
258
Texas Health Care Information Collection Center
259
The MNIST database of handwritten digits
260
Artificial Intelligence: A Modern Approach
261
Online algorithms and stochastic approximations
262
Gradient-based learning applied to document recognition
265
Convergence de la répartition empirique vers la répartition théorique
266
Characteristic Functions on Graphs: Birds of a Feather, from Statistical Descriptors to Parametric Models
267
Neural network-based attacks