[2] Better Trigger Inversion Optimization in Backdoor Scanning
[3] Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free
[4] Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information
[5] Backdoor Defense via Decoupling the Training Process
[6] Few-shot Backdoor Defense Using Shapley Estimation
[7] Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks
[8] AEVA: Black-box Backdoor Detection Using Adversarial Extreme Value Analysis
[9] Adversarial Neuron Pruning Purifies Backdoored Deep Models
[10] Anti-Backdoor Learning: Training Clean Models on Poisoned Data
[11] Trigger Hunting with a Topological Prior for Trojan Detection
[12] Adversarial Unlearning of Backdoors via Implicit Hypergradient
[13] How to Inject Backdoors with Better Consistency: Logit Anchoring on Clean Data
[14] Excess Capacity and Backdoor Poisoning
[15] Poisoning and Backdooring Contrastive Learning
[16] MaxUp: Lightweight Adversarial Training with Data Augmentation Improves Neural Network Training
[17] Backdoor Attacks on Self-Supervised Learning
[18] DeHiB: Deep Hidden Backdoor Attack on Semi-supervised Learning via Adversarial Perturbation
[19] Poisoning the Unlabeled Dataset of Semi-Supervised Learning
[20] Rethinking the Backdoor Attacks’ Triggers: A Frequency Perspective
[21] Black-box Detection of Backdoor Attacks with Limited Information and Data
[22] Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits
[23] Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks
[24] Invisible Backdoor Attack with Sample-Specific Triggers
[25] Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff
[26] Can Adversarial Weight Perturbations Inject Neural Backdoors
[27] Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks
[28] Label-Consistent Backdoor Attacks
[29] Robust Anomaly Detection and Backdoor Attack Detection Via Differential Privacy
[30] Latent Backdoor Attacks on Deep Neural Networks
[31] Detecting AI Trojans Using Meta Neural Analysis
[32] Hidden Trigger Backdoor Attacks
[33] Invisible Backdoor Attacks on Deep Neural Networks Via Steganography and Regularization
[34] TABOR: A Highly Accurate Approach to Inspecting and Restoring Trojan Backdoors in AI Systems
[35] DeepInspect: A Black-box Trojan Detection and Mitigation Framework for Deep Neural Networks
[36] BadNets: Evaluating Backdooring Attacks on Deep Neural Networks
[37] Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks
[38] STRIP: A Defence Against Trojan Attacks on Deep Neural Networks
[39] A New Backdoor Attack in CNNs by Training Set Corruption Without Label Poisoning
[40] Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering
[41] Spectral Signatures in Backdoor Attacks
[42] Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation
[43] Practical Fault Attack on Deep Neural Networks
[44] Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks
[45] Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
[46] mixup: Beyond Empirical Risk Minimization
[47] Towards Deep Learning Models Resistant to Adversarial Attacks
[48] Understanding Black-box Predictions via Influence Functions
[50] Perceptual Losses for Real-Time Style Transfer and Super-Resolution
[51] Deep Residual Learning for Image Recognition
[52] Rethinking the Inception Architecture for Computer Vision
[53] Intriguing Properties of Neural Networks
[54] Detection of Traffic Signs in Real-World Images: The German Traffic Sign Detection Benchmark
[55] ImageNet: A Large-Scale Hierarchical Image Database
[56] Differential Privacy: A Survey of Results
[57] Image Up-Sampling Using Total-Variation Regularization with a New Observation Model
[58] Soroush Abbasi Koohpayegani, and Hamed Pirsiavash
[59] What Doesn't Kill You Makes You Robust(er): Adversarial Training against Poisons and Backdoors
[60] SPECTRE: Defending Against Backdoor Attacks Using Robust Covariance Estimation
[61] Trojaning Attack on Neural Networks
[62] Dropout: A Simple Way to Prevent Neural Networks from Overfitting
[63] Learning Multiple Layers of Features from Tiny Images