[1] UMD: Unsupervised Model Detection for X2X Backdoor Attacks
[2] SCALE-UP: An Efficient Black-box Input-level Backdoor Detection via Analyzing Scaled Prediction Consistency
[3] The "Beatrix" Resurrections: Robust Backdoor Detection via Gram Matrices
[4] Data-free Backdoor Removal based on Channel Lipschitzness
[5] Hardly Perceptible Trojan Attack against Neural Networks with Bit Flips
[6] DEFEAT: Deep Hidden Feature Backdoor Attacks by Imperceptible Perturbation and Latent Representation Constraints
[7] BppAttack: Stealthy and Efficient Trojan Attacks against Deep Neural Networks via Image Quantization and Contrastive Adversarial Learning
[8] Backdoor Defense via Decoupling the Training Process
[9] Few-Shot Backdoor Attacks on Visual Object Tracking
[10] Post-Training Detection of Backdoor Attacks for Two-Class and Multi-Attack Scenarios
[11] Test-Time Detection of Backdoor Triggers for Poisoned Deep Neural Networks
[12] Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks
[13] Backdoor Attack through Frequency Domain
[14] Detecting Backdoor Attacks against Point Cloud Classifiers
[15] Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks
[16] Adversarial Unlearning of Backdoors via Implicit Hypergradient
[17] CLEAR: Clean-up Sample-Targeted Backdoor in Neural Networks
[18] Towards Consumer Loan Fraud Detection: Graph Neural Networks with Role-Constrained Conditional Random Field
[19] Hidden Backdoors in Human-Centric Language Models
[20] A Backdoor Attack against 3D Point Cloud Classifiers
[21] Detecting Scene-Plausible Perceptible Backdoors in Trained DNNs Without Access to the Training Set
[22] Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits
[23] Backdoor Scanning for Deep Neural Networks through K-Arm Optimization
[24] Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks
[25] Invisible Backdoor Attack with Sample-Specific Triggers
[26] Input-Aware Dynamic Backdoor Attack
[27] SoK: Certified Robustness for Deep Neural Networks
[28] Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases
[29] Backdoor Learning: A Survey
[30] SoK: The Faults in our ASRs: An Overview of Attacks against Automatic Speech Recognition and Speaker Identification Systems
[31] Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks
[32] BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements
[33] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping
[34] Interleaved Sequence RNNs for Fraud Detection
[35] Backdoor Suppression in Neural Networks using Input Fuzzing and Majority Voting
[36] Robust Anomaly Detection and Backdoor Attack Detection via Differential Privacy
[37] AI Developers Tout Revolution, Drugmakers Talk Evolution
[38] ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation
[39] Detecting AI Trojans Using Meta Neural Analysis
[40] A Benchmark Study of Backdoor Data Poisoning Defenses for Deep Neural Network Classifiers and a Novel Defense
[41] Hidden Trigger Backdoor Attacks
[42] Adversarial Attacks and Defenses in Images, Graphs and Text: A Review
[43] Detection of Backdoors in Trained Classifiers Without Access to the Training Set
[44] Februus: Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems
[45] TABOR: A Highly Accurate Approach to Inspecting and Restoring Trojan Backdoors in AI Systems
[46] DeepInspect: A Black-box Trojan Detection and Mitigation Framework for Deep Neural Networks
[47] Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs
[48] Deep Learning-Assisted Diagnosis of Cerebral Aneurysms Using the HeadXNet Model
[49] Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks
[50] BadNets: Evaluating Backdooring Attacks on Deep Neural Networks
[51] Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks
[52] STRIP: A Defence Against Trojan Attacks on Deep Neural Networks
[53] Knockoff Nets: Stealing Functionality of Black-Box Models
[54] SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems
[55] Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering
[56] A Survey on Data Collection for Machine Learning: A Big Data-AI Integration Perspective
[57] Exploring Connections Between Active Learning and Model Extraction
[58] Spectral Signatures in Backdoor Attacks
[59] Learning with Bad Training Data via Iterative Trimmed Loss Minimization
[60] Clean-Label Backdoor Attacks
[61] Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation
[62] Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks
[63] Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition
[64] SoK: Security and Privacy in Machine Learning
[65] Dynamic Graph CNN for Learning on Point Clouds
[66] Inverted Residuals and Linear Bottlenecks: Mobile Networks for Classification, Detection and Segmentation
[67] MobileNetV2: Inverted Residuals and Linear Bottlenecks
[68] Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
[69] Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
[70] Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization
[71] Towards Deep Learning Models Resistant to Adversarial Attacks
[72] Generative Poisoning Attack Method Against Neural Networks
[73] Very Deep Convolutional Neural Networks for Raw Waveforms
[74] Towards Evaluating the Robustness of Neural Networks
[75] Stealing Machine Learning Models via Prediction APIs
[76] Deep Learning with Differential Privacy
[77] Practical Black-Box Attacks against Machine Learning
[78] Deep Residual Learning for Image Recognition
[79] DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks
[80] Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures
[81] Explaining and Harnessing Adversarial Examples
[82] Very Deep Convolutional Networks for Large-Scale Image Recognition
[83] Convex Optimization: Algorithms and Complexity
[84] Intriguing Properties of Neural Networks
[85] Man vs. Computer: Benchmarking Machine Learning Algorithms for Traffic Sign Recognition
[86] Poisoning Attacks against Support Vector Machines
[87] Support Vector Machines Under Adversarial Label Noise
[88] ImageNet: A Large-Scale Hierarchical Image Database
[89] A Tutorial on Conformal Prediction
[90] Effective Backdoor Defense by Exploiting Sensitivity of Poisoned Samples
[91] IEEE Trojan Removal Competition
[92] Trojan Detection Challenge NeurIPS 2022
[93] What Doesn't Kill You Makes You Robust(er): Adversarial Training against Poisons and Backdoors
[94] Backdoor Attack with Imperceptible Input and Latent Modification
[95] IARPA TrojAI: Trojans in Artificial Intelligence
[96] Trojaning Attack on Neural Networks
[99] Learning Multiple Layers of Features from Tiny Images
[100] Misleading Learners: Co-opting Your Spam Filter
[102] WaNet - Imperceptible Warping-based Backdoor Attack