Research Connect

The Limitations of Deep Learning in Adversarial Settings

Published in the European Symposium on Security and Privacy (2015-11-24)

On This Page

  • TL;DR
  • Abstract
  • Authors
  • Datasets
  • References

TL;DR

This work formalizes the space of adversaries against deep neural networks (DNNs) and introduces a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.

Abstract

Deep learning takes advantage of large datasets and computationally efficient training algorithms to outperform other approaches at various machine learning tasks. However, imperfections in the training phase of deep neural networks make them vulnerable to adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs. In an application to computer vision, we show that our algorithms can reliably produce samples correctly classified by human subjects but misclassified in specific targets by a DNN with a 97% adversarial success rate while only modifying on average 4.02% of the input features per sample. We then evaluate the vulnerability of different sample classes to adversarial perturbations by defining a hardness measure. Finally, we describe preliminary work outlining defenses against adversarial samples by defining a predictive measure of distance between a benign input and a target classification.
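The crafting algorithms the abstract alludes to rank input features by how strongly increasing them pushes the model toward the adversary's chosen target class, using the Jacobian of the network's outputs with respect to its inputs. A minimal sketch of that saliency idea, assuming a NumPy Jacobian of class scores is already available (the precise formulation and feature-selection strategy are the paper's):

```python
import numpy as np

def saliency_map(jacobian, target):
    """Score input features for a targeted misclassification.

    jacobian: (n_classes, n_features) array of derivatives
              dF_j/dx_i evaluated at the current input.
    target:   index of the class the adversary wants the model to output.
    Returns an (n_features,) array; higher scores mark features whose
    increase raises the target class while lowering the other classes.
    """
    target_grad = jacobian[target]                    # dF_target/dx_i
    others_grad = jacobian.sum(axis=0) - target_grad  # sum of dF_j/dx_i, j != target
    # A feature is only useful if it helps the target class (positive
    # derivative) AND hurts the other classes in aggregate (negative sum).
    useful = (target_grad > 0) & (others_grad < 0)
    return np.where(useful, target_grad * np.abs(others_grad), 0.0)

def craft_step(x, jacobian, target, theta=1.0):
    """One crafting step: perturb the most salient feature by theta,
    keeping the input in the valid [0, 1] range (e.g. pixel intensities)."""
    i = int(np.argmax(saliency_map(jacobian, target)))
    x = x.copy()
    x[i] = np.clip(x[i] + theta, 0.0, 1.0)
    return x, i
```

Iterating this step until the model predicts the target class (or a distortion budget is exhausted) is what keeps the average perturbation small — the 4.02% of modified features reported above.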

Authors

Nicolas Papernot

3 Papers

P. McDaniel

2 Papers

S. Jha

3 Papers

Matt Fredrikson

2 Papers

Z. B. Celik

2 Papers

A. Swami

2 Papers

Datasets

MNIST

References (45 items)

1

Going deeper with convolutions

2

ImageNet classification with deep convolutional neural networks

3

Gradient-based learning applied to document recognition

4

How transferable are features in deep neural networks?

5

A Fast Learning Algorithm for Deep Belief Nets

6

Learning Deep Architectures for AI

7

Intriguing properties of neural networks

8

Generative Adversarial Nets

9

A unified architecture for natural language processing: deep neural networks with multitask learning

10

Learning representations by back-propagating errors

11

Multilayer feedforward networks are universal approximators

12

Explaining and Harnessing Adversarial Examples

13

Machine learning - a probabilistic perspective

14

Multi-column deep neural network for traffic sign classification

15

DeepFace: Closing the Gap to Human-Level Performance in Face Verification

16

Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps

17

Evasion Attacks against Machine Learning at Test Time

18

Exploring Strategies for Training Deep Neural Networks

19

The MNIST database of handwritten digits

20

Adversarial machine learning

21

Towards Deep Neural Network Architectures Robust to Adversarial Examples

22

Poisoning Attacks against Support Vector Machines

23

Support Vector Machines Under Adversarial Label Noise

24

Adversarial Machine Learning

25

The security of machine learning

26

Can machine learning be secure?

27

Convolutional, Long Short-Term Memory, fully connected Deep Neural Networks

28

Deep neural networks are easily fooled: High confidence predictions for unrecognizable images

29

Poisoning behavioral malware clustering

30

Pattern Recognition Systems under Attack: Design Issues and Research Challenges

31

Security Evaluation of Pattern Classifiers under Attack

32

Large-scale malware classification using random projections and neural networks

33

Context-Dependent Pre-Trained Deep Neural Networks for Large-Vocabulary Speech Recognition

34

Evading network anomaly detection systems: formal reasoning and practical techniques

35

How PayPal beats the bad guys with machine learning. http://www.infoworld.com/article/2907877/machine-learning/how-paypal-reduces-fraud-with-machine-learning

36

Long short-term memory recurrent neural network architectures for large scale acoustic modeling

37

LISA lab

38

Theano: a CPU and GPU math expression compiler

39

MNIST handwritten digit database. AT&T Labs [Online]

40

Network security - private communication in a public world

41

Fundamentals of computer security technology

42

Learning parameter of η = 0.1 for 200 epochs: the learned network exhibits a 98.93% accuracy rate on the MNIST training set and a 99.41% accuracy rate on the MNIST test set

43

Architecture, training through back-propagation, and forward-derivative computation

44

Suggested in the Theano documentation: LeNet-5 architecture

45

A. Validation setup details

Research Impact

  • 3774 Citations
  • 45 References
  • 1 Dataset
  • 6 Authors


Field of Study

Computer Science, Mathematics

Journal Information

Name

2016 IEEE European Symposium on Security and Privacy (EuroS&P)

Venue Information

Name

European Symposium on Security and Privacy

Type

conference

URL

N/A

Alternate Names

  • EuroS&P
  • IEEE European Symposium on Security and Privacy
  • Eur Symp Secur Priv
  • IEEE Eur Symp Secur Priv
  • EUROS&P