R. Pinot (Dauphine): Robustness through randomization: from differential privacy to adversarial examples


Deep neural networks achieve state-of-the-art performance in a variety of domains such as image recognition and graph processing. However, it has been shown that such networks are vulnerable to adversarial examples, i.e., imperceptible perturbations of natural examples crafted to deliberately mislead the model. Since their discovery, a variety of algorithms have been developed to defend neural networks against such threats. Our work focuses on techniques that inject noise into the network at inference time. These techniques have proven effective in many contexts, but so far lack theoretical justification.
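
To make this family of defenses concrete, the sketch below injects Gaussian noise into the input at inference time and aggregates predictions by majority vote. This is a minimal illustration assuming a standard PyTorch classifier; the function name, the noise scale sigma, and the number of samples are illustrative choices, not the speaker's exact method.

    import torch

    def noisy_predict(model: torch.nn.Module, x: torch.Tensor,
                      sigma: float = 0.25, n_samples: int = 100) -> int:
        """Classify x by injecting Gaussian noise at inference time.

        Runs the model on n_samples noisy copies of the input and returns
        the majority-vote class. sigma and n_samples are illustrative
        hyperparameters, not values from the talk.
        """
        model.eval()
        with torch.no_grad():
            # Replicate the input and add i.i.d. Gaussian noise to each copy.
            noisy = x.unsqueeze(0) + sigma * torch.randn(n_samples, *x.shape)
            votes = model(noisy).argmax(dim=1)
            # Majority vote over the noisy predictions.
            return torch.mode(votes).values.item()

At test time, a call such as noisy_predict(classifier, image) would replace a plain classifier(image).argmax(...) call, trading extra forward passes for the stability that the randomization provides.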
We present a theoretical analysis of these approaches, explaining why they perform well in practice. More precisely, we observe that being robust to adversarial examples and ensuring differential privacy are closely related problems. We then leverage this similarity to show that noise injection can, in some cases, provide provable robustness against adversarial examples. Finally, we present experiments on an image classification task and discuss further work on graph classification.
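
The bridge to differential privacy can be stated via the standard (epsilon, delta)-DP definition; the formulation below is the textbook one, given for context rather than as the talk's exact statement. A randomized mechanism M is (epsilon, delta)-differentially private with respect to a neighboring relation on inputs if

    \[
      \Pr[M(x) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(x') \in S] + \delta
      \qquad \text{for all measurable } S \text{ and all neighboring } x, x'.
    \]

Reading "neighboring" as "within the adversary's perturbation budget" (e.g., \( \|x - x'\| \le \alpha \)) turns this stability guarantee into a robustness property: if the output distribution of a noise-injected classifier barely changes under small input perturbations, a small adversarial perturbation cannot substantially change the (majority-vote) prediction.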

Dates: 
Thursday, February 13, 2020 - 11:00 to 12:00
Location: 
Inria B21
Speaker(s): 
Rafael Pinot
Affiliation(s): 
Université Paris-Dauphine