Deep neural networks (e.g. classifiers) are vulnerable to adversarial examples: intentionally perturbed images that mislead classifiers. However, existing adversarial images can be easily detected by defence frameworks, can be noticed by humans, or do not transfer to unseen classifiers, because they neglect the content of images and the semantic relationships between labels. We propose semantic adversarial images that exploit image properties (e.g. colours, objects and structure) and the characteristics of the human visual system, with the objective of reducing detectability and noticeability and improving transferability. In particular, we will show how to generate natural and enhanced adversarial images by selectively modifying colours within chosen ranges that humans perceive as natural, and by enhancing image details, respectively. Finally, we will describe how to exploit such adversarial images to protect the visual content of images against automatic inferences by classifiers on social media.
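To give a flavour of the idea of bounded, naturalness-preserving colour changes (a toy sketch only, not the actual method presented in the talk), one could shift each pixel's hue by a small, fixed amount while leaving saturation and brightness untouched, so the perturbed image still looks plausible to a human viewer:

```python
import colorsys  # standard-library RGB/HSV conversion

def natural_colour_shift(pixels, hue_shift=0.05):
    """Shift each pixel's hue by `hue_shift` (a fraction of the hue circle),
    keeping saturation and value fixed so the change stays within a range
    that tends to look natural. `pixels` is a list of (r, g, b) tuples
    with channels in [0, 255]. Illustrative only; real semantic attacks
    choose per-region colour ranges and optimise against a classifier."""
    out = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        h = (h + hue_shift) % 1.0  # small, bounded hue perturbation
        r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
        out.append((round(r2 * 255), round(g2 * 255), round(b2 * 255)))
    return out
```

Note that greyscale pixels (zero saturation) are left unchanged by a pure hue shift, which is one simple way such a perturbation can avoid touching achromatic regions where colour changes would be most conspicuous.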
Zoom link: https://univ-lille-fr.zoom.us/j/97722191935