Deep learning has achieved tremendous success in application areas such as computer vision, natural language processing, and game playing (e.g., AlphaGo). Unfortunately, recent work has shown that an adversary can fool deep learning models into producing incorrect predictions by maliciously manipulating their inputs. The resulting manipulated samples are called adversarial examples. This vulnerability significantly hinders the deployment of deep learning, particularly in safety-critical applications.
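As a concrete illustration (not part of the talk itself), here is a minimal sketch of one well-known way to construct such adversarial examples, the fast gradient sign method (FGSM), assuming a PyTorch classifier with inputs scaled to [0, 1]; the function name and the budget `epsilon` are illustrative choices:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Construct adversarial examples with the fast gradient sign
    method: perturb each input in the direction that most increases
    the classification loss, then clamp back to valid pixel range."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed gradient step of size epsilon per pixel.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

To a human observer the perturbed image is typically indistinguishable from the original, yet the model's prediction changes.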
In this talk, I will first introduce several approaches for constructing adversarial examples. I will then present a framework, called adversarial training, for improving the robustness of deep networks against adversarial examples, and discuss how to accelerate the training. Finally, I will show that this adversarial learning framework can be extended into an effective regularization strategy that improves generalization in semi-supervised learning.
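For orientation, a minimal sketch of one adversarial training step follows, reusing the hypothetical `fgsm_attack` helper above; actual adversarial training often uses stronger multi-step attacks (e.g., PGD) for the inner maximization:

```python
def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One step of adversarial training: generate adversarial examples
    on the fly, then minimize the loss on those perturbed inputs."""
    model.train()
    x_adv = fgsm_attack(model, x, y, epsilon)  # approximate inner maximization
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)    # outer minimization
    loss.backward()
    optimizer.step()
    return loss.item()
```

The key design choice is the min-max structure: the attack seeks perturbations that maximize the loss, while the optimizer updates the weights to minimize the loss on those worst-case inputs.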