
Distributional Smoothing with Virtual Adversarial Training
  • Shin-ichi Maeda

Corresponding Author: [email protected]


Abstract

Smoothness regularization is a popular method for decreasing generalization error. We propose a novel regularization technique that rewards local distributional smoothness (LDS), a KL-divergence-based measure of the model's robustness against perturbation. The LDS is defined in terms of the direction to which the model distribution is most sensitive. Our technique closely resembles adversarial training (Goodfellow et al., 2014), but distinguishes itself in that it determines the adversarial direction from the model distribution alone, without using information from labeled data. The technique is therefore applicable to semi-supervised training. When applied to the classification problem on permutation-invariant MNIST, it not only outperformed all models that do not rely on generative models or pre-training, but also performed well in comparison to the state-of-the-art method (Rasmus et al., 2015), which uses highly advanced generative models.
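To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of computing a virtual adversarial perturbation and the LDS for a toy linear-softmax classifier. The weight matrix `W`, the input `x`, and all hyperparameter values are illustrative assumptions; the paper obtains the most-sensitive direction via a backprop-based power method, which is approximated here with finite differences on the KL divergence.

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def kl(p, q):
    # KL divergence between two discrete distributions
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

# Hypothetical toy model: p(y|x) = softmax(W x)
W = np.array([[3.0, -1.0],
              [0.5,  2.0]])
predict = lambda x: softmax(W @ x)

def virtual_adversarial_perturbation(x, xi=1e-3, eps=0.5, n_iter=2, seed=0):
    """Approximate the input-space direction to which the model
    distribution is most sensitive, using no label information."""
    rng = np.random.default_rng(seed)
    d = rng.normal(size=x.shape)
    d /= np.linalg.norm(d)
    p = predict(x)
    for _ in range(n_iter):
        # Gradient of KL(p(x) || p(x + r)) at r = xi*d via central
        # differences; iterating this acts as a power method on the
        # Hessian of the KL divergence with respect to the perturbation.
        g = np.zeros_like(d)
        for i in range(d.size):
            e = np.zeros_like(d)
            e[i] = xi
            g[i] = (kl(p, predict(x + xi * d + e)) -
                    kl(p, predict(x + xi * d - e))) / (2 * xi)
        d = g / (np.linalg.norm(g) + 1e-12)
    return eps * d

x = np.array([1.0, 0.5])
r_adv = virtual_adversarial_perturbation(x)
# LDS: negative KL between the original and perturbed model
# distributions; larger (closer to zero) means locally smoother.
lds = -kl(predict(x), predict(x + r_adv))
```

In this two-class toy example the KL Hessian is rank one, so the power iteration converges quickly; in the general multi-class, deep-network setting, the same iteration is carried out with automatic differentiation rather than coordinate-wise finite differences.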