Generative Adversarial Networks
An overview of the paper “Generative Adversarial Networks”. The authors present a new framework for estimating generative models via an adversarial process. All images and tables in this post are from their paper.
Introduction
The GAN consists of two networks: a Generator ($G$) and a Discriminator ($D$). The idea is that the two networks play a two-player minimax game in which the goal of $G$ is to create data capable of fooling $D$, and the goal of $D$ is to not get fooled by $G$. Given enough training, the generator learns to approximate the distribution of the data and can produce samples that resemble it.
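As a concrete, hypothetical illustration of the two-network setup, here is a minimal PyTorch sketch; the layer sizes, the 64-dimensional noise vector, and the 784-dimensional flattened samples are assumptions chosen for illustration, not choices from the paper:

```python
import torch
import torch.nn as nn

# Hypothetical sizes, chosen only for illustration.
NOISE_DIM = 64   # dimension of the noise vector z
DATA_DIM = 784   # dimension of a flattened data sample x (e.g. 28x28 image)

# Generator G: maps noise z to a fake sample G(z).
G = nn.Sequential(
    nn.Linear(NOISE_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, DATA_DIM),
    nn.Tanh(),           # outputs scaled to [-1, 1]
)

# Discriminator D: maps a sample x to the probability that x came from the data.
D = nn.Sequential(
    nn.Linear(DATA_DIM, 256),
    nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
    nn.Sigmoid(),        # probability in (0, 1)
)

z = torch.randn(16, NOISE_DIM)   # a batch of noise vectors
fake = G(z)                      # G(z): generated samples
p_real = D(fake)                 # D(G(z)): D's estimate that the fakes are real
```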
Adversarial Nets
The idea is that we train $D$ to maximize the probability of assigning the correct label to both training examples and samples drawn from $G$. Simultaneously, we train $G$ to minimize $\log(1 - D(G(z)))$. The two players thus play a minimax game with the value function $V(G, D)$:

$$\min_G \max_D V(G, D) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))].$$
The authors also note that optimizing $D$ to completion in the inner loop of training is computationally prohibitive and, on finite datasets, would result in overfitting. Instead, we alternate between $k$ steps of optimizing $D$ and one step of optimizing $G$. Hence, $D$ is maintained near its optimal solution as long as $G$ changes slowly enough.
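Below is a minimal sketch of this alternating scheme in PyTorch, assuming toy network sizes, plain SGD, and a placeholder `real_batch` function standing in for sampling minibatches from $p_{\text{data}}$. It is an illustration rather than the paper's exact setup; the paper used minibatch SGD with momentum and $k = 1$ in its experiments, and also suggests training $G$ to maximize $\log D(G(z))$ rather than minimize $\log(1 - D(G(z)))$, which is what the generator step here does.

```python
import torch
import torch.nn as nn

# Toy networks and sizes, for illustration only (dimensions are assumptions).
NOISE_DIM, DATA_DIM, K_STEPS = 64, 784, 1   # the paper used k = 1 in its experiments
G = nn.Sequential(nn.Linear(NOISE_DIM, 256), nn.ReLU(), nn.Linear(256, DATA_DIM), nn.Tanh())
D = nn.Sequential(nn.Linear(DATA_DIM, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())
opt_D = torch.optim.SGD(D.parameters(), lr=1e-3)
opt_G = torch.optim.SGD(G.parameters(), lr=1e-3)
bce = nn.BCELoss()   # binary cross-entropy implements the log terms of V(G, D)

def real_batch(batch_size=32):
    # Placeholder standing in for a minibatch sampled from p_data.
    return torch.randn(batch_size, DATA_DIM)

for iteration in range(1000):
    # k steps of optimizing D: push D(x) toward 1 on real data, toward 0 on fakes.
    for _ in range(K_STEPS):
        x = real_batch()
        z = torch.randn(x.size(0), NOISE_DIM)
        fake = G(z).detach()                      # do not backprop into G here
        loss_D = bce(D(x), torch.ones(x.size(0), 1)) \
               + bce(D(fake), torch.zeros(x.size(0), 1))
        opt_D.zero_grad()
        loss_D.backward()
        opt_D.step()

    # One step of optimizing G: non-saturating heuristic, maximize log D(G(z)).
    z = torch.randn(32, NOISE_DIM)
    loss_G = bce(D(G(z)), torch.ones(32, 1))
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
```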
The paper also provides proofs to support these claims; a brief summary is given below.
Theorem 1
For $G$ fixed, the optimal discriminator $D$ is:

$$D^*_G(x) = \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_g(x)}$$
While training $D$, our goal is to maximize $V(G, D)$. Writing the expectations as integrals gives

$$V(G, D) = \int_x p_{\text{data}}(x) \log D(x) + p_g(x) \log\big(1 - D(x)\big) \, dx.$$

For any $(a, b) \in \mathbb{R}^2 \setminus \{(0, 0)\}$ with $a, b \ge 0$, the function $y \mapsto a \log y + b \log(1 - y)$ achieves its maximum in $[0, 1]$ at $y = \frac{a}{a + b}$. Applying this pointwise with $a = p_{\text{data}}(x)$ and $b = p_g(x)$ gives $D^*_G(x) = \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_g(x)}$, which proves the above statement.
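For completeness, the short calculus step behind that maximizer:

$$\frac{d}{dy}\Big[a \log y + b \log(1 - y)\Big] = \frac{a}{y} - \frac{b}{1 - y} = 0 \;\Longrightarrow\; a(1 - y) = b\,y \;\Longrightarrow\; y = \frac{a}{a + b},$$

and the second derivative $-\frac{a}{y^2} - \frac{b}{(1 - y)^2}$ is negative, so this critical point is indeed a maximum.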
Theorem 2
The global minimum of the virtual training criterion $C(G) = \max_D V(G, D)$ is achieved if and only if $p_g = p_{\text{data}}$. At that point, $C(G)$ achieves the value $-\log 4$.
Adding and subtracting $\log 4$ lets us rewrite $C(G)$ in terms of KL divergences, which can then be combined into the Jensen-Shannon divergence, giving

$$C(G) = -\log 4 + 2 \cdot \mathrm{JSD}(p_{\text{data}} \,\|\, p_g).$$

Since the Jensen-Shannon divergence is non-negative and equals zero only when the two distributions are equal, the minimum of $C(G)$ is $-\log 4$, achieved exactly when $p_g = p_{\text{data}}$. Hence, we have proved the above statement.
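Spelling out the addition-and-subtraction step in the paper's notation, with $D = D^*_G$ substituted into $V(G, D)$:

$$\begin{aligned} C(G) &= \mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_g(x)}\right] + \mathbb{E}_{x \sim p_g}\!\left[\log \frac{p_g(x)}{p_{\text{data}}(x) + p_g(x)}\right] \\ &= -\log 4 + \mathrm{KL}\!\left(p_{\text{data}} \,\middle\|\, \frac{p_{\text{data}} + p_g}{2}\right) + \mathrm{KL}\!\left(p_g \,\middle\|\, \frac{p_{\text{data}} + p_g}{2}\right) \\ &= -\log 4 + 2 \cdot \mathrm{JSD}(p_{\text{data}} \,\|\, p_g). \end{aligned}$$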
Proving Convergence of the Algorithm
The paper shows that if $G$ and $D$ have enough capacity and $D$ is allowed to reach its optimum at each step, then updating $p_g$ by the training criterion makes it converge to $p_{\text{data}}$. In practice, however, we optimize the parameters of a neural network representing $G$ rather than $p_g$ itself, and to the best of my knowledge there is no proof that such parameter updates converge to the global optimum. Nevertheless, the excellent performance of adversarial nets in practice suggests that they are a reasonable model despite the lack of theoretical guarantees.