Do GANs Actually Learn the Distribution?
Generative Adversarial Nets (GANs) are a framework for training deep generative models, due to Goodfellow et al. (2014). It involves a competition between a generator net that tries to produce realistic images and a discriminator net that tries to distinguish the generator's output from real images. The framework has been applied in many settings, but it has remained open how to quantify how well the generator actually learns the target distribution, even though the images it produces often look reasonable. In our ICML'17 paper (joint with Ge, Liang, Ma, and Zhang) we give an analysis for the case of discriminators and generators of finite capacity. On the positive side, we show the existence of an equilibrium in which the generator succeeds in fooling the discriminator. On the negative side, we show that at this equilibrium the generator may produce a distribution with fairly small support, which can be seen as a failure mode of the GANs framework. In subsequent work in ICLR'18 (joint with Risteski and Zhang) we show that this failure mode does occur in popular GAN implementations, which learn distributions with fairly small support. We quantify this using our new "birthday paradox" test.
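The idea behind the birthday paradox test is simple: draw a batch of s samples from the generator and look for near-duplicates among them; if duplicates appear, the effective support size of the learned distribution is on the order of s². The sketch below illustrates this heuristic under stated assumptions: `sample_images` is a hypothetical stand-in for a trained generator, and candidate pairs are ranked by a simple pixel-space distance, whereas the actual test inspects the closest pairs visually.

```python
# Minimal sketch of the "birthday paradox" test for estimating the support
# size of a GAN's output distribution. Assumptions: `sample_images(n)` is a
# placeholder for drawing n images from a trained generator; near-duplicates
# are ranked by Euclidean distance in pixel space (the paper inspects the
# top candidate pairs by eye rather than using a fixed threshold).
import numpy as np

def sample_images(n, dim=32 * 32 * 3):
    # Placeholder: replace with samples from the trained generator.
    rng = np.random.default_rng(0)
    return rng.standard_normal((n, dim))

def closest_pairs(batch, k=5):
    """Return the k most similar (distance, i, j) pairs in the batch."""
    n = batch.shape[0]
    dists = []
    for i in range(n):
        for j in range(i + 1, n):
            dists.append((np.linalg.norm(batch[i] - batch[j]), i, j))
    dists.sort(key=lambda t: t[0])
    return dists[:k]

# Birthday-paradox heuristic: if near-duplicates show up within a batch of
# s samples, the effective support size is roughly s**2.
s = 400
batch = sample_images(s)
for dist, i, j in closest_pairs(batch):
    print(f"pair ({i}, {j}) has distance {dist:.3f}")  # inspect these pairs for duplicates
print(f"If duplicates are found among the top pairs, estimated support ~ s^2 = {s**2}")
```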