3 best solutions to improve training stability of GANs (Ep. 88)

Generative Adversarial Networks, or GANs, are very powerful tools for generating data. However, training a GAN is not easy. More specifically, GANs suffer from three major issues: instability of the training procedure, mode collapse, and vanishing gradients.

In this episode I explain not only the most challenging issues one encounters while designing and training Generative Adversarial Networks, but also some methods and architectures to mitigate them. In addition, I elucidate three specific strategies that researchers are considering to improve the accuracy and reliability of GANs.

The most troublesome issues of GANs

Below are the 3 most challenging issues of training GANs, together with the 3 best solutions to improve their training stability.

Convergence to equilibrium

A typical GAN is formed by at least two networks: a generator G and a discriminator D. The generator’s task is to produce samples from random noise, while the discriminator has to learn to distinguish fake samples from real ones. Although it is theoretically possible for the generator and the discriminator to converge to a Nash equilibrium (a state in which neither network can improve by changing its own parameters alone), reaching such an equilibrium in practice is not easy.
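To make the adversarial setup concrete, here is a minimal training-loop sketch in PyTorch. The tiny networks, the latent dimension and the optimizer settings are placeholder assumptions for illustration, not the architectures discussed in the episode.

```python
import torch
import torch.nn as nn

latent_dim = 64  # assumed latent size, purely illustrative

# Toy networks: real GANs use much deeper, task-specific architectures.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, 2))  # noise -> sample
D = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, 1))           # sample -> logit

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def training_step(real_batch):
    n = real_batch.size(0)
    fake_batch = G(torch.randn(n, latent_dim))

    # Discriminator step: push real logits towards 1 and fake logits towards 0.
    d_loss = bce(D(real_batch), torch.ones(n, 1)) + bce(D(fake_batch.detach()), torch.zeros(n, 1))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Generator step: try to make the discriminator label the fakes as real.
    g_loss = bce(D(fake_batch), torch.ones(n, 1))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()
```

The two networks are updated in alternation, and the instability arises precisely because each update changes the objective the other network is optimising.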

Vanishing gradients

Moreover, a discriminator that becomes too accurate pushes its own loss towards lower and lower values and, with the original minimax objective, leaves the generator with almost no gradient to learn from. This, in turn, can cause the gradients to vanish and the generator to stop learning completely.
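A rough illustration of why this happens with the original (saturating) minimax generator loss, writing the discriminator as a sigmoid over a logit a(x); this is a standard textbook derivation, not something specific to the episode:

```latex
L_G = \log\bigl(1 - D(G(z))\bigr), \qquad D(x) = \sigma\bigl(a(x)\bigr)

\frac{\partial L_G}{\partial a} = -\sigma\bigl(a(G(z))\bigr) = -D\bigl(G(z)\bigr)
\;\longrightarrow\; 0
\quad \text{as the discriminator confidently rejects fakes, i.e. } D(G(z)) \to 0 .
```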

Mode collapse

Another phenomenon that is easy to observe when dealing with GANs is mode collapse: the inability of the model to generate diverse samples. The generated data become more and more similar to one another, until the entire generated dataset is concentrated around a single mode of the real data distribution.
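One simple, admittedly crude way to spot this behaviour during training is to monitor the diversity of a batch of generated samples, for example via their mean pairwise distance. The sketch below assumes the hypothetical `G` and `latent_dim` from the earlier snippet.

```python
import torch

def batch_diversity(generator, latent_dim, n_samples=256):
    """Mean pairwise L2 distance between generated samples.

    A value drifting towards zero over training hints that the generator
    is collapsing onto a single mode. This is only a rough diagnostic:
    the acceptable range is entirely task-dependent.
    """
    with torch.no_grad():
        z = torch.randn(n_samples, latent_dim)
        samples = generator(z).flatten(start_dim=1)
        return torch.cdist(samples, samples).mean().item()
```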

The solution

Researchers have considered several approaches to overcome these issues, experimenting with architectural changes, alternative loss functions, and ideas from game theory.
In this episode you can hear about the 3 best solutions to improve training stability of GANs.
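As a flavour of what “a different loss function” can look like, here is a sketch of a Wasserstein-style critic and generator loss. This is one well-known stabilisation technique from the literature, not necessarily one of the three solutions covered in the episode.

```python
import torch

def wasserstein_losses(critic, real_batch, fake_batch):
    """Wasserstein-style (WGAN) losses.

    The critic learns to score real samples higher than generated ones,
    and the generator tries to raise the critic's score of its fakes.
    In practice this is paired with a Lipschitz constraint on the critic
    (weight clipping or a gradient penalty), omitted here for brevity.
    """
    critic_loss = critic(fake_batch.detach()).mean() - critic(real_batch).mean()
    generator_loss = -critic(fake_batch).mean()
    return critic_loss, generator_loss
```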

Listen to the full episode to learn more about the most effective strategies for building GANs that are reliable and robust. Don’t forget to join the conversation on our new Discord channel. See you there!
