An introduction to Generative Adversarial Networks (with code in TensorFlow)
You can read this in one minute.
- Generative models create data similar to the data they were trained on
- Training these models is very hard
- One approach is using Generative Adversarial Networks (GANs)
- Facebook’s Yann LeCun considers them “the most interesting idea in the last 10 years in ML”
- What are the differences between Discriminative and Generative models?
- A discriminative model learns a function that maps the input data (x) to some desired output class label (y). In probabilistic terms, it directly learns the conditional distribution P(y|x)
- A generative model tries to learn the joint probability of the input data and labels simultaneously, i.e. P(x,y). This can be converted to P(y|x) for classification via Bayes' rule (see the worked equation after this list), but the generative ability could also be used for something else, such as creating likely new (x, y) samples
- GANs were first introduced by Ian Goodfellow et al. in 2014
- GANs consist of two competing neural network models
- One takes noise as input and generates samples (the generator)
- The other receives samples from both the generator and the training data (the discriminator)
- The discriminator is trained to distinguish the generated samples from the real ones, while the generator is trained to fool it (see the sketch after this list)
- Trained together, the two models improve until the generated data is indistinguishable from the real training data
- They are being applied to image generation tasks
- See the article for Python code using TensorFlow
- Or visit Github (https://github.com/AYLIEN/gan-intro)
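As a quick illustration of the conversion mentioned in the list above, Bayes' rule recovers the discriminative conditional from the generative joint simply by normalizing over the possible labels:

$$
P(y \mid x) \;=\; \frac{P(x, y)}{P(x)} \;=\; \frac{P(x, y)}{\sum_{y'} P(x, y')}
$$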
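And as a rough sketch of the adversarial setup described above, here is a minimal GAN in the spirit of the linked article. This is not the article's exact code: the TensorFlow 1.x style, network sizes, learning rate, and the 1D Gaussian target distribution are all illustrative assumptions.

```python
# A minimal GAN sketch, assuming TensorFlow 1.x; all hyperparameters
# here are illustrative, not taken from the article.
import numpy as np
import tensorflow as tf

def generator(z):
    # Maps random noise z to a generated sample.
    with tf.variable_scope('generator'):
        h = tf.layers.dense(z, 16, activation=tf.nn.relu)
        return tf.layers.dense(h, 1)

def discriminator(x, reuse=False):
    # Outputs a logit: high for "real data", low for "generated".
    with tf.variable_scope('discriminator', reuse=reuse):
        h = tf.layers.dense(x, 16, activation=tf.nn.relu)
        return tf.layers.dense(h, 1)

z = tf.placeholder(tf.float32, [None, 1])  # noise input
x = tf.placeholder(tf.float32, [None, 1])  # real training data

G = generator(z)
D_real = discriminator(x)
D_fake = discriminator(G, reuse=True)

# The discriminator learns to tell real from generated samples;
# the generator learns to make its samples get labelled as real.
d_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=D_real, labels=tf.ones_like(D_real)) +
    tf.nn.sigmoid_cross_entropy_with_logits(logits=D_fake, labels=tf.zeros_like(D_fake)))
g_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=D_fake, labels=tf.ones_like(D_fake)))

d_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'discriminator')
g_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'generator')
d_train = tf.train.AdamOptimizer(1e-3).minimize(d_loss, var_list=d_vars)
g_train = tf.train.AdamOptimizer(1e-3).minimize(g_loss, var_list=g_vars)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(5000):
        # Alternate the two updates: the networks compete until generated
        # samples resemble the training data (here, a 1D Gaussian).
        real = np.random.normal(4.0, 0.5, size=(32, 1))
        noise = np.random.uniform(-1.0, 1.0, size=(32, 1))
        sess.run(d_train, {x: real, z: noise})
        sess.run(g_train, {z: noise})
```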
Read the entire source article here.
If you want to learn more about GANs, we recommend starting with the following publications: