The Deep Learning universe grows every day. The whole field is somewhat in the phase of “crossing the chasm” and entering the mainstream, and along the way it keeps producing fascinating ideas.
One such interesting case is the Generative Adversarial Network, or GAN. This architecture was first presented in a 2014 paper by Ian Goodfellow and colleagues from the University of Montreal, and it took the world by storm. Even Yann LeCun, Facebook’s AI research director, called GANs “the most interesting idea in the last 10 years in Machine Learning”. Using this concept, people started creating surreal combinations of Kubrick and Picasso, and even sold art created by GANs for a lot of money (and I mean a lot of money).
Since then, the GAN Zoo itself has become so big that just scrolling through all the papers that utilize this concept can cause pain in your finger. All jokes aside, the main concepts behind GANs changed the world of deep learning. Their simple architecture, consisting of two neural networks competing against each other, opened a completely new chapter in the field’s history.
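To make the idea of two competing networks concrete, here is a minimal sketch of the losses that drive that competition, computed on toy probabilities with NumPy. The function names `d_loss` and `g_loss` are my own illustrative choices, not from any particular library; `d_real` and `d_fake` stand for the discriminator’s estimates that real and generated samples are real.

```python
import numpy as np

def d_loss(d_real, d_fake):
    # The discriminator tries to maximize log D(x) + log(1 - D(G(z))),
    # i.e. assign high probability to real samples and low to fakes.
    # We return the negated mean so it can be minimized.
    return -(np.log(d_real) + np.log(1.0 - d_fake)).mean()

def g_loss(d_fake):
    # Non-saturating generator loss: the generator tries to maximize
    # log D(G(z)), i.e. fool the discriminator into scoring fakes as real.
    return -np.log(d_fake).mean()

# A discriminator that cannot tell real from fake outputs 0.5 everywhere:
d_real = np.array([0.5, 0.5])
d_fake = np.array([0.5, 0.5])
print(d_loss(d_real, d_fake))  # 2*ln(2) ~ 1.386, the equilibrium value
print(g_loss(d_fake))          # ln(2) ~ 0.693
```

In actual training the two networks take alternating gradient steps on these losses, which is exactly the tug-of-war the articles below walk through in detail.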
Adversarial training, however, was not a new idea even at the moment GANs emerged. It can be traced back to machine learning legend Arthur L. Samuel. His two main papers (Samuel 1959; Samuel 1967) are landmarks in Artificial Intelligence. In his 1959 paper, which explored computer checkers, he described the problem of an agent playing a game of checkers against itself. This is a typical example of an adversarial process. Ian Goodfellow, the inventor of GANs, defined the adversarial process as “training a model in a worst-case scenario, with inputs chosen by an adversary”.
If you want to know more about how all of these things are connected, check out this series of articles:
- Introduction to Generative Adversarial Networks (GANs)
- Implementing GAN & DCGAN with Python
- Introduction to Adversarial Autoencoders
- Generating Images using Adversarial Autoencoders and Python
- Introduction to CycleGAN
- Implementing CycleGAN Using Python
Thank you for reading!
This article is a part of Artificial Neural Networks Series, which you can check out here.
Read more posts from the author at Rubik’s Code.