When they were first presented in 2014, Generative Adversarial Networks (GANs) took the world of deep learning by storm. Their two-part architecture opened the path to many creative solutions and combinations; Yann LeCun even called them "the most interesting idea in the last 10 years in Machine Learning". Since then, the GAN zoo has grown considerably, with new architectures that harness the adversarial premise appearing on a regular basis. One of them is the Adversarial Autoencoder (AAE).
Because the standard GAN architecture is composed of two neural networks trained against each other, we can swap in different models for either player and thus create new architectures, like the Adversarial Autoencoder.
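To make the two-player structure concrete, here is a minimal sketch of the adversarial objective. The "networks" below are deliberately toy linear functions (the names `generator`, `discriminator`, and the parameters `w`, `v` are illustrative, not from any library); the point is only to show the two losses that pit the players against each other, not to train a real GAN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "generator": maps noise z to a fake sample (a real GAN would use a
# deep network here).
def generator(z, w):
    return w * z

# Toy "discriminator": sigmoid of a linear score, read as P(x is real).
def discriminator(x, v):
    return 1.0 / (1.0 + np.exp(-v * x))

x_real = rng.normal(loc=2.0, scale=0.5, size=8)  # samples from the "data"
z = rng.normal(size=8)                           # noise fed to the generator
w, v = 0.1, 1.0                                  # toy parameters

x_fake = generator(z, w)

# Discriminator objective: push D(real) toward 1 and D(fake) toward 0.
d_loss = -np.mean(np.log(discriminator(x_real, v))
                  + np.log(1.0 - discriminator(x_fake, v)))

# Generator objective: fool the discriminator, i.e. push D(fake) toward 1
# (the commonly used non-saturating form).
g_loss = -np.mean(np.log(discriminator(x_fake, v)))

print(f"d_loss={d_loss:.3f}, g_loss={g_loss:.3f}")
```

Training alternates gradient steps on these two losses, and because each player is just a pluggable module, replacing one of them with, say, an autoencoder is what leads to architectures like the AAE.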