Mixture of Variational Autoencoders – a Fusion Between MoE and VAE
The Variational Autoencoder (VAE) is a paragon of neural networks that try to learn the shape of the input space. Once trained, the model can be used to generate new samples from that space. If we have labels for our input data, it's also possible to condition the generation process on the label. In the MNIST case, this means we can specify …
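To make the conditioning idea concrete, here's a minimal sketch of a conditional decoder, written in PyTorch as an assumption (the post's own framework isn't shown here). The class name, layer sizes, and 20-dimensional latent are illustrative, not the post's actual architecture. The label enters as a one-hot vector concatenated to the latent code, so fixing the label while sampling the latent selects what gets generated:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalDecoder(nn.Module):
    """Hypothetical decoder of a conditional VAE, p(x | z, y), for MNIST."""
    def __init__(self, latent_dim=20, num_classes=10, hidden_dim=400, out_dim=784):
        super().__init__()
        self.num_classes = num_classes
        # The label is concatenated to the latent code as a one-hot vector
        self.fc1 = nn.Linear(latent_dim + num_classes, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, out_dim)

    def forward(self, z, y):
        y_onehot = F.one_hot(y, num_classes=self.num_classes).float()
        h = F.relu(self.fc1(torch.cat([z, y_onehot], dim=1)))
        return torch.sigmoid(self.fc2(h))  # Bernoulli pixel probabilities

# "Specify which digit": fix the label, sample only the latent code.
decoder = ConditionalDecoder()
z = torch.randn(1, 20)                  # latent code drawn from the prior N(0, I)
digit = torch.tensor([7])               # ask for a seven
image = decoder(z, digit).view(28, 28)  # 28x28 output (noise until trained)
```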
Variational Autoencoders Explained in Detail
In the previous post of this series I introduced the Variational Autoencoder (VAE) framework and explained the theory behind it. In this post I'll explain the VAE in more detail, or in other words, I'll provide some code 🙂 After reading this post, you'll understand the technical details needed to implement a VAE. As a bonus, I'll show you how by …
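As a taste of the technical details the post covers, here's a minimal, self-contained VAE sketch, again in PyTorch as an assumption rather than the post's actual code. It shows the three standard ingredients: an encoder that outputs the mean and log-variance of q(z|x), the reparameterization trick, and the ELBO loss (reconstruction term plus KL divergence); the layer sizes are illustrative defaults:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal VAE for 28x28 MNIST images flattened to 784 dims."""
    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        # Encoder: maps x to the parameters of q(z|x) = N(mu, diag(exp(logvar)))
        self.enc = nn.Linear(input_dim, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder: maps z back to Bernoulli pixel probabilities
        self.dec1 = nn.Linear(latent_dim, hidden_dim)
        self.dec2 = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps keeps sampling differentiable w.r.t. mu, logvar
        eps = torch.randn_like(mu)
        return mu + torch.exp(0.5 * logvar) * eps

    def decode(self, z):
        return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(recon_x, x, mu, logvar):
    # Reconstruction term: negative log-likelihood under Bernoulli pixels
    bce = F.binary_cross_entropy(recon_x, x, reduction='sum')
    # KL(q(z|x) || N(0, I)) in closed form for diagonal Gaussians
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld
```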
Variational Autoencoders Explained
Ever wondered how the Variational Autoencoder (VAE) model works? Do you want to know how a VAE is able to generate new examples similar to the dataset it was trained on? After reading this post, you'll have a theoretical understanding of the inner workings of the VAE, and you'll be able to implement one yourself. In a future post I'll provide you with working code …
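The "generate new examples" part, once a VAE is trained, reduces to sampling latent codes from the prior and decoding them. A short sketch, reusing the hypothetical VAE class from above (the checkpoint path is a placeholder, not a real file):

```python
import torch

model = VAE()  # the sketch class defined earlier
# model.load_state_dict(torch.load('vae.pt'))  # placeholder checkpoint path
model.eval()
with torch.no_grad():
    z = torch.randn(16, 20)            # 16 latent codes from the prior N(0, I)
    samples = model.decode(z)          # decode to 16 x 784 pixel probabilities
    images = samples.view(16, 28, 28)  # reshape into MNIST-sized images
```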