Likewise, we admire the stories of musicians, artists, writers, and every other creative person because of their personal struggles, how they overcome life's challenges, and how they find inspiration in everything they've been through. That's the true nature of human art. That's something that can't be automated, even if we achieve the always-elusive artificial general intelligence. — Ray … [Read more...] about Neural Style Transfer and Visualization of Convolutional Networks
CV Tutorials
An Introduction to Super Resolution Using Deep Learning
Introduction Super Resolution is the process of recovering a High Resolution (HR) image from a given Low Resolution (LR) image. An image may have a "lower resolution" due to a smaller spatial resolution (i.e. size) or as a result of degradation (such as blurring). We can relate the HR and LR images through the following equation: [latex]LR = … [Read more...] about An Introduction to Super Resolution Using Deep Learning
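The degradation relation mentioned above is commonly modeled as blurring the HR image, downsampling it, and adding noise. Here is a minimal NumPy sketch of that idea — the box-blur kernel, scale factor, and noise level are illustrative assumptions, not the post's exact model:

```python
import numpy as np

def degrade(hr, scale=4, noise_std=0.01, rng=None):
    """Toy degradation model: blur -> downsample -> additive noise.

    Approximates LR = D(B(HR)) + n, where B is a blur kernel,
    D is downsampling by `scale`, and n is Gaussian noise.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    # Simple 3x3 box blur as a stand-in for a real blur kernel.
    padded = np.pad(hr, 1, mode="edge")
    blurred = sum(
        padded[i:i + hr.shape[0], j:j + hr.shape[1]]
        for i in range(3) for j in range(3)
    ) / 9.0
    lr = blurred[::scale, ::scale]                   # downsample by striding
    return lr + rng.normal(0, noise_std, lr.shape)   # additive noise

hr = np.random.rand(64, 64)   # stand-in for a real HR image
lr = degrade(hr)
print(lr.shape)  # (16, 16)
```

Super-resolution models then try to invert this mapping, recovering `hr` from `lr`.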
How to Create a Fake Video of a Real Person
Recent advances in artificial intelligence have made it possible to create convincing fake videos, or deepfakes, of real people. While the ethical implications and the creative capabilities of this amazing technology are only beginning to be explored, there are growing concerns that it could be used maliciously to ruin reputations and cause extensive damage. Can … [Read more...] about How to Create a Fake Video of a Real Person
Intuitively Understanding Variational Autoencoders
In contrast to the more standard uses of neural networks as regressors or classifiers, Variational Autoencoders (VAEs) are powerful generative models, with applications as diverse as generating fake human faces and producing purely synthetic music. This post will explore what a VAE is, the intuition behind why it works so well, and its uses as a powerful … [Read more...] about Intuitively Understanding Variational Autoencoders
Mixture of Variational Autoencoders – a Fusion Between MoE and VAE
The Variational Autoencoder (VAE) is a paragon for neural networks that try to learn the shape of the input space. Once trained, the model can be used to generate new samples from the input space. If we have labels for our input data, it’s also possible to condition the generation process on the label. In the MNIST case, it means we can specify … [Read more...] about Mixture of Variational Autoencoders – a Fusion Between MoE and VAE
Variational Autoencoders Explained in Detail
In the previous post of this series I introduced the Variational Autoencoder (VAE) framework and explained the theory behind it. In this post I'll explain the VAE in more detail — in other words, I'll provide some code 🙂 After reading this post, you'll understand the technical details needed to implement a VAE. As a bonus, I'll show you how by … [Read more...] about Variational Autoencoders Explained in Detail
Variational Autoencoders Explained
Ever wondered how the Variational Autoencoder (VAE) model works? Do you want to know how a VAE is able to generate new examples similar to the dataset it was trained on? After reading this post, you'll have a theoretical understanding of the inner workings of the VAE and be able to implement one yourself. In a future post I'll provide you with working code … [Read more...] about Variational Autoencoders Explained