Ever wondered how the Variational Autoencoder (VAE) model works? Do you want to know how a VAE is able to generate new examples similar to the dataset it was trained on? After reading this post, you'll be equipped with a theoretical understanding of the inner workings of VAEs and will be able to implement one yourself. In a future post I'll provide you with working code … [Read more...] about Variational Autoencoders Explained
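For a taste of what the full post covers, here is a minimal VAE sketch in PyTorch: an encoder that predicts the mean and log-variance of the latent distribution, the reparameterization trick, a decoder, and the standard reconstruction-plus-KL loss. This is an illustrative sketch of the general technique, not the post's own code (which is promised for a future installment); all names and sizes here are our assumptions.

    # Minimal VAE sketch in PyTorch (illustrative; sizes match MNIST-style inputs).
    import torch
    import torch.nn as nn

    class VAE(nn.Module):
        def __init__(self, input_dim=784, latent_dim=20):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(input_dim, 400), nn.ReLU())
            self.fc_mu = nn.Linear(400, latent_dim)       # mean of q(z|x)
            self.fc_logvar = nn.Linear(400, latent_dim)   # log-variance of q(z|x)
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 400), nn.ReLU(),
                nn.Linear(400, input_dim), nn.Sigmoid())

        def forward(self, x):
            h = self.encoder(x)
            mu, logvar = self.fc_mu(h), self.fc_logvar(h)
            # Reparameterization trick: sample z while keeping gradients flowing.
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
            return self.decoder(z), mu, logvar

    def vae_loss(recon, x, mu, logvar):
        # Reconstruction term plus KL divergence to the unit Gaussian prior.
        bce = nn.functional.binary_cross_entropy(recon, x, reduction='sum')
        kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return bce + kld

Sampling new examples then amounts to drawing z from the unit Gaussian and passing it through the trained decoder, which is how a VAE generates data resembling its training set.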
OpenAI GPT-2: Understanding Language Generation through Visualization
In the eyes of most NLP researchers, 2018 was a year of great technological advancement, with new pre-trained NLP models shattering records on tasks ranging from sentiment analysis to … [Read more...] about OpenAI GPT-2: Understanding Language Generation through Visualization
Novel Methods For Text Generation Using Adversarial Learning & Autoencoders
Just two years ago, text generation models were so unreliable that you needed to generate hundreds of samples in the hope of finding even one plausible sentence. Nowadays, OpenAI's pre-trained language model can generate relatively coherent news articles given only two sentences of context. Other approaches like Generative Adversarial Networks (GANs) and Variational … [Read more...] about Novel Methods For Text Generation Using Adversarial Learning & Autoencoders
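As a concrete illustration of that capability, here is a minimal sketch of prompted generation with a pre-trained GPT-2, assuming the Hugging Face transformers library (our choice of tooling, not necessarily the post's):

    # Sketch: generate a continuation from two sentences of context.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    # A hypothetical two-sentence prompt, for illustration only.
    prompt = "A storm hit the coast last night. Residents woke to flooded streets."
    input_ids = tokenizer.encode(prompt, return_tensors="pt")

    # Top-k sampling keeps the continuation varied but relatively coherent.
    output = model.generate(input_ids, max_length=100, do_sample=True, top_k=50,
                            pad_token_id=tokenizer.eos_token_id)
    print(tokenizer.decode(output[0], skip_special_tokens=True))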
5 New Generative Adversarial Network (GAN) Architectures For Image Synthesis
AI image synthesis has made impressive progress since Generative Adversarial Networks (GANs) were introduced in 2014. GANs were originally capable of generating only small, blurry, black-and-white pictures, but now they can produce high-resolution, realistic, and colorful pictures that are hard to distinguish from real photographs. Here we have summarized for you 5 … [Read more...] about 5 New Generative Adversarial Network (GAN) Architectures For Image Synthesis
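The adversarial setup underlying these architectures is the same at its core: a generator learns to produce images while a discriminator learns to tell them from real ones. A minimal PyTorch training step might look like the following (an illustrative sketch with toy sizes, not code from any of the summarized papers):

    # Sketch of one GAN training step: update D, then update G to fool D.
    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
    D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    real = torch.rand(32, 784)   # stand-in for a batch of real images
    z = torch.randn(32, 64)      # latent noise
    fake = G(z)

    # Discriminator step: label real images 1 and generated images 0.
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: push D to label fakes as real.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()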
What Every NLP Engineer Needs to Know About Pre-Trained Language Models
Practical applications of Natural Language Processing (NLP) have become significantly cheaper, faster, and easier thanks to the transfer learning capabilities of pre-trained language models. Transfer learning lets engineers pre-train an NLP model on one large dataset and then quickly fine-tune it to adapt to other NLP tasks. This new approach enables NLP … [Read more...] about What Every NLP Engineer Needs to Know About Pre-Trained Language Models
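As a sketch of that pre-train-then-fine-tune workflow, a single fine-tuning step might look like the following, assuming the Hugging Face transformers library and a toy sentiment task (both are our illustrative choices, not prescribed by the post):

    # Sketch: fine-tune a pre-trained encoder with a fresh classification head.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)  # pre-trained weights + new task head

    texts = ["great movie", "terrible plot"]   # toy labeled examples
    labels = torch.tensor([1, 0])
    batch = tokenizer(texts, padding=True, return_tensors="pt")

    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
    model.train()
    optimizer.zero_grad()
    loss = model(**batch, labels=labels).loss  # task loss on the new head
    loss.backward()
    optimizer.step()

Because the encoder already encodes general language knowledge from pre-training, a few epochs of this loop on a modest labeled dataset are often enough to adapt it to a new task.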
10 Cutting Edge Research Papers In Computer Vision & Image Generation
UPDATE: We've also summarized the top 2019 and top 2020 Computer Vision research papers. Ever since convolutional neural networks began outperforming humans in specific image recognition tasks, research in the field of computer vision has proceeded at a breakneck pace. The basic architecture of CNNs (or ConvNets) was developed in the 1980s. Yann LeCun improved upon … [Read more...] about 10 Cutting Edge Research Papers In Computer Vision & Image Generation