Autoencoders
NN can generate new images? Creativity? Random numbers?
Autoencoders are neural networks (NN) with two main characteristics:
The input and output layers have the same dimensions (the hidden layers are preferably smaller)
It is an unsupervised model, so no labels are needed
An autoencoder is, in fact, composed of two concatenated NN, generally trained as a whole:
Encoder: tries to extract a useful set of features (the latent space) by compressing the input into a lower-dimensional space
Decoder: tries to reconstruct the original input as closely as possible
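The encoder/decoder split above can be sketched in a few lines of NumPy. This is a minimal illustration, not a trained model: the weights are random placeholders, and the layer sizes (8-dim input, 3-dim latent space) are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

input_dim, latent_dim = 8, 3  # hypothetical sizes: latent space is smaller

# Randomly initialized weights stand in for trained parameters.
W_enc = rng.normal(size=(input_dim, latent_dim))
W_dec = rng.normal(size=(latent_dim, input_dim))

def encoder(x):
    # Compress the input into the lower-dimensional latent space.
    return np.tanh(x @ W_enc)

def decoder(z):
    # Try to reconstruct the original input from the latent code.
    return z @ W_dec

x = rng.normal(size=(1, input_dim))
z = encoder(x)        # shape (1, 3): the compressed representation
x_hat = decoder(z)    # shape (1, 8): same dimension as the input

# Training would minimize a reconstruction loss, e.g. mean squared error:
mse = np.mean((x - x_hat) ** 2)
```

Note how the only training signal is the input itself (via the reconstruction error), which is why no labels are needed.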
Some applications: denoising images, anomaly detection, recommendation engines, generative models, etc.
Variational autoencoders (VAE) work by making the latent space more predictable, more continuous, and less sparse.
They do that by forcing the latent variables to be normally distributed, which gives the VAE control over the latent space.
A VAE places similar elements closer together in the latent space, which makes it much easier to generate 'good' new variations in generative modeling.
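Two standard ingredients make the "normally distributed latent variables" idea concrete: sampling the latent code as z = mu + sigma * eps (the reparameterization trick) and penalizing the distance between the encoder's distribution and a standard normal via a KL divergence term. A NumPy sketch with made-up encoder outputs (the mu and log_var values below are hypothetical, as if produced by an encoder for one input):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoder outputs for one input: a mean and a log-variance
# per latent dimension (the VAE encoder predicts a distribution, not a point).
mu = np.array([0.5, -1.0, 0.2])
log_var = np.array([0.1, -0.3, 0.0])

# Reparameterization trick: sample z ~ N(mu, sigma^2) as a deterministic
# function of (mu, sigma) plus external noise, so gradients can flow through.
eps = rng.normal(size=mu.shape)
z = mu + np.exp(0.5 * log_var) * eps

# Closed-form KL divergence between N(mu, sigma^2) and the standard
# normal N(0, 1), summed over latent dimensions. Adding this term to the
# reconstruction loss is what pulls the latent space toward N(0, 1).
kl = -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))
```

The KL term is what keeps the latent space continuous and well-behaved: without it, the encoder could scatter codes arbitrarily, and sampling new points between them would decode to garbage.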