Overview

Variational Autoencoders (VAEs) are a class of generative models that combine deep learning with variational Bayesian inference. Unlike standard autoencoders, which map each input to a single point in a latent space, VAEs map each input to a probability distribution over that space (usually a diagonal Gaussian).
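The contrast between the two kinds of encoder can be sketched in a few lines. This is a toy numpy sketch, not a real model: the weight matrices and function names are illustrative assumptions, and real encoders are deep networks rather than single linear maps.

```python
import numpy as np

rng = np.random.default_rng(0)

def point_encoder(x, W):
    # Standard autoencoder: the input maps to one fixed latent point.
    return W @ x

def vae_encoder(x, W_mu, W_logvar):
    # VAE: the input maps to the parameters of a Gaussian in latent space.
    mu = W_mu @ x
    log_var = W_logvar @ x
    return mu, log_var

x = rng.normal(size=4)               # toy 4-dimensional input
W = rng.normal(size=(2, 4))          # toy weights, latent dimension 2
z_point = point_encoder(x, W)        # deterministic: same z every time

mu, log_var = vae_encoder(x, W, rng.normal(size=(2, 4)))
sigma = np.exp(0.5 * log_var)        # std dev recovered from log-variance
z_sample = mu + sigma * rng.standard_normal(2)  # a different z each draw
```

Predicting the log-variance (rather than the variance directly) is a common choice because exponentiating it guarantees a positive standard deviation.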

Key Concepts

  • Encoder: Maps the input data to the parameters of a probability distribution (mean and variance) in the latent space.
  • Latent Space: A compressed representation of the data where similar items are grouped together.
  • Decoder: Takes a point sampled from the latent distribution and reconstructs the original input from it.
  • Reparameterization Trick: A mathematical technique that allows gradients to flow through the random sampling process during training.
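The reparameterization trick in the list above can be illustrated concretely. This is a minimal numpy sketch under assumed toy values; in practice frameworks such as PyTorch or TensorFlow compute these gradients automatically.

```python
import numpy as np

rng = np.random.default_rng(42)

mu = np.array([0.5, -1.0])         # encoder-predicted mean
log_var = np.array([0.2, -0.4])    # encoder-predicted log-variance
sigma = np.exp(0.5 * log_var)

# Naive sampling z ~ N(mu, sigma^2) hides the randomness inside the
# sampler, so gradients cannot flow back to mu and sigma.
# The trick: draw parameter-free noise first, then apply a deterministic,
# differentiable transform of mu and sigma.
eps = rng.standard_normal(2)       # eps ~ N(0, I), independent of parameters
z = mu + sigma * eps               # z ~ N(mu, sigma^2)

# Because z is an affine function of mu and sigma, its partial derivatives
# are simple, and backpropagation can update the encoder through them:
dz_dmu = np.ones_like(mu)          # dz/dmu = 1
dz_dsigma = eps                    # dz/dsigma = eps
```

The sample still has the desired distribution, but the stochasticity now lives in `eps`, outside the path from the encoder's parameters to `z`.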

Applications

  • Image generation and editing.
  • Data denoising.
  • Molecular discovery in chemistry.
  • Anomaly detection.
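The anomaly-detection application above commonly works by thresholding reconstruction error: a VAE trained on normal data reconstructs normal inputs well, so a large error suggests an outlier. The sketch below uses a hypothetical stand-in for a trained model (clipping to an assumed training range), not a real VAE.

```python
import numpy as np

def reconstruct(x):
    # Hypothetical stand-in for a trained VAE's encode/decode round trip:
    # the model can only emit outputs resembling its training data,
    # modeled here as clipping to an assumed training range [-1, 1].
    return np.clip(x, -1.0, 1.0)

def anomaly_score(x):
    # Mean squared reconstruction error as the anomaly score.
    return float(np.mean((x - reconstruct(x)) ** 2))

normal_x = np.array([0.1, -0.3, 0.7])   # resembles the training data
weird_x = np.array([9.0, -8.0, 10.0])   # far from the training distribution
# anomaly_score(weird_x) is much larger than anomaly_score(normal_x),
# so a simple threshold on the score flags weird_x as anomalous.
```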