Variational Autoencoders (VAEs)
Question: Describe the encoder and decoder networks in a VAE. What are their roles, and what do they output? What is the reparameterization trick, and why is it essential for training VAEs with gradient-based optimization? Illustrate with equations. Compare the training objective of VAEs with GANs. What does each optimize, and what are the implications for sample quality?
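For reference, a sketch of the standard equations the question points to (conventional VAE notation; not taken from the linked videos): the encoder maps an input x to the parameters μ_φ(x) and σ_φ(x) of a Gaussian over the latent z, the decoder p_θ(x|z) maps a latent sample back to data space, and the reparameterization trick rewrites sampling as a deterministic, differentiable transform of external noise so gradients can flow back to the encoder parameters φ.

```latex
\begin{align*}
  q_\phi(z \mid x) &= \mathcal{N}\!\big(z;\ \mu_\phi(x),\ \operatorname{diag}(\sigma_\phi^2(x))\big)
    && \text{encoder outputs } \mu_\phi, \sigma_\phi \\
  z &= \mu_\phi(x) + \sigma_\phi(x) \odot \epsilon, \quad \epsilon \sim \mathcal{N}(0, I)
    && \text{reparameterization trick} \\
  \mathcal{L}(\theta, \phi; x) &= \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big]
    - D_{\mathrm{KL}}\big(q_\phi(z \mid x)\,\|\,p(z)\big)
    && \text{ELBO (maximized)}
\end{align*}
```

A VAE maximizes this likelihood bound, whereas a GAN optimizes an adversarial minimax game; in practice the likelihood objective tends to favor mode coverage at the cost of blurrier samples, while the adversarial objective tends to produce sharper samples.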
Video - Introduction to VAE - Martin Keen IBM
Video - Stanford CS229: Variational Autoencoders (VAEs) Anand Avati
Introduction to Autoencoders
- Autoencoders
- Expectation Maximization (EM)
- MCMC (Markov Chain Monte Carlo) Expectation Maximization
- Variational Inference
- Variational Expectation Maximization (VEM)
- Variational Autoencoders (VAE)
Video - CMU CS 15-418/618: Variational Autoencoders (VAEs) 1
Video - CMU CS 15-418/618: Variational Autoencoders (VAEs) 2
Introduction:
Autoencoders are unsupervised neural networks that learn to compress data into a lower-dimensional representation and then reconstruct the original data from that representation. They consist of two main components: an encoder and a decoder, as shown in the diagram and the code sketch below.
```mermaid
graph LR
    A[Input Data] --> B[Encoder]
    B --> C[Latent Space Representation]
    C --> D[Decoder]
    D --> E[Reconstructed Data]
    style A fill:#f9f,stroke:#333,stroke-width:2px
    style B fill:#bbf,stroke:#333,stroke-width:2px
    style C fill:#bfb,stroke:#333,stroke-width:2px
    style D fill:#ffb,stroke:#333,stroke-width:2px
    style E fill:#fbb,stroke:#333,stroke-width:2px
```
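Below is a minimal autoencoder sketch mirroring the diagram above. The framework (PyTorch), layer sizes, and names are illustrative assumptions, not taken from the linked lectures.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Minimal autoencoder sketch (illustrative sizes for flattened 28x28 images)."""
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        # Encoder: compresses the input into a lower-dimensional latent code.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256),
            nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        # Decoder: reconstructs the input from the latent code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, input_dim),
            nn.Sigmoid(),  # outputs in [0, 1], e.g. pixel intensities
        )

    def forward(self, x):
        z = self.encoder(x)     # latent space representation
        return self.decoder(z)  # reconstructed data

# Training minimizes reconstruction error between input and output.
model = Autoencoder()
x = torch.rand(16, 784)  # dummy batch of flattened images
loss = nn.functional.mse_loss(model(x), x)
```

A VAE modifies this setup so the encoder outputs distribution parameters (μ, σ) rather than a single code, and adds the KL term from the ELBO above to the reconstruction loss.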
Tushar Kumar’s Explanation of VAE
https://www.linkedin.com/in/tushar-kumar-40299b19/