LSH Seminar Jakub Tomczak (UvA)

Deep generative modeling using Variational Auto-Encoders

Learning generative models capable of capturing rich distributions from vast amounts of data, such as image collections, remains one of the major challenges of artificial intelligence. In recent years, different approaches to this goal have been proposed, either by formulating alternative training objectives to the log-likelihood, such as the adversarial loss, or by utilizing variational inference. The latter approach can be made especially efficient through the reparameterization trick, resulting in a highly scalable framework now known as the variational auto-encoder (VAE). VAEs are scalable and powerful generative models that can be easily incorporated into any probabilistic framework. The tractability and the flexibility of the VAE follow from the choice of the variational posterior (the encoder), the prior over the latent variables, and the decoder.
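The reparameterization trick mentioned above can be illustrated with a minimal sketch: instead of sampling z directly from N(mu, sigma^2), one samples eps ~ N(0, 1) and computes z = mu + sigma * eps, so the randomness is external to mu and sigma and gradients can flow through them. This is an illustrative example, not the speaker's code; the function name is my own.

```python
import math
import random

def reparameterize(mu, log_var, rng):
    """Draw z ~ N(mu, sigma^2) via z = mu + sigma * eps, eps ~ N(0, 1).

    Because eps carries all the randomness, z is a deterministic,
    differentiable function of mu and log_var -- the key property that
    lets VAE encoders be trained with ordinary gradient descent.
    """
    sigma = math.exp(0.5 * log_var)  # log-variance parameterization keeps sigma > 0
    eps = rng.gauss(0.0, 1.0)
    return mu + sigma * eps

# Sanity check: samples should have mean ~= mu and variance ~= exp(log_var).
rng = random.Random(0)
samples = [reparameterize(2.0, 0.0, rng) for _ in range(10000)]
sample_mean = sum(samples) / len(samples)
```

In a full VAE, `mu` and `log_var` would be the outputs of the encoder network for a given input, and the decoder would map the sampled `z` back to data space.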

In this presentation, I will outline different ways of improving the VAE. Moreover, I will discuss current applications and possible future directions.