
Variational Autoencoders are not autoencoders - ml_basics
http://paulrubenstein.co.uk/variational-autoencoders-are-not-autoencoders/
======
angel_j
Pretty sure all this says is to minimize the KL divergence instead of the log-
likelihood (for the encoder), or your latent variables are garbage. Judging by
the many examples of VAEs I've seen in ML, this is not news.

This post [0] does a good job showing the difference between latent variables
for log-likelihood and KL losses.

[0] [https://towardsdatascience.com/intuitively-understanding-variational-autoencoders-1bfe67eb5daf](https://towardsdatascience.com/intuitively-understanding-variational-autoencoders-1bfe67eb5daf)
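To make the distinction concrete: the standard VAE objective is not pure reconstruction (log-likelihood) — it adds a KL term that pulls the encoder's posterior toward the prior. A minimal NumPy sketch of the per-example negative ELBO, assuming a Gaussian encoder and a standard-normal prior (all function names here are illustrative, not from the article):

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ),
    # summed over latent dimensions.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def neg_elbo(x, x_recon, mu, log_var):
    # Negative ELBO = reconstruction error (squared error, i.e. a Gaussian
    # log-likelihood up to a constant) + KL regularizer on the latent code.
    recon = 0.5 * np.sum((x - x_recon) ** 2)
    return recon + kl_to_standard_normal(mu, log_var)
```

Dropping the KL term recovers a plain autoencoder loss, which is exactly the regime the article argues produces "garbage" latent variables relative to the prior.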

------
yorwba
Getting a resource exhaustion error from Namecheap, here's a cache:

[http://web.archive.org/web/20190129144610/http://paulrubenstein.co.uk/variational-autoencoders-are-not-autoencoders/](http://web.archive.org/web/20190129144610/http://paulrubenstein.co.uk/variational-autoencoders-are-not-autoencoders/)

------
no_identd
What about variational homoencoders tho? See
[https://arxiv.org/abs/1807.08919](https://arxiv.org/abs/1807.08919)

