
Sunday, October 30, 2016

Improving Sampling from Generative Autoencoders with Markov Chains. (arXiv:1610.09296v1 [cs.LG])

We focus on generative autoencoders, such as variational or adversarial autoencoders, which jointly learn a generative model alongside an inference model. We define generative autoencoders as autoencoders that are trained to softly enforce a prior on the latent distribution learned by the model; however, the model does not necessarily learn to match that prior. We formulate a Markov chain Monte Carlo (MCMC) sampling process, equivalent to iteratively encoding and decoding, which allows us to sample from the latent distribution the model actually learned. Using this process, we can improve the quality of samples drawn from the model, especially when the learned distribution is far from the prior. MCMC sampling also reveals previously unseen differences between generative autoencoders trained with or without the denoising criterion.
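To make the iterative encode-decode chain concrete, here is a minimal sketch under stated assumptions: the `encode` and `decode` functions below are hypothetical linear stand-ins for a trained model's networks, not the paper's architecture. In practice they would come from a fitted variational or adversarial autoencoder, with `encode` returning the parameters of the approximate posterior q(z|x).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for a trained generative autoencoder.
LATENT_DIM, DATA_DIM = 2, 4
W_dec = rng.normal(size=(LATENT_DIM, DATA_DIM))
W_enc = rng.normal(size=(DATA_DIM, LATENT_DIM))

def decode(z):
    """Map a latent code to data space (placeholder for the decoder network)."""
    return np.tanh(z @ W_dec)

def encode(x):
    """Return mean and std of the approximate posterior q(z|x) (placeholder)."""
    mean = x @ W_enc
    std = np.full_like(mean, 0.1)
    return mean, std

def mcmc_sample(n_steps=5):
    """Iteratively decode and re-encode, starting from the prior N(0, I).

    Each step draws z_t ~ q(z | decode(z_{t-1})), which is the Markov chain
    the abstract describes; after a few steps the latent moves from the prior
    toward the distribution the model actually learned.
    """
    z = rng.standard_normal(LATENT_DIM)                     # z_0 from the prior
    for _ in range(n_steps):
        x = decode(z)                                       # x_t = decoder(z_{t-1})
        mean, std = encode(x)                               # parameters of q(z | x_t)
        z = mean + std * rng.standard_normal(LATENT_DIM)    # stochastic re-encoding
    return decode(z)                                        # final sample in data space

print(mcmc_sample())
```

The key design point is that each transition composes a deterministic decoding step with a stochastic draw from the encoder's posterior, so samples whose initial latent code fell in a low-density region of the learned latent distribution are pulled back toward it over the chain.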



from cs.AI updates on arXiv.org http://ift.tt/2e2Cspa
via IFTTT
