
Learning priors for adversarial autoencoders

  • Hui-Po Wang (a1), Wen-Hsiao Peng (a1) and Wei-Jan Ko (a1)

Abstract

Most deep latent factor models adopt simple priors, whether for simplicity, for tractability, or because it is unclear which prior to use. Recent studies show that the choice of prior can have a profound effect on the expressiveness of the model, especially when its generative network has limited capacity. In this paper, we propose to learn a proper prior from data for adversarial autoencoders (AAEs). We introduce the notion of code generators, which transform manually selected simple priors into priors that better characterize the data distribution. Experimental results show that the proposed model generates images of higher quality and learns better-disentangled representations than AAEs in both supervised and unsupervised settings. Lastly, we demonstrate its ability to perform cross-domain translation in a text-to-image synthesis task.
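The code-generator idea described above can be sketched minimally as follows. This is a hypothetical illustration, not the authors' implementation: the network shapes, the residual connection, and the random stand-in weights are all assumptions; in the paper the generator's parameters would be trained adversarially alongside the AAE.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_simple_prior(n, dim):
    # Manually selected simple prior: a standard Gaussian.
    return rng.standard_normal((n, dim))

class CodeGenerator:
    # Hypothetical one-hidden-layer MLP that maps samples from the
    # simple prior to a learned, more expressive prior. The weights
    # here are random stand-ins for parameters that would be learned
    # adversarially in the actual model.
    def __init__(self, dim, hidden=8):
        self.W1 = rng.standard_normal((dim, hidden)) * 0.1
        self.W2 = rng.standard_normal((hidden, dim)) * 0.1

    def __call__(self, z):
        h = np.tanh(z @ self.W1)
        # Residual connection keeps the transformed code near the
        # scale of the input prior.
        return h @ self.W2 + z

dim = 4
z = sample_simple_prior(16, dim)          # codes from the simple prior
z_learned = CodeGenerator(dim)(z)         # codes from the learned prior
print(z_learned.shape)                    # (16, 4)
```

The AAE's discriminator would then match the encoder's posterior to `z_learned` rather than to the raw Gaussian samples, letting the effective prior adapt to the data.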


Copyright

This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

Corresponding author

Corresponding author: Wen-Hsiao Peng. E-mail: wpeng@cs.nctu.edu.tw


