Understanding And Enhancing Diversity In Generative Models

Authors

  • Munir Ahmad, ABASYN University Peshawar
  • Muhammad Kamran Chohan, HITEC University Taxila
  • Muhammad Zarif Qureshi, ABASYN University Peshawar
  • Hassan Gul, Bahria University Islamabad

DOI:

https://doi.org/10.62951/ijamc.v1i2.16

Keywords:

Generative models, Diversity, Evaluation metrics, Novel architectures

Abstract

This research examines the crucial property of diversity in generative models, addressing both how to understand it and how to enhance it. Diversity in generative models refers to a model's ability to produce a wide range of outputs that cover the variability present in the underlying data distribution. Understanding diversity is fundamental to assessing the quality and applicability of generative models across domains including natural language processing, computer vision, and the creative arts. We discuss existing methods and metrics for evaluating diversity in generative models and highlight its importance for promoting fairness, robustness, and creativity. We then explore strategies for enhancing diversity, such as regularization techniques, diversity-promoting objectives, and novel architectures. By advancing our understanding of diversity and implementing techniques to enhance it, generative models can better capture the complexity and richness of real-world data, leading to improved performance and broader applicability.
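As a concrete illustration of the evaluation metrics the abstract refers to, the following is a minimal sketch (in Python, not taken from the paper) of one simple diversity measure: the mean pairwise distance between feature embeddings of generated samples. The embed function here is a hypothetical placeholder for any fixed feature extractor, such as an Inception network for images or a sentence encoder for text.

    # Minimal sketch (an illustration, not the paper's method): the mean
    # pairwise distance between embeddings of generated samples, used as a
    # simple diversity score. `embed` is a hypothetical stand-in for any
    # fixed feature extractor.
    import numpy as np

    def mean_pairwise_distance(samples, embed):
        """Average Euclidean distance over all pairs of samples.

        Higher values indicate that the generator spreads its outputs
        more widely; a score near zero is one symptom of mode collapse.
        """
        feats = np.stack([embed(s) for s in samples])  # (n, d) feature matrix
        n = len(feats)
        total = 0.0
        for i in range(n):
            for j in range(i + 1, n):
                total += np.linalg.norm(feats[i] - feats[j])
        return total / (n * (n - 1) / 2)

    # Toy usage: an identity "embedding" over random vectors.
    rng = np.random.default_rng(0)
    fake_samples = list(rng.normal(size=(16, 8)))
    print(mean_pairwise_distance(fake_samples, embed=lambda x: x))

Scores of this kind complement fidelity-oriented metrics: a generator can produce individually realistic samples while collapsing onto a few modes, and a near-zero pairwise-distance score would expose exactly that failure.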

References

Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.

Kingma, D. P., & Welling, M. (2013). Auto-Encoding Variational Bayes. arXiv preprint arXiv:1312.6114.

Rezende, D. J., & Mohamed, S. (2015). Variational Inference with Normalizing Flows. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15) (pp. 1530-1538).

Radford, A., Metz, L., & Chintala, S. (2015). Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv preprint arXiv:1511.06434.

van den Oord, A., Kalchbrenner, N., & Kavukcuoglu, K. (2016). Pixel Recurrent Neural Networks. arXiv preprint arXiv:1601.06759.

Che, T., Li, Y., Jacob, A. P., Bengio, Y., & Li, W. (2016). Mode Regularized Generative Adversarial Networks. arXiv preprint arXiv:1612.02136.

Zhao, J., Mathieu, M., & LeCun, Y. (2016). Energy-based Generative Adversarial Network. arXiv preprint arXiv:1609.03126.

Li, C., & Wand, M. (2016). Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks. In European Conference on Computer Vision (pp. 702-716). Springer, Cham.

Metz, L., Poole, B., Pfau, D., & Sohl-Dickstein, J. (2016). Unrolled Generative Adversarial Networks. arXiv preprint arXiv:1611.02163.

Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., & Chen, X. (2016). Improved Techniques for Training GANs. In Advances in Neural Information Processing Systems (pp. 2234-2242).

Arjovsky, M., Chintala, S., & Bottou, L. (2017). Wasserstein Generative Adversarial Networks. In Proceedings of the 34th International Conference on Machine Learning (Vol. 70, pp. 214-223).

Brock, A., Donahue, J., & Simonyan, K. (2018). Large Scale GAN Training for High Fidelity Natural Image Synthesis. arXiv preprint arXiv:1809.11096.

Chen, T. Q., Rubanova, Y., Bettencourt, J., & Duvenaud, D. (2018). Neural Ordinary Differential Equations. In Advances in Neural Information Processing Systems (pp. 6571-6583).

Creswell, A., & Bharath, A. A. (2018). Generative Models for Graph-Based Protein Design. arXiv preprint arXiv:1807.01271.

Dai, Z., Yang, Z., Yang, Y., Carbonell, J., Le, Q. V., & Salakhutdinov, R. (2019). Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (pp. 2978-2988).

Esser, P., Sutter, E., & Ommer, B. (2018). A Variational U-Net for Conditional Appearance and Shape Generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

Grover, A., Dhar, M., & Ermon, S. (2017). Flow-GAN: Combining Maximum Likelihood and Adversarial Learning in Generative Models. arXiv preprint arXiv:1705.08868.

Hjelm, R. D., Fedorov, A., Lavoie-Marchildon, S., Grewal, K., Bachman, P., Trischler, A., & Bengio, Y. (2018). Learning Deep Representations by Mutual Information Estimation and Maximization. arXiv preprint arXiv:1808.06670.

Huang, H., Dhingra, B., Yuan, H., Guu, K., & Goyal, A. (2019). Enhanced Deep Generative Models for Incremental Text Classification. arXiv preprint arXiv:1903.06112.

Kingma, D. P., & Dhariwal, P. (2018). Glow: Generative Flow with Invertible 1x1 Convolutions. In Advances in Neural Information Processing Systems (pp. 10215-10224).

Liu, H., Simonyan, K., & Yang, Y. (2019). DARTS: Differentiable Architecture Search. In International Conference on Learning Representations (ICLR).

Mao, X., Li, Q., Xie, H., Lau, R. Y., Wang, Z., & Paul Smolley, S. (2017). Least Squares Generative Adversarial Networks. In Proceedings of the IEEE International Conference on Computer Vision (pp. 2794-2802).

Mescheder, L., Geiger, A., & Nowozin, S. (2018). Which Training Methods for GANs do actually Converge? In International Conference on Machine Learning (pp. 3481-3490).

Rezende, D. J., & Viola, F. (2018). Taming VAEs. arXiv preprint arXiv:1810.00597.

Salimans, T., Karpathy, A., Chen, X., & Kingma, D. P. (2017). PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications. arXiv preprint arXiv:1701.05517.

Schmidhuber, J. (2015). Deep Learning in Neural Networks: An Overview. Neural Networks, 61, 85-117.

Sønderby, C. K., Raiko, T., Maaløe, L., Sønderby, S. K., & Winther, O. (2016). Ladder Variational Autoencoders. In Advances in Neural Information Processing Systems (pp. 3738-3746).

Srivastava, N., Mansimov, E., & Salakhutdinov, R. (2015). Unsupervised Learning of Video Representations using LSTMs. In International Conference on Machine Learning (pp. 843-852).

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is All You Need. In Advances in Neural Information Processing Systems (pp. 5998-6008).

Zhang, R., Isola, P., Efros, A. A., Shechtman, E., & Wang, O. (2018). The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 586-595).

Published

2024-04-19
