
ARE GENERATIVE ADVERSARIAL NETWORKS CAPABLE OF GENERATING NOVEL AND DIVERSE DESIGN CONCEPTS? AN EXPERIMENTAL ANALYSIS OF PERFORMANCE

Published online by Cambridge University Press:  19 June 2023

Parisa Ghasemi
Affiliation:
Northeastern University
Chenxi Yuan
Affiliation:
Northeastern University
Tucker Marion
Affiliation:
Northeastern University
Mohsen Moghaddam*
Affiliation:
Northeastern University
* Corresponding author: Moghaddam, Mohsen, Northeastern University, United States of America, mohsen@northeastern.edu

Abstract


Generative Adversarial Networks (GANs) have shown remarkable power in generating realistic images, to the extent that the human eye often cannot recognize them as synthetic. State-of-the-art GAN models can produce realistic, high-quality images, which promises unprecedented opportunities for generating design concepts. Yet the preliminary experiments reported in this paper shed light on a fundamental limitation of GANs for generative design: a lack of novelty and diversity in the generated samples. This article conducts a generative design study on a large-scale sneaker dataset using StyleGAN, a state-of-the-art GAN architecture, to advance understanding of how well such generative models produce novel and diverse samples (i.e., sneaker images). The findings reveal that although StyleGAN can generate samples with high quality and realism, the generated and style-mixed samples closely resemble the training dataset (i.e., existing sneakers). This article aims to provide future research directions and insights for the engineering design community to further realize the untapped potential of GANs for generative design.
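To make the resemblance claim concrete, the following is a minimal sketch, assuming PyTorch and torchvision are available, of one way to quantify how closely GAN-generated samples track a training set: embed both sets with a frozen pretrained feature extractor and inspect nearest-neighbour distances. The ResNet-18 backbone, folder paths, image format, and the 0.5 distance threshold are illustrative placeholders, not the evaluation protocol used in the paper.

```python
# Illustrative sketch (not the authors' exact method): estimate how closely
# generated images resemble the training set via nearest-neighbour distances
# in a pretrained feature space. Paths and the threshold are placeholders.
from pathlib import Path

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen ImageNet feature extractor (ResNet-18 with the classifier head removed).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval().to(device)

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])


@torch.no_grad()
def embed(folder: str) -> torch.Tensor:
    """Embed every PNG image in `folder` into the 512-d feature space."""
    feats = []
    for path in sorted(Path(folder).glob("*.png")):  # assumes PNG files
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
        feats.append(backbone(x).cpu())
    return torch.cat(feats)  # shape: (num_images, 512)


train_feats = embed("data/sneakers_train")       # hypothetical training images
gen_feats = embed("outputs/stylegan_samples")    # hypothetical generated images

# For each generated sample, distance to its closest training image.
# Small distances suggest near-duplicates of existing designs rather than novel ones.
dists = torch.cdist(gen_feats, train_feats)      # (num_gen, num_train)
nearest = dists.min(dim=1).values
print(f"mean nearest-neighbour distance: {nearest.mean().item():.3f}")
print(f"fraction within 0.5 of a training image: "
      f"{(nearest < 0.5).float().mean().item():.2%}")
```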

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright
The Author(s), 2023. Published by Cambridge University Press
