Beyond Statistical Similarity: Rethinking Metrics for Deep Generative Models in Engineering Design

Bibliographic Details
Published in: Computer-Aided Design, Vol. 165, p. 103609
Main Authors: Regenwetter, Lyle; Srivastava, Akash; Gutfreund, Dan; Ahmed, Faez
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.12.2023
Summary: Deep generative models such as Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), Diffusion Models, and Transformers have shown great promise in a variety of applications, including image and speech synthesis, natural language processing, and drug discovery. However, when applied to engineering design problems, evaluating the performance of these models can be challenging, as traditional statistical metrics based on likelihood may not fully capture the requirements of engineering applications. This paper doubles as a review and practical guide to evaluation metrics for deep generative models (DGMs) in engineering design. We first summarize the well-accepted ‘classic’ evaluation metrics for deep generative models grounded in machine learning theory. Using case studies, we then highlight why these metrics seldom translate well to design problems but see frequent use due to the lack of established alternatives. Next, we curate a set of design-specific metrics that have been proposed across different research communities and can be used to evaluate deep generative models. These metrics focus on requirements unique to design and engineering, such as constraint satisfaction, functional performance, novelty, and conditioning. Throughout our discussion, we apply the metrics to models trained on simple-to-visualize two-dimensional example problems. Finally, we evaluate four deep generative models on a bicycle frame design problem and a structural topology generation problem. In particular, we showcase the use of the proposed metrics to quantify performance target achievement, design novelty, and geometric constraint satisfaction. We publicly release the code for the datasets, models, and metrics used throughout the paper at https://decode.mit.edu/projects/metrics/.

Highlights:
• Present a practical guide to evaluation metrics for deep generative models in design.
• Discuss 25+ metrics measuring similarity, diversity, performance, and validity.
• Train and evaluate six deep generative models on easy-to-visualize 2D problems.
• Evaluate state-of-the-art models on bike frame and optimal topology design problems.
• Release all datasets, models, metrics, scoring utilities, and visualization code publicly.
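Two of the design-specific metric families named in the summary, novelty and constraint satisfaction (validity), are simple to compute given a set of generated designs. The sketch below is a minimal NumPy illustration for intuition only, not the authors' released implementation from decode.mit.edu; the function names and the unit-disk constraint are hypothetical stand-ins.

```python
import numpy as np

def novelty_scores(generated: np.ndarray, training: np.ndarray) -> np.ndarray:
    """Distance from each generated design to its nearest training design.

    Larger values indicate designs farther from anything seen in training.
    Both arrays are (n_samples, n_features) in the same design-parameter space.
    """
    # Pairwise Euclidean distances via broadcasting: shape (n_gen, n_train).
    diffs = generated[:, None, :] - training[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # Nearest-neighbor distance for each generated design.
    return dists.min(axis=1)

def constraint_satisfaction_rate(generated: np.ndarray, constraint) -> float:
    """Fraction of generated designs for which a boolean constraint holds."""
    valid = np.array([constraint(x) for x in generated])
    return float(valid.mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.normal(size=(500, 2))           # stand-in training set
    gen = rng.normal(loc=0.5, size=(200, 2))    # stand-in generated designs

    # Hypothetical geometric constraint: design must lie inside the unit disk.
    inside_unit_disk = lambda x: float(np.linalg.norm(x)) <= 1.0

    print("mean novelty:", novelty_scores(gen, train).mean())
    print("validity rate:", constraint_satisfaction_rate(gen, inside_unit_disk))
```

In practice, the constraint function would encode a real engineering requirement (for example, a geometric feasibility check on a bike frame), and the novelty distance would be computed in a representation appropriate to the design space.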
ISSN: 0010-4485, 1879-2685
DOI: 10.1016/j.cad.2023.103609