Neuro-evolutionary Topology Optimization with Adaptive Improvement Threshold
Published in: Applications of Evolutionary Computation, Vol. 9028, pp. 655–666
Main Authors: ,
Format: Book Chapter
Language: English
Published: Switzerland: Springer International Publishing, 2015
Series: Lecture Notes in Computer Science
Summary: Recently, a hybrid combination of neuro-evolution with a gradient-based topology optimization method was proposed, facilitating topology optimization of structures subject to objective functions for which gradient information is difficult to obtain. The approach substitutes analytical sensitivity information with an update signal represented by a neural network approximation model. Topology optimization is performed by optimizing the network parameters with an evolutionary algorithm in order to devise an update signal for each design step. However, the typically very large number of required evaluations renders the method difficult to apply in practice. In this paper, we aim at a more efficient use of computational resources by augmenting the original approach with an adaptive improvement threshold as a stopping criterion for the neuro-evolution. The original and augmented methods are studied on the minimum compliance problem for different feature types and different numbers of hidden neurons. It is demonstrated that the number of evaluations can be reduced by up to 80% with very little change in the resulting objective values and structures.
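The core idea of the summary — stopping an evolutionary inner loop early once improvement falls below an adaptive threshold — can be illustrated with a small sketch. The following Python snippet is a hypothetical (1+1)-evolution strategy on a generic fitness function; the function names, window size, and threshold-decay rule are illustrative assumptions, not the chapter's actual neuro-evolution setup.

```python
import random

def evolve_with_adaptive_threshold(fitness, x0, sigma=0.1, max_evals=1000,
                                   window=20, init_threshold=1e-2, decay=0.9):
    """Minimal (1+1)-ES sketch (minimization) with an adaptive improvement
    threshold as stopping criterion. Hypothetical illustration only."""
    best_x, best_f = list(x0), fitness(x0)
    evals = 1
    threshold = init_threshold          # required relative improvement per window
    f_at_window_start = best_f
    while evals < max_evals:
        # Gaussian mutation of the current best candidate
        cand = [xi + random.gauss(0.0, sigma) for xi in best_x]
        f = fitness(cand)
        evals += 1
        if f < best_f:                  # greedy (1+1) selection
            best_x, best_f = cand, f
        if evals % window == 0:
            # Relative improvement achieved over the last window of evaluations
            rel_impr = (f_at_window_start - best_f) / max(abs(f_at_window_start), 1e-12)
            if rel_impr < threshold:
                break                   # too little progress: stop early, save budget
            threshold *= decay          # adapt: demand less as the search matures
            f_at_window_start = best_f
    return best_x, best_f, evals
```

Usage on a simple sphere function shows the intended effect: the loop terminates well before the full evaluation budget once progress stagnates, which is how the chapter reports evaluation savings of up to 80% in its (different, network-based) setting.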
ISBN: 9783319165486, 3319165488
ISSN: 0302-9743, 1611-3349
DOI: 10.1007/978-3-319-16549-3_53