Neuro-evolutionary Topology Optimization with Adaptive Improvement Threshold

Bibliographic Details
Published in: Applications of Evolutionary Computation, Vol. 9028, pp. 655-666
Main Authors: Aulig, Nikola; Olhofer, Markus
Format: Book Chapter
Language: English
Published: Switzerland: Springer International Publishing AG, 2015
Series: Lecture Notes in Computer Science

Summary: Recently, a hybrid combination of neuro-evolution with a gradient-based topology optimization method was proposed, facilitating topology optimization of structures subject to objective functions for which gradient information is difficult to obtain. The approach substitutes analytical sensitivity information with an update signal represented by a neural network approximation model. Topology optimization is performed by optimizing the network parameters with an evolutionary algorithm in order to devise an update signal for each design step. However, the typically very large number of required evaluations renders the method difficult to apply in practice. In this paper, we aim at a more efficient use of computational resources by augmenting the original approach with an adaptive improvement threshold as a stopping criterion for the neuro-evolution. The original and augmented methods are studied on the minimum compliance problem for different feature types and different numbers of hidden neurons. It is demonstrated that the number of evaluations can be reduced by up to 80% with very little change in the resulting objective values and structures.
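The summary does not specify how the adaptive improvement threshold is computed, but the core idea, cutting off the evolutionary search over network weights once per-generation gains become negligible, can be illustrated. The following is a minimal, hypothetical Python sketch; the function name, the mutation scheme, and the relative-threshold rule with a patience counter are illustrative assumptions, not the authors' algorithm:

```python
import random

def evolve_update_signal(objective, n_weights, pop_size=10, max_gens=200,
                         rel_threshold=0.01, patience=3, seed=0):
    """Toy neuro-evolution loop with an improvement-based stopping criterion.

    Minimizes `objective` over a weight vector of length `n_weights`.
    All design choices here are illustrative, not the published method.
    """
    rng = random.Random(seed)
    best = [rng.gauss(0, 1) for _ in range(n_weights)]
    best_f = objective(best)
    evaluations = 1
    stall = 0
    for _ in range(max_gens):
        # Gaussian mutation of the current elite (simple (1+lambda)-style step).
        offspring = [[w + rng.gauss(0, 0.1) for w in best]
                     for _ in range(pop_size)]
        scored = [(objective(o), o) for o in offspring]
        evaluations += pop_size
        cand_f, cand = min(scored, key=lambda t: t[0])
        improvement = best_f - cand_f
        if cand_f < best_f:
            best, best_f = cand, cand_f
        # Adaptive threshold: the required gain scales with the current
        # objective value; stop after `patience` stagnant generations.
        if improvement < rel_threshold * abs(best_f):
            stall += 1
            if stall >= patience:
                break
        else:
            stall = 0
    return best, best_f, evaluations
```

In this sketch the threshold adapts by shrinking with the objective itself, so the search is cut short as soon as further generations stop paying for their evaluation cost, which is the mechanism by which evaluations are saved.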
ISBN: 9783319165486; 3319165488
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-319-16549-3_53