Demonstration and validation of constructive initialization method for neural networks to approximate nonlinear functions in engineering mechanics applications

Bibliographic Details
Published in: Nonlinear Dynamics, Vol. 79, No. 3, pp. 2099–2119
Main Authors: Pei, Jin-Song; Masri, Sami F.
Format: Journal Article
Language: English
Published: Dordrecht: Springer Netherlands, 01.02.2015 (Springer Nature B.V.)

Summary: This paper is a sequel to Pei et al. (Nonlinear Dyn 71(1–2):371–399, 2013), published in this journal by the authors and their co-authors. The main contribution lies in using complex data collected from real-world problems to validate the methodology and techniques given in Pei et al. (Nonlinear Dyn 71(1–2):371–399, 2013) and elsewhere for initializing neural networks in a systematic and constructive manner. Two sets of real-world laboratory data are analyzed: one set is from testing a full-sized nonlinear viscous fluid damper with harmonic displacements at different frequencies, and the other is from a well-controlled complex nonlinear mechanism undergoing either rotational or torsional motion under random excitations. The force-state mapping formulation is adopted to analyze these data sets. In addition, the fractional derivative Maxwell model is used for the nonlinear viscous fluid damper application. This study focuses on determining the number of hidden nodes and the initial values of the weights and biases for either multilayer feedforward or tapped delay line neural networks, before training them with the well-established backpropagation approach. This critical but subjective design issue is handled in a transparent manner that directly utilizes either features of the data or the governing mathematical expression, and is carried out in a nearly deterministic manner. This contrasts sharply with the approach of Nguyen and Widrow (1990), the random initialization scheme that prevails in training neural networks for function approximation. The initialization methodology and techniques developed in our prior and current work, starting with Pei (2001), are explained and demonstrated using the two data sets, in the context of nonlinear problems encountered in typical applied mechanics applications, and validated by examining the generalization capability of the trained neural networks. Future work and relevant technical hurdles are identified.
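For context, the Nguyen-Widrow (1990) scheme that the abstract contrasts against can be sketched as follows. This is a minimal illustration of that baseline, not the authors' constructive method; the function name is ours, and it assumes a single hidden layer of tanh-like units with inputs scaled to [-1, 1].

```python
import numpy as np

def nguyen_widrow_init(n_inputs, n_hidden, rng=None):
    """Nguyen-Widrow initialization for one hidden layer.

    Hidden-layer weight rows are drawn at random, then rescaled to a
    common magnitude beta = 0.7 * n_hidden**(1/n_inputs) so that the
    active regions of the hidden units tile the input domain; biases
    are drawn uniformly from [-beta, beta].
    """
    rng = np.random.default_rng(rng)
    beta = 0.7 * n_hidden ** (1.0 / n_inputs)
    # Random directions, then normalize each row to length beta.
    W = rng.uniform(-0.5, 0.5, size=(n_hidden, n_inputs))
    W *= beta / np.linalg.norm(W, axis=1, keepdims=True)
    b = rng.uniform(-beta, beta, size=n_hidden)
    return W, b
```

Because the directions and biases are random, two runs produce different starting nets; the paper's constructive approach instead derives the number of hidden nodes and the initial weights from the data or the governing expression, making the starting point nearly deterministic.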
ISSN: 0924-090X, 1573-269X
DOI: 10.1007/s11071-014-1797-z