On feedforward control using physics-guided neural networks: Training cost regularization and optimized initialization

Bibliographic Details
Published in: arXiv.org
Main Authors: Bolderman, Max; Lazar, Mircea; Butler, Hans
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 28.01.2022

Summary: The performance of model-based feedforward controllers is typically limited by the accuracy of the inverse system dynamics model. Physics-guided neural networks (PGNN), in which a known physical model operates in parallel with a neural network, were recently proposed as a method to achieve high accuracy of the identified inverse dynamics. However, the flexible nature of neural networks can create overparameterization when they are employed in parallel with a physical model, which results in parameter drift during training. This drift may cause the parameters of the physical model to deviate from their physical values, which increases the vulnerability of the PGNN to operating conditions not present in the training data. To address this problem, this paper proposes a regularization method based on the identified physical parameters, combined with an optimized training initialization that improves training convergence. The regularized PGNN framework is validated on a real-life industrial linear motor, where it delivers improved tracking accuracy and extrapolation.
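As a rough illustration of the ideas summarized above, the sketch below puts a simple physical model in parallel with a small neural network and adds a penalty that keeps the physical parameters near their separately identified values. It is a minimal sketch under assumed details: the two-parameter friction-plus-inertia physical model, the network size, and the names PGNN, pgnn_loss, theta_star, and lam are illustrative choices, not the paper's actual implementation.

```python
# Hypothetical PGNN sketch: physical model in parallel with a neural network,
# with regularization of the physical parameters toward identified values.
import torch
import torch.nn as nn

class PGNN(nn.Module):
    def __init__(self, theta_phys_identified, hidden=16):
        super().__init__()
        # Physical-model parameters, initialized at their identified values
        # (one plausible reading of the paper's optimized initialization).
        self.theta_phys = nn.Parameter(theta_phys_identified.clone())
        # Neural network operating in parallel with the physical model.
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )

    def forward(self, x):
        # x: (N, 2) with columns [velocity, acceleration];
        # assumed toy physical part: viscous friction + inertia.
        phys = x @ self.theta_phys.unsqueeze(1)
        return phys + self.net(x)

def pgnn_loss(model, x, u, theta_star, lam=1e-2):
    # Inversion error plus a penalty that keeps the physical parameters
    # close to the separately identified values theta_star, counteracting
    # parameter drift during training.
    mse = torch.mean((model(x) - u) ** 2)
    reg = torch.sum((model.theta_phys - theta_star) ** 2)
    return mse + lam * reg

# Illustrative training step on synthetic data.
theta_star = torch.tensor([0.5, 2.0])        # assumed identified [friction, mass]
model = PGNN(theta_star)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, 2)                      # [velocity, acceleration] samples
u = x @ theta_star.unsqueeze(1)              # synthetic target control inputs
opt.zero_grad()
loss = pgnn_loss(model, x, u, theta_star)
loss.backward()
opt.step()
```

Regularizing toward theta_star rather than toward zero is the key design choice: it anchors the physical part of the model to its identified values, so the parallel network only compensates residual dynamics instead of absorbing the physics.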
ISSN: 2331-8422