Layer-wise Feedback Propagation

Bibliographic Details
Main Authors: Weber, Leander; Berend, Jim; Binder, Alexander; Wiegand, Thomas; Samek, Wojciech; Lapuschkin, Sebastian
Format: Journal Article (arXiv preprint)
Language: English
Published: 23.08.2023
Subjects: Computer Science - Artificial Intelligence; Computer Science - Learning; Computer Science - Neural and Evolutionary Computing
Online Access: https://arxiv.org/abs/2308.12053
DOI: 10.48550/arxiv.2308.12053
Source: arXiv.org (Open Access Repository)
Copyright: http://creativecommons.org/licenses/by-sa/4.0

Abstract: In this paper, we present Layer-wise Feedback Propagation (LFP), a novel training approach for neural-network-like predictors that utilizes explainability, specifically Layer-wise Relevance Propagation (LRP), to assign rewards to individual connections based on their respective contributions to solving a given task. This differs from traditional gradient descent, which updates parameters towards an estimated loss minimum. LFP distributes a reward signal throughout the model without the need for gradient computations. It then strengthens structures that receive positive feedback while reducing the influence of structures that receive negative feedback. We establish the convergence of LFP theoretically and empirically, and demonstrate its effectiveness in achieving comparable performance to gradient descent on various models and datasets. Notably, LFP overcomes certain limitations associated with gradient-based methods, such as reliance on meaningful derivatives. We further investigate how the different LRP rules can be extended to LFP, what their effects are on training, as well as potential applications, such as training models with no meaningful derivatives, e.g., step-function-activated Spiking Neural Networks (SNNs), or for transfer learning, to efficiently utilize existing knowledge.
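The abstract describes the mechanism only at a high level. The minimal numpy sketch below illustrates the general idea of an LRP-style feedback pass under stated assumptions; it is not the authors' implementation. The epsilon-LRP redistribution rule, the one-hot reward encoding, and the sign-based multiplicative weight update are all assumptions made for illustration, and readers should consult the paper (DOI above) for the actual method.

```python
# Illustrative sketch only -- NOT the reference implementation of LFP.
# Shows the idea from the abstract: propagate a reward signal backwards
# through a small network with an LRP-style rule (no gradients), assign
# per-connection feedback, and update each weight in proportion to it.
import numpy as np

rng = np.random.default_rng(0)

# A tiny dense network: 4 inputs -> 8 ReLU hidden units -> 3 outputs.
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 3))

def stabilize(z, eps=1e-6):
    # epsilon-LRP stabilizer: push each denominator away from zero.
    return z + eps * np.where(z >= 0.0, 1.0, -1.0)

def lfp_step(x, reward, lr=0.1):
    """One illustrative feedback step: forward pass, LRP-style
    redistribution of the reward onto individual connections, then a
    weight update proportional to each connection's feedback."""
    global W1, W2
    a1 = np.maximum(0.0, x @ W1)                              # hidden activations
    # Redistribute the per-output reward onto hidden->output connections.
    R2 = (a1[:, None] * W2) / stabilize(a1 @ W2) * reward     # shape (8, 3)
    R_hidden = R2.sum(axis=1)                                 # per-hidden-unit feedback
    # ...and further onto input->hidden connections.
    R1 = (x[:, None] * W1) / stabilize(x @ W1) * R_hidden     # shape (4, 8)
    # Strengthen positively rewarded connections, weaken negatively
    # rewarded ones (this multiplicative form is an assumption).
    W1 += lr * R1 * np.sign(W1)
    W2 += lr * R2 * np.sign(W2)

x = rng.normal(size=4)
reward = np.array([1.0, 0.0, 0.0])   # e.g. +1 on the desired output, 0 elsewhere
lfp_step(x, reward)
```

Note that because the feedback pass reuses the forward activations, no derivative of the activation function is needed, which is consistent with the abstract's point about training step-function-activated models such as SNNs.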