Merging Back-propagation and Hebbian Learning Rules for Robust Classifications

Bibliographic Details
Published in Neural Networks Vol. 9; no. 7; pp. 1213-1222
Main Authors JEONG, D.-G, LEE, S.-Y
Format Journal Article
Language English
Published Oxford Elsevier Ltd 01.10.1996
Elsevier Science

Abstract By imposing saturation requirements on hidden-layer neural activations, a new learning algorithm is developed to improve the robustness of classification performance of a multi-layer Perceptron. Derivatives of the sigmoid functions at hidden layers are added to the standard output error with relative significance factors, and the total error is minimized by the steepest-descent method. The additional gradient-descent terms become Hebbian, and this new algorithm merges two popular learning algorithms, i.e., error back-propagation and Hebbian learning rules. Only slight modifications are needed for the standard back-propagation algorithm, and the additional computational requirements are negligible. This saturation requirement effectively reduces output sensitivity to the input, which results in improved robustness and better generalization for classifier networks. Distributed representations at hidden layers are also successfully suppressed, accomplishing efficient utilization of hidden neurons. Computer simulations demonstrate much faster learning convergence as well as improved robustness for classifications and hetero-associations of binary patterns. Copyright © 1996 Elsevier Science Ltd
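The update described in the abstract can be sketched as follows. This is a minimal numpy illustration, not the authors' code: the penalty weight `lam` (the "relative significance factor"), the network sizes, and the toy binary-parity data are all assumed for demonstration. The key point is that minimizing the sum of sigmoid derivatives h(1-h) at the hidden layer adds a term lam * h(1-h)(1-2h) to the hidden delta, which pushes hidden activations toward saturation and has a Hebbian form.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: random 4-bit binary patterns with parity targets (illustrative only).
X = rng.integers(0, 2, size=(16, 4)).astype(float)
T = (X.sum(axis=1) % 2).reshape(-1, 1)

n_in, n_hid, n_out = 4, 8, 1
W1 = rng.normal(0, 0.5, (n_in, n_hid))
W2 = rng.normal(0, 0.5, (n_hid, n_out))
eta, lam = 0.5, 0.05  # learning rate; relative significance factor (assumed values)

for epoch in range(2000):
    # Forward pass
    H = sigmoid(X @ W1)   # hidden activations
    Y = sigmoid(H @ W2)   # outputs

    # Standard back-propagated deltas for the squared output error
    d_out = (Y - T) * Y * (1 - Y)
    d_hid_bp = (d_out @ W2.T) * H * (1 - H)

    # Saturation penalty lam * sum H(1-H): its gradient w.r.t. the hidden
    # net input is lam * H(1-H)(1-2H) -- the Hebbian-like extra term.
    d_hid_sat = lam * H * (1 - H) * (1 - 2 * H)

    # Steepest descent on the merged total error
    W2 -= eta * H.T @ d_out
    W1 -= eta * X.T @ (d_hid_bp + d_hid_sat)

# Mean sigmoid derivative at the hidden layer, driven toward 0 by the penalty
print(float(np.mean(H * (1 - H))))
```

Note how the only change from plain back-propagation is the single extra delta `d_hid_sat`, consistent with the abstract's claim that the modification to the standard algorithm is slight and its computational cost negligible.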
Authors Jeong, Dong-Gyu; Lee, Soo-Young
Organization Korea Advanced Institute of Science and Technology, Republic of Korea
Copyright 1996 Elsevier Science Ltd
1996 INIST-CNRS
DOI 10.1016/0893-6080(96)00042-1
Discipline Computer Science
Applied Sciences
EISSN 1879-2782
EndPage 1222
Genre Journal Article
ISSN 0893-6080
IsPeerReviewed true
IsScholarly true
Issue 7
Keywords Generalization
Learning
Backpropagation
Classification
Robustness
Neural network
Performance
Algorithm
Language English
License CC BY 4.0
PMID 12662594
PageCount 10
PublicationDate 1996-10-01
PublicationPlace Oxford
PublicationTitle Neural networks
PublicationTitleAlternate Neural Netw
PublicationYear 1996
Publisher Elsevier Ltd
Elsevier Science
StartPage 1213
SubjectTerms Applied sciences
Artificial intelligence
Computer science; control theory; systems
Connectionism. Neural networks
Exact sciences and technology
Generalization
Robustness – Mapping sensitivity – Hidden-neuron saturation – Error back-propagation – Hebbian – Hybrid learning – Classifier networks
Title Merging Back-propagation and Hebbian Learning Rules for Robust Classifications
URI https://dx.doi.org/10.1016/0893-6080(96)00042-1
https://www.ncbi.nlm.nih.gov/pubmed/12662594
Volume 9