Autoencoders reloaded

Bibliographic Details
Published in Biological Cybernetics, Vol. 116, No. 4, pp. 389–406
Main Authors Bourlard, Hervé; Kabil, Selen Hande
Format Journal Article
Language English
Published Berlin/Heidelberg: Springer Berlin Heidelberg (Springer Nature B.V.), 01.08.2022
Abstract In Bourlard and Kamp (Biol Cybern 59(4):291–294, 1988), it was theoretically proven that autoencoders (AE) with a single hidden layer (previously called “auto-associative multilayer perceptrons”) were, in the best case, implementing singular value decomposition (SVD) Golub and Reinsch (Linear algebra, Singular value decomposition and least squares solutions, pp 134–151. Springer, 1971), equivalent to principal component analysis (PCA) Hotelling (J Educ Psychol 24(6/7):417–441, 1933); Jolliffe (Principal component analysis, Springer series in statistics, 2nd edn. Springer, New York). That is, AE are able to derive the eigenvalues that represent the amount of variance covered by each component, even in the presence of nonlinear (e.g., sigmoid-like) activation functions on their hidden units. Today, with the renewed interest in “deep neural networks” (DNN), multiple types of (deep) AE are being investigated as an alternative to manifold learning Cayton (Univ California San Diego Tech Rep 12(1–17):1, 2005) for conducting nonlinear feature extraction or fusion, each with its own specific (expected) properties. Many of those AE are currently being developed as powerful, nonlinear encoder–decoder models, or used to generate reduced and discriminant feature sets that are more amenable to different modeling and classification tasks. In this paper, we start by recalling and further clarifying the main conclusions of Bourlard and Kamp (Biol Cybern 59(4):291–294, 1988), supporting them with extensive empirical evidence that could not be provided in 1988 because of dataset and processing limitations. Building on a full understanding of the underlying mechanisms, we then show that it remains hard (although feasible) to go beyond the state-of-the-art PCA/SVD techniques for auto-association. Finally, we present a brief overview of the different autoencoder models that are mainly in use today and discuss their rationale, relations and application areas.
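The central claim above, namely that a single-hidden-layer AE with nonlinear hidden units can at best match the optimal rank-k reconstruction obtained from SVD/PCA, is easy to probe numerically. The following minimal sketch is not taken from the paper; the synthetic data, hyperparameters and variable names are illustrative assumptions. It trains a small sigmoid-hidden autoencoder with plain batch gradient descent (NumPy only) and compares its reconstruction error with the rank-k SVD reconstruction of the same centred data.

import numpy as np

# Synthetic, approximately low-rank data (illustrative assumption: n=500 samples, d=20 dims, k=5).
rng = np.random.default_rng(0)
n, d, k = 500, 20, 5
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, d)) + 0.1 * rng.normal(size=(n, d))
X = X - X.mean(axis=0)                        # centre the data, as PCA assumes

# Optimal rank-k linear reconstruction via SVD (equivalent to PCA on centred data).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_pca = (U[:, :k] * s[:k]) @ Vt[:k]
mse_pca = np.mean((X - X_pca) ** 2)

# Single-hidden-layer autoencoder: k sigmoid hidden units, linear output, squared-error loss.
W1, b1 = 0.1 * rng.normal(size=(d, k)), np.zeros(k)
W2, b2 = 0.1 * rng.normal(size=(k, d)), np.zeros(d)
lr = 0.05
for _ in range(5000):                         # plain batch gradient descent
    H = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))  # sigmoid hidden layer
    X_hat = H @ W2 + b2                       # linear reconstruction
    err = X_hat - X
    dH = (err @ W2.T) * H * (1.0 - H)         # backpropagate through the sigmoid
    W2 -= lr * (H.T @ err) / n; b2 -= lr * err.mean(axis=0)
    W1 -= lr * (X.T @ dH) / n;  b1 -= lr * dH.mean(axis=0)

mse_ae = np.mean((X_hat - X) ** 2)
print(f"rank-{k} SVD/PCA reconstruction MSE  : {mse_pca:.4f}")
print(f"sigmoid autoencoder reconstruction MSE: {mse_ae:.4f}")

With sufficient training, the autoencoder's error should approach, but not drop below, the SVD/PCA optimum; this is exactly the behaviour that the paper revisits with extensive experiments.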
Author Bourlard, Hervé (Idiap Research Institute, Ecole polytechnique fédérale de Lausanne (EPFL))
Kabil, Selen Hande (Idiap Research Institute, Ecole polytechnique fédérale de Lausanne (EPFL); ORCID 0000-0002-2588-4047; email: selen.kabil@idiap.ch)
Copyright The Author(s) 2022. This work is published under the Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/).
DOI 10.1007/s00422-022-00937-6
Discipline Computer Science
EISSN 1432-0770
EndPage 406
GrantInformation Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung (Swiss National Science Foundation), funder ID: http://dx.doi.org/10.13039/501100001711
ISSN 1432-0770
0340-1200
Issue 4
Keywords Deep neural networks
Autoencoders
Auto-associative multilayer perceptrons
Principal component analysis
Singular value decomposition
Language English
License Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
Notes Communicated by Benjamin Lindner.
ORCID 0000-0002-2588-4047
OpenAccessLink https://doi.org/10.1007/s00422-022-00937-6
PMID 35727351
PageCount 18
PublicationDate 2022-08-01
PublicationPlace Berlin/Heidelberg
PublicationSubtitle Advances in Computational Neuroscience and in Control and Information Theory for Biological Systems
PublicationTitle Biological cybernetics
PublicationTitleAbbrev Biol Cybern
PublicationYear 2022
Publisher Springer Berlin Heidelberg
Springer Nature B.V
References Baldi P (2012) Autoencoders, unsupervised learning, and deep architectures. In: Proceedings of ICML workshop on unsupervised and transfer learning. JMLR Workshop and Conference Proceedings, pp 37–49
Mairal J, Bach F, Ponce J, Sapiro G (2009) Online dictionary learning for sparse coding. In: Proceedings of the 26th annual international conference on machine learning, pp 689–696
Ng A (2011) Cs294a lecture notes–sparse autoencoder. https://web.stanford.edu/class/cs294a/sparseAutoencoder.pdf
Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y (2014) Generative adversarial nets. Adv Neural Inf Process Syst vol 27
Lu X, Tsao Y, Matsuda S, Hori C (2013) Speech enhancement based on deep denoising autoencoder. Interspeech 2013, pp 436–440
Rumelhart DE, Hinton GE, Williams RJ (1985) Learning internal representations by error propagation. California Univ San Diego La Jolla Inst for Cognitive Science, Tech. Rep
De Leeuw J (2006) Principal component analysis of binary data by iterated singular value decomposition. Comput Stat Data Anal 50(1):21–39. doi: 10.1016/j.csda.2004.07.010
Jolliffe I (1986) Principal component analysis. Springer series in statistics, 2nd edn. Springer, New York. doi: 10.1007/978-1-4757-1904-8
Tibshirani R (1996) Regression shrinkage and selection via the lasso. J Roy Stat Soc: Ser B (Methodol) 58(1):267–288
Morgan N, Bourlard H (1990) Generalization and parameter estimation in feedforward nets: some experiments. In: Advances in neural information processing systems 2. Morgan Kaufmann, pp 630–637
Schein AI, Saul LK, Ungar LH (2003) A generalized linear model for principal component analysis of binary data. In: International workshop on artificial intelligence and statistics. PMLR, pp 240–247
Fukai T, Asabuki T, Haga T (2021) Neural mechanisms for learning hierarchical structures of information. Curr Opin Neurobiol 70:145–153. doi: 10.1016/j.conb.2021.10.011
Horn RA, Johnson CR (2013) Matrix analysis, 2nd edn. Cambridge University Press, Cambridge
Krzanowski W (1987) Cross-validation in principal component analysis. Biometrics, pp 575–584
Refinetti M, Goldt S (2022) The dynamics of representation learning in shallow, non-linear autoencoders. arXiv preprint arXiv:2201.02115
Gutiérrez L, Keith B (2018) A systematic literature review on word embeddings. In: International conference on software process improvement. Springer, pp 132–141
Magee JC, Grienberger C (2020) Synaptic plasticity forms and functions. Annu Rev Neurosci 43:95–117. doi: 10.1146/annurev-neuro-090919-022842
Makhzani A, Shlens J, Jaitly N, Goodfellow I, Frey B (2015) Adversarial autoencoders. arXiv preprint arXiv:1511.05644
Wiener N (1948) Cybernetics or control and communication in the animal and the machine. MIT Press, Cambridge
Kullback S, Leibler RA (1951) On information and sufficiency. Ann Math Stat 22(1):79–86. doi: 10.1214/aoms/1177729694
Vandewalle J, Staar J, Moor BD, Lauwers J (1984) An adaptive singular value decomposition algorithm and its application to adaptive realization Springer, Berlin, vol 63
Qi Y, Wang Y, Zheng X, Wu Z (2014) Robust feature learning by stacked autoencoder with maximum correntropy criterion. In: 2014 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE, pp 6716–6720
Cayton L (2005) Algorithms for manifold learning. Univ California San Diego Tech Rep 12(1–17):1
Olshausen BA, Field DJ (2004) Sparse coding of sensory inputs. Curr Opin Neurobiol 14(4):481–487. doi: 10.1016/j.conb.2004.07.007
Povey D, Ghoshal A, Boulianne G, Burget L, Glembek O, Goel N, Hannemann M, Motlicek P, Qian Y, Schwarz P et al (2011) The kaldi speech recognition toolkit. In: IEEE 2011 workshop on automatic speech recognition and understanding, no. IEEE Signal Processing Society, CONF
Bengio Y, Lamblin P, Popovici D, Larochelle H (2007) Greedy layer-wise training of deep networks. In: Advances in neural information processing systems, pp 153–160
Srivastava N, Mansimov E, Salakhudinov R (2015) Unsupervised learning of video representations using lstms. In: International conference on machine learning. PMLR, pp 843–852
Levy O, Goldberg Y (2014) Neural word embedding as implicit matrix factorization. Adv Neural Inf Process Syst 27:2177–2185
Golub G, Reinsch C (1971) Linear algebra, Singular value decomposition and least squares solutions, pp 134–151. Springer
Xie J, Xu L, Chen E (2012) Image denoising and inpainting with deep neural networks. In: Adv Neural Inf Process Syst, pp 341–349
He R, Hu B-G, Zheng W-S, Kong X-W (2011) Robust principal component analysis based on maximum correntropy criterion. IEEE Trans Image Process 20(6):1485–1494. doi: 10.1109/TIP.2010.2103949
Hotelling H (1933) Analysis of a complex of statistical variables into principal components. J Educ Psychol 24(6/7):417–441. doi: 10.1037/h0071325
Stewart G (1973) Introduction to matrix computation. Academic Press, New York
Kingma DP, Welling M (2019) An introduction to variational autoencoders. arXiv preprint arXiv:1906.02691
Rifai S, Mesnil G, Vincent P, Muller X, Bengio Y, Dauphin Y, Glorot X (2011) Higher order contractive auto-encoder. In: Joint European conference on machine learning and knowledge discovery in databases. Springer, pp 645–660
Xiong P, Wang H, Liu M, Zhou S, Hou Z, Liu X (2016) ECG signal enhancement based on improved denoising auto-encoder. Eng Appl Artif Intell 52:194–202. doi: 10.1016/j.engappai.2016.02.015
Hansen PC, O'Leary DP (1993) The use of the L-curve in the regularization of discrete ill-posed problems. SIAM J Sci Comput 14(6):1487–1503. doi: 10.1137/0914086
Baldi P, Hornik K (1989) Neural networks and principal component analysis: learning from examples without local minima. Neural Netw 2(1):53–58. doi: 10.1016/0893-6080(89)90014-2
Bunch J, Nielsen C (1978) Updating the singular value decomposition. Num Math 31:111–129. doi: 10.1007/BF01397471
Nguyen H, Tran KP, Thomassey S, Hamad M (2021) Forecasting and anomaly detection approaches using LSTM and LSTM autoencoder techniques with the applications in supply chain management. Int J Inf Manage 57. doi: 10.1016/j.ijinfomgt.2020.102282
Cristianini N, Shawe-Taylor J (2000) An introduction to support vector machines and other kernel-based learning methods. Cambridge University Press, Cambridge. doi: 10.1017/CBO9780511801389
Hinton GE, Salakhutdinov RR (2006) Reducing the dimensionality of data with neural networks. science, 313(5786), 504–507
Masci J, Meier U, Cireşan D, Schmidhuber J (2011) Stacked convolutional auto-encoders for hierarchical feature extraction. In: International conference on artificial neural networks. Springer, pp 52–59
Wolf T, Debut L, Sanh V, Chaumond J, Delangue C, Moi A, Cistac P, Rault T, Louf R, Funtowicz M et al. (2020) Transformers: state-of-the-art natural language processing. In: Proceedings of the 2020 conference on empirical methods in natural language processing: system demonstrations, pp 38–45
Ashby WR (1961) An introduction to cybernetics. Chapman & Hall Ltd, New York
Baldi PF, Hornik K (1995) Learning in linear neural networks: a survey. IEEE Trans Neural Netw 6(4):837–858. doi: 10.1109/72.392248
Vapnik V (1999) The nature of statistical learning theory. Springer Science & Business Media, Berlin
Principi E, Rossetti D, Squartini S, Piazza F (2019) Unsupervised electric motor fault detection by using deep autoencoders. IEEE/CAA J Automatica Sinica 6(2):441–451. doi: 10.1109/JAS.2019.1911393
Brea J, Gerstner W (2016) Does computational neuroscience need new synaptic learning paradigms? Curr Opin Behav Sci 11:61–66. doi: 10.1016/j.cobeha.2016.05.012
Lu G-F, Zou J, Wang Y, Wang Z (2016) L1-norm-based principal component analysis with adaptive regularization. Pattern Recogn 60:901–907. doi: 10.1016/j.patcog.2016.07.014
Bourlard H, Kamp Y, Wellekens C (1985) Speaker dependent connected speech recognition via phonetic markov models. In: ICASSP’85. IEEE international conference on acoustics, speech, and signal processing, vol 10. IEEE, pp 1213–1216
Ben-Hur A, Horn D, Siegelmann HT, Vapnik V (2002) Support vector clustering. J Mach Learn Res 2:125–137
Li J, Luong M-T, Jurafsky D (2015) A hierarchical neural autoencoder for paragraphs and documents. arXiv preprint arXiv:1506.01057
Vincent P, Larochelle H, Bengio Y, Manzagol P (2008) Extracting and composing robust features with denoising autoencoders. In: Proceedings of the 25th international conference on machine learning, pp 1096–1103
Sutskever I, Vinyals O, Le QV (2014) Sequence to sequence learning with neural networks. In: Advances in neural information processing systems, pp 3104–3112
Charte D, Charte F, del Jesus MJ, Herrera F (2020) An analysis on the use of autoencoders for representation learning: fundamentals, learning task case studies, explainability and challenges. Neurocomputing 404:93–107. doi: 10.1016/j.neucom.2020.04.057
Zou W, Socher R, Cer D, Manning C (2013) Bilingual word embeddings for phrase-based machine translation. In: Proceedings of the 2013 conference on empirical methods in natural language processing, pp 1393–1398
Dosovitskiy A, Brox T (2016) Generating images with perceptual similarity metrics based on deep networks. Adv Neural Inf Process Syst 29:658–666
Chen S, Donoho D, Saunders M (2001) Atomic decomposition by basis pursuit. SIAM J Sci Comput 43(1):129–159
Goodfellow I, Bengio Y, Courville A (2016) Deep learning. MIT press, Cambridge
Mikolov T, Chen K, Corrado G, Dean J (2013) Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781
Golub G, Van Loan C (1983) Matrix computation. Oxford Academic Press, Oxford
Bourlard H, Kamp Y (1988) Auto-association by multilayer perceptrons and singular value decomposition. Biol Cybern 59(4):291–294. doi: 10.1007/BF00332918
Guo Z, Yue H, Wang H (2004) A modified pca based on the minimum error entropy. In: Proceedings of the 2004 American control conference, vol 4. IEEE, pp 3800–3801
Hornik K, Stinchcombe M, White H (1989) Multilayer feedforward networks are universal approximators. Neural Netw 2(5):359–366. doi: 10.1016/0893-6080(89)90020-8
StartPage 389
SubjectTerms Artificial neural networks
Bioinformatics
Biomedical and Life Sciences
Biomedicine
Coders
Complex Systems
Computer Appl. in Life Sciences
Decomposition
Eigenvalues
Empirical analysis
Feature extraction
Linear algebra
Machine learning
Manifolds (mathematics)
Multilayer perceptrons
Neural networks
Neurobiology
Neurosciences
Principal components analysis
Prospects
Singular value decomposition
Statistical analysis
Title Autoencoders reloaded
URI https://link.springer.com/article/10.1007/s00422-022-00937-6
https://www.proquest.com/docview/2689980685/abstract/
https://search.proquest.com/docview/2679237966
https://pubmed.ncbi.nlm.nih.gov/PMC9287259
Volume 116