Cellular Automata Can Reduce Memory Requirements of Collective-State Computing

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 33, No. 6, pp. 2701–2713
Main Authors: Kleyko, Denis; Frady, Edward Paxon; Sommer, Friedrich T.
Format: Journal Article
Language: English
Published: United States: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.06.2022
Subjects: see Subject Terms below
Online Access: Get full text

Abstract: Various nonclassical approaches of distributed information processing, such as neural networks, reservoir computing (RC), vector symbolic architectures (VSAs), and others, employ the principle of collective-state computing. In this type of computing, the variables relevant in a computation are superimposed into a single high-dimensional state vector, the collective state. The variable encoding uses a fixed set of random patterns, which has to be stored and kept available during the computation. In this article, we show that an elementary cellular automaton with rule 90 (CA90) enables a space–time tradeoff for collective-state computing models that use random dense binary representations, i.e., memory requirements can be traded off against computation by running CA90. We investigate the randomization behavior of CA90, in particular, the relation between the length of the randomization period and the size of the grid, and how CA90 preserves similarity in the presence of initialization noise. Based on these analyses, we discuss how to optimize a collective-state computing model in which CA90 expands representations on the fly from short seed patterns, rather than storing the full set of random patterns. The CA90 expansion is applied and tested in concrete scenarios using RC and VSAs. Our experimental results show that collective-state computing with CA90 expansion performs similarly to traditional collective-state models, in which random patterns are generated initially by a pseudorandom number generator and then stored in a large memory.
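
The mechanism the abstract describes reduces to a few lines of code: rule 90 updates every cell to the XOR of its two neighbors, so a short stored seed can be deterministically re-expanded into a long pseudo-random binary vector whenever it is needed. The sketch below is a minimal illustration only; the function names, grid sizes, step counts, and cyclic boundary conditions are assumptions made for the demonstration, not the exact configuration used in the article.

```python
import numpy as np

def ca90_step(state: np.ndarray) -> np.ndarray:
    # Rule 90: each cell becomes the XOR of its left and right neighbors;
    # a cyclic (wrap-around) grid is assumed here.
    return np.roll(state, 1) ^ np.roll(state, -1)

def ca90_expand(seed: np.ndarray, steps: int) -> np.ndarray:
    # Recompute a long dense binary vector from a short stored seed by
    # concatenating successive CA90 states (the space-time tradeoff:
    # store the seed, pay with computation when the vector is needed).
    rows = [seed]
    for _ in range(steps - 1):
        rows.append(ca90_step(rows[-1]))
    return np.concatenate(rows)

def ca90_cycle_length(seed: np.ndarray, max_steps: int = 300_000) -> int:
    # The state space of a finite CA is finite, so every trajectory is
    # eventually periodic; measure the length of the cycle it falls into
    # (a bounded search, returning -1 if the cycle is not reached).
    seen, state = {}, seed.copy()
    for t in range(max_steps):
        key = state.tobytes()
        if key in seen:
            return t - seen[key]
        seen[key] = t
        state = ca90_step(state)
    return -1

rng = np.random.default_rng(1)
seed = rng.integers(0, 2, size=128, dtype=np.uint8)   # stored: 128 bits
expanded = ca90_expand(seed, steps=8)                  # used: 1024 bits

# Similarity preservation: flipping a few seed bits should keep the
# expanded vectors far from the ~0.5 normalized Hamming distance
# expected between two unrelated random vectors.
noisy = seed.copy()
noisy[:4] ^= 1
dist = float(np.mean(ca90_expand(noisy, steps=8) != expanded))

# Randomization period vs. grid size, on a small odd-sized grid.
small = rng.integers(0, 2, size=37, dtype=np.uint8)
print(expanded.size, round(dist, 3), ca90_cycle_length(small))
```

Because rule 90 is additive (each update is a XOR), the expansion is cheap and exactly reproducible from the seed alone, which is what allows the memory otherwise spent on storing full random patterns to be traded for computation.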
Authors:
– Kleyko, Denis (ORCID: 0000-0002-6032-6155; email: denkle@berkeley.edu), Redwood Center for Theoretical Neuroscience, University of California at Berkeley, Berkeley, CA, USA
– Frady, Edward Paxon (email: epaxon@berkeley.edu), Redwood Center for Theoretical Neuroscience, University of California at Berkeley, Berkeley, CA, USA
– Sommer, Friedrich T. (ORCID: 0000-0002-6738-9263; email: fsommer@berkeley.edu), Redwood Center for Theoretical Neuroscience, University of California at Berkeley, Berkeley, CA, USA
BackLink:
– https://www.ncbi.nlm.nih.gov/pubmed/34699370 (view this record in MEDLINE/PubMed)
– https://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-116044 (view record from Swedish Publication Index)
– https://urn.kb.se/resolve?urn=urn:nbn:se:ri:diva-57075 (view record from Swedish Publication Index)
CODEN ITNNAL
CitedBy (DOIs):
– 10.1109/TNNLS.2023.3237381
– 10.1109/TNNLS.2020.3043309
– 10.1109/TBCAS.2022.3187944
– 10.1038/s41467-023-38299-7
– 10.1109/TCASAI.2024.3462692
– 10.1109/ACCESS.2023.3299296
– 10.1145/3538531
– 10.3389/fnins.2022.867568
– 10.3390/rs17020329
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022
DOI 10.1109/TNNLS.2021.3119543
DatabaseName IEEE All-Society Periodicals Package (ASPP) 2005–Present
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
PubMed
Aluminium Industry Abstracts
Biotechnology Research Abstracts
Calcium & Calcified Tissue Abstracts
Ceramic Abstracts
Chemoreception Abstracts
Computer and Information Systems Abstracts
Corrosion Abstracts
Electronics & Communications Abstracts
Engineered Materials Abstracts
Materials Business File
Mechanical & Transportation Engineering Abstracts
Neurosciences Abstracts
Solid State and Superconductivity Abstracts
METADEX
Technology Research Database
ANTE: Abstracts in New Technology & Engineering
Engineering Research Database
Aerospace Database
Materials Research Database
ProQuest Computer Science Collection
Civil Engineering Abstracts
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
Biotechnology and BioEngineering Abstracts
MEDLINE - Academic
PubMed Central (Full Participant titles)
SWEPUB Örebro universitet full text
SwePub
SwePub Articles
SWEPUB Freely available online
SWEPUB Örebro universitet
SwePub Articles full text
DeliveryMethod fulltext_linktorsrc
Discipline Computer Science
EISSN 2162-2388
EndPage 2713
ExternalDocumentID oai_DiVA_org_ri_57075
oai_DiVA_org_oru_116044
PMC9215349
34699370
10_1109_TNNLS_2021_3119543
9586079
Genre orig-research
Journal Article
GrantInformation_xml – fundername: European Union’s Horizon 2020 Research and Innovation Programme under the Marie Skłodowska-Curie Individual Fellowship
  grantid: 839179
  funderid: 10.13039/100010665
– fundername: NIH
  grantid: R01-EB026955
  funderid: 10.13039/100000002
– fundername: Defense Advanced Research Projects Agency’s (DARPA’s) Virtual Intelligence Processing (VIP, Super-HD Project) and Artificial Intelligence Exploration (AIE, HyDDENN Project) Programs.
  funderid: 10.13039/100000185
– fundername: NIBIB NIH HHS
  grantid: R01 EB026955
ISSN 2162-237X
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed false
IsScholarly true
Issue 6
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
LinkModel DirectLink
ORCID 0000-0002-6738-9263
0000-0002-6032-6155
OpenAccessLink https://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-116044
PMID 34699370
PQID 2672099805
PQPubID 85436
PageCount 13
PublicationCentury 2000
PublicationDate 2022-06-01
PublicationDateYYYYMMDD 2022-06-01
PublicationDecade 2020
PublicationPlace United States
PublicationTitle IEEE Transactions on Neural Networks and Learning Systems
PublicationTitleAbbrev TNNLS
PublicationTitleAlternate IEEE Trans Neural Netw Learn Syst
PublicationYear 2022
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
SourceID swepub
pubmedcentral
proquest
pubmed
crossref
ieee
SourceType Open Access Repository
Aggregation Database
Index Database
Enrichment Source
Publisher
StartPage 2701
SubjectTerms Automata
Automaton
Cellular automata
Cellular automata (CA)
Cellular automaton
Cellular automatons
collective-state computing
Computational modeling
Computational modelling
Computer memory
Data processing
Decoding
Distributed representation
distributed representations
hyperdimensional computing
Information processing
Job analysis
Memory architecture
Memory management
Network architecture
Neural networks
Neurons
Pseudorandom
Random processes
Random-number generation
Randomization
Representations
Reservoir Computing
reservoir computing (RC)
Reservoir management
Reservoirs
rule 90
State vectors
Task analysis
Vector symbolic architecture
vector symbolic architectures (VSAs)
Title Cellular Automata Can Reduce Memory Requirements of Collective-State Computing
URI https://ieeexplore.ieee.org/document/9586079
https://www.ncbi.nlm.nih.gov/pubmed/34699370
https://www.proquest.com/docview/2672099805
https://www.proquest.com/docview/2587005061
https://pubmed.ncbi.nlm.nih.gov/PMC9215349
https://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-116044
https://urn.kb.se/resolve?urn=urn:nbn:se:ri:diva-57075
Volume 33