Image classification and reconstruction from low-density EEG

Bibliographic Details
Published in Scientific Reports, Vol. 14, no. 1, article 16436 (14 pages)
Main Authors: Guenther, Sven; Kosmyna, Nataliya; Maes, Pattie
Format Journal Article
Language English
Published London: Nature Publishing Group UK, 16.07.2024
Nature Publishing Group
Nature Portfolio
Abstract Recent advances in visual decoding have enabled the classification and reconstruction of perceived images from the brain. However, previous approaches have predominantly relied on stationary, costly equipment like fMRI or high-density EEG, limiting the real-world availability and applicability of such projects. Additionally, several EEG-based paradigms have utilized artifactual, rather than stimulus-related information yielding flawed classification and reconstruction results. Our goal was to reduce the cost of the decoding paradigm, while increasing its flexibility. Therefore, we investigated whether the classification of an image category and the reconstruction of the image itself is possible from the visually evoked brain activity measured by a portable, 8-channel EEG. To compensate for the low electrode count and to avoid flawed predictions, we designed a theory-guided EEG setup and created a new experiment to obtain a dataset from 9 subjects. We compared five contemporary classification models with our setup reaching an average accuracy of 34.4% for 20 image classes on hold-out test recordings. For the reconstruction, the top-performing model was used as an EEG-encoder which was combined with a pretrained latent diffusion model via double-conditioning. After fine-tuning, we reconstructed images from the test set with a 1000 trial 50-class top-1 accuracy of 35.3%. While not reaching the same performance as MRI-based paradigms on unseen stimuli, our approach greatly improved the affordability and mobility of the visual decoding technology.
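The abstract reports a "1000 trial 50-class top-1 accuracy" of 35.3% for the reconstructed images. As a rough, hypothetical sketch of how such an n-way top-1 metric is commonly estimated (the function name, scoring inputs, and sampling scheme here are our illustration, not the paper's implementation):

```python
import random


def n_way_top1_accuracy(scores, true_idx, n_way=50, n_trials=1000, seed=0):
    """Estimate n-way top-1 accuracy for one reconstruction.

    `scores` holds similarity scores between a reconstructed image and
    every candidate class; `true_idx` marks the ground-truth class.
    Each trial draws (n_way - 1) random distractor classes and counts a
    hit when the true class scores strictly highest among the n_way.
    """
    rng = random.Random(seed)
    distractor_pool = [i for i in range(len(scores)) if i != true_idx]
    hits = 0
    for _ in range(n_trials):
        distractors = rng.sample(distractor_pool, n_way - 1)
        if all(scores[true_idx] > scores[d] for d in distractors):
            hits += 1
    return hits / n_trials
```

Averaging this estimate over all test images yields a chance level of 1/n_way (2% for 50 classes), which is why the reported 35.3% is well above chance even though it is far below 100%.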
ArticleNumber 16436
Author Guenther, Sven
Kosmyna, Nataliya
Maes, Pattie
Author_xml – sequence: 1
  givenname: Sven
  surname: Guenther
  fullname: Guenther, Sven
  email: sven.guenther@tum.de
  organization: School of Computation, Information and Technology, Technical University of Munich
– sequence: 2
  givenname: Nataliya
  surname: Kosmyna
  fullname: Kosmyna, Nataliya
  organization: Media Lab, Massachusetts Institute of Technology
– sequence: 3
  givenname: Pattie
  surname: Maes
  fullname: Maes, Pattie
  organization: Media Lab, Massachusetts Institute of Technology
BackLink https://www.ncbi.nlm.nih.gov/pubmed/39013929$$D View this record in MEDLINE/PubMed
ContentType Journal Article
Copyright The Author(s) 2024
2024. The Author(s).
The Author(s) 2024. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
The Author(s) 2024 2024
DOI 10.1038/s41598-024-66228-1
Discipline Biology
EISSN 2045-2322
EndPage 14
ExternalDocumentID oai_doaj_org_article_bf21c917b1054391b47010a50dc249c6
PMC11252274
39013929
10_1038_s41598_024_66228_1
Genre Journal Article
GrantInformation_xml – fundername: Technische Universität München (1025)
ISSN 2045-2322
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 1
Language English
License 2024. The Author(s).
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
OpenAccessLink http://journals.scholarsportal.info/openUrl.xqy?doi=10.1038/s41598-024-66228-1
PMID 39013929
PQID 3081479190
PQPubID 2041939
PageCount 14
PublicationCentury 2000
PublicationDate 20240716
PublicationDateYYYYMMDD 2024-07-16
PublicationDate_xml – month: 7
  year: 2024
  text: 20240716
  day: 16
PublicationDecade 2020
PublicationPlace London
PublicationPlace_xml – name: London
– name: England
PublicationTitle Scientific reports
PublicationTitleAbbrev Sci Rep
PublicationTitleAlternate Sci Rep
PublicationYear 2024
Publisher Nature Publishing Group UK
Nature Publishing Group
Nature Portfolio
References CarlsonTTovarDAAlinkAKriegeskorteNRepresentational dynamics of object vision: The first 1000 msJ. Vision2013131110.1167/13.10.1
RonnebergerOFischerPBroxTNavabNHorneggerJWellsWMFrangiAFU-net: Convolutional networks for biomedical image segmentationMedical image computing and computer-assisted intervention - MICCAI 20152015ChamSpringer International Publishing234241
Cui, W. et al. Neuro-gpt: Developing a foundation model for eeg (2023). arXiv: 2311.03764.
Mishra, A., Raj, N. & Bajwa, G. Eeg-based image feature extraction for visual classification using deep learning (2022). arXiv: 2209.13090.
Ng, A. Y. Feature selection, l1 vs. l2 regularization, and rotational invariance. In proceedings of the twenty-first international conference on machine learning, ICML ’04, 78, https://doi.org/10.1145/1015330.1015435 (Association for computing machinery, New York, NY, USA, 2004).
ZhengXChenWAn attention-based bi-lstm method for visual object classification via eegBiomed. Signal Process. Control20216310.1016/j.bspc.2020.102174
HuangGDiscrepancy between inter- and intra-subject variability in eeg-based motor imagery brain-computer interface: Evidence from multiple perspectivesFront. Neurosci.202317112266110.3389/fnins.2023.1122661368606209968845
Chen, Z., Qing, J., Xiang, T., Yue, W. L. & Zhou, J. H. Seeing beyond the brain: Conditional diffusion model with sparse masked modeling for vision decoding. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 22710–22720 (2022).
Dhariwal, P. & Nichol, A. Diffusion Models Beat GANs on Image Synthesis. In Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P. S. & Vaughan, J. W. (eds.) Advances in Neural Information Processing Systems, 8780–8794 (Curran Associates, Inc., 2021).
Rombach, R., Blattmann, A., Lorenz, D., Esser, P. & Ommer, B. High-resolution image synthesis with latent diffusion models. In 2022 IEEE/CVF conference on computer vision and pattern recognition (CVPR)
Liu, L., Ren, Y., Lin, Z. & Zhao, Z. Pseudo numerical methods for diffusion models on manifolds (2022). arXiv: 2202.09778.
Esser, P., Rombach, R. & Ommer, B. Taming transformers for high-resolution image synthesis. In 2021 IEEE/CVF conference on computer vision and pattern recognition (CVPR), 12868–12878 (IEEE, New York, 2021).
Sauer, A., Lorenz, D., Blattmann, A. & Rombach, R. Adversarial diffusion distillation (2023). arXiv: 2311.17042.
LiRThe perils and pitfalls of block design for eeg classification experimentsIEEE Trans. Pattern Anal. Mach. Intell.20214331633310.1109/TPAMI.2020.2973153
Bigdely-ShamloNMullenTKotheCSuK-MRobbinsKAThe prep pipeline: Standardized preprocessing for large-scale eeg analysisFront. Neuroinform.201591610.3389/fninf.2015.00016261507854471356
LawhernVJEEGNet: A compact convolutional neural network for EEG-based brain-computer interfacesJ. Neural Eng.2018152018JNEng..15e6013L10.1088/1741-2552/aace8c29932424
Ozcelik, F., Choksi, B., Mozafari, M., Reddy, L. & VanRullen, R. Reconstruction of perceived images from fmri patterns and semantic brain exploration using instance-conditioned gans. In 2022 international joint conference on neural networks (IJCNN), 1–8, https://doi.org/10.1109/IJCNN55064.2022.9892673 (2022).
Smith, L. N. & Topin, N. Super-convergence: Very fast training of neural networks using large learning rates (2018). arXiv: 1708.07120.
PasupathyAKimTPopovkinaDVObject shape and surface properties are jointly encoded in mid-level ventral visual cortexCurr. Opin. Neurobiol.2019581992081:CAS:528:DC%2BC1MXhvVenu7rF10.1016/j.conb.2019.09.009315867496876744
SimanovaIvan GervenMOostenveldRHagoortPIdentifying object categories from event-related eeg: Toward decoding of conceptual representationsPLoS ONE201151121:CAS:528:DC%2BC3MXlsl2guw%3D%3D10.1371/journal.pone.0014465
RoeAWToward a unified theory of visual area v4Neuron20127412291:CAS:528:DC%2BC38Xls1emu7k%3D10.1016/j.neuron.2012.03.011225006264912377
LeeSJangSJunSCExploring the ability to classify visual perception and visual imagery eeg data: Toward an intuitive bci systemElectronics202211270610.3390/electronics11172706
Ding, Y. et al. TSception: A deep learning framework for emotion detection using EEG. In 2020 international joint conference on neural networks (IJCNN), 1–7, https://doi.org/10.1109/IJCNN48605.2020.9206750 (2020).
GrootswagersTZhouIRobinsonAKHebartMNCarlsonTAHuman EEG recordings for 1,854 concepts presented in rapid serial visual presentation streamsSci. Data20229310.1038/s41597-021-01102-7350133318748587
SongYZhengQLiuBGaoXEEG conformer: Convolutional transformer for EEG decoding and visualizationIEEE Trans. Neural Syst. Rehabil. Eng.20233171071910.1109/TNSRE.2022.3230250
RussakovskyOImagenet large scale visual recognition challengeInt. J. Comput. Vision2015115211252342248210.1007/s11263-015-0816-y
Bai, Y. et al. Dreamdiffusion: Generating high-quality images from brain eeg signals (2023). arXiv: 2306.16934.
Gupta, A. Human faces [dataset]. Kaggle (2021 (Accessed January 10, 2024)). https://www.kaggle.com/datasets/ashwingupta3012/human-faces.
SmitDJABoomsmaDISchnackHGHulshoff PolHEde GeusEJCIndividual differences in eeg spectral power reflect genetic variance in gray and white matter volumesTwin Res. Human Genet.20121538439210.1017/thg.2012.6
Dosovitskiy, A. et al. An image is worth 16x16 words: Transformers for image recognition at scale (2021). arXiv: 2010.11929.
Spampinato, C. et al. Deep learning human mind for automated visual classification. In In 2017 IEEE conference on computer vision and pattern recognition (CVPR), 4503–4511, https://doi.org/10.1109/CVPR.2017.479 (2017).
Chollet, F. Xception: Deep learning with depthwise separable convolutions (2017). arXiv: 1610.02357.
BirdCMBerensSCHornerAJFranklinACategorical encoding of color in the brainProc. Natl. Acad. Sci.2014111459045952014PNAS..111.4590B1:CAS:528:DC%2BC2cXjtlyltL8%3D10.1073/pnas.1315275111245916023970503
ShimizuHSrinivasanRImproving classification and reconstruction of imagined images from eeg signalsPLoS ONE2022171161:CAS:528:DC%2BB38XisFOrtbbN10.1371/journal.pone.0274847
PetroniAThe variability of neural responses to naturalistic videos change with age and sexeNeuro201851710.1523/ENEURO.0244-17.2017
Benchetrit, Y., Banville, H. & King, J.-R. Brain decoding: Toward real-time reconstruction of visual perception, https://doi.org/10.48550/arXiv.2310.19812 (2023). arXiv: 2310.19812.
Van Den BoomMAVansteenselMJKoppeschaarMIRaemaekersMAHRamseyNFTowards an intuitive communication-BCI: Decoding visually imagined characters from the early visual cortex using high-field fMRIBiomed. Phys. Eng. Express2019510.1088/2057-1976/ab302c329835737116116
MalachRLevyIHassonUThe topography of high-order human object areasTrends Cogn. Sci.2002617618410.1016/s1364-6613(02)01870-311912041
ContiniEWWardleSGCarlsonTADecoding the time-course of object recognition in the human brain: From visual features to categorical decisionsNeuropsychologia201710516517610.1016/j.neuropsychologia.2017.02.01328215698
NicholsDBettsLWilsonHDecoding of faces and face components in face-sensitive human visual cortexFront. Psychol.20101136710.3389/fpsyg.2010.00028
TeichmannLThe influence of object-color knowledge on emerging object representations in the brainJ. Neurosci.202040677967891:CAS:528:DC%2BB3cXisFWhtr3F10.1523/JNEUROSCI.0158-20.2020327039037455208
Szegedy, C. et al. Going deeper with convolutions. In 2015 IEEE conference on computer vision and pattern recognition (CVPR), 1–9, https://doi.org/10.1109/CVPR.2015.7298594 (2015).
SchirrmeisterRTDeep learning with convolutional neural networks for eeg decoding and visualizationHuman Brain Map.2017385391542010.1002/hbm.23730
JozwikKMDisentangling five dimensions of animacy in human brain and behaviourNat. Commun. Biol.20225124710.1038/s42003-022-04194-y
Holly Wilson, M. G. M. J. P., Xi Chen & O’Neill, E. Feasibility of decoding visual information from eeg. Brain-computer interfaces, 1–28, https://doi.org/10.1080/2326263X.2023.2287719 (2023).
KlemGHLüdersHJasperHHElgerCEThe ten-twenty electrode system of the international federation the international federation of clinical neurophysiologyElectroencephal. Clin. Neurophysiol.199952361:STN:280:DC%2BD3c%2FlvVCquw%3D%3D
PontifexMBCoffmanCAValidation of the gtec unicorn hybrid black wireless EEG systemPsychophysiology20236010.1111/psyp.1432037171024
ContiniEWWardleSGCarlsonTADecoding the time-course of object recognition in the human brain: From visual features to categorical decisionsNeuropsychologia201710516517610.1016/j.neuropsychologia.2017.02.0110.1016/j.neuropsychologia.2017.02.0110.1016/j.neuropsychologia.2017.02.01328215698
van DrielJOliversCNFahrenfortJJHigh-pass filtering artifacts in multivariate classification of neural time series dataJ. Neurosci. Methods202135210.1016/j.jneumeth.2021.10908033508412
KaneshiroBPerreau GuimaraesMKimH-SNorciaAMSuppesPA representational similarity analysis of the dynamics of object processing using single-trial eeg classificationPLOS ONE20151012710.1371/journal.pone.0135697
KriegeskorteNMatching categorical object representations in inferior temporal cortex of man and monkeyNeuron200860112611411:CAS:528:DC%2BD1MXlsVKjtA%3D%3D10.1016/j.neuron.2008.10.043191099163143574
PeirceJWPsychopy2: Experiments in behavior made easyBehav. Res. Methods20195119520310.3758/s13428-018-01193-y307342066420413
Kothe, C. Lab streaming layer (lsl) - a software framework for synchronizing a large array of data collection and stimulation devices. Computer software (2014).
Kavasidis, I., Palazzo, S., Spampinato, C., Giordano, D. & Shah, M. Brain2image: Converting brain signals into images. In proceedings of the 25th ACM international conference on multimedia, MM ’17, 1809-1817, https://doi.org/10.1145/3123266.3127907 (Association for computing machinery, New York, NY, USA, 2017).
Ahmed, H., Wilbur, R. B., Bharadwaj, H. M. & Siskind, J. M. Object classification from randomized eeg trials. In 2021 IEEE/cvf conference on computer vision and pattern recognition (CVPR), 3844–3853, https://doi.org/10.1109/CVPR46437.
66228_CR46
T Grootswagers (66228_CR19) 2022; 9
A Petroni (66228_CR48) 2018; 5
I Simanova (66228_CR10) 2011; 5
D Nichols (66228_CR15) 2010; 1
S Lee (66228_CR3) 2022; 11
H Shimizu (66228_CR4) 2022; 17
DJA Smit (66228_CR49) 2012; 15
66228_CR41
66228_CR40
66228_CR43
66228_CR42
G Huang (66228_CR47) 2023; 17
O Russakovsky (66228_CR13) 2015; 115
A Pasupathy (66228_CR53) 2019; 58
66228_CR14
H Zhang (66228_CR32) 2020; 14
AW Roe (66228_CR54) 2012; 74
O Ronneberger (66228_CR38) 2015
66228_CR59
T Carlson (66228_CR18) 2013; 13
X Zheng (66228_CR45) 2021; 63
KM Jozwik (66228_CR51) 2022; 5
66228_CR12
VJ Lawhern (66228_CR24) 2018; 15
66228_CR55
N Kriegeskorte (66228_CR50) 2008; 60
L Teichmann (66228_CR56) 2020; 40
R Li (66228_CR8) 2021; 43
N Bigdely-Shamlo (66228_CR22) 2015; 9
B Kaneshiro (66228_CR9) 2015; 10
R Malach (66228_CR58) 2002; 6
GH Klem (66228_CR11) 1999; 52
66228_CR25
S Palazzo (66228_CR26) 2021; 43
S Lee (66228_CR20) 2022; 11
RT Schirrmeister (66228_CR31) 2017; 38
66228_CR29
66228_CR28
MB Pontifex (66228_CR44) 2023; 60
66228_CR61
66228_CR60
66228_CR7
JW Peirce (66228_CR21) 2019; 51
66228_CR1
J van Driel (66228_CR23) 2021; 352
L Teichmann (66228_CR17) 2020; 40
66228_CR5
66228_CR2
Y Song (66228_CR27) 2023; 31
EW Contini (66228_CR57) 2017; 105
EW Contini (66228_CR16) 2017; 105
66228_CR36
CM Bird (66228_CR52) 2014; 111
66228_CR35
66228_CR37
66228_CR39
66228_CR30
MA Van Den Boom (66228_CR6) 2019; 5
66228_CR34
66228_CR33
References_xml – reference: MalachRLevyIHassonUThe topography of high-order human object areasTrends Cogn. Sci.2002617618410.1016/s1364-6613(02)01870-311912041
– reference: Kavasidis, I., Palazzo, S., Spampinato, C., Giordano, D. & Shah, M. Brain2image: Converting brain signals into images. In proceedings of the 25th ACM international conference on multimedia, MM ’17, 1809-1817, https://doi.org/10.1145/3123266.3127907 (Association for computing machinery, New York, NY, USA, 2017).
– reference: PalazzoSDecoding brain representations by multimodal learning of neural activity and visual featuresIEEE Trans. Pattern Anal. Mach. Intell.2021433833384910.1109/TPAMI.2020.299590932750768
– reference: Ho, J., Jain, A. & Abbeel, P. Denoising diffusion probabilistic models. In Proceedings of the 34th international conference on neural information processing systems
– reference: ShimizuHSrinivasanRImproving classification and reconstruction of imagined images from eeg signalsPLoS ONE2022171161:CAS:528:DC%2BB38XisFOrtbbN10.1371/journal.pone.0274847
– reference: Ding, Y. et al. TSception: A deep learning framework for emotion detection using EEG. In 2020 international joint conference on neural networks (IJCNN), 1–7, https://doi.org/10.1109/IJCNN48605.2020.9206750 (2020).
– reference: Wilson, H., Chen, X., Golbabaee, M., Proulx, M. J. & O’Neill, E. Feasibility of decoding visual information from EEG. Brain-Computer Interfaces, 1–28, https://doi.org/10.1080/2326263X.2023.2287719 (2023).
– reference: Kingma, D. P. & Ba, J. Adam: A method for stochastic optimization (2017). arXiv: 1412.6980.
– reference: Peirce, J. W. et al. PsychoPy2: Experiments in behavior made easy. Behav. Res. Methods 51, 195–203 (2019). https://doi.org/10.3758/s13428-018-01193-y
– reference: Ozcelik, F., Choksi, B., Mozafari, M., Reddy, L. & VanRullen, R. Reconstruction of perceived images from fmri patterns and semantic brain exploration using instance-conditioned gans. In 2022 international joint conference on neural networks (IJCNN), 1–8, https://doi.org/10.1109/IJCNN55064.2022.9892673 (2022).
– reference: Ng, A. Y. Feature selection, l1 vs. l2 regularization, and rotational invariance. In proceedings of the twenty-first international conference on machine learning, ICML ’04, 78, https://doi.org/10.1145/1015330.1015435 (Association for computing machinery, New York, NY, USA, 2004).
– reference: Spampinato, C. et al. Deep learning human mind for automated visual classification. In 2017 IEEE conference on computer vision and pattern recognition (CVPR), 4503–4511, https://doi.org/10.1109/CVPR.2017.479 (2017).
– reference: Smit, D. J. A., Boomsma, D. I., Schnack, H. G., Hulshoff Pol, H. E. & de Geus, E. J. C. Individual differences in EEG spectral power reflect genetic variance in gray and white matter volumes. Twin Res. Human Genet. 15, 384–392 (2012). https://doi.org/10.1017/thg.2012.6
– reference: Kaneshiro, B., Perreau Guimaraes, M., Kim, H.-S., Norcia, A. M. & Suppes, P. A representational similarity analysis of the dynamics of object processing using single-trial EEG classification. PLOS ONE 10, 1–27 (2015). https://doi.org/10.1371/journal.pone.0135697
– reference: Roe, A. W. et al. Toward a unified theory of visual area V4. Neuron 74, 12–29 (2012). https://doi.org/10.1016/j.neuron.2012.03.011
– reference: Jozwik, K. M. et al. Disentangling five dimensions of animacy in human brain and behaviour. Nat. Commun. Biol. 5, 1247 (2022). https://doi.org/10.1038/s42003-022-04194-y
– reference: Lawhern, V. J. et al. EEGNet: A compact convolutional neural network for EEG-based brain-computer interfaces. J. Neural Eng. 15, 056013 (2018). https://doi.org/10.1088/1741-2552/aace8c
– reference: Song, Y., Zheng, Q., Liu, B. & Gao, X. EEG Conformer: Convolutional transformer for EEG decoding and visualization. IEEE Trans. Neural Syst. Rehabil. Eng. 31, 710–719 (2023). https://doi.org/10.1109/TNSRE.2022.3230250
– reference: Liu, L., Ren, Y., Lin, Z. & Zhao, Z. Pseudo numerical methods for diffusion models on manifolds (2022). arXiv: 2202.09778.
– reference: Kothe, C. Lab streaming layer (lsl) - a software framework for synchronizing a large array of data collection and stimulation devices. Computer software (2014).
– reference: Pasupathy, A., Kim, T. & Popovkina, D. V. Object shape and surface properties are jointly encoded in mid-level ventral visual cortex. Curr. Opin. Neurobiol. 58, 199–208 (2019). https://doi.org/10.1016/j.conb.2019.09.009
– reference: Sauer, A., Lorenz, D., Blattmann, A. & Rombach, R. Adversarial diffusion distillation (2023). arXiv: 2311.17042.
– reference: Klem, G. H., Lüders, H., Jasper, H. H. & Elger, C. E. The ten-twenty electrode system of the International Federation. The International Federation of Clinical Neurophysiology. Electroencephal. Clin. Neurophysiol. 52, 3–6 (1999).
– reference: Bigdely-Shamlo, N., Mullen, T., Kothe, C., Su, K.-M. & Robbins, K. A. The PREP pipeline: Standardized preprocessing for large-scale EEG analysis. Front. Neuroinform. 9, 16 (2015). https://doi.org/10.3389/fninf.2015.00016
– reference: Schirrmeister, R. T. et al. Deep learning with convolutional neural networks for EEG decoding and visualization. Human Brain Map. 38, 5391–5420 (2017). https://doi.org/10.1002/hbm.23730
– reference: Benchetrit, Y., Banville, H. & King, J.-R. Brain decoding: Toward real-time reconstruction of visual perception, https://doi.org/10.48550/arXiv.2310.19812 (2023). arXiv: 2310.19812.
– reference: Ronneberger, O., Fischer, P. & Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Navab, N., Hornegger, J., Wells, W. M. & Frangi, A. F. (eds.) Medical image computing and computer-assisted intervention - MICCAI 2015, 234–241 (Springer International Publishing, Cham, 2015).
– reference: Mishra, A., Raj, N. & Bajwa, G. Eeg-based image feature extraction for visual classification using deep learning (2022). arXiv: 2209.13090.
– reference: Szegedy, C. et al. Going deeper with convolutions. In 2015 IEEE conference on computer vision and pattern recognition (CVPR), 1–9, https://doi.org/10.1109/CVPR.2015.7298594 (2015).
– reference: Simanova, I., van Gerven, M., Oostenveld, R. & Hagoort, P. Identifying object categories from event-related EEG: Toward decoding of conceptual representations. PLoS ONE 5, 1–12 (2011). https://doi.org/10.1371/journal.pone.0014465
– reference: Grootswagers, T., Zhou, I., Robinson, A. K., Hebart, M. N. & Carlson, T. A. Human EEG recordings for 1,854 concepts presented in rapid serial visual presentation streams. Sci. Data 9, 3 (2022). https://doi.org/10.1038/s41597-021-01102-7
– reference: Cui, W. et al. Neuro-gpt: Developing a foundation model for eeg (2023). arXiv: 2311.03764.
– reference: Dosovitskiy, A. et al. An image is worth 16x16 words: Transformers for image recognition at scale (2021). arXiv: 2010.11929.
– reference: Nichols, D., Betts, L. & Wilson, H. Decoding of faces and face components in face-sensitive human visual cortex. Front. Psychol. 1, 1367 (2010). https://doi.org/10.3389/fpsyg.2010.00028
– reference: Contini, E. W., Wardle, S. G. & Carlson, T. A. Decoding the time-course of object recognition in the human brain: From visual features to categorical decisions. Neuropsychologia 105, 165–176 (2017). https://doi.org/10.1016/j.neuropsychologia.2017.02.013
– reference: Russakovsky, O. et al. ImageNet large scale visual recognition challenge. Int. J. Comput. Vision 115, 211–252 (2015). https://doi.org/10.1007/s11263-015-0816-y
– reference: Pontifex, M. B. & Coffman, C. A. Validation of the g.tec Unicorn Hybrid Black wireless EEG system. Psychophysiology 60 (2023). https://doi.org/10.1111/psyp.14320
– reference: Zhang, H., Silva, F. H. S., Ohata, E. F., Medeiros, A. G. & Rebouças Filho, P. P. Bi-dimensional approach based on transfer learning for alcoholism pre-disposition classification via EEG signals. Front. Human Neurosci. 14, 365 (2020). https://doi.org/10.3389/fnhum.2020.00365
– reference: Zheng, X. & Chen, W. An attention-based bi-LSTM method for visual object classification via EEG. Biomed. Signal Process. Control 63 (2021). https://doi.org/10.1016/j.bspc.2020.102174
– reference: Van Den Boom, M. A., Vansteensel, M. J., Koppeschaar, M. I., Raemaekers, M. A. H. & Ramsey, N. F. Towards an intuitive communication-BCI: Decoding visually imagined characters from the early visual cortex using high-field fMRI. Biomed. Phys. Eng. Express 5 (2019). https://doi.org/10.1088/2057-1976/ab302c
– reference: Bai, Y. et al. Dreamdiffusion: Generating high-quality images from brain eeg signals (2023). arXiv: 2306.16934.
– reference: Bird, C. M., Berens, S. C., Horner, A. J. & Franklin, A. Categorical encoding of color in the brain. Proc. Natl. Acad. Sci. 111, 4590–4595 (2014). https://doi.org/10.1073/pnas.1315275111
– reference: Contini, E. W., Wardle, S. G. & Carlson, T. A. Decoding the time-course of object recognition in the human brain: From visual features to categorical decisions. Neuropsychologia 105, 165–176 (2017). https://doi.org/10.1016/j.neuropsychologia.2017.02.013
– reference: Lee, S., Jang, S. & Jun, S. C. Exploring the ability to classify visual perception and visual imagery EEG data: Toward an intuitive BCI system. Electronics 11, 2706 (2022). https://doi.org/10.3390/electronics11172706
– reference: van Driel, J., Olivers, C. N. & Fahrenfort, J. J. High-pass filtering artifacts in multivariate classification of neural time series data. J. Neurosci. Methods 352, 109080 (2021). https://doi.org/10.1016/j.jneumeth.2021.109080
– reference: Chollet, F. Xception: Deep learning with depthwise separable convolutions (2017). arXiv: 1610.02357.
– reference: Carlson, T., Tovar, D. A., Alink, A. & Kriegeskorte, N. Representational dynamics of object vision: The first 1000 ms. J. Vision 13, 1 (2013). https://doi.org/10.1167/13.10.1
– reference: Rombach, R., Blattmann, A., Lorenz, D., Esser, P. & Ommer, B. High-resolution image synthesis with latent diffusion models. In 2022 IEEE/CVF conference on computer vision and pattern recognition (CVPR)
– reference: Teichmann, L. et al. The influence of object-color knowledge on emerging object representations in the brain. J. Neurosci. 40, 6779–6789 (2020). https://doi.org/10.1523/JNEUROSCI.0158-20.2020
– reference: Petroni, A. et al. The variability of neural responses to naturalistic videos change with age and sex. eNeuro 5, 17 (2018). https://doi.org/10.1523/ENEURO.0244-17.2017
– reference: Kriegeskorte, N. et al. Matching categorical object representations in inferior temporal cortex of man and monkey. Neuron 60, 1126–1141 (2008). https://doi.org/10.1016/j.neuron.2008.10.043
– reference: Chen, Z., Qing, J., Xiang, T., Yue, W. L. & Zhou, J. H. Seeing beyond the brain: Conditional diffusion model with sparse masked modeling for vision decoding. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 22710–22720 (2023).
– reference: Li, R. et al. The perils and pitfalls of block design for EEG classification experiments. IEEE Trans. Pattern Anal. Mach. Intell. 43, 316–333 (2021). https://doi.org/10.1109/TPAMI.2020.2973153
– reference: Gupta, A. Human faces [dataset]. Kaggle (2021; accessed January 10, 2024). https://www.kaggle.com/datasets/ashwingupta3012/human-faces.
– reference: Huang, G. et al. Discrepancy between inter- and intra-subject variability in EEG-based motor imagery brain-computer interface: Evidence from multiple perspectives. Front. Neurosci. 17, 1122661 (2023). https://doi.org/10.3389/fnins.2023.1122661
– reference: Esser, P., Rombach, R. & Ommer, B. Taming transformers for high-resolution image synthesis. In 2021 IEEE/CVF conference on computer vision and pattern recognition (CVPR), 12868–12878 (IEEE, New York, 2021).
– reference: Ahmed, H., Wilbur, R. B., Bharadwaj, H. M. & Siskind, J. M. Object classification from randomized eeg trials. In 2021 IEEE/cvf conference on computer vision and pattern recognition (CVPR), 3844–3853, https://doi.org/10.1109/CVPR46437.2021.00384 (2021).
– reference: Smith, L. N. & Topin, N. Super-convergence: Very fast training of neural networks using large learning rates (2018). arXiv: 1708.07120.
– reference: Dhariwal, P. & Nichol, A. Diffusion Models Beat GANs on Image Synthesis. In Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P. S. & Vaughan, J. W. (eds.) Advances in Neural Information Processing Systems, 8780–8794 (Curran Associates, Inc., 2021).
– ident: 66228_CR7
  doi: 10.1109/CVPR.2017.479
– volume: 10
  start-page: 1
  year: 2015
  ident: 66228_CR9
  publication-title: PLOS ONE
  doi: 10.1371/journal.pone.0135697
– volume: 63
  year: 2021
  ident: 66228_CR45
  publication-title: Biomed. Signal Process. Control
  doi: 10.1016/j.bspc.2020.102174
– ident: 66228_CR39
  doi: 10.1109/CVPR46437.2021.01268
– ident: 66228_CR59
– volume: 58
  start-page: 199
  year: 2019
  ident: 66228_CR53
  publication-title: Curr. Opin. Neurobiol.
  doi: 10.1016/j.conb.2019.09.009
– volume: 9
  start-page: 3
  year: 2022
  ident: 66228_CR19
  publication-title: Sci. Data
  doi: 10.1038/s41597-021-01102-7
– volume: 105
  start-page: 165
  year: 2017
  ident: 66228_CR16
  publication-title: Neuropsychologia
  doi: 10.1016/j.neuropsychologia.2017.02.013
– ident: 66228_CR30
  doi: 10.1109/CVPR.2015.7298594
– ident: 66228_CR1
  doi: 10.1109/CVPR52729.2023.02175
– volume: 115
  start-page: 211
  year: 2015
  ident: 66228_CR13
  publication-title: Int. J. Comput. Vision
  doi: 10.1007/s11263-015-0816-y
– ident: 66228_CR55
  doi: 10.1109/IJCNN55064.2022.9892673
– ident: 66228_CR41
– volume: 40
  start-page: 6779
  year: 2020
  ident: 66228_CR17
  publication-title: J. Neurosci.
  doi: 10.1523/JNEUROSCI.0158-20.2020
– volume: 6
  start-page: 176
  year: 2002
  ident: 66228_CR58
  publication-title: Trends Cogn. Sci.
  doi: 10.1016/s1364-6613(02)01870-3
– volume: 52
  start-page: 3
  year: 1999
  ident: 66228_CR11
  publication-title: Electroencephal. Clin. Neurophysiol.
– volume: 60
  year: 2023
  ident: 66228_CR44
  publication-title: Psychophysiology
  doi: 10.1111/psyp.14320
– volume: 111
  start-page: 4590
  year: 2014
  ident: 66228_CR52
  publication-title: Proc. Natl. Acad. Sci.
  doi: 10.1073/pnas.1315275111
– ident: 66228_CR14
– volume: 13
  start-page: 1
  year: 2013
  ident: 66228_CR18
  publication-title: J. Vision
  doi: 10.1167/13.10.1
– start-page: 234
  volume-title: Medical image computing and computer-assisted intervention - MICCAI 2015
  year: 2015
  ident: 66228_CR38
– volume: 11
  start-page: 2706
  year: 2022
  ident: 66228_CR20
  publication-title: Electronics
  doi: 10.3390/electronics11172706
– volume: 40
  start-page: 6779
  year: 2020
  ident: 66228_CR56
  publication-title: J. Neurosci.
  doi: 10.1523/JNEUROSCI.0158-20.2020
– ident: 66228_CR29
  doi: 10.1109/CVPR.2017.195
– volume: 38
  start-page: 5391
  year: 2017
  ident: 66228_CR31
  publication-title: Human Brain Map.
  doi: 10.1002/hbm.23730
– volume: 15
  start-page: 384
  year: 2012
  ident: 66228_CR49
  publication-title: Twin Res. Human Genet.
  doi: 10.1017/thg.2012.6
– ident: 66228_CR28
  doi: 10.1109/IDSTA55301.2022.9923087
– ident: 66228_CR25
  doi: 10.1109/IJCNN48605.2020.9206750
– ident: 66228_CR46
– ident: 66228_CR42
– ident: 66228_CR60
  doi: 10.1145/3123266.3127907
– volume: 5
  start-page: 17
  year: 2018
  ident: 66228_CR48
  publication-title: eNeuro
  doi: 10.1523/ENEURO.0244-17.2017
– volume: 5
  start-page: 1247
  year: 2022
  ident: 66228_CR51
  publication-title: Nat. Commun. Biol.
  doi: 10.1038/s42003-022-04194-y
– volume: 352
  year: 2021
  ident: 66228_CR23
  publication-title: J. Neurosci. Methods
  doi: 10.1016/j.jneumeth.2021.109080
– volume: 15
  year: 2018
  ident: 66228_CR24
  publication-title: J. Neural Eng.
  doi: 10.1088/1741-2552/aace8c
– ident: 66228_CR5
  doi: 10.1080/2326263X.2023.2287719
– volume: 43
  start-page: 316
  year: 2021
  ident: 66228_CR8
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2020.2973153
– ident: 66228_CR43
  doi: 10.1109/CVPR46437.2021.00384
– ident: 66228_CR2
  doi: 10.48550/arXiv.2310.19812
– volume: 11
  start-page: 2706
  year: 2022
  ident: 66228_CR3
  publication-title: Electronics
  doi: 10.3390/electronics11172706
– volume: 31
  start-page: 710
  year: 2023
  ident: 66228_CR27
  publication-title: IEEE Trans. Neural Syst. Rehabil. Eng.
  doi: 10.1109/TNSRE.2022.3230250
– volume: 17
  start-page: 1
  year: 2022
  ident: 66228_CR4
  publication-title: PLoS ONE
  doi: 10.1371/journal.pone.0274847
– volume: 1
  start-page: 1367
  year: 2010
  ident: 66228_CR15
  publication-title: Front. Psychol.
  doi: 10.3389/fpsyg.2010.00028
– ident: 66228_CR34
  doi: 10.1145/1015330.1015435
– volume: 9
  start-page: 16
  year: 2015
  ident: 66228_CR22
  publication-title: Front. Neuroinform.
  doi: 10.3389/fninf.2015.00016
– volume: 17
  start-page: 1122661
  year: 2023
  ident: 66228_CR47
  publication-title: Front. Neurosci.
  doi: 10.3389/fnins.2023.1122661
– volume: 5
  start-page: 1
  year: 2011
  ident: 66228_CR10
  publication-title: PLoS ONE
  doi: 10.1371/journal.pone.0014465
– volume: 105
  start-page: 165
  year: 2017
  ident: 66228_CR57
  publication-title: Neuropsychologia
  doi: 10.1016/j.neuropsychologia.2017.02.013
– volume: 51
  start-page: 195
  year: 2019
  ident: 66228_CR21
  publication-title: Behav. Res. Methods
  doi: 10.3758/s13428-018-01193-y
– ident: 66228_CR12
– volume: 43
  start-page: 3833
  year: 2021
  ident: 66228_CR26
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2020.2995909
– ident: 66228_CR33
– ident: 66228_CR37
– volume: 60
  start-page: 1126
  year: 2008
  ident: 66228_CR50
  publication-title: Neuron
  doi: 10.1016/j.neuron.2008.10.043
– volume: 5
  year: 2019
  ident: 66228_CR6
  publication-title: Biomed. Phys. Eng. Express
  doi: 10.1088/2057-1976/ab302c
– volume: 14
  start-page: 365
  year: 2020
  ident: 66228_CR32
  publication-title: Front. Human Neurosci.
  doi: 10.3389/fnhum.2020.00365
– ident: 66228_CR35
  doi: 10.1117/12.2520589
– ident: 66228_CR40
– ident: 66228_CR36
  doi: 10.1109/CVPR52688.2022.01042
– volume: 74
  start-page: 12
  year: 2012
  ident: 66228_CR54
  publication-title: Neuron
  doi: 10.1016/j.neuron.2012.03.011
– ident: 66228_CR61
SSID ssj0000529419
Score 2.423213
Snippet Recent advances in visual decoding have enabled the classification and reconstruction of perceived images from the brain. However, previous approaches have...
Abstract Recent advances in visual decoding have enabled the classification and reconstruction of perceived images from the brain. However, previous approaches...
SourceID doaj
pubmedcentral
proquest
pubmed
crossref
springer
SourceType Open Website
Open Access Repository
Aggregation Database
Index Database
Publisher
StartPage 16436
SubjectTerms 631/378/116/2394
639/705/117
Adult
Algorithms
Brain - diagnostic imaging
Brain - physiology
Brain mapping
Brain Mapping - methods
Classification
Diffusion models
EEG
Electroencephalography - methods
Female
Functional magnetic resonance imaging
Humanities and Social Sciences
Humans
Image processing
Image Processing, Computer-Assisted - methods
Information processing
Magnetic Resonance Imaging - methods
Male
multidisciplinary
Neuroimaging
Photic Stimulation
Science
Science (multidisciplinary)
Visual stimuli
Young Adult
Title Image classification and reconstruction from low-density EEG
URI https://link.springer.com/article/10.1038/s41598-024-66228-1
https://www.ncbi.nlm.nih.gov/pubmed/39013929
https://www.proquest.com/docview/3081479190
https://www.proquest.com/docview/3081770916
https://pubmed.ncbi.nlm.nih.gov/PMC11252274
https://doaj.org/article/bf21c917b1054391b47010a50dc249c6
Volume 14
hasFullText 1
inHoldings 1
isFullTextHit
isPrint