Latent Representation Learning for Alzheimer's Disease Diagnosis With Incomplete Multi-Modality Neuroimaging and Genetic Data

Bibliographic Details
Published in IEEE transactions on medical imaging Vol. 38; no. 10; pp. 2411-2422
Main Authors Zhou, Tao; Liu, Mingxia; Thung, Kim-Han; Shen, Dinggang
Format Journal Article
Language English
Published United States IEEE 01.10.2019
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
ISSN 0278-0062
1558-254X
DOI 10.1109/TMI.2019.2913158

Abstract The fusion of complementary information contained in multi-modality data [e.g., magnetic resonance imaging (MRI), positron emission tomography (PET), and genetic data] has advanced the progress of automated Alzheimer's disease (AD) diagnosis. However, multi-modality based AD diagnostic models are often hindered by the missing data, i.e., not all the subjects have complete multi-modality data. One simple solution used by many previous studies is to discard samples with missing modalities. However, this significantly reduces the number of training samples, thus leading to a sub-optimal classification model. Furthermore, when building the classification model, most existing methods simply concatenate features from different modalities into a single feature vector without considering their underlying associations. As features from different modalities are often closely related (e.g., MRI and PET features are extracted from the same brain region), utilizing their inter-modality associations may improve the robustness of the diagnostic model. To this end, we propose a novel latent representation learning method for multi-modality based AD diagnosis. Specifically, we use all the available samples (including samples with incomplete modality data) to learn a latent representation space. Within this space, we not only use samples with complete multi-modality data to learn a common latent representation, but also use samples with incomplete multi-modality data to learn independent modality-specific latent representations. We then project the latent representations to the label space for AD diagnosis. We perform experiments using 737 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, and the experimental results verify the effectiveness of our proposed method.
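
The abstract describes the method at a high level: every subject, complete or not, contributes to a learned latent space; complete-modality samples shape a common latent representation, incomplete samples contribute modality-specific ones, and the latent codes are projected to the label space for classification. The sketch below is a minimal illustration of that idea, not the authors' published algorithm: it learns a single shared latent matrix by masked matrix factorization, so subjects missing a modality are simply excluded from that modality's reconstruction loss rather than discarded, and then fits a ridge projection from latents to labels. The function names, the plain gradient-descent optimizer, and all hyperparameter values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def fit_latent(Xs, masks, d=16, lam=1e-2, lr=1e-2, n_iter=500):
    """Illustrative sketch: learn a shared latent matrix H (n x d) and
    per-modality loadings W_m (d x p_m) from partially observed data.

    Xs    : list of (n, p_m) feature matrices; rows for subjects missing
            modality m may hold zeros (they are masked out of the loss).
    masks : list of boolean (n,) arrays; True where modality m is observed.
    """
    n = Xs[0].shape[0]
    H = 0.01 * rng.standard_normal((n, d))
    Ws = [0.01 * rng.standard_normal((d, X.shape[1])) for X in Xs]
    for _ in range(n_iter):
        for k, (X, m) in enumerate(zip(Xs, masks)):
            W = Ws[k]
            # Reconstruction residual restricted to observed subjects, so an
            # incomplete sample still shapes H through its available modalities.
            R = (H @ W - X) * m[:, None]
            gH = R @ W.T + lam * H          # gradient of the masked loss w.r.t. H
            gW = H.T @ R + lam * W          # gradient w.r.t. the loadings
            H -= lr * gH
            Ws[k] = W - lr * gW
    return H, Ws

def project_to_labels(H, Y, lam=1e-1):
    """Ridge projection from the latent space to the one-hot label space."""
    d = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(d), H.T @ Y)

# Toy usage: 100 subjects, MRI observed for all, PET missing for the last 40.
n = 100
mri = rng.standard_normal((n, 90))              # e.g., ROI-based MRI features
pet = rng.standard_normal((n, 90))
pet_mask = np.ones(n, dtype=bool)
pet_mask[60:] = False
pet[~pet_mask] = 0.0                            # placeholder rows, masked in the loss

H, _ = fit_latent([mri, pet], [np.ones(n, dtype=bool), pet_mask])
Y = np.eye(2)[rng.integers(0, 2, size=n)]       # random binary labels, one-hot
P = project_to_labels(H, Y)
pred = (H @ P).argmax(axis=1)                    # classify in the projected label space

The masked residual is what lets a subject with only MRI still inform H, which is the abstract's central point about not discarding incomplete samples; the paper's actual model additionally separates a common latent representation from modality-specific ones, which this single shared H does not capture.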
Author_xml – sequence: 1
  givenname: Tao
  orcidid: 0000-0002-2592-1688
  surname: Zhou
  fullname: Zhou, Tao
  email: taozhou.dreams@gmail.com
  organization: Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
– sequence: 2
  givenname: Mingxia
  surname: Liu
  fullname: Liu, Mingxia
  email: mxliu@med.unc.edu
  organization: Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
– sequence: 3
  givenname: Kim-Han
  surname: Thung
  fullname: Thung, Kim-Han
  email: henrythung@gmail.com
  organization: Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
– sequence: 4
  givenname: Dinggang
  orcidid: 0000-0002-7934-5698
  surname: Shen
  fullname: Shen, Dinggang
  email: dgshen@med.unc.edu
  organization: Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
BackLink https://www.ncbi.nlm.nih.gov/pubmed/31021792 (View this record in MEDLINE/PubMed)
CODEN ITMID4
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2019
Discipline Medicine
Engineering
EISSN 1558-254X
EndPage 2422
ExternalDocumentID PMC8034601
31021792
10_1109_TMI_2019_2913158
8698846
Genre orig-research
Journal Article
Research Support, N.I.H., Extramural
GrantInformation_xml – fundername: National Institutes of Health
  grantid: EB006733; EB008374; EB009634; MH100217; AG041721; AG042599
  funderid: 10.13039/100000002
– fundername: NIBIB NIH HHS
  grantid: R01 EB006733
– fundername: NIBIB NIH HHS
  grantid: R01 EB009634
– fundername: NIA NIH HHS
  grantid: R01 AG042599
– fundername: NIBIB NIH HHS
  grantid: R01 EB008374
– fundername: NIA NIH HHS
  grantid: R01 AG041721
– fundername: NIMH NIH HHS
  grantid: R01 MH100217
Issue 10
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
ORCID 0000-0002-2592-1688
0000-0002-7934-5698
OpenAccessLink https://www.ncbi.nlm.nih.gov/pmc/articles/8034601
PMID 31021792
PQID 2300330610
PQPubID 85460
PageCount 12
PublicationTitle IEEE transactions on medical imaging
PublicationTitleAbbrev TMI
PublicationTitleAlternate IEEE Trans Med Imaging
PublicationYear 2019
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
StartPage 2411
SubjectTerms Aged
Aged, 80 and over
Algorithms
Alzheimer Disease - diagnostic imaging
Alzheimer Disease - genetics
Alzheimer's disease
Brain - diagnostic imaging
Classification
Databases, Factual
Diagnosis
Diagnosis, Computer-Assisted - methods
Diagnostic systems
Feature extraction
Female
Genetic Association Studies
Genetics
Humans
incomplete multi-modality data
latent representation space
Learning
Machine Learning
Magnetic resonance imaging
Male
Medical diagnosis
Medical imaging
Missing data
multi-modality data
Multimodal Imaging - methods
Neuroimaging
Neuroimaging - methods
Neurology
NMR
Nuclear magnetic resonance
Polymorphism, Single Nucleotide - genetics
Positron emission
Positron emission tomography
Representations
Tomography
URI https://ieeexplore.ieee.org/document/8698846
https://www.ncbi.nlm.nih.gov/pubmed/31021792
https://www.proquest.com/docview/2300330610
https://www.proquest.com/docview/2216284570
https://pubmed.ncbi.nlm.nih.gov/PMC8034601
Volume 38