A mixture-of-experts deep generative model for integrated analysis of single-cell multiomics data

Bibliographic Details
Published in: Cell Reports Methods, Vol. 1, no. 5, p. 100071
Main Authors: Minoura, Kodai; Abe, Ko; Nam, Hyunha; Nishikawa, Hiroyoshi; Shimamura, Teppei
Format: Journal Article
Language: English
Published: United States: Elsevier Inc., 27.09.2021

Summary: The recent development of single-cell multiomics analysis has enabled simultaneous detection of multiple traits at the single-cell level, providing deeper insights into cellular phenotypes and functions in diverse tissues. However, it is currently challenging to infer joint representations and learn relationships among multiple modalities from complex multimodal single-cell data. Here, we present scMM, a novel deep generative model-based framework for the extraction of interpretable joint representations and crossmodal generation. scMM addresses the complexity of the data by leveraging a mixture-of-experts multimodal variational autoencoder. The pseudocell generation strategy of scMM compensates for the limited interpretability of deep learning models, and the proposed approach experimentally discovered multimodal regulatory programs associated with latent dimensions. Analysis of recently produced datasets validated that scMM facilitates high-resolution clustering with rich interpretability. Furthermore, we show that crossmodal generation by scMM leads to more precise prediction and data integration than state-of-the-art and conventional approaches.

Highlights:
• scMM learns low-dimensional joint representations from single-cell multiomics data
• scMM detects previously overlooked cell populations in single-cell multimodal data
• Pseudocell generation enables scMM to learn interpretable latent dimensions
• scMM accurately predicts missing modalities by crossmodal generation

Revolutionary single-cell multiomics technologies have enabled acquiring the characteristics of individual cells across multiple modalities, such as the transcriptome, epigenome, and surface proteins. However, computational methods for integrated analysis of complex, high-dimensional multimodal single-cell data are currently limited. Here, we present scMM, a mixture-of-experts deep generative model for integrated analysis of single-cell multiomics data. scMM effectively infers interpretable joint representations from multimodal single-cell data. In addition, scMM learns underlying relationships across modalities, enabling crossmodal generation of single-cell data.

Minoura et al. report the development of scMM, a multimodal deep generative model-based framework for analyzing single-cell multiomics data. scMM extracts biologically interpretable joint representations from high-dimensional multimodal data that can be used for downstream analyses. In addition, it learns relationships among single-cell modalities, enabling many-to-many prediction of missing modalities.
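The mixture-of-experts idea behind scMM's joint posterior can be sketched in a few lines. This is a minimal illustration under stated assumptions, not scMM's actual implementation: the encoder functions, modality names, and the use of simple Gaussian experts are hypothetical stand-ins for the neural-network encoders and posterior family used in the paper. The core point it shows is that the joint posterior is a uniform mixture of per-modality experts, q(z|x) = (1/M) Σ_m q_m(z|x_m), so sampling reduces to picking one modality's expert at random and drawing from it.

```python
import random

# Hypothetical per-modality "encoders" returning the (mean, std) of a
# Gaussian expert posterior q_m(z | x_m). In scMM these would be neural
# networks; here they are toy stand-ins for illustration only.
def rna_encoder(x_rna):
    return (sum(x_rna) / len(x_rna), 1.0)

def protein_encoder(x_protein):
    return (max(x_protein), 0.5)

def moe_sample(x_rna, x_protein, rng):
    """Draw one latent sample z from the mixture-of-experts posterior
    q(z|x) = (1/M) * sum_m q_m(z|x_m): choose an expert uniformly at
    random, then sample from that expert's Gaussian."""
    experts = [rna_encoder(x_rna), protein_encoder(x_protein)]
    mean, std = rng.choice(experts)   # uniform mixture over M experts
    return rng.gauss(mean, std)
```

Because each expert depends on only one modality, dropping a modality at test time simply removes its expert from the mixture, which is what makes crossmodal generation (encoding with one modality, decoding the other) natural in this framework.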
ISSN: 2667-2375
DOI: 10.1016/j.crmeth.2021.100071