
From: InClust+: the deep generative framework with mask modules for multimodal data integration, imputation, and cross-modal generation

Fig. 1

Architecture of inClust+ and its applications. A The architecture of inClust+. InClust+ is based on inClust, with a VAE backbone (an encoder, a sampling part, and a decoder) and three built-in functional modules (an embedding layer that embeds auxiliary information into the latent space, a vector arithmetic part that performs information integration, and a classifier that clusters cells into groups). In addition, two mask modules designed for multimodal data processing augment the original inClust: an input-mask module in front of the encoder and an output-mask module behind the decoder. Each mask module filters out unwanted values, enabling multimodal integration and translation. B–D Applications of inClust+. B Cross-modal imputation by inClust+. There are two datasets, one from scRNA-seq (blue) and the other from MERFISH with some missing genes (red). InClust+ can impute the missing genes in the MERFISH dataset (in the purple box) by referring to the scRNA-seq dataset. C Cross-modal integration by inClust+. There are two paired datasets, one containing gene expression (blue) and protein abundance (green), and the other containing protein abundance (red) and chromatin accessibility (purple). InClust+ can integrate all three modalities across the two datasets. D Cross-modal generation by inClust+. There are three datasets, two of which are paired datasets with gene expression (blue and red) and protein abundance (green and orange). The third is a monomodal dataset with only gene expression data (purple). InClust+ can generate the protein abundance data (in the red box) for the third, monomodal dataset by referring to the paired datasets.
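The component layout described in panel A (masked input, encoder, sampling, covariate embedding with vector arithmetic, classifier, decoder, masked output) can be illustrated with a short sketch. The PyTorch code below is a minimal, hypothetical rendering of that layout; the class name `MaskedVAE`, the layer sizes, and the mask mechanism are assumptions for illustration only and do not reproduce the authors' implementation.

```python
# Illustrative sketch of a VAE with input/output mask modules, following the
# caption's description of inClust+. Not the authors' code.
import torch
import torch.nn as nn


class MaskedVAE(nn.Module):
    def __init__(self, n_features, n_covariates, n_clusters, latent_dim=32):
        super().__init__()
        # Encoder maps the (masked) multimodal input to latent mean/log-variance.
        self.encoder = nn.Sequential(nn.Linear(n_features, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        # Embedding layer places auxiliary information (e.g. batch/modality
        # labels) in the latent space so it can be used in vector arithmetic.
        self.covariate_embedding = nn.Embedding(n_covariates, latent_dim)
        # Classifier assigns latent vectors to cell groups.
        self.classifier = nn.Linear(latent_dim, n_clusters)
        # Decoder reconstructs the full feature vector; the output mask keeps
        # only the features wanted for the target modality.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, n_features)
        )

    def forward(self, x, covariate, input_mask, output_mask):
        # Input-mask module: filter out features missing in this modality.
        h = self.encoder(x * input_mask)
        mu, logvar = self.mu(h), self.logvar(h)
        # Sampling part (reparameterization trick).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # Vector arithmetic part: integrate by removing the covariate embedding.
        z_integrated = z - self.covariate_embedding(covariate)
        cluster_logits = self.classifier(z_integrated)
        # Output-mask module: keep only the requested output features.
        recon = self.decoder(z_integrated) * output_mask
        return recon, mu, logvar, cluster_logits
```

In this reading, cross-modal imputation or generation (panels B and D) amounts to encoding with an input mask matching the observed features and decoding with an output mask that exposes the missing modality.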
