Cancer survival prediction by learning comprehensive deep feature representation for multiple types of genetic data
BMC Bioinformatics volume 24, Article number: 267 (2023)
Abstract
Background
Cancer is one of the leading causes of death worldwide. Accurately predicting survival time is important because it helps clinicians devise appropriate therapeutic schemes. Cancer can be characterized by varied molecular features, clinical behaviors and morphological appearances. However, cancer heterogeneity often makes patient samples with different risks (i.e., short and long survival times) inseparable, leading to unsatisfactory prediction results. Clinical studies have shown that genetic data tend to contain more molecular biomarkers associated with cancer, so integrating multiple types of genetic data may be a feasible way to deal with cancer heterogeneity. Although multi-type genetic data have been used in existing work, how to learn more effective features for cancer survival prediction has not been well studied.
Results
To this end, we propose a deep learning approach that reduces the negative impact of cancer heterogeneity and improves cancer survival prediction. It represents each type of genetic data as shared and specific features, which capture the consensus and complementary information among all data types. We collect mRNA expression, DNA methylation and microRNA expression data for four cancers to conduct experiments.
Conclusions
Experimental results demonstrate that our approach substantially outperforms established integrative methods and is effective for cancer survival prediction.
Introduction
As morbidity and mortality rates rise, cancer is becoming a leading cause of death worldwide [1,2,3]. According to the global cancer report, an estimated 14.10 million new cancer cases and 8.20 million cancer deaths occurred in 2012, and in 2018 the numbers of new cases and deaths reached 18.1 million (9.5 million men and 8.6 million women) and 9.6 million, respectively [4]. Meanwhile, cancer has become more common among young people. It is therefore important to accurately predict survival time, which helps clinicians provide proper therapeutic guidance to improve the survival rate and quality of life of cancer patients [5, 6].
Cancer survival prediction has been an interesting and challenging problem in cancer research over the past few decades [7,8,9]. Cancer is a heterogeneous disease characterized by varied molecular features, clinical behaviors, morphological appearances and responses to therapy. Consequently, the genes and phenotypes of cells of the same pattern and stage can still differ, which poses a major challenge for survival prediction [10,11,12]. Figure 1 visualizes the embedding feature spaces of DNA methylation, mRNA expression and microRNA expression for the glioblastoma multiforme (GBM) dataset by reducing the dimensionality of the original features. Specifically, we use t-SNE (t-distributed stochastic neighbor embedding) [13], a commonly used visualization method, to display a low-dimensional feature space of the genetic data. t-SNE is a nonlinear dimensionality reduction technique that preserves the local and global distribution structure of a dataset. As the figure shows, patient samples with different survival times are mixed together and difficult to distinguish in the embedding feature spaces of all three types of genetic data, which further illustrates the difficulty of cancer survival prediction.
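As a concrete illustration of this kind of embedding, the following sketch projects high-dimensional per-patient features into two dimensions with t-SNE. It assumes scikit-learn is available; the data matrix here is a random stand-in for a real methylation or expression profile, not the GBM dataset.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Random stand-in for a genetic data matrix: 60 hypothetical patients x 200 features.
X = rng.standard_normal((60, 200))

# Perplexity must be smaller than the number of samples; 2-D output for plotting.
tsne = TSNE(n_components=2, perplexity=10, init="pca", random_state=0)
embedding = tsne.fit_transform(X)

print(embedding.shape)  # (60, 2): one 2-D point per patient, ready to scatter-plot
```

Coloring each 2-D point by its survival label then reproduces the kind of mixed-cluster picture described above.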
Survival analysis is usually performed on heterogeneous data sources, including low-dimensional clinical data (age, sex, cancer grade, body fat rate, etc.) [14], pathological images [15,16,17,18], and multi-type genetic data [19]. For example, Chen et al. proposed an interpretable strategy for end-to-end multimodal fusion of histology image and genomic (mutations, CNV, RNA-Seq) features [20]. Cheerla et al. designed an unsupervised encoder to integrate four data modalities (gene expression, miRNA, clinical data and whole-slide images) into a single feature vector per patient [21]. Vale-Silva et al. combined clinical, imaging, and several high-dimensional omics modalities for cancer survival prediction [22]. Compared with single-source data, multi-source heterogeneous data describe cancer from different perspectives and thus support a more comprehensive understanding of the disease [23, 24]. Multi-source heterogeneous data can be regarded as multimodal data, which contain not only substantial consensus information but also abundant complementary information [25]. From an information perspective, consensus information means that each modality contains information shared by all modalities (inter-modal shared information); complementary information means that each modality also contains information unique to itself (intra-modal specific information) [26].
Existing data-integration-based cancer survival prediction methods fall into three main paradigms:

(i)
fusion methods [27,28,29,30] based on concatenation integrate multiple data types directly. This is problematic because concatenating heterogeneous data sources neglects inter-modal discriminant information. In addition, this strategy produces very high-dimensional feature vectors, which hampers feature learning [31].

(ii)
fusion methods [21, 32,33,34] learn only consensus information. This strategy exploits the consensus among heterogeneous data sources but ignores their diversity, and so fails to capture comprehensive information about the cancer. For a heterogeneous disease, making full use of the complementarity between data types is essential for a comprehensive understanding.

(iii)
works [25] and [35] use similarity network fusion to integrate multiple data types. They learn consensus and complementary information from the relations between patient samples, but ignore fine-grained feature representation information, especially for gene sequences with thousands of dimensions.
In general, although multi-type genetic data have been used in existing work, how to learn more effective features for cancer survival prediction, and how to interpret them at the feature level, has not been well studied. Moreover, deep learning has proven to have strong feature representation and classification ability across many tasks. In this paper, we use deep learning to obtain more effective feature representations of multi-type genetic data and improve cancer survival prediction at the feature level, thereby reducing the negative impact of data heterogeneity. We also explain the role of the learned deep features in terms of extracting consensus and complementary information. Figure 2 illustrates the proposed deep learning scheme for cancer survival prediction. The consensus and complementary representations capture comprehensive survival information about cancer patients: the consensus representation captures modality-invariant survival information, while the specific representations of mRNA expression, DNA methylation and miRNA expression capture modality-specific survival information.
The main contributions of this study are summarized as follows:

(1)
This study addresses the problem of data heterogeneity in cancer survival prediction and proposes a deep learning approach that effectively integrates multi-type genetic data. As shown in Fig. 2, by fully integrating multi-type genetic data, survivors with different survival times (i.e., short and long) can be well separated in the feature space built by our approach, meaning that the negative impact of data heterogeneity on survival prediction is substantially alleviated.

(2)
The proposed deep learning approach represents each type of genetic data as shared and specific features, which capture the consensus and complementary information among all data types. We then fuse the shared and specific features of each data type by concatenation and use the fused features for cancer survival prediction, as shown in Fig. 2. To strengthen the representation ability of the deep features, we impose a Euclidean distance constraint on the shared feature learning network and an orthogonality constraint on the specific feature learning network, layer by layer.

(3)
We conduct extensive experiments on the glioblastoma multiforme (GBM), kidney renal clear cell carcinoma (KRCCC), lung squamous cell carcinoma (LSCC) and breast invasive carcinoma (BIC) datasets. The results show that our approach achieves higher prediction performance than competing methods, demonstrating that it significantly improves cancer survival prediction and can help clinicians provide proper therapeutic guidance for cancer patients.
Proposed methods
Figure 3 shows the proposed deep learning network for shared and specific feature representation in cancer survival prediction. First, a fully connected neural network maps the original features of all data types to the same dimension. Second, a multi-stream deep shared network with parameters shared across all data types learns the consensus information, while a separate deep specific network per data type learns the complementary information. A Euclidean distance constraint reinforces the learning of consensus information, and an orthogonality constraint reinforces the learning of complementary information. Finally, to improve data separability, we introduce a contrastive loss that pulls samples of the same class closer together and pushes samples of different classes farther apart.
Feature mapping
The data integration strategy designed in this paper can learn consensus and complementary information only when the feature dimensions of all data types are consistent, so the features of every genetic data type must be mapped to the same dimension. A common approach is the Max-Relevance and Min-Redundancy (mRMR) feature selection algorithm [27, 36, 37], but it ignores interactions between gene sites in a sequence. We instead design a three-layer fully connected neural network for feature mapping. Because increasing dimensionality would introduce noise, we reduce all data types to a common dimension. In addition, the miRNA data are already relatively low-dimensional (from 329 dimensions for KRCCC to 534 for GBM) and unsuitable for further reduction, so we use the miRNA dimension as the target mapping dimension. Table 1 lists the dimensionality values used in the feature mapping process.
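The mapping step can be sketched as a small numpy forward pass. The dimensions below are hypothetical stand-ins (a 2000-dim profile reduced to the 534-dim miRNA target), the weights are random and untrained, and only inference is shown; this is not the paper's trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mapper(in_dim, hidden_dims, out_dim):
    """Weights of a three-layer fully connected mapper (random init; training omitted)."""
    dims = [in_dim] + hidden_dims + [out_dim]
    return [(rng.standard_normal((d_out, d_in)) * 0.01, np.zeros(d_out))
            for d_in, d_out in zip(dims[:-1], dims[1:])]

def map_features(layers, x):
    """Forward pass: ReLU on hidden layers, linear output layer."""
    for i, (W, b) in enumerate(layers):
        x = W @ x + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)
    return x

# Hypothetical: reduce a 2000-dim mRNA profile to the common 534-dim target.
mapper = make_mapper(2000, [1024, 768], 534)
z = map_features(mapper, rng.standard_normal(2000))
print(z.shape)  # (534,)
```

The same mapper structure, with different input sizes, brings every data type to the shared dimension before the shared/specific networks are applied.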
Shared and specific deep feature learning
Let \(X=\left\{ x_{i}\in {\mathbb {R}}^{q} \right\} _{i=1}^{N}\) be a set of N samples, where q is the dimension of each sample. Let \(X_{k}=\left\{ x_{k,i}\in {\mathbb {R}}^{q^{k}} \right\} _{i=1}^{N}\) denote the feature set of X in data type k, where \(x_{k,i}\) is the \(k\text{th}\) representation of \(x_i\) and \(q^{k}\) is its dimension. Here \(k = 1,2, \ldots ,K\), with K the total number of data types. In general, the \(k\text{th}\) representation \(x_{k,i}\) and the \(l\text{th}\) representation \(x_{l,i}\) of \(x_i\), \(k \ne l\), differ, because they usually come from different spaces. Directly concatenating them is therefore not physically meaningful and cannot fully exploit their complementary property.
Since each data type represents the same object from a different point of view, different data types contain both specific information and common shared information. For \(x_{k,i}\), the shared feature learning network projects it to obtain consensus information, \(h_{k,i}^c = W^c x_{k,i}\), where \(W^{c}\in {\mathbb {R}}^{r^{c}\times q_{k}}\) is shared across all data types; the specific feature learning network projects it to obtain complementary information, \(h_{k,i}^s = W_k^s x_{k,i}\), where \(W_{k}^{s}\in {\mathbb {R}}^{r_{k}^{s}\times q_{k}}\). The learned feature representation of \(x_{k,i}\) can be written as:

$$h_{k,i} = \left[ h_{k,i}^{c};\; h_{k,i}^{s} \right].$$
Therefore, the final representation of \(x_i\) across all data types can be denoted as:

$$h_{i} = \left[ h_{1,i}^{c};\; h_{1,i}^{s};\; \ldots;\; h_{K,i}^{c};\; h_{K,i}^{s} \right].$$
Since the shared information from different data types is almost identical, it is unnecessary to include all of it in the final representation. Instead, we use the average value:

$$h_{i}^{c} = \frac{1}{K}\sum _{k=1}^{K} h_{k,i}^{c}.$$
Finally, the resulting representation of \(x_i\) can be written as:

$$h_{i} = \left[ h_{i}^{c};\; h_{1,i}^{s};\; h_{2,i}^{s};\; \ldots;\; h_{K,i}^{s} \right].$$
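The projection, averaging, and concatenation steps above can be sketched in a few lines of numpy. All dimensions are hypothetical and the projections are random and untrained; the point is only the shape of the fused representation.

```python
import numpy as np

rng = np.random.default_rng(1)
K, q, r_c, r_s = 3, 64, 32, 32        # data types, mapped dim, shared dim, specific dim

W_c = rng.standard_normal((r_c, q))                       # one shared projection for all types
W_s = [rng.standard_normal((r_s, q)) for _ in range(K)]   # one specific projection per type

x = [rng.standard_normal(q) for _ in range(K)]            # one sample seen in K data types

h_c = [W_c @ xk for xk in x]                              # consensus features per type
h_s = [W_s[k] @ x[k] for k in range(K)]                   # complementary features per type

h_shared = np.mean(h_c, axis=0)                           # average the near-identical shared parts
h_final = np.concatenate([h_shared] + h_s)                # final fused representation

print(h_final.shape)  # (r_c + K * r_s,) = (128,)
```

Averaging the shared parts instead of concatenating them keeps the fused vector compact, which matters when K grows.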
Layer-by-layer Euclidean distance and orthogonality constraints
We impose an orthogonality constraint between each layer of the shared and specific feature learning networks to separate shared from specific information and prevent them from contaminating each other. We further impose a Euclidean distance constraint between corresponding layers of the multi-stream shared feature learning networks to ensure the similarity of the consensus information. Details are as follows.
Let \(H_k^c(m)\) and \(H_k^s(m)\) be the outputs of the shared and specific networks at layer m. The orthogonality loss between \(H_k^c(m)\) and \(H_k^s(m)\) is defined as:

$$L_{orth}(m) = \left\| H_k^c(m)^{\top } H_k^s(m) \right\| _{F}^{2},$$

where \(\left\| \cdot \right\| _{F}^{2}\) is the squared Frobenius norm.
Let \(H_k^c(m)\) and \(H_l^c(m)\) be the outputs of the same layer for data types k and l in the shared feature learning network. The Euclidean distance loss between \(H_k^c(m)\) and \(H_l^c(m)\) is defined as:

$$L_{dist}(m) = \frac{1}{N}\sum _{n=1}^{N} d_{n}, \quad d_{n} = \left\| h_{k,n}^{c} - h_{l,n}^{c} \right\| ^{2},$$

where \(h_{k,n}^c\) and \(h_{l,n}^c\) are the shared representations of sample \(x_n\) in data types k and l, respectively.
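Both constraint losses reduce to a few lines of numpy (rows are samples). This is a sketch of the loss terms only, not of the full training loop, and the matrices below are random stand-ins.

```python
import numpy as np

def orthogonality_loss(H_c, H_s):
    """Squared Frobenius norm of H_c^T H_s: pushes the shared and specific
    layer outputs toward mutually orthogonal subspaces."""
    return np.sum((H_c.T @ H_s) ** 2)

def euclidean_loss(H_k, H_l):
    """Mean squared distance between two streams' shared outputs at the same layer."""
    return np.mean(np.sum((H_k - H_l) ** 2, axis=1))

rng = np.random.default_rng(2)
H = rng.standard_normal((10, 8))                 # 10 samples, 8-dim layer output

assert euclidean_loss(H, H) == 0.0               # identical shared outputs cost nothing
assert orthogonality_loss(H, np.zeros((10, 8))) == 0.0
assert orthogonality_loss(H, H) > 0.0            # fully overlapping subspaces are penalised
```

In training, these terms would be summed over every layer m and every pair of streams, then added to the task loss.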
Classification
After integrating the multiple genetic data types into a unified representation, we classify it with a multi-layer network trained with the cross-entropy loss.
To improve data separability, we also apply a contrastive loss. Specifically, for a pair of samples \(x_i\) and \(x_j\), let \(h_i\) and \(h_j\) denote their features extracted by the feature learning network. The distance between them is computed as:

$$d(h_i, h_j) = \left\| h_i - h_j \right\| _{2}.$$
The contrastive loss between \(h_i\) and \(h_j\) is defined as:

$$L_{con} = \frac{1}{2N}\sum _{n=1}^{N} \left[ y_n d_n^2 + (1 - y_n)\max (Margin - d_n, 0)^2 \right],$$

where \(d_n\) is the distance of the \(n\text{th}\) sample pair, Margin is a threshold, and \(y_n\) denotes whether the paired samples are from the same class: \(y_n = 1\) if so, and \(y_n = 0\) otherwise.
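A minimal per-pair version of this loss (the batch loss averages it over all pairs; margin value here is illustrative):

```python
import numpy as np

def contrastive_loss(h_i, h_j, y, margin=3.0):
    """Per-pair contrastive loss: y=1 pulls same-class pairs together,
    y=0 pushes different-class pairs at least `margin` apart."""
    d = np.linalg.norm(h_i - h_j)
    return y * d ** 2 + (1 - y) * max(margin - d, 0.0) ** 2

a = np.array([1.0, 0.0])
b = np.array([1.0, 0.0])   # same point as a
c = np.array([10.0, 0.0])  # far from a

assert contrastive_loss(a, b, y=1) == 0.0   # identical same-class pair: no penalty
assert contrastive_loss(a, c, y=0) == 0.0   # distance 9 exceeds the margin: no penalty
assert contrastive_loss(a, b, y=0) == 9.0   # different-class pair at distance 0: (3-0)^2
```

The margin stops the loss from pushing already well-separated negative pairs further apart, which keeps the embedding from collapsing or exploding.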
Experiments
Datasets
Four cancer datasets, glioblastoma multiforme (GBM), kidney renal clear cell carcinoma (KRCCC), lung squamous cell carcinoma (LSCC) and breast invasive carcinoma (BIC), are used to evaluate our approach. For each dataset we collect three types of genetic data: DNA methylation, mRNA expression and miRNA expression. The datasets are obtained from http://compbio.cs.toronto.edu/SNF/ and were provided and preprocessed by [25], which downloaded them from the TCGA website and applied three preprocessing steps: sample selection, missing-data imputation and normalization. In detail: (i) a patient sample with more than 20% missing data in a given data type is removed; (ii) a gene with more than 20% missing values is filtered out, and otherwise its missing values are filled by k-nearest-neighbor imputation; (iii) the data samples are normalized by z-score transformation.
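The three preprocessing steps can be sketched as follows. For brevity the paper's k-nearest-neighbor imputation is replaced here by column-mean imputation, and the tiny matrix is invented to exercise each filter once.

```python
import numpy as np

def preprocess(X, max_missing=0.2):
    """Sketch of the three steps: drop bad samples, drop bad genes,
    impute the rest (mean stand-in for kNN), then z-score per gene."""
    X = X[np.isnan(X).mean(axis=1) <= max_missing]        # (i) samples >20% missing removed
    X = X[:, np.isnan(X).mean(axis=0) <= max_missing]     # (ii) genes >20% missing removed
    col_mean = np.nanmean(X, axis=0)                      # impute what remains
    X = np.where(np.isnan(X), col_mean, X)
    return (X - X.mean(axis=0)) / X.std(axis=0)           # (iii) z-score transformation

X = np.array([[1.0,    2.0,    3.0,    4.0,  np.nan],
              [2.0,    4.0,    6.0,    8.0,  np.nan],
              [3.0,    6.0,    9.0,   12.0,  5.0],
              [np.nan, np.nan, np.nan, 4.0,  5.0]])   # last sample: 60% missing, removed

Z = preprocess(X)
print(Z.shape)  # (3, 4): one sample and one gene filtered out
```

After filtering, each remaining gene column has zero mean and unit variance, matching the z-score step.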
Figure 4 illustrates the survival time distributions of the four cancer datasets: survival times range over 0–118 months for GBM, 0–113 for KRCCC, 0–125 for LSCC and 0–192 for BIC, with median survivals of 14, 45, 19 and 26 months, respectively. Based on these distributions and medians, thresholds of 2, 4, 2 and 3 years are used to divide the patients of the four cancer types into two classes. Table 2 summarizes the properties of the four datasets. For classification, short-term patients are labeled 0 and long-term patients are labeled 1. The initial feature dimensions of the three genetic data types differ substantially across datasets.
Evaluation
We evaluate the proposed method with ten-fold cross validation. Specifically, we randomly divide the long-time survivors and the short-time survivors into ten subsets each. In each round, one subset of long-time survivors combined with one subset of short-time survivors serves as the validation set, seven subsets of each serve as the training set, and the remaining two subsets of each serve as the test set. The reported prediction score is the average over the ten rounds. We use five metrics to measure model performance: accuracy (Acc), recall, precision (Pre), the ROC curve and the AUC (area under the ROC curve), defined as follows:
$$Acc = \frac{TP + TN}{TP + TN + FP + FN}, \quad Recall = \frac{TP}{TP + FN}, \quad Pre = \frac{TP}{TP + FP},$$

where true positives (TP) are cases correctly identified as short-survival, false positives (FP) are cases incorrectly identified as short-survival, true negatives (TN) are cases correctly identified as long-survival, and false negatives (FN) are cases incorrectly identified as long-survival.
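With short-survival as the positive class, these three metrics reduce to a few lines; the confusion-matrix counts below are hypothetical, for illustration only.

```python
def metrics(tp, fp, tn, fn):
    """Accuracy, recall and precision from confusion-matrix counts
    (short-survival is the positive class)."""
    acc = (tp + tn) / (tp + fp + tn + fn)
    recall = tp / (tp + fn)       # fraction of short-survival cases that were caught
    precision = tp / (tp + fp)    # fraction of predicted short-survival that were right
    return acc, recall, precision

# Hypothetical counts for one test fold.
acc, rec, pre = metrics(tp=8, fp=2, tn=7, fn=3)
print(acc, pre)  # 0.75 0.8
```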
Hyperparameter selection
The designed cancer survival prediction model consists of three modules: the feature mapping network, the shared and specific representation learning network, and the classification network. The feature mapping network is a three-layer fully connected network whose layer sizes are given in Table 1. The shared and specific representation learning networks have two hidden fully connected layers of sizes 256 and 128 and an output layer of size 32; each layer uses the ReLU activation function. The classification network is a three-layer fully connected network whose hidden and output layers have sizes 32 and 2, respectively.
To avoid overfitting, we do not search hyperparameters separately for each cancer dataset; instead we tune them on the GBM dataset and reuse the selected values for the other datasets. The margin is searched over the grid [1, 2, 3, 4, 5], the learning rate of the Adam optimizer over [0.0001, 0.0003, 0.0005, 0.0007, 0.0009, 0.001], and the training batch size over [20, 30, 40, 50]. Specifically, we run a series of experiments on the validation set in which one of the three hyperparameters at a time is moved one grid step up or down from its chosen value, obtaining 15 sets of varied hyperparameters; each set is evaluated with 10-fold cross-validation.
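Enumerating the full grid is straightforward; the validation function below is a random placeholder for the actual 10-fold cross-validation score on the GBM split, so only the enumeration logic is meaningful.

```python
import itertools
import random

margins = [1, 2, 3, 4, 5]
learning_rates = [0.0001, 0.0003, 0.0005, 0.0007, 0.0009, 0.001]
batch_sizes = [20, 30, 40, 50]

def validation_score(margin, lr, batch_size):
    """Placeholder for mean 10-fold CV accuracy under these hyperparameters."""
    random.seed(hash((margin, lr, batch_size)) % (2 ** 32))
    return random.random()

grid = list(itertools.product(margins, learning_rates, batch_sizes))
best = max(grid, key=lambda cfg: validation_score(*cfg))

print(len(grid))  # 120 candidate settings in the full grid
```

In practice the expensive `validation_score` call is the bottleneck, which is why the paper tunes on GBM once and reuses the chosen values elsewhere.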
The final chosen hyperparameters are shown in Table 3.
Experimental results
We compare our approach with six state-of-the-art cancer survival prediction methods:

Similarity network fusion (SNF) for aggregating data types on a genomic scale [25];

Integrating multiple genomic data and clinical data based on graph convolutional network (GCGCN) for cancer survival prediction [35];

Multimodal deep neural network for human breast cancer prognosis prediction by integrating multi-dimensional data (MDNN-MD) [28].

Multimodal advanced deep learning architectures for breast cancer survival prediction (SiGaAtCNNs) [30];

Cross-aligned multimodal representation learning for cancer survival prediction (CAMR) [38];

Integrating multiomics data by learning modality invariant representations for improved prediction of overall survival of cancer (LMIR) [39].
A brief introduction to these survival analysis methods is given in Table 4. The predictive results of all competing methods are reported in Figs. 5 and 6. Figure 5 compares all evaluation metrics, including accuracy, precision and the area under the curve (AUC), on the four datasets. Figure 6 presents the receiver operating characteristic (ROC) curves of all competing methods on the four datasets.
From these results, we conclude that the overall performance of our method clearly exceeds that of the compared methods. This indicates that methods that consider consensus and complementary information outperform those that simply concatenate features.
To further investigate the effectiveness of the feature representations learned by our approach, i.e., the final fused representation formed by concatenating all specific representations with the shared representation, we use t-SNE to embed the samples into two dimensions for visualization. Figure 7 illustrates the distribution of the original training samples and of the learned feature representations on the four cancer datasets. We observe that: (1) t-SNE produces visually interpretable results by converting vector similarities into joint probabilities, generating distinct clusters that reflect patterns in the data; (2) samples from different survival stages are mixed together and poorly separated in the original feature space; (3) with the learned shared features and the specific features of mRNA, miRNA and DNA methylation, patients of the same survival stage tend to cluster; (4) with the final integrated features, samples from different survival stages can be intuitively separated into two disjoint clusters, indicating the better separability of the integrated feature representations.
Survival analysis
Survival analysis is a statistical method that considers both outcomes and survival time. Figure 8 shows the confusion matrices of the test sets for the four cancers. The Kaplan-Meier (KM) survival curves are drawn in Fig. 9, with P values computed from the curves. For GBM, KRCCC and LSCC there are significant differences between high-risk and low-risk patients (p values of \(8.70 \times 10^{-5}\), \(1.8 \times 10^{-4}\) and \(3.23 \times 10^{-4}\), respectively), whereas for BIC the difference is not significant (\(p=0.471\)). The p values rise significantly as the censored-data ratio increases from 0.077 for GBM to 0.875 for BIC; when the data are primarily censored, the model can hardly learn well.
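For reference, a minimal Kaplan-Meier estimator in pure numpy; the four-patient cohort is invented, and a real analysis would use a survival library and handle tied event times in groups.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate. events[i] = 1 for an observed death,
    0 for a censored observation (patient left the study alive)."""
    order = np.argsort(times)
    t = np.asarray(times, dtype=float)[order]
    e = np.asarray(events)[order]
    at_risk, surv, curve = len(t), 1.0, []
    for i in range(len(t)):
        if e[i] == 1:              # censored points only shrink the risk set
            surv *= 1.0 - 1.0 / at_risk
        curve.append((t[i], surv))
        at_risk -= 1
    return curve

# Four hypothetical patients: deaths at months 5, 11, 20; the month-14 one is censored.
curve = kaplan_meier([5, 14, 11, 20], [1, 0, 1, 1])
```

Stepping the estimate only at observed deaths, while censored patients still reduce the at-risk count, is exactly what makes KM curves usable on cohorts like BIC with a high censoring ratio.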
Effect of layer-by-layer constraints for strengthening feature representation ability
To investigate the effect of the layer-by-layer constraints, we construct a baseline that imposes the constraints only on the last layer of the deep network, denoted ICLL. Figure 10 reports the comparison of ICLL versus our approach. Overall, our approach outperforms ICLL on all datasets in accuracy, precision and AUC, with average improvements of 5.00%, 4.75%, 2.75% and 7.50% on the GBM, KRCCC, LSCC and BIC datasets respectively, indicating the effectiveness of imposing the distance and orthogonality constraints layer by layer.
Two reasons explain why the proposed approach is superior to ICLL: (i) imposing the constraints layer by layer learns shared and specific features repeatedly, yielding better consensus and complementary representations than learning them only once at the last layer; (ii) constraining every layer of the network helps it avoid poor local optima and learn more robust representations.
Conclusion
Accurately predicting cancer survival time is important because it helps clinicians devise appropriate therapeutic schemes. State-of-the-art work shows that integrating multi-type genetic data can be an effective way to handle data heterogeneity, but existing methods do not provide a rational, interpretable feature representation for multi-type genetic data. To this end, we propose a deep learning approach that learns the consensus and complementary information among multi-type genetic data at the feature level. It explicitly represents each type of genetic data as shared and specific features, which strengthens interpretability. Extensive experiments verify that our approach significantly improves cancer survival prediction compared with related work. In summary, this work provides an effective deep learning method to overcome data heterogeneity in cancer survival prediction.
Availability of data and materials
The datasets generated and analysed during the current study are available at http://compbio.cs.toronto.edu/SNF/.
References
Torre LA, Bray F, Siegel RL, Ferlay J, Lortet-Tieulent J, Jemal A. Global cancer statistics. CA Cancer J Clin. 2015;65(2):87–108.
Baek E, Yang HJ, Kim S, Lee G, Oh I, Kang S, Min J. Survival time prediction by integrating cox proportional hazards network and distribution function network. BMC Bioinform. 2021;22(1):192.
Ding D, Lang T, Zou D, Tan J, Chen J, Zhou L, Wang D, Li R, Li Y, Liu J, Ma C, Zhou Q. Machine learningbased prediction of survival prognosis in cervical cancer. BMC Bioinform. 2021;22(1):331.
Bray F, Ferlay J, Soerjomataram I, Siegel RL, Torre LA, Jemal A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin. 2018;68:394–424.
Chaudhary K, Poirion OB, Lu L, Garmire LX. Deep learning based multi-omics integration robustly predicts survival in liver cancer. Clin Cancer Res. 2018;24(6):1248–59.
Wang Y, Wang D, Ye X, Wang Y, Yin Y, Jin Y. A tree ensemble-based two-stage model for advanced-stage colorectal cancer survival prediction. Inf Sci. 2019;474:106–24.
Kourou K, Exarchos TP, Exarchos KP, Karamouzis MV, Fotiadis DI. Machine learning applications in cancer prognosis and prediction. Comput Struct Biotechnol J. 2015;13(C):8–17.
Travers C, Zhu X, Garmire LX, Florian M. Cox-nnet: an artificial neural network method for prognosis prediction of high-throughput omics data. PLoS Comput Biol. 2018;14(4):e1006076.
Luck M, Sylvain T, Cardinal H, Lodi A, Bengio Y. Deep learning for patient-specific kidney graft survival analysis. Preprint at arXiv:1705.10245 (2017).
Zhang H, Zheng Y, Hou L, Zheng C, Liu L. Mediation analysis for survival data with high-dimensional mediators. Bioinformatics. 2021;37(21):3815–21.
Bichindaritz I, Liu G, Bartlett CL. Integrative survival analysis of breast cancer with gene expression and DNA methylation data. Bioinformatics. 2021;37(17):2601–8.
Cui L, Li H, Hui W, Chen S, Yang L, Kang Y, Bo Q, Feng J. A deep learning-based framework for lung cancer survival analysis with biomarker interpretation. BMC Bioinform. 2020;21(1):112.
Van der Maaten L, Hinton G. Visualizing data using t-SNE. J Mach Learn Res. 2008;9(11):2579–605.
Louis DN, Perry A, Reifenberger G, Deimling AV, Figarella-Branger D, Cavenee WK, Ohgaki H, Wiestler OD, Kleihues P, Ellison DW. The 2016 World Health Organization classification of tumors of the central nervous system: a summary. Acta Neuropathol. 2016;131(6):803–20.
Shao W, Wang T, Huang Z, Han Z, Zhang J, Huang K. Weakly supervised deep ordinal cox model for survival prediction from whole-slide pathological images. IEEE Trans Med Imaging. 2021;40(12):3739–47.
Zhang L, Dong D, Liu Z, Zhou J, Tian J. Joint multi-task learning for survival prediction of gastric cancer patients using CT images. In: International symposium on biomedical imaging; 2021. p. 895–8.
Agarwal S, Abaker MEO, Daescu O. Survival prediction based on histopathology imaging and clinical data: a novel, whole slide CNN approach. In: Medical image computing and computer assisted intervention; 2021. p. 762–71.
Fan L, Sowmya A, Meijering E, Song Y. Learning visual features by colorization for slide-consistent survival prediction from whole slide images. In: Medical image computing and computer assisted intervention; 2021. p. 592–601.
Qiu YL, Zheng H, Devos A, Selby H, Gevaert O. A meta-learning approach for genomic survival analysis. Nat Commun. 2020;11:6350.
Chen RJ, Lu MY, Wang J, Williamson DFK, Rodig SJ, Lindeman NI, Mahmood F. Pathomic fusion: an integrated framework for fusing histopathology and genomic features for cancer diagnosis and prognosis. IEEE Trans Med Imaging. 2022;41(4):757–70.
Cheerla A, Gevaert O. Deep learning with multimodal representation for pan-cancer prognosis prediction. Bioinformatics. 2019;35(14):446–54.
Vale-Silva LA, Rohr K. Long-term cancer survival prediction using multimodal deep learning. Sci Rep. 2021;11:13505.
Kirk PDW, Griffin JE, Savage RS, Ghahramani Z, Wild DL. Bayesian correlated clustering to integrate multiple datasets. Bioinformatics. 2012;28(24):3290–7.
Kim S, Kim K, Choe J, Lee I, Kang J. Improved survival analysis by learning shared genomic information from pan-cancer data. Bioinformatics. 2020;36(Supplement 1):389–98.
Wang B, Mezlini AM, Demir F, Fiume M, Tu Z, Brudno M, Haibe-Kains B, Goldenberg A. Similarity network fusion for aggregating data types on a genomic scale. Nat Methods. 2014;11(3):333–7.
Jia X, Jing X, Zhu X, Chen S, Du B, Cai Z, He Z, Yue D. Semi-supervised multi-view deep discriminant representation learning. IEEE Trans Pattern Anal Mach Intell. 2021;43(7):2496–509.
Zhang Y, Li A, Peng C, Wang M. Improve glioblastoma multiforme prognosis prediction by using feature selection and multiple kernel learning. IEEE/ACM Trans Comput Biol Bioinf. 2016;13(5):825–35.
Sun D, Wang M, Li A. A multimodal deep neural network for human breast cancer prognosis prediction by integrating multi-dimensional data. IEEE/ACM Trans Comput Biol Bioinf. 2019;16(3):841–50.
Gao J, Lyu T, Xiong F, Wang J, Ke W, Li Z. MGNN: a multimodal graph neural network for predicting the survival of cancer patients. In: ACM SIGIR conference on research and development in information retrieval; 2020. p. 1697–700.
Arya N, Saha S. Multimodal advanced deep learning architectures for breast cancer survival prediction. Knowl Based Syst. 2021;221:106965.
Xu J, Li W, Liu X, Zhang D, Liu J, Han J. Deep embedded complementary and interactive information for multi-view classification. In: AAAI; 2020. p. 6494–501.
Wang L, Chignell MH, Jiang H, Charoenkitkarn N. Cluster-boosted multi-task learning framework for survival analysis. In: IEEE international conference on bioinformatics and bioengineering; 2020. p. 255–62.
Erola P, Björkegren J, Michoel T. Model-based clustering of multi-tissue gene expression data. Bioinformatics. 2020;36(6):1807–13.
Coretto P, Serra A, Tagliaferri R. Robust clustering of noisy high-dimensional gene expression data for patients subtyping. Bioinformatics. 2018;34(23):4064–72.
Wang C, Guo J, Zhao N, Liu Y, Liu X, Liu G, Guo M. A cancer survival prediction method based on graph convolutional network. IEEE Trans Nanobiosci. 2020;19(1):117–26.
Xu X, Zhang Y, Zou L, Wang M, Li A. A gene signature for breast cancer prognosis using support vector machine. In: International conference on biomedical engineering and informatics; 2012. p. 928–31.
Dao F, Lv H, Wang F, Feng C, Ding H, Chen W, Lin H. Identify origin of replication in Saccharomyces cerevisiae using two-step feature selection technique. Bioinformatics. 2019;35(12):2075–83.
Wu X, Shi Y, Wang M, Li A. CAMR: cross-aligned multimodal representation learning for cancer survival prediction. Bioinformatics. 2023;39(1):18.
Tong L, Wu H, Wang MD. Integrating multi-omics data by learning modality invariant representations for improved prediction of overall survival of cancer. Methods. 2021;189:74–85.
Acknowledgements
Not applicable.
Funding
This work was supported by the NSFC Project under Grant Nos. 62176069 and 61933013, and the Innovation Group of Guangdong Education Department under Grant No. 2020KCXTD014.
Author information
Authors and Affiliations
Contributions
YH: conceptualization, methodology, writing (original draft preparation). XYJ: writing (review and editing), supervision, data curation. QS: visualization, investigation, software, validation.
Corresponding authors
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
About this article
Cite this article
Hao, Y., Jing, X.-Y. & Sun, Q. Cancer survival prediction by learning comprehensive deep feature representation for multiple types of genetic data. BMC Bioinformatics 24, 267 (2023). https://doi.org/10.1186/s12859-023-05392-z