 Methodology Article
 Open access
Sparse multiple co-inertia analysis with application to integrative analysis of multi-omics data
BMC Bioinformatics volume 21, Article number: 141 (2020)
Abstract
Background
Multiple co-inertia analysis (mCIA) is a multivariate analysis method that can assess relationships and trends in multiple datasets. Recently it has been used for integrative analysis of multiple high-dimensional omics datasets. However, its estimated loading vectors are non-sparse, which presents challenges for identifying important features and interpreting analysis results. We propose two new mCIA methods: 1) a sparse mCIA method that produces sparse loading estimates and 2) a structured sparse mCIA method that further enables incorporation of structural information among variables, such as that from functional genomics.
Results
Our extensive simulation studies demonstrate the superior performance of the sparse mCIA and structured sparse mCIA methods compared to the existing mCIA in terms of feature selection and estimation accuracy. Application to the integrative analysis of transcriptomics and proteomics data from a cancer study identified biomarkers that the literature suggests are related to cancer.
Conclusion
The proposed sparse mCIA achieves simultaneous model estimation and feature selection and yields analysis results that are more interpretable than those of the existing mCIA. Furthermore, the proposed structured sparse mCIA can effectively incorporate prior network information among genes, resulting in improved feature selection and enhanced interpretability.
Background
Large-scale omics studies have become common, partly as a result of rapid advances in technologies. Many of them generate multiple omics datasets on the same set of subjects. For example, cancer studies generate datasets using the NCI60 cell line panel, a group of 60 human cancer cell lines used by the National Cancer Institute (NCI). Various types of omics datasets from this cell line panel, such as gene expression or protein abundance, are generated and made available via the web application CellMiner [32]. Another example is The Cancer Genome Atlas (TCGA) repository, which contains multiple types of omics datasets, such as genotype, mRNA, microRNA, and protein abundance data, collected from the same set of subjects. The abundance of such datasets has created an increasing need for advanced methods for integrative analysis beyond separate analyses. Integrative analysis enables us not only to understand underlying relationships among multiple datasets but also to discover more biologically meaningful results that may not be found from analysis of a single dataset. In response to this need, there have been continuing efforts to develop such methods.
Tenenhaus and Tenenhaus [36] reviewed various methods for integrative analysis of multiple datasets from the same set of subjects. Canonical correlation analysis (CCA) [17] is one popular method for integrative analysis of two datasets measured on the same set of subjects. CCA seeks a linear transformation of each of the two datasets so that the correlation between the two transformed datasets is maximized. It is the prototypical method built on a correlation-based objective function. Based on CCA, various extended methods have been proposed to integrate more than two datasets into a single model; some examples are [4, 16, 41], [12], and [15].
Covariance-based criteria are another way to construct an objective function. Tucker's inter-battery factor analysis [38] is the seminal work investigating covariance structures between two datasets. Various approaches have been proposed to extend the method to an integrative model for more than two datasets; [39], [13, 19], and [8] are some examples.
Multiple co-inertia analysis (mCIA) [6] is another integrative analysis method employing a covariance-based objective function to identify common relationships and assess concordance among multiple datasets. This method finds a set of loading vectors for K≥2 datasets and a so-called "synthetic" center of all datasets such that the sum of squared covariances between each linearly transformed dataset and the synthetic center is maximized. Recently it has been applied to integrative analysis of multiple omics datasets [24]. However, the estimated loading vectors of mCIA are non-sparse. That is, if we apply mCIA to analyze two gene expression datasets, every gene in each dataset has a nonzero coefficient, making it difficult to interpret the results. This has been noted as a weakness of the method [20, 25]. In the statistical literature, sparse estimation has been suggested as a remedy for this type of problem and has shown good performance on genomics and biological data [22, 34].
In this paper, we propose a novel approach that imposes a sparsity constraint on the mCIA method, sparse mCIA (smCIA). This model conducts estimation and variable selection simultaneously. Non-sparsity poses significant challenges not only in developing an accurate model but also in interpreting the results. Ultra-high dimensionality is inherent to omics datasets, so statistical models for analyzing them benefit from a feature selection procedure. To address this issue, it is desirable to employ sparsity in the model; however, to the best of our knowledge, sparsity has not been introduced in the mCIA framework. The regularized generalized CCA framework [37] encompasses many integrative methods, including mCIA and a sparse version of generalized CCA, as special cases, but it does not include a sparsity-constrained mCIA.
We also propose an extension of smCIA, structured sparse mCIA (ssmCIA), that incorporates structural information among variables to guide the model toward more biologically meaningful results. It is well known that gene expression is controlled by gene regulatory networks (GRNs) [31]. Incorporating such prior structural knowledge among genes is one potential approach to improving analysis results, and there is continuing interest in developing statistical methods in this direction [21, 26, 27]. To incorporate structural knowledge, we employ an additional penalty term in the objective function of smCIA to guide the model toward improved feature selection.
Methods
Before introducing two proposed models, we briefly review the classical mCIA problem.
Suppose that we have K datasets from n subjects, i.e., K data triplets \((\boldsymbol {X}_{k}, \boldsymbol {D}, \boldsymbol {Q}_{k})_{k=1}^{K}\), \(\boldsymbol {X}_{k} \in \mathbb {R}^{n \times p_{k}}, \boldsymbol {D} \in \mathbb {R}^{n \times n}, \boldsymbol {Q}_{k} \in \mathbb {R}^{p_{k} \times p_{k}}\), and weights w=(w_{1},…,w_{K}). D is a diagonal weight metric of the space \(\mathbb {R}^{n}\), \(\boldsymbol {Q}_{k}\) is a diagonal weight metric of the space \(\mathbb {R}^{p_{k}}\), and w_{k} is a positive weight for the kth dataset such that \(\sum w_{k} = 1\). Without loss of generality, assume that each X_{k} is column-wise centered and standardized.
There are various ways to construct D. The simplest is to use the identity matrix, giving equal weight to each sample. Alternatively, higher weights can be used to put stronger emphasis on more reliable samples. Possible sampling bias or duplicated observations can also be adjusted for by constructing an appropriate D. Specifically, we can estimate the probability of selection for each individual in the sample using available covariates and use the inverse of the estimated probability as that individual's weight. Later, in our real data analysis, we use the identity matrix for D.
For Q_{k}, we use the proportions defined as the column sums divided by the total sum of the absolute values of the kth dataset, following similar approaches in the literature [7, 9, 24, 25]. In this way, we put higher weights on genes with higher variability. Alternatively, we can construct the Q matrices so that genes known to be associated with a clinical phenotype of interest receive higher weights. Another possible approach is to construct Q based on functional annotation, following recent methods originally proposed for rare variant tests in integrative analysis [3, 14].
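As an illustration, the proportion-based construction of the diagonal of Q_k described above can be sketched as follows (a minimal sketch with hypothetical names, assuming each dataset is a samples-by-features numpy array):

```python
import numpy as np

def column_weights(X):
    """diag(Q): column sums of |X| divided by the grand sum of |X|."""
    abs_col_sums = np.abs(X).sum(axis=0)   # per-column total magnitude
    return abs_col_sums / abs_col_sums.sum()

# Columns with larger total magnitude (higher variability) get larger weight.
X = np.array([[1.0, -2.0],
              [3.0,  4.0]])
q = column_weights(X)   # array([0.4, 0.6]); entries sum to 1
```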
Multiple co-inertia analysis (mCIA)
The goal of mCIA is to find a set of vectors \(\boldsymbol {u}_{k} \in \mathbb {R}^{p_{k}}, k=1,\ldots,K\), and a vector \(\boldsymbol {v} \in \mathbb {R}^{n}\), such that the weighted sum of \((\boldsymbol {v}^{\intercal } \boldsymbol {D} \boldsymbol {X}_{k} \boldsymbol {Q}_{k} \boldsymbol {u}_{k})^{2}\) is maximized. The objective function of the mCIA problem is defined as follows,
where (u_{1},…,u_{K}) denotes a set of co-inertia loadings (or coefficients) and v is a synthetic center [24]. The synthetic center v can be understood as a reference structure in the sample space. The loading vectors (u_{1},…,u_{K}) are the set of coefficients that maximizes the objective function.
It has been shown that the vector v of problem (1) can be found by solving the following eigenvalue problem [6],
where \(\boldsymbol {X}^{\dagger } = \left [w_{1}^{1/2}\boldsymbol {X}_{1}, w_{2}^{1/2}\boldsymbol {X}_{2}, \ldots, w_{K}^{1/2}\boldsymbol {X}_{K}\right ] \in \mathbb {R}^{n \times \sum p_{k}}\) is the merged table of the K weighted datasets and \(\boldsymbol {Q}^{\dagger } \in \mathbb {R}^{\sum p_{k} \times \sum p_{k}}\) is the matrix that has Q_{1},…,Q_{K} as its diagonal blocks. Given the reference vector v defined above, the loading vectors u_{k}, k=1,…,K, are obtained by \(\boldsymbol {u}_{k} = \boldsymbol {X}_{k}^{\intercal } \boldsymbol {D} \boldsymbol {v} / \|\boldsymbol {X}_{k}^{\intercal } \boldsymbol {D} \boldsymbol {v}\|_{\boldsymbol {Q}_{k}}\).
The second set of loadings, orthogonal to the first set, can be obtained by repeating the above procedure on the residual datasets calculated using a deflation method [10, Chap. 7.1.2].
We propose a new mCIA approach that enforces sparsity on the set of loading vectors for all datasets. Consider the following problem, which is another representation of (1),
where \(\tilde {\boldsymbol {X}}_{k} = \sqrt {w_{k}} \boldsymbol {D}^{1/2} \boldsymbol {X}_{k} \boldsymbol {Q}_{k}^{1/2} \in \mathbb {R}^{n\times p_{k}}, \boldsymbol {a}_{k} = \boldsymbol {Q}_{k}^{1/2}\boldsymbol {u}_{k} \in \mathbb {R}^{p_{k}}\), and \(\boldsymbol {b} = \boldsymbol {D}^{1/2} \boldsymbol {v} \in \mathbb {R}^{n}\). The problem (2) is multi-convex, i.e., it is convex with respect to each a_{k} while the others \(\boldsymbol {a}_{k'}, k'=1,\ldots,k-1,k+1,\ldots,K\), and b are fixed. This enables us to apply an iterative algorithm for finding a solution set (b,a_{1},…,a_{K}).
First, for fixed a_{k}, k=1,…,K, the problem (2) becomes
where the objective function is convex with respect to b. Indeed, the above problem can be solved via eigenvalue decomposition. Consider the Lagrangian formulation of (3), \(L(\boldsymbol {b}) = \sum ^{K}_{k=1} \left (\boldsymbol {b}^{\intercal } \tilde {\boldsymbol {X}}_{k} \boldsymbol {a}_{k}\right)^{2} - \lambda (\boldsymbol {b}^{\intercal } \boldsymbol {b} - 1)\), where λ is a Lagrange multiplier. To obtain a solution, we take the derivative of L with respect to b and set it to zero, \( \frac {\partial L}{\partial \boldsymbol {b}} = 2 \sum ^{K}_{k=1} \left (\boldsymbol {b}^{\intercal } \tilde {\boldsymbol {X}}_{k} \boldsymbol {a}_{k}\right) \tilde {\boldsymbol {X}}_{k} \boldsymbol {a}_{k} - 2\lambda \boldsymbol {b} = 2 \left (\sum ^{K}_{k=1} \boldsymbol {M}_{k} \boldsymbol {b} - \lambda \boldsymbol {b}\right) = 0,\) where \(\boldsymbol {M}_{k} = \tilde {\boldsymbol {X}}_{k} \boldsymbol {a}_{k} \boldsymbol {a}_{k}^{\intercal } \tilde {\boldsymbol {X}}_{k}^{\intercal } \in \mathbb {R}^{n \times n}\). The optimal b is the first eigenvector of \(\sum _{k=1}^{K} \boldsymbol {M}_{k}\).
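In code, the b-update reduces to a single symmetric eigendecomposition of the n×n matrix \(\sum_k \boldsymbol{M}_k\). Below is a minimal numpy sketch (illustrative, not the authors' implementation; `X_list` holds the transformed matrices \(\tilde{\boldsymbol{X}}_k\) and `a_list` the current loadings):

```python
import numpy as np

def update_b(X_list, a_list):
    """Return the unit-norm b maximizing sum_k (b' X_k a_k)^2 for fixed a_k."""
    n = X_list[0].shape[0]
    M = np.zeros((n, n))
    for X, a in zip(X_list, a_list):
        z = X @ a            # the n-vector X~_k a_k
        M += np.outer(z, z)  # M_k = (X~_k a_k)(X~_k a_k)'
    _, vecs = np.linalg.eigh(M)
    return vecs[:, -1]       # eigenvector of the largest eigenvalue
```

For K=1 this returns, up to sign, the normalized vector \(\tilde{\boldsymbol{X}}_1 \boldsymbol{a}_1\), as expected.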
Next, to find a solution for a_{1}, we fix b and a_{k}, k=2,…,K. Then we have
where \(\boldsymbol {N}_{1} = \tilde {\boldsymbol {X}}_{1}^{\intercal } \boldsymbol {b} \boldsymbol {b}^{\intercal } \tilde {\boldsymbol {X}}_{1}\). Notice that problem (4) is an eigenvalue problem. The first eigenvector of N_{1} is the optimal a_{1}, and the corresponding eigenvalue is the objective value at the optimum. The remaining loading vectors a_{2},…,a_{K} are estimated by the same procedure as a_{1}. From the set of estimated vectors \((\hat {\boldsymbol {b}}, \hat {\boldsymbol {a}}_{1},\ldots,\hat {\boldsymbol {a}}_{K})\), we recover a solution of the original mCIA, \((\hat {\boldsymbol {v}}, \hat {\boldsymbol {u}}_{1},\ldots,\hat {\boldsymbol {u}}_{K})\), by premultiplying by \(\boldsymbol {D}^{-1/2}, \boldsymbol {Q}^{-1/2}_{1},\ldots, \boldsymbol {Q}^{-1/2}_{K}\), respectively.
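Since \(\boldsymbol{N}_{1} = (\tilde{\boldsymbol{X}}_{1}^{\intercal}\boldsymbol{b})(\tilde{\boldsymbol{X}}_{1}^{\intercal}\boldsymbol{b})^{\intercal}\) is rank one, its first eigenvector is available in closed form: it is simply \(\tilde{\boldsymbol{X}}_{1}^{\intercal}\boldsymbol{b}\) normalized. A hedged sketch of this unconstrained a-update:

```python
import numpy as np

def update_a(X, b):
    """Leading eigenvector of N = X' b b' X: the normalized vector X' b."""
    g = X.T @ b
    return g / np.linalg.norm(g)
```

The maximized objective value is then \(\|\tilde{\boldsymbol{X}}_1^{\intercal} \boldsymbol{b}\|_2^2\), the single nonzero eigenvalue of the rank-one matrix.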
The subsequent sets of vectors \(\left (\boldsymbol {v}^{(r)}, \boldsymbol {u}^{(r)}_{1},\ldots, \boldsymbol {u}^{(r)}_{K}\right), r=2,\ldots,\min (n, p_{1}, \ldots, p_{K})\), which are orthogonal to all previously estimated sets, can be estimated by applying the same procedure to the residual data matrices \(\boldsymbol {X}^{(r)}_{1}, \ldots, \boldsymbol {X}^{(r)}_{K}\) computed with respect to the previously estimated vectors \(\left (\boldsymbol {v}^{(r')}, \boldsymbol {u}_{1}^{(r')}, \ldots,\boldsymbol {u}_{K}^{(r')}\right), r'=1,\ldots, r-1\), using a deflation technique.
Sparse mCIA
To obtain interpretable results, sparsity in the coefficient loading vectors (a_{1},…,a_{K}) is desirable. To this end, we impose a sparsity constraint on the transformed loading vectors a_{1},…,a_{K}. Note that we do not put a sparsity constraint on the reference vector b in the sample space. Sparsity in (a_{1},…,a_{K}) transfers to the original loading vectors (u_{1},…,u_{K}) because the weight matrices Q_{1},…,Q_{K} are assumed to be diagonal.
Given b and a_{k}, k=2,…,K, we propose to add an l_{0}-sparsity constraint to (4) to obtain a sparse estimate of a_{1} as follows,
where \(\boldsymbol {N}_{1} = \tilde {\boldsymbol {X}}_{1}^{\intercal } \boldsymbol {b} \boldsymbol {b}^{\intercal } \tilde {\boldsymbol {X}}_{1}\) and s_{1} is a predefined positive integer value less than p_{1}.
To tackle our problem (5), we utilize the algorithm recently proposed by [35]: the truncated Rayleigh flow method (Rifle), which solves the maximization of the l_{0}-sparsity-constrained generalized Rayleigh quotient. It is well known that the optimization of the generalized Rayleigh quotient with respect to \(\boldsymbol {\omega }\in \mathbb {R}^{p}\),
where \(\boldsymbol {R}_{1},\boldsymbol {R}_{2}\in \mathbb {R}^{p\times p}\) are symmetric real-valued matrices, is equivalent to the generalized eigenvalue problem. Our objective criterion is a special case of the generalized eigenvalue problem with R_{1}=N_{1} and \(\boldsymbol {R}_{2} = \boldsymbol {I}_{p_{1}}\), which allows us to use Rifle to solve our problem. The algorithm is a simple iterative procedure consisting of a gradient descent step and a hard-thresholding step. At each iteration, the s_{1} largest elements (in magnitude) of the solution from the gradient step are kept as nonzero and the rest are set to zero. The same procedure is applied to estimate the remaining loading vectors a_{2},…,a_{K}. The complete pseudocode of the smCIA algorithm is summarized in Algorithm 1.
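One Rifle-style iteration for our special case (R_2 = I, so the constraint reduces to a unit-norm vector with at most s nonzeros) can be sketched as below. This is a schematic reading of the gradient-plus-hard-thresholding step described above, not the authors' code:

```python
import numpy as np

def truncate(v, s):
    """Keep the s largest-magnitude entries of v, zero the rest, renormalize."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]
    out[keep] = v[keep]
    return out / np.linalg.norm(out)

def rifle_step(a, R, eta, s):
    """One iteration: gradient ascent on the Rayleigh quotient a'Ra, then truncation."""
    rho = a @ R @ a                    # current Rayleigh quotient (||a|| = 1)
    g = a + eta * (R @ a - rho * a)    # ascent step
    return truncate(g / np.linalg.norm(g), s)

# With s = 1 the iterates lock onto the dominant coordinate of a diagonal R.
a = np.ones(3) / np.sqrt(3.0)
R = np.diag([5.0, 1.0, 0.1])
for _ in range(10):
    a = rifle_step(a, R, 0.5, 1)
```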
Structured sparse mCIA
We propose another new model that incorporates known prior network information among features. To this end, we add a Laplacian penalty to the sparse mCIA model to obtain more biologically meaningful results.
Let \({\mathcal {G}}_{1}=\{\boldsymbol {C}_{1},\boldsymbol {E}_{1}, \boldsymbol {W}_{1}\}\) denote a weighted, undirected graph of the variables in X_{1}, where C_{1} is the set of vertices corresponding to the p_{1} features (or nodes), E_{1}={i∼j} is the set of edges connecting features i and j, and W_{1} contains the edge weights. Given \(\mathcal {G}_{1}\), the (i,j)th element of the normalized Laplacian matrix L_{1} of X_{1} is defined by
where w_{1}(i,j) is the weight of the edge e=(i∼j) and d_{i} is the degree of vertex i, defined as \(\sum _{i\sim j} w_{1}(i,j)\). It is easily shown that \(p(\boldsymbol {u}_{1}; \boldsymbol {L}_{1})= \boldsymbol {u}_{1}^{\intercal } \boldsymbol {L}_{1} \boldsymbol {u}_{1}\) becomes zero if the prior network information in L_{1} agrees with the true network among the variables of X_{1}.
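Assuming the standard normalized-Laplacian definition (L_{ij} = 1 if i = j and d_{i} > 0; L_{ij} = −w(i,j)/√(d_{i}d_{j}) if i ∼ j; 0 otherwise), which the display above presumably states, the matrix and the penalty can be sketched as:

```python
import numpy as np

def normalized_laplacian(W):
    """Normalized Laplacian from a symmetric nonnegative weight matrix W (zero diagonal)."""
    d = W.sum(axis=1)  # vertex degrees
    inv_sqrt_d = np.where(d > 0, 1.0 / np.sqrt(np.where(d > 0, d, 1.0)), 0.0)
    L = np.diag((d > 0).astype(float)) - inv_sqrt_d[:, None] * W * inv_sqrt_d[None, :]
    return L

# Penalty p(u; L) = u' L u is zero when u agrees with the network:
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # a single edge between two features
L = normalized_laplacian(W)
u = np.array([1.0, 1.0])     # equal loadings on the connected features
penalty = u @ L @ u          # 0.0
```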
For fixed b and a_{k},k=2,â€¦,K, consider the following optimization problem,
where \(\boldsymbol {N}_{1} = \tilde {\boldsymbol {X}}_{1}^{\intercal } \boldsymbol {b} \boldsymbol {b}^{\intercal } \tilde {\boldsymbol {X}}_{1}\), s_{1} is a predefined positive integer less than p_{1}, λ_{1} is a predefined network penalty parameter, and \(\tilde {\boldsymbol {L}}_{1} = \boldsymbol {Q}_{1}^{-1/2} \boldsymbol {L}_{1} \boldsymbol {Q}_{1}^{-1/2}\) is the transformed Laplacian matrix containing the network information among the variables of X_{1}. Solving (7) drives the network penalty toward its minimum; the penalty thus encourages the estimate of a_{1} to agree with the network information encoded in \(\tilde {\boldsymbol {L}}_{1}\).
We again employ Rifle to solve (7). The objective function of (7) becomes \(\boldsymbol {a}_{1}^{\intercal } \boldsymbol {R}_{1} \boldsymbol {a}_{1}\), where \(\boldsymbol {R}_{1} = \boldsymbol {N}_{1} - \lambda _{1} \tilde {\boldsymbol {L}}_{1}\). Rifle requires R_{1} to be symmetric, and \(\boldsymbol {N}_{1} - \lambda _{1} \tilde {\boldsymbol {L}}_{1}\) satisfies this condition since both N_{1} and \(\tilde {\boldsymbol {L}}_{1}\) are symmetric. As in the smCIA algorithm, the remaining loading vectors a_{2},…,a_{K} are estimated in the same way as a_{1}. The complete pseudocode of the ssmCIA algorithm is summarized in Algorithm 1.
Choice of tuning parameters
Our methods have K and 2K tuning parameters for smCIA and ssmCIA, respectively. Denote the set of tuning parameters as
We employ T-fold cross-validation (CV) to select the best tuning parameter set. We set the range of grid points for each parameter based on several initial trials. We divide each dataset into T subgroups and calculate the CV objective value defined as follows,
where \(cv_{t,k} =\left ((\hat {\boldsymbol {b}}^{t}(\boldsymbol {\lambda }))^{\intercal } \tilde {\boldsymbol {X}}^{t}_{k} \hat {\boldsymbol {a}}_{k}^{t}(\boldsymbol {\lambda }) \right)^{2}\), and \(\hat {\boldsymbol {a}}_{k}^{t}(\boldsymbol {\lambda })\) and \(\hat {\boldsymbol {b}}^{t}(\boldsymbol {\lambda }), t=1,\ldots,T\), are the loading vectors and reference vectors estimated from the training data \(\tilde {\boldsymbol {X}}_{k}^{t}\) using the tuning parameter set λ. This can be considered a scaled version of the CV objective value used in [40]. Unlike CCA, whose correlation values always lie within [−1,1], co-inertia values are not restricted to a fixed range. We address this by standardizing all co-inertia values used for cross-validation.
There is another parameter in the algorithm, the step size η_{k} of the gradient descent step. [35] suggests that η_{k} < 1/λ_max(R_{2}), where λ_max(R_{2}) is the maximum eigenvalue of the matrix R_{2} in the denominator of the Rayleigh quotient (6). Since R_{2} is the identity matrix in the smCIA and ssmCIA problems, the maximum value of η_{k} is 1. We tune this value by exploring multiple values in (0,1] and selecting the best via cross-validation.
Lastly, we use the non-sparse solution (u_{1},…,u_{K},v) from mCIA as a starting point.
Simulation study
Synthetic data generation
We use a latent variable model to generate K related synthetic datasets. Let θ be a latent variable with θ∼N(0,σ^{2}) that affects K sets of random variables \(\boldsymbol {x}_{k} = \theta \boldsymbol {a}_{k} + \boldsymbol {\epsilon }_{k} \in \mathbb {R}^{p_{k}}, k=1,\ldots, K\), where \(\boldsymbol {\epsilon }_{k} \sim N(\boldsymbol {0}_{p_{k}}, \boldsymbol {\Sigma }_{k})\) and a_{k} is set to the first eigenvector of Σ_{k}. Then the following calculation
verifies that a_{k} equals e_{1}, the first eigenvector of the matrix \(\mathrm {E}\left (\boldsymbol {X}_{k}^{\intercal } \boldsymbol {b} \boldsymbol {b}^{\intercal } \boldsymbol {X}_{k}\right)\), with corresponding eigenvalue nσ^{2}+γ_{1}, where (γ_{j},e_{j}), j=1,…,p_{k}, are the eigenpairs of Σ_{k}.
The following calculation gives the cross-covariance matrices in the model.
Our complete generative model simulates a concatenated dataset \(\boldsymbol {X}^{\intercal } = \left [\boldsymbol {X}_{1}^{\intercal } \, \boldsymbol {X}_{2}^{\intercal } \, \cdots \, \boldsymbol {X}_{K}^{\intercal }\right ] \in \mathbb {R}^{\sum p_{k} \times n}\) from the normal distribution with the mean \(\boldsymbol {0}_{\sum p_{k}}\) and the variance \(\Sigma _{T} \in \mathbb {R}^{\sum p_{k} \times \sum p_{k}}\), where
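The full generative scheme can equivalently be sketched by drawing θ and the noise directly, rather than sampling from the concatenated covariance Σ_T. Variable names below are illustrative:

```python
import numpy as np

def simulate(n, a_list, sigma2, cov_list, rng):
    """Draw n samples of x_k = theta * a_k + eps_k for each dataset k."""
    theta = rng.normal(0.0, np.sqrt(sigma2), size=n)  # shared latent variable
    X_list = []
    for a, S in zip(a_list, cov_list):
        eps = rng.multivariate_normal(np.zeros(len(a)), S, size=n)
        X_list.append(np.outer(theta, a) + eps)       # n x p_k data matrix
    return X_list

# Var(x_k) = sigma^2 a_k a_k' + Sigma_k, matching the diagonal blocks of Sigma_T,
# and Cov(x_j, x_k) = sigma^2 a_j a_k' gives the off-diagonal blocks.
```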
Simulation design
We consider various simulation designs to compare the performance of smCIA and ssmCIA with mCIA. We compare our methods with mCIA only, since the objective functions of other integrative methods, such as generalized CCA or other covariance-based methods, differ from that of mCIA, making direct comparison inappropriate.
We assume that there exist multiple networks among genes in each dataset, and the networks affect the relationship between datasets. We have 8 design scenarios by varying three conditions:
σ^{2}, the variance of the latent variable,
n_{el}, the number of elements in each network,
n_{en}, the number of effective networks among whole networks.
We generate 100 Monte Carlo (MC) datasets. For each MC dataset, we generate n=200 observations of each of three random variables \(\boldsymbol {x}_{1}\in \mathbb {R}^{300}\), \(\boldsymbol {x}_{2} \in \mathbb {R}^{400}\), and \(\boldsymbol {x}_{3}\in \mathbb {R}^{500}\). There are 5 networks among each of x_{1}, x_{2}, and x_{3}, with n_{el}=10 or 20 elements in each network. Among the n_{el} genes of each network, the first indexed gene is the main gene, connected to all other genes within the network. This means that the first indexed gene of each network in the design with n_{el}=20 has a higher weight than in the design with n_{el}=10. For the number of effective networks n_{en}, we consider two cases: one assumes that only some networks affect relationships among datasets, with n_{en}=(3,4,5), while the other assumes that all existing networks do, with n_{en}=(5,5,5). We also consider two values σ^{2}=(1.2,2.5); a higher σ^{2} leads to a higher first eigenvalue of \(\mathrm {E}\left (\boldsymbol {X}_{k}^{\intercal } \boldsymbol {b} \boldsymbol {b}^{\intercal } \boldsymbol {X}_{k}\right) \).
All true loadings make the network penalty zero. Thus we expect ssmCIA to perform better than smCIA, since ssmCIA is encouraged to estimate coefficient loadings that minimize the network penalty. All simulation scenarios and the corresponding true coefficient loadings are summarized in Table 1. In addition, we consider incorporating incorrect network information in the first scenario to show the robustness of ssmCIA; results of these additional simulation studies can be found in the supplementary materials.
Performance measures
To compare the feature selection performance of our methods in the simulations, we use sensitivity (SENS), specificity (SPEC), and the Matthews correlation coefficient (MCC), defined as follows,
where TP, TN, FP, and FN are true positives, true negatives, false positives, and false negatives, respectively. Also, we calculate the angle between the estimated loading vectors \(\hat {\boldsymbol {a}}_{k}\) and the true loading vectors \(\boldsymbol {a}^{*}_{k}, k=1,2,3\), to compare the estimation performance of our methods and mCIA. The angle measure is defined as \(\angle (\hat {\boldsymbol {a}}_{k}) = \frac {\hat {\boldsymbol {a}}_{k}^{\intercal } \boldsymbol {a}^{*}_{k}}{\|\hat {\boldsymbol {a}}_{k}\|_{2}\times \|\boldsymbol {a}_{k}^{*}\|_{2}}\), i.e., the cosine of the angle between the two vectors. If the two vectors are identical, this measure equals 1.
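These measures can be computed directly from the confusion counts and the estimated vectors; a minimal sketch:

```python
import numpy as np

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return num / den if den > 0 else 0.0

def angle_measure(a_hat, a_true):
    """Cosine similarity between estimated and true loadings; 1 if identical."""
    return (a_hat @ a_true) / (np.linalg.norm(a_hat) * np.linalg.norm(a_true))
```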
Simulation results
Simulation results are summarized in Tables 2 and 3. First, the estimation performance of our proposed methods is superior to that of mCIA, as evidenced by the calculated angle values: an angle value close to 1 indicates that the estimated loading vector is close to the true loading vector, and the angle values from our methods are closer to 1 than those from mCIA. Second, ssmCIA performs better than smCIA in feature selection. Note that, in our simulation scenarios, the true loadings are designed to follow the predefined network structure of the synthetic data, so we expect better performance from ssmCIA than from smCIA. In all scenarios, ssmCIA outperforms smCIA in all respects: SENS, SPEC, MCC, and even angle.
We also make several observations by comparing results across scenarios, driven by the nature of our generative model. First, performance is better in scenarios 3, 4, 7, and 8 than in scenarios 1, 2, 5, and 6, respectively. This agrees with our expectation: a larger σ^{2} increases the first eigenvalue of the matrix \(\boldsymbol {X}_{k}^{\intercal } \boldsymbol {b} \boldsymbol {b}^{\intercal } \boldsymbol {X}_{k}\), which helps the algorithm detect the eigenvector that estimates the true loading vector.
Second, in terms of sensitivity, the ssmCIA results from the scenarios with n_{en}=(5,5,5) are better than those with n_{en}=(3,4,5), and the scenarios with n_{el}=10 show better performance than those with n_{el}=20. Again, this agrees with the nature of our generative model: the true loading vectors in the scenarios with n_{en}=(3,4,5) have larger nonzero elements than in the scenarios with n_{en}=(5,5,5), and the coefficients of connected variables in a network are larger when n_{el}=10 than when n_{el}=20.
Data analysis
NCI60 dataset
The NCI60 dataset comprises a panel of 60 diverse human cancer cell lines used by the Developmental Therapeutics Program (DTP) of the U.S. National Cancer Institute (NCI) to screen over 100,000 chemical compounds and natural products. It covers 9 cancer types: leukemia, melanoma, and cancers of ovarian, renal, breast, prostate, colon, lung, and CNS origin. Various omics datasets have been generated from the cell line panel, including gene expression datasets from various platforms, protein abundance datasets, and methylation datasets.
The goal of the analysis is to identify a subset of biomarker genes that contributes to explaining the common relationships among multiple datasets. We downloaded three datasets generated using the NCI60 cell lines from CellMiner [32]: two gene expression datasets and one protein abundance dataset. The two gene expression datasets were obtained with different technologies, one with Affymetrix HG-U133 chips [33] and the other with the Agilent Whole Human Genome Oligo Microarray [23]. The third dataset was a proteomics dataset generated using high-density reverse-phase lysate microarrays [29]. Since one melanoma cell line was not available in the Affymetrix data, we used the 59 cell lines common to all three datasets. To reduce the computational burden, we selected the top 5% of genes by variance, resulting in 491 genes in the Affymetrix data, 488 genes in the Agilent data, and 94 proteins in the proteomics data. Pathway graph information was obtained from the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway database [18].
Analysis results
Table 4 shows the number of nonzero elements in each estimated loading and the percentage of data variability explained by each method. Our sparse methods show comparable or better performance in terms of explained variability with a much smaller number of nonzero elements in the estimated loadings. The percentage of explained variability is calculated as the ratio of the pseudo-eigenvalues corresponding to the estimated loading vectors to the sum of the total eigenvalues of the datasets. We applied the estimated loading vectors to both the test dataset and the whole dataset to calculate the percentage of explained variability. Applied to the whole dataset, our sparse methods explain almost the same amount of variability as mCIA with far fewer selected genes. Applied to the test dataset, both sparse methods explain an amount of variability comparable to that explained by the first estimated loading vector of mCIA; moreover, the first two loading vectors of ssmCIA explain more variability than those of mCIA with much more sparsely estimated loadings.
Figure 1 shows four plots generated using the first two estimated loading vectors from each method. The plots in the first column are 3D figures in which each point represents one cell line sample; the coordinates of each point are scores calculated using the first estimated loading vectors of the three datasets. The plots in the second through fourth columns are generated using the first two estimated loading vectors in the variable space of each dataset.
Figure 1 shows that the sparse estimates from our proposed methods reveal biologically meaningful results consistent with previous studies [7, 24]. In the 3D plot, leukemia cells are well separated from the other cells, and we confirmed that smCIA and ssmCIA select certain genes related to leukemia. For example, SPARC has a high weight on both axes of the Affymetrix plot in the mCIA, smCIA, and ssmCIA analyses; a recent study showed that this gene promotes the growth of leukemic cells [1]. EPCAM is another example, having a high negative weight on the second axis of the mCIA and ssmCIA plots for the Affymetrix dataset; this gene is known to be frequently overexpressed in patients with acute myeloid leukemia (AML) [42]. The gene EBF1 has a high weight on the second axis of the ssmCIA plot for the Agilent data, which is supported by recent studies discussing the relationship between this gene and leukemia [30]. These observations imply that the second axis of the ssmCIA analysis may contribute to clustering the dataset into leukemia and non-leukemia cells. Comparing smCIA and ssmCIA, we notice that the ssmCIA results are more consistent with those of mCIA in terms of the number of common genes and the estimated coefficients of those genes. Among the top 30 genes in each dataset, smCIA selected 40 genes in common with mCIA, while ssmCIA selected 56. Moreover, ssmCIA estimates the coefficients of common genes in directions consistent with mCIA, whereas some genes from smCIA show opposite directions. These observations confirm that incorporating network information guides the model toward more biologically meaningful estimates.
In addition, we conducted a pathway enrichment analysis using the ToppGene Suite [5] to assess the set of features selected by our methods. Note that we compare the results using the first estimated loading vectors only. There are numerous gene ontology (GO) terms, pathways, and diseases in which the genes with nonzero values in the estimated loading vectors are enriched. For example, the GO term regulation of cell proliferation is highly enriched in our results (GO:0042127; Bonferroni-adjusted p-values are 5.77e−16 for smCIA and 7.52e−19 for ssmCIA). Leukemia-cell proliferation is a topic of interest to researchers [2, 28], and [11] recently reviewed the molecular mechanisms of cell proliferation in leukemia. The enrichment results also confirm that ssmCIA benefits from incorporating network information: compared to smCIA, the ssmCIA enrichment results often show much smaller Bonferroni-adjusted p-values (GO:0042127 above is one example), and ssmCIA yields more enriched terms overall, with 673 enriched GO terms, pathways, human phenotypes, and diseases versus 520 for smCIA. These results indicate that, by incorporating structural information, ssmCIA selects relevant features more sensitively, so that more biologically meaningful genes can be identified.
Discussion
For integrative analysis of K datasets, the number of tuning parameters is K for smCIA and 2K for ssmCIA. As such, the computational cost of the methods can become prohibitively expensive for integrative analysis of a large number of omics datasets when using the proposed cross-validation strategy for parameter tuning. One potential solution is to use the same tuning parameter value (or pair of values) for all K datasets. Tackling this limitation is of potential interest for future research.
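To illustrate why this cost grows quickly, the sketch below counts the number of candidate configurations in a full cross-validation grid versus the shared-parameter strategy. The grid sizes (10 candidate values per parameter) are hypothetical choices for illustration and not taken from the paper:

```python
def grid_size(n_values, n_params_per_dataset, K):
    """Number of tuning configurations when each of K datasets has its own
    n_params_per_dataset tuning parameters, each with n_values candidates."""
    return n_values ** (n_params_per_dataset * K)

K = 3          # e.g. three omics datasets, as in the NCI-60 analysis
n_values = 10  # candidate values per tuning parameter (assumed)

# Full grid: parameters tuned separately for every dataset.
full_smcia = grid_size(n_values, 1, K)    # smCIA: K parameters  -> 10**3
full_ssmcia = grid_size(n_values, 2, K)   # ssmCIA: 2K parameters -> 10**6

# Shared-parameter strategy: one value (or one pair) reused for all K datasets.
shared_smcia = grid_size(n_values, 1, 1)  # 10 configurations
shared_ssmcia = grid_size(n_values, 2, 1) # 100 configurations
```

The exponential dependence on K in the full grid is what makes the shared-parameter strategy attractive as the number of datasets grows.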
Conclusion
In this article, we propose the smCIA method, which imposes a sparsity penalty on the mCIA loading vectors, and the ssmCIA method, which employs a network-based penalty to incorporate biological information represented by a graph. Our numerical studies demonstrate that both methods are useful for integrative analysis of multiple high-dimensional datasets. In particular, they yield sparse estimates of the loading vectors while explaining a similar amount of the variance in the data compared to mCIA. In the real data analysis, ssmCIA, by incorporating biological information, is able to select important pathways contributing to the correspondence among the three datasets, and hence yields more interpretable results.
Availability of data and materials
Our algorithms are implemented in the free statistical software language R and are freely available at https://www.med.upenn.edu/longlab/software.html. The three omics datasets used for the real data analysis can be obtained from the CellMiner webpage https://discover.nci.nih.gov/cellminer/. Additional simulation results and the lists of top 30 genes from the NCI-60 data analysis can be found in the supplementary materials.
Abbreviations
 mCIA: Multiple co-inertia analysis
 smCIA: Sparse multiple co-inertia analysis
 ssmCIA: Structured sparse multiple co-inertia analysis
 NCI: National Cancer Institute
 CCA: Canonical correlation analysis
 Rifle: Truncated Rayleigh flow method
 GO: Gene Ontology
References
Alachkar H, Santhanam R, Maharry K, Metzeler KH, Huang X, Kohlschmidt J, Mendler JH, Benito JM, Hickey C, Neviani P. SPARC promotes leukemic cell growth and predicts acute myeloid leukemia outcome. J Clin Invest. 2014; 124(4):1512–24.
Burger JA, Li KW, Keating MJ, Sivina M, Amer AM, Garg N, Ferrajoli A, Huang X, Kantarjian H, Wierda WG, et al. Leukemia cell proliferation and death in chronic lymphocytic leukemia patients on therapy with the BTK inhibitor ibrutinib. JCI Insight. 2017; 2(2).
Byrnes AE, Wu MC, Wright FA, Li M, Li Y. The value of statistical or bioinformatics annotation for rare variant association with quantitative trait. Genet Epidemiol. 2013; 37(7):666–74.
Carroll JD. Generalization of canonical correlation analysis to three or more sets of variables. In: Proceedings of the 76th Annual Convention of the American Psychological Association, Vol. 3. Washington, DC: American Psychological Association; 1968. p. 227–8.
Chen J, Bardes EE, Aronow BJ, Jegga AG. ToppGene Suite for gene list enrichment analysis and candidate gene prioritization. Nucleic Acids Res. 2009; 37(suppl_2):305–11.
Chessel D, Hanafi M. Analyses de la co-inertie de K nuages de points. Revue de Statistique Appliquée. 1996; 44(2):35–60.
Culhane AC, Perrière G, Higgins DG. Cross-platform comparison and visualisation of gene expression data using co-inertia analysis. BMC Bioinformatics. 2003; 4(1):59.
Dolédec S, Chessel D. Co-inertia analysis: an alternative method for studying species–environment relationships. Freshw Biol. 1994; 31(3):277–94.
Dray S, Chessel D, Thioulouse J. Co-inertia analysis and the linking of ecological data tables. Ecology. 2003; 84(11):3078–89.
Gentle JE. Matrix Algebra. Springer; 2007.
Gowda C, Song C, Kapadia M, Payne JL, Hu T, Ding Y, Dovat S. Regulation of cellular proliferation in acute lymphoblastic leukemia by casein kinase II (CK2) and Ikaros. Adv Biol Regul. 2017; 63:71–80.
Hanafi M. PLS path modelling: computation of latent variables with the estimation mode B. Comput Stat. 2007; 22(2):275–92.
Hanafi M, Kiers HA. Analysis of K sets of data, with differential emphasis on agreement between and within sets. Comput Stat Data Anal. 2006; 51(3):1491–508.
He Z, Xu B, Lee S, Ionita-Laza I. Unified sequence-based association tests allowing for multiple functional annotations and meta-analysis of noncoding variation in Metabochip data. Am J Hum Genet. 2017; 101(3):340–52.
Horst P. Generalized canonical correlations and their applications to experimental data. Technical report. 1961a.
Horst P. Relations among m sets of measures. Psychometrika. 1961b; 26(2):129–49.
Hotelling H. Relations between two sets of variates. Biometrika. 1936; 28(3/4):321. https://doi.org/10.2307/2333955.
Kanehisa M, Furumichi M, Tanabe M, Sato Y, Morishima K. KEGG: new perspectives on genomes, pathways, diseases and drugs. Nucleic Acids Res. 2016; 45(D1):353–61.
Krämer N. Analysis of high-dimensional data with partial least squares and boosting. Berlin: Technische Universität Berlin; 2007.
Lê Cao KA, Martin PG, Robert-Granié C, Besse P. Sparse canonical methods for biological data integration: application to a cross-platform study. BMC Bioinformatics. 2009; 10(1):34.
Li C, Li H. Network-constrained regularization and variable selection for analysis of genomic data. Bioinformatics. 2008; 24(9):1175–82.
Li Y, Ngom A. Sparse representation approaches for the classification of high-dimensional biological data. BMC Syst Biol. 2013; 7(4):6.
Liu H, D'Andrade P, Fulmer-Smentek S, Lorenzi P, Kohn KW, Weinstein JN, Pommier Y, Reinhold WC. mRNA and microRNA expression profiles of the NCI-60 integrated with drug activities. Mol Cancer Ther. 2010; 9(5):1080–91.
Meng C, Kuster B, Culhane AC, Gholami AM. A multivariate approach to the integration of multi-omics datasets. BMC Bioinformatics. 2014; 15(1):162.
Meng C, Zeleznik OA, Thallinger GG, Kuster B, Gholami AM, Culhane AC. Dimension reduction techniques for the integrative analysis of multi-omics data. Brief Bioinformatics. 2016; 17(4):628–41. https://doi.org/10.1093/bib/bbv108.
Min EJ, Chang C, Long Q. Generalized Bayesian factor analysis for integrative clustering with applications to multi-omics data. IEEE; 2018a. https://doi.org/10.1109/dsaa.2018.00021.
Min EJ, Safo SE, Long Q. Penalized co-inertia analysis with applications to -omics data. Bioinformatics. 2018b.
Murphy EJ, Neuberg DS, Rassenti LZ, Hayes G, Redd R, Emson C, Li K, Brown JR, Wierda WG, Turner S, et al. Leukemia-cell proliferation and disease progression in patients with early stage chronic lymphocytic leukemia. Leukemia. 2017; 31(6):1348.
Nishizuka S, Charboneau L, Young L, Major S, Reinhold WC, Waltham M, Kouros-Mehr H, Bussey KJ, Lee JK, Espina V, et al. Proteomic profiling of the NCI-60 cancer cell lines using new high-density reverse-phase lysate microarrays. Proc Natl Acad Sci. 2003; 100(24):14229–34.
Oakes CC, Seifert M, Assenov Y, Gu L, Przekopowitz M, Ruppert AS, Wang Q, Imbusch CD, Serva A, Koser SD, et al. DNA methylation dynamics during B cell maturation underlie a continuum of disease phenotypes in chronic lymphocytic leukemia. Nat Genet. 2016; 48(3):253.
Peter IS, Davidson EH. Genomic Control Process: Development and Evolution. Philadelphia: Academic Press; 2015.
Reinhold WC, Sunshine M, Liu H, Varma S, Kohn KW, Morris J, Doroshow J, Pommier Y. CellMiner: a web-based suite of genomic and pharmacologic tools to explore transcript and drug patterns in the NCI-60 cell line set. Cancer Res. 2012; 72(14):3499–511.
Shankavaram UT, Reinhold WC, Nishizuka S, Major S, Morita D, Chary KK, Reimers MA, Scherf U, Kahn A, Dolginow D, et al. Transcript and protein expression profiles of the NCI-60 cancer cell panel: an integromic microarray study. Mol Cancer Ther. 2007; 6(3):820–32.
Steinke F, Seeger M, Tsuda K. Experimental design for efficient identification of gene regulatory networks using sparse Bayesian models. BMC Syst Biol. 2007; 1(1):51.
Tan KM, Wang Z, Liu H, Zhang T. Sparse generalized eigenvalue problem: optimal statistical rates via truncated Rayleigh flow. J R Stat Soc Ser B Stat Methodol. 2018; 80(5):1057–86.
Tenenhaus A, Tenenhaus M. Regularized generalized canonical correlation analysis for multiblock or multigroup data analysis. Eur J Oper Res. 2014; 238(2):391–403.
Tenenhaus M, Tenenhaus A, Groenen PJ. Regularized generalized canonical correlation analysis: a framework for sequential multiblock component methods. Psychometrika. 2017; 82(3):737–77.
Tucker LR. An inter-battery method of factor analysis. Psychometrika. 1958; 23(2):111–36.
Van de Geer JP. Linear relations among k sets of variables. Psychometrika. 1984; 49(1):79–94.
Waaijenborg S, Verselewel de Witt Hamer PC, Zwinderman AH. Quantifying the association between gene expressions and DNA markers by penalized canonical correlation analysis. Stat Appl Genet Mol Biol. 2008; 7(1).
Witten DM, Tibshirani RJ. Extensions of sparse canonical correlation analysis with applications to genomic data. Stat Appl Genet Mol Biol. 2009; 8(1):1–27.
Zheng X, Fan X, Fu B, Zheng M, Zhang A, Zhong K, Yan J, Sun R, Tian Z, Wei H. EPCAM inhibition sensitizes chemoresistant leukemia to immune surveillance. Cancer Res. 2017; 77(2):482–93.
Funding
This work is partly supported by NIH grants P30CA016520, R01GM124111, and RF1AG063481. The content is the responsibility of the authors and does not necessarily represent the views of NIH.
Author information
Authors and Affiliations
Contributions
QL and EJ formulated the ideas and revised the paper. EJ designed the experiments, performed the experiments, analyzed the data, and wrote the first draft. Both authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
Not applicable
Consent for publication
Not applicable
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
About this article
Cite this article
Min, E.J., Long, Q. Sparse multiple co-inertia analysis with application to integrative analysis of multi omics data. BMC Bioinformatics 21, 141 (2020). https://doi.org/10.1186/s12859-020-3455-4
Received:
Accepted:
Published:
DOI: https://doi.org/10.1186/s12859-020-3455-4