
Separating common from distinctive variation

Abstract

Background

Joint and individual variation explained (JIVE), distinct and common simultaneous component analysis (DISCO) and O2-PLS, a two-block (X-Y) latent variable regression method with an integral OSC filter, can all be used for the integrated analysis of multiple data sets. They decompose the data sets into three terms: a low(er)-rank approximation capturing common variation across data sets, low(er)-rank approximations for structured variation distinctive for each data set, and residual noise. In this paper these three methods are compared with respect to their mathematical properties and their respective ways of defining common and distinctive variation.

Results

The methods are all applied to simulated data, and to mRNA and miRNA data-sets from GlioBlastoma Multiform (GBM) brain tumors, to examine their overlap and differences. When the common variation is abundant, all methods are able to find the correct solution. With real data however, complexities in the data are treated differently by the three methods.

Conclusions

All three methods have their own approach to estimate common and distinctive variation, with their specific strengths and weaknesses. Due to their orthogonality properties and the algorithms they use, their views on the data differ slightly. By assuming orthogonality between common and distinctive variation, true natural or biological phenomena that may not be orthogonal at all might be misinterpreted.

Background

To understand and ultimately control any kind of process, be it biological, chemical or sociological, it is necessary to collect data that function as a proxy for these processes. Subsequent statistical analysis of these data should reveal the information relevant to that process. For hypothesis testing such an approach of theory and measuring can be relatively straightforward, especially if the analytical instruments are designed specifically for that purpose. In the absence of such hypotheses, and using generic but readily available analytical instruments, obvious data structures are rarely observed and extensive data analysis and interpretation are necessary (e.g. untargeted analysis [1], data-mining [2]). To make the data analysis even more complex, the number of observations (I) is usually much smaller than the number of variables (J) (e.g. transcriptomics data), which prevents the use of classical regression models. Data analysis and interpretation of the huge number of variables is possible when the variables can be summarized by fewer factors or latent variables [3]. For this purpose methods such as factor analysis (FA) [4] and principal component analysis (PCA) [4] were developed.

In functional genomics research it becomes more and more common that multiple platforms are used to explore the variation in samples for a given study. This leads to multiple sets of data with the same objects but different features. Data integration and/or data fusion methods can then be applied to improve the understanding of the differences between the samples. A new group of low level data fusion methods has recently been introduced that are able to separate the variation in all data-sets.

To investigate if the same latent processes underlie the different data-sets, component analysis can be very useful [5]. The construct of latent variables has properties that enable the integrated analysis of multiple data sets with a shared mode (e.g. the same objects or variables). With shared variation across multiple data-sets a higher degree of interpretation is achieved and co-relations between variables across the data-sets become (more) apparent. Methods such as generalised SVD (GSVD), latent variable multivariate regression (LVMR), simultaneous component analysis (SCA) and canonical correlation analysis (CCA) have been used successfully in earlier studies [6]–[9]. Most of these methods, or applications of these methods (i.e. CCA), focus on the common/shared variation across the data-sets only. The interpretation of data however is not only improved by focussing on what is common; likely as important are those parts that differ between the data-sets. These parts could include, for example, measurement errors or other process and/or platform specific variations that are distinctive for each data-set.

The concept of common and distinctive variation is visualized in Fig. 1a and b, in which two different situations of overlapping data-sets ($X_1 (I \times J_1)$ and $X_2 (I \times J_2)$) are shown. The two data-sets are linked via common objects (I) but have different variables ($J_1$ and $J_2$). The areas of the circles are proportional to the total amount of variation in each data-set. The overlapping parts are tagged as $C_1 (I \times J_1)$ and $C_2 (I \times J_2)$ and describe shared (column) spaces for both data-sets. The spaces are not the same but are related (e.g. $C_1 = C_2 W_{2 \to 1} + E_1$ and $C_2 = C_1 W_{1 \to 2} + E_2$, in which the W's are the respective weight matrices). Whether or not the residuals $E_1$ and $E_2$ are truly zero depends on the specific method. The distinctive parts $D_1 (I \times J_1)$ and $D_2 (I \times J_2)$ describe the variation specific for each data-set and the remainders are indicated by $E_1 (I \times J_1)$ and $E_2 (I \times J_2)$. In most methods the common parts are built up from the same latent components.

Fig. 1 Schematic overview of common and distinctive parts for two data-sets. a: two data-sets with equal total variance and b: two data-sets with different total variance

Figure 1a visualizes C 1 and C 2 as the intersection of the two data-sets. The common parts do not necessarily have to explain a similar amount of variation in each of the sets. The schematic in Fig. 1b demonstrates the situation in which the overlap of the two matrices is proportionally the same for data-set 2 (as in example A) but not for data-set 1.

Attempts have been made to capture both common and distinctive sources of variation across data-sets using GSVD [10], but it has been shown that GSVD does not yield an optimal approximation of the original data in a limited number of components [11]. Alternatives specifically designed for this purpose have been developed and complement the set of low level data fusion methods. In this paper we compare three implementations of such methods (JIVE [12], [13], DISCO-SCA [14], [15] and O2-PLS [16], [17]) with respect to their mathematical properties, interpretability, ease of use and overall performance using simulated and real data-sets. The different approaches to separate common from distinctive variation and the implications on (biological) interpretation are compared. For demonstration purposes we use mRNA and miRNA data from GlioBlastoma Multiform cells available at The Cancer Genome Atlas (TCGA) website [12], [18] as well as simulated data to identify the specific properties of the methods. We will only focus on the integrated analysis of two data-sets that are linked by their common objects. We assume that the data-sets are column-centered. A list of abbreviations and definitions is included in the Appendix.

Methods

From a general point of view Joint and Individual Variation Explained (JIVE), DIStinct and COmmon simultaneous component analysis (DISCO) and the two-block latent variable regression with an orthogonal filtering step (O2-PLS) all use a model in which the overlap of two (or more) data-sets is defined as common. The part that is not common is separated into a systematic part called distinctive, while the nonsystematic part is called residual. The sum of the common part, the distinctive part and the residual error adds up to the original data-set. The generic decomposition of the two data-sets ($X_1 (I \times J_1)$ and $X_2 (I \times J_2)$) into their respective common and distinctive parts for all three methods can be written as:

$$X_1 = C_1 + D_1 + E_1, \qquad X_2 = C_2 + D_2 + E_2 \tag{1}$$

in which $C_1 (I \times J_1)$ and $C_2 (I \times J_2)$ refer to the common parts, $D_1 (I \times J_1)$ and $D_2 (I \times J_2)$ to the distinctive parts, and $E_1 (I \times J_1)$ and $E_2 (I \times J_2)$ to the residual errors of both data-sets.

In their respective papers [10], [11], [14] the various authors use different terms that seem to have similar meanings, like distinctive, systemic and individual, or common and joint. For clarity, throughout this document we use common for combined or joint variation across data sets and distinctive for variation specific to each data set. Because the decomposition itself is different for each method, the interpretation of what is common and what is distinctive should however be placed in the context of the method that is used. We will address the aspects of the different methods in terms of approximations of real data, orthogonalities and explained variance, and we will discuss the complexity of proper model selection.

Algorithms

To compare the three different algorithms it is useful to first briefly reiterate the key steps of each method. For the specific implementation the reader is referred to the original papers, but for convenience the algorithms are included in the Appendix. The Matlab [19] source code is available for download. Throughout this document the objects ($i = 1..I$) are the rows of the matrices ($I \times J$) and the variables correspond to the columns ($j = 1..J$). A full list of used symbols and the dimensions of the different matrices can be found in the Appendix.

DISCO

After concatenation of the two matrices, $X(I \times J) = [X_1(I \times J_1) | X_2(I \times J_2)]$ with $J = J_1 + J_2$, DISCO starts with an SCA routine on the concatenated matrix X. This is followed by an orthogonal rotation of the SCA scores and loadings towards an optimal user-defined target loading matrix P* (i.e. a matrix in which each component is either distinctive for a specific data-set or common to all data-sets). As an example, for two data-sets $X_1 (I \times 2)$ and $X_2 (I \times 3)$, with one common component ($c_c = 1$) and one distinctive component for each data-set ($c_1 = c_2 = 1$), the total number of components $c_t$ for the whole model is 3.

$$X = [X_1 | X_2]$$
$$X = U_{c_t} S_{c_t} V_{c_t}^t$$
$$T_{sca} = U_{c_t}$$
$$P_{sca} = V_{c_t} S_{c_t}$$
$$\hat{X} = T_{sca} P_{sca}^t$$

And P* is:

$$P^* = \begin{bmatrix} 1 & 0 & 1 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 1 & 1 \end{bmatrix}$$

In P*, the zeros are a hard constraint while the ones are not restricted and can take any value. The first two rows relate to the (two) variables of the first data-set, the last three rows relate to the variables of the second data-set. The first column relates to the distinctive component for data-set 1, the second column is reserved for the distinctive component for data-set 2, and the third column is the loading for the common component in both data-sets. Through orthogonal rotation the best rotation matrix ($B_{opt}$ ($c_t \times c_t$)), which rotates the $P_{sca}$ loadings towards the target loadings P*, is found by minimizing the squared sum of the entries of the rotated loadings $P_r$ at the positions where P* is 0. To do just that, a weight matrix (W = 1 − P*) is used, in which all the 1 entries are set to 0 and the 0 entries to 1:

$$B_{opt} = \arg\min_{B} \left\| W \odot (P_{sca} B) \right\|^2 \quad \text{s.t.} \quad B^t B = I$$

where $\odot$ denotes the element-wise product.

$B_{opt}$ is used to calculate the final rotated scores and loadings ($T_r = T_{sca} B_{opt}$ and $P_r = P_{sca} B_{opt}$). Consequently the smallest-distance criterion is based only on the 0 entries (in P*) and thus on the distinctive components only. A perfect separation of the distinctive components is often not achieved; the positions where P* is 0 are not exactly 0 in $P_r$. Furthermore, the common variation is forced to be orthogonal to these distinctive parts, which could clearly lead to sub-optimal estimates of this common variation. The effects of the orthogonality constraints are discussed later. The final decomposition of the DISCO algorithm is:

$$X_1 = C_1 + D_1 + E_1 = T_c P_{c_1}^t + T_{d_1} P_{d_1}^t + E_1$$
$$X_2 = C_2 + D_2 + E_2 = T_c P_{c_2}^t + T_{d_2} P_{d_2}^t + E_2 \tag{2}$$

The common scores ($T_c$) are the same for both data-sets and are obtained by optimizing on the distinctive components.
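A minimal NumPy sketch of the DISCO steps described above (SCA on the concatenated matrix followed by an orthogonal rotation towards a binary target P*). The function and variable names, the random initialisation and the stopping tolerance are illustrative assumptions; this is not the authors' Matlab implementation.

```python
import numpy as np

def disco(X1, X2, cc, c1, c2, n_iter=500, tol=1e-12):
    """Sketch of DISCO: SCA on [X1|X2], then rotate towards a 0/1 target P*."""
    J1, J2 = X1.shape[1], X2.shape[1]
    ct = cc + c1 + c2
    X = np.hstack([X1, X2])

    # SCA step: truncated SVD of the concatenated matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    T_sca = U[:, :ct]
    P_sca = Vt[:ct].T * s[:ct]

    # Target P*: columns ordered as [distinctive block 1 | distinctive block 2 | common]
    P_star = np.zeros((J1 + J2, ct))
    P_star[:J1, :c1] = 1
    P_star[J1:, c1:c1 + c2] = 1
    P_star[:, c1 + c2:] = 1
    W = 1 - P_star                       # weight matrix: penalise entries that must be 0

    # Orthogonal rotation towards the target
    B = np.linalg.qr(np.random.randn(ct, ct))[0]
    h_prev = np.inf
    for _ in range(n_iter):
        Y = P_sca @ B + W * (P_star - P_sca @ B)
        Ur, _, Vrt = np.linalg.svd(Y.T @ P_sca)
        B = Vrt.T @ Ur.T
        h = np.sum((W * (P_sca @ B - P_star)) ** 2)
        if h_prev - h < tol:
            break
        h_prev = h

    T_rot, P_rot = T_sca @ B, P_sca @ B
    # Split the rotated scores/loadings into distinctive and common parts
    Td1, Td2, Tc = T_rot[:, :c1], T_rot[:, c1:c1 + c2], T_rot[:, c1 + c2:]
    Pd1, Pd2 = P_rot[:J1, :c1], P_rot[J1:, c1:c1 + c2]
    Pc1, Pc2 = P_rot[:J1, c1 + c2:], P_rot[J1:, c1 + c2:]
    return Tc @ Pc1.T, Tc @ Pc2.T, Td1 @ Pd1.T, Td2 @ Pd2.T   # C1, C2, D1, D2
```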

JIVE

The JIVE algorithm is also based on an SCA of the concatenated data-sets (X). The common parts for both data-sets ($C_k$) are estimated simultaneously, $C = [C_1 | C_2] = T_{sca} P_{sca}^t$ ($I \times J$), but now with only the number of common components ($c_c$) and not all the components ($c_t$) as in DISCO. The distinctive parts ($D_1$ and $D_2$) are estimated separately and iteratively, based on an orthogonalized residual matrix ($R_k - T_{sca} T_{sca}^t R_k$) with $c_k$ distinctive components. Using the same example as before:

$$X = U_{c_c} S_{c_c} V_{c_c}^t$$
$$T_{sca} = U_{c_c}$$
$$P_{sca} = V_{c_c} S_{c_c}$$
$$C_k = T_{sca} P_{sca_k}^t$$
$$R_k = X_k - C_k$$
$$R_k - T_{sca} T_{sca}^t R_k = U_{d_k(c_k)} S_{d_k(c_k)} V_{d_k(c_k)}^t$$
$$D_k = U_{d_k(c_k)} S_{d_k(c_k)} V_{d_k(c_k)}^t$$
$$X = X - [D_1 | D_2]$$

The steps are repeated until convergence of the combined common and distinctive matrices (C + D). By using the iterative and alternating optimization of the common and distinctive parts, the orthogonality between the two distinctive parts that does exist in DISCO is no longer enforced. The resulting fit should be able to accommodate more types of data (the data have to conform to fewer criteria) than DISCO. Similar to DISCO, the common parts are estimated from an SCA on both data-sets simultaneously and, like DISCO, there is no guarantee that both blocks take part in the common loadings $P_{sca}$. As a consequence, the optimal solution could for example be one where $P_{sca}$ (= $[P_1 | P_2]$) only has values for $P_1$ and not $P_2$, which can hardly be considered common.

The resulting decomposition (Eq. 3) in scores and loadings is exactly the same as for DISCO:

$$X_1 = C_1 + D_1 + E_1 = T_c P_{c_1}^t + T_{d_1} P_{d_1}^t + E_1$$
$$X_2 = C_2 + D_2 + E_2 = T_c P_{c_2}^t + T_{d_2} P_{d_2}^t + E_2 \tag{3}$$

The common scores ($T_c$) for both data-sets are the same. Because SCA is a least-squares method and the common parts are determined first, variables with much variance are likely to end up in the common parts. Because JIVE is an iterative solution, the initial guesses for the common and distinctive parts can change considerably during these iterations (see Additional file 1). If however the distinctive variation is larger than the (combined) common variation, these iterations will not prevent the method from misidentifying the common components.
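A corresponding sketch of the JIVE iterations described above: the common part from a rank-$c_c$ SCA of the (distinct-deflated) concatenation, the distinctive parts from the per-block residuals made orthogonal to the common scores, alternated until C + D converges. The helper name, the convergence criterion and the deflation bookkeeping are assumptions for illustration only.

```python
import numpy as np

def svd_lowrank(M, r):
    """Rank-r truncated SVD of M; returns the scores U_r and the rank-r approximation."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :r], (U[:, :r] * s[:r]) @ Vt[:r]

def jive(X1, X2, cc, c1, c2, n_iter=200, tol=1e-10):
    """Sketch of JIVE: alternate common (SCA) and distinctive estimates until C+D converges."""
    J1 = X1.shape[1]
    ranks = (c1, c2)
    D = [np.zeros_like(X1), np.zeros_like(X2)]
    prev_fit = np.zeros((X1.shape[0], J1 + X2.shape[1]))
    for _ in range(n_iter):
        # Common part: rank-cc SCA of the concatenated matrix, deflated for the distinctive parts
        Xw = np.hstack([X1 - D[0], X2 - D[1]])
        T_sca, C = svd_lowrank(Xw, cc)
        C_blocks = [C[:, :J1], C[:, J1:]]
        # Distinctive parts: low-rank fit of each residual, made orthogonal to the common scores
        for k, Xk in enumerate((X1, X2)):
            Rk = Xk - C_blocks[k]
            Rk_orth = Rk - T_sca @ (T_sca.T @ Rk)
            _, D[k] = svd_lowrank(Rk_orth, ranks[k])
        fit = C + np.hstack(D)
        if np.linalg.norm(fit - prev_fit) < tol:      # convergence of C + D
            break
        prev_fit = fit
    return C_blocks[0], C_blocks[1], D[0], D[1]
```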

O2-PLS

In contrast to DISCO and JIVE, which use an SCA on the concatenated data-sets, O2-PLS starts with an SVD on the covariance matrix ($X_1^t X_2$ ($J_1 \times J_2$)) for an analysis of the common variation. Similar to JIVE, the common components are estimated first; the distinctive components are then estimated per data-set from the remainder orthogonal to $P_{c_k}$ (via $R_k^t T_{c_k}$), one component at a time. When all distinctive components have been removed from the data, the common scores are updated. Using the same matrices $X_1$ and $X_2$:

$$X_2^t X_1 = P_{c_1(c_c)} D_{c_c} P_{c_2(c_c)}^t$$

Deflate $X_k$ per component:

$$T_{c_k} = X_k P_{c_k}$$
$$R_k = X_k - T_{c_k} P_{c_k}^t$$
$$R_k^t T_{c_k} = u_{d_k,1}\, s_{d_k,1}\, v_{d_k,1}^t$$
$$t_{d_k,l} = X_k u_{d_k,1}$$
$$p_{d_k,l} = (t_{d_k,l}^t t_{d_k,l})^{-1} X_k^t t_{d_k,l}$$
$$X_k = X_k - t_{d_k,l}\, p_{d_k,l}^t$$

The choice of a covariance matrix seems appropriate since we are interested in co-varying variables across the data-sets. In the case of orthogonal blocks where no common variation exists, the covariance matrix would be 0 and no common variation can be estimated. Similar to JIVE, the distinctive parts are calculated orthogonal to the common part for every data-set individually. Because the common parts are estimated from the individual blocks (not the concatenation), the algorithm itself is less restrictive than JIVE. With different common scores per data-set, the decomposition of Eq. 1 in scores and loadings is almost the same as in Eqs. 2 and 3:

$$X_1 = C_1 + D_1 + E_1 = T_{c_1} P_{c_1}^t + T_{d_1} P_{d_1}^t + E_1$$
$$X_2 = C_2 + D_2 + E_2 = T_{c_2} P_{c_2}^t + T_{d_2} P_{d_2}^t + E_2 \tag{4}$$

As a post-processing step the common scores can be combined and, by means of a regression model [20] (for example an SCA of the combined common parts), global common scores can be calculated (i.e. a $T_c$ that is invariant across blocks), so that Eq. 4 becomes exactly Eqs. 2 and 3 [21]. This would however also require recalculation of $P_{c_1}$ and $P_{c_2}$.
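A sketch of the O2-PLS steps listed above and in the Appendix pseudo-code: common loadings from an SVD of the cross-product matrix, per-block deflation of the distinctive components, and a final re-estimation of the block-wise common scores. Names, the return structure and the absence of the optional post-processing step are assumptions for illustration.

```python
import numpy as np

def o2pls(X1, X2, cc, c1, c2):
    """Sketch of O2-PLS for two blocks with cc common and (c1, c2) distinctive components."""
    # Common loadings from an SVD of the cross-product (covariance) matrix
    U, _, Vt = np.linalg.svd(X2.T @ X1, full_matrices=False)
    Pc = [Vt[:cc].T, U[:, :cc]]          # Pc[0] for X1 (J1 x cc), Pc[1] for X2 (J2 x cc)

    Xd = [X1.copy(), X2.copy()]
    Td, Pd = [[], []], [[], []]
    for k, n_dist in enumerate((c1, c2)):
        for _ in range(n_dist):
            Tc = Xd[k] @ Pc[k]                           # current common scores of this block
            Rk = Xd[k] - Tc @ Pc[k].T                    # remainder orthogonal to Pc
            u1 = np.linalg.svd(Rk.T @ Tc, full_matrices=False)[0][:, 0]
            t = Xd[k] @ u1                               # one distinctive score vector
            p = Xd[k].T @ t / (t @ t)                    # its loading
            Xd[k] = Xd[k] - np.outer(t, p)               # deflate this distinctive component
            Td[k].append(t); Pd[k].append(p)

    Tc = [Xd[0] @ Pc[0], Xd[1] @ Pc[1]]                  # re-estimated (block-wise) common scores
    C = [Tc[0] @ Pc[0].T, Tc[1] @ Pc[1].T]
    D = [np.column_stack(Td[k]) @ np.column_stack(Pd[k]).T if Td[k] else np.zeros_like(Xd[k])
         for k in range(2)]
    return C, D
```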

Orthogonalities

In terms of the scores and loadings created by the algorithms, the three methods are quite similar. The methods however differ in the constraints that are applied during the decompositions, which leads to different orthogonality properties and consequently to a different degree of independence between the common and distinctive parts.

The similarity between DISCO and JIVE is a consequence of the use of SCA in both methods. Because the final step in DISCO involves an orthogonal rotation of scores and loadings, the orthogonality between all the rotated scores and loadings remains. This rotation also forces orthogonality between the separate terms: $C_1^t D_1 = 0$, $C_1^t D_2 = 0$, $D_1^t D_2 = 0$, $C_2^t D_1 = 0$ and $C_2^t D_2 = 0$. The error terms ($E_1$ and $E_2$) are orthogonal to each respective common part and distinctive part only. Orthogonality between the distinctive and common part per data-set in JIVE is enforced by estimating the distinctive components orthogonally to the common scores ($(I - T_{sca} T_{sca}^t) R_k = U_{d_k(c_k)} S_{d_k(c_k)} V_{d_k(c_k)}^t$). There is no restriction forcing orthogonality between the distinctive parts of the different data-sets. Because the distinctive parts are calculated as the final step, the error matrix ($E_k$) is orthogonal to the distinctive part but not to the common part.

The decomposition in scores and loadings using the O2-PLS algorithm (Eq. 4) is similar to those obtained with JIVE or DISCO (Eqs. 2 and 3). The significant difference in terms of orthogonality follows from the fact that the common parts (i.e. $C_1$ and $C_2$) are allowed to have different loadings and scores. The common scores for each block ($T_{c_1}$ and $T_{c_2}$) are themselves expected to be highly correlated because the SVD was applied to the covariance matrix of the two matrices. The distinctive parts are estimated under the restriction that they are orthogonal to the common part per data-set. As a consequence the common parts per data-set share no variance with the distinctive parts. The distinctive parts themselves are not orthogonal to the common parts of the other data-set, although the correlations are very small. Similar to JIVE, the residuals ($E_1$ and $E_2$) in O2-PLS are orthogonal only to the distinctive parts, which are calculated as a final step.

A summary of the different orthogonality constraints for the three algorithms can be found in Table 1. It is clear that DISCO is the most strict and O2-PLS the most lenient regarding orthogonality properties. The different constraints that each algorithm imposes will affect the decomposition into different scores and loadings. What is designated as common and what as distinctive per method depends on these constraints. In DISCO the common part is defined as what is orthogonal to the distinctive parts, while in JIVE this is the reverse, i.e. what is distinctive is what is orthogonal to what is common. From a semantic point of view this seems equivalent, but mathematically it can generate very different results. These constraints will therefore be of importance when interpreting the data and consequently also for the application of the method. Orthogonality properties make it easier to come to a clear definition of these terms. Furthermore, orthogonality properties make the estimation of the separate parts easier.

Table 1 Summary of the orthogonality constraints for the three algorithms

The orthogonality constraints between almost all parts in DISCO enforce that all underlying sources of variation can be split up into orthogonal parts, even the distinctive parts. From a mathematical viewpoint this is a perfect separation, but in biological phenomena such behavior will be rare. The solution therefore might be easier to find, but it makes the interpretation more difficult. In JIVE the orthogonality constraint between the distinctive parts is removed, and JIVE is consequently expected to be better suited for biological data. With the single restriction that the distinctive parts are orthogonal to the common part, O2-PLS is expected to suit most data-sets. The flexibility of O2-PLS is advantageous for fitting the best common and distinctive parts, but might come at the expense of more loosely coupled common parts. Furthermore, the distinctive parts in O2-PLS are taken to be orthogonal to the counter common parts (e.g. $C_k^t D_l = 0$) and therefore do not optimally describe the total variation in the residual block ($R_k$), which limits the interpretation of these distinctive parts. The fact that we did not fully observe $C_k^t D_l = 0$ but still find some small residuals originates from the updated scores ($T_{c_k} = X_k P_{c_k}$) after deflation in the algorithm.
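These constraints can be verified numerically on any fitted decomposition. The helper below reports the size of each cross-product relative to the norms of the parts involved, so the differences summarized in Table 1 can be checked on fitted C, D and E matrices. It is an illustrative utility, not part of any of the published algorithms.

```python
import numpy as np

def rel_crossproduct(A, B):
    """Size of A'B relative to ||A||·||B||; values near zero indicate orthogonal column spaces."""
    return np.linalg.norm(A.T @ B) / (np.linalg.norm(A) * np.linalg.norm(B))

def report_orthogonalities(C1, C2, D1, D2, E1, E2):
    pairs = {
        "C1'D1": (C1, D1), "C2'D2": (C2, D2),   # common vs own distinctive
        "C1'D2": (C1, D2), "C2'D1": (C2, D1),   # common vs other block's distinctive
        "D1'D2": (D1, D2),                      # distinctive vs distinctive
        "C1'E1": (C1, E1), "D1'E1": (D1, E1),   # residuals, block 1
        "C2'E2": (C2, E2), "D2'E2": (D2, E2),   # residuals, block 2
    }
    for name, (A, B) in pairs.items():
        print(f"{name}: {rel_crossproduct(A, B):.2e}")
```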

Explained variances

The orthogonalities discussed above imply, because of the centering, an uncorrelated structure between the distinctive and common parts. A closer look at the algorithms reveals an additional layer of complexity. This is especially true for DISCO and JIVE, where the SVD is taken of the concatenated matrix X. The simultaneous decomposition in DISCO:

$$X = [X_1 | X_2]$$
$$\hat{X} = T P^t = T B B^t P^t, \qquad T_{rot} = TB, \quad P_{rot} = PB$$
$$\hat{X} = T_{rot} P_{rot}^t$$
$$[X_1 | X_2] = T_{rot} P_{rot}^t + [E_1 | E_2]$$
$$[X_1 | X_2] = [C_1 | C_2] + [D_1 | D_2] + [E_1 | E_2] = C + D + E$$

decomposes the concatenated data-sets together into orthogonal combined parts. The explained variances of the separate parts of the combined model add up:

$$\|X\|^2 = \|C\|^2 + \|D\|^2 + \|E\|^2 = \|C + D + E\|^2 \tag{5}$$

$\|E\|^2$ is minimal for a given total number of components ($c_t$). The best $P_{rot}$ however is an approximation of P*, and because of the orthogonality constraints situations can occur where the rotation is not perfect. In such cases the elements set to zero in the original target matrix are different from zero in $P_{rot}$. The exact decomposition of $X_k$ is then:

$$X_k = T_c P_{c_k}^t + T_{d_k} P_{d_k}^t + T_{d_l} P_{d_{l,k}}^t + E_k \qquad (l \neq k) \tag{6}$$

The cross-over part ($T_{d_l} P_{d_{l,k}}^t$, where $T_{d_l}$ are the distinctive scores of the other data-set and $P_{d_{l,k}}$ the corresponding loadings on the variables of data-set k), i.e. the variation in $X_k$ that is explained by the distinctive components of the other data-set, is minimized during the DISCO iterations. It indicates how much the data-sets influence each other's individual loadings and thus affects direct interpretation. The size of the cross-over part depends on the data and on the number of distinctive components reserved for the other data-set. The model selection procedure is based on minimization of this cross-over content.

Contrary to DISCO, not all parts in both JIVE and O2-PLS are orthogonal (see Table 1). Equation 5 does not hold and should be reduced, per data-set, to:

$$\|C_k\|^2 + \|D_k\|^2 = \|C_k + D_k\|^2 \tag{7}$$

The residual $E_k$ is not orthogonal to the common part $C_k$, which indicates that the final solution found for $E_k$ could still hold some information from $C_k$. To find the correct value for $\|E_k\|^2$, a type III partial explained sum of squares for the residuals should be applied, by projecting $E_k$ on $C_k$ and considering only the part of the residual orthogonal to $C_k$ [22].
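A short sketch of how this correction can be computed: the residual is projected onto the column space of the common part with a pseudoinverse, and only the orthogonal remainder is counted. The function names are illustrative, and whether the reported percentages add up exactly depends on the orthogonality properties of the method used.

```python
import numpy as np

def type3_residual_ss(Ck, Ek):
    """Sum of squares of the part of Ek orthogonal to the column space of Ck (type III style)."""
    Ek_on_Ck = Ck @ np.linalg.pinv(Ck) @ Ek     # projection of Ek onto the column space of Ck
    return np.sum((Ek - Ek_on_Ck) ** 2)

def explained_variances(Xk, Ck, Dk, Ek):
    """Percentage of ||Xk||^2 attributed to the common, distinctive and (corrected) residual parts."""
    total = np.sum(Xk ** 2)
    return {
        "common %":      100 * np.sum(Ck ** 2) / total,
        "distinctive %": 100 * np.sum(Dk ** 2) / total,
        "residual %":    100 * type3_residual_ss(Ck, Ek) / total,
    }
```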

Interpretation

Even though the fusion methods separate common from distinctive variation, the interpretation of the results can be hampered or sometimes even prohibited when the data-sets themselves do not conform to the appropriate criteria. The most apparent criterion is the link between the samples across the different data-sets. If the different data-sets contain technical replicates, for example, the fusion can only be performed on the averages of the technical replicates, as the technical replicates of different data-sets are not directly related. Secondly, in order to give all data-sets an equal chance of being represented in the model, large blocks should not be favoured just because of their size. Therefore, after variable scaling, a block scaling is usually applied such that the sum of squares of all blocks is equal. This block scaling however lowers the influence of the individual variables if the data-set consists of many variables and could thus cause under-estimation.
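A small sketch of this preprocessing: column-centre (and optionally autoscale) the variables, then divide each block by its Frobenius norm so that every block contributes the same total sum of squares. Scaling by the Frobenius norm is one common way to implement the equal-sum-of-squares idea described here, not necessarily the exact choice made in every cited paper.

```python
import numpy as np

def preprocess_block(X, scale_variables=False):
    """Column-centre a block, optionally autoscale variables, then scale the block to unit SS."""
    X = X - X.mean(axis=0)                  # column centring
    if scale_variables:
        X = X / X.std(axis=0, ddof=1)       # variable (auto)scaling; assumes no constant columns
    return X / np.linalg.norm(X)            # block scaling: total sum of squares becomes 1
```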

Common variation can be thought of as variation that is related between data-sets. Because there is no mandatory contribution of both data-sets to the common parts when using JIVE or DISCO, the results should always be validated for variation that is truly shared between the data-sets. Second, for blocks where I is larger than $J_k$, the rank of data-set $X_k$ is bounded by the number of variables. The common scores $T_c$ selected from the concatenated matrix X define a direction in $\mathbb{R}^I$ that may lie outside the $J_k$-dimensional subspace of $\mathbb{R}^I$ spanned by the columns of $X_k$. $C_k$, which is built from $T_c$, may therefore also lie outside the $J_k$-dimensional subspace defined by $X_k$. Thus there will be variation in $C_k$ which is not in $X_k$. When the scores $T_{d_k}$ for the distinctive part $D_k$ are calculated, they are forced to be orthogonal to $T_c$, but not forced to be in the column space of $X_k$. This means that the distinctive part $D_k$ may also not be in the column space of $X_k$. Because of this, the interpretation of the loadings of $C_k$ and $D_k$ can go wrong, as they may represent variation that is not in $X_k$.

To check whether the distinctive and common parts are still in the column space of the original matrix of the separate data-sets, the projections of C k and D k on X k can be determined via:

$$\hat{C}_k = X_k X_k^{+} C_k \tag{8}$$

in which $X_k^{+}$ denotes the Moore-Penrose pseudoinverse of $X_k$.

The residual (i.e. $\|C_k - \hat{C}_k\|^2$ or $\|D_k - \hat{D}_k\|^2$) is zero for a perfect projection and different from zero if $C_k$ or $D_k$ is not within the column space of $X_k$.
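A minimal sketch of this check using NumPy's pseudoinverse; any residual norm clearly above numerical precision signals that part of $C_k$ (or $D_k$) lies outside the column space of $X_k$. The function name and the usage note are illustrative.

```python
import numpy as np

def outside_columnspace(Xk, Mk):
    """Norm of the part of Mk (Ck or Dk) that is not in the column space of Xk (cf. Eq. 8)."""
    Mk_hat = Xk @ np.linalg.pinv(Xk) @ Mk     # projection of each column of Mk onto col(Xk)
    return np.linalg.norm(Mk - Mk_hat)

# usage sketch: outside_columnspace(X1, C1) is expected to be ~0 for O2-PLS (its common part
# is built from X1 itself), while for JIVE/DISCO it can be clearly non-zero when J_k < I,
# indicating loadings that describe variation not present in X_k.
```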

The common and distinctive parts of O2-PLS are based on an SVD of the covariance matrix of $X_1$ and $X_2$ ($[P_{c_1}, D, P_{c_2}] = \mathrm{svd}(X_2^t X_1, c_c)$). The SVD decomposes the covariance matrix into orthogonal contributions. $P_{c_1}$ is expressed in terms of the variables of $X_1$ and $P_{c_2}$ in terms of the variables of $X_2$. The subsequent steps in the algorithm only affect the individual blocks. Consequently, no variation from one data-set is introduced into the other, and projection issues like in JIVE and DISCO do not occur. If the post-processing step is performed to calculate global common scores, variation from the other data-set is introduced and in that case the projection errors do need to be evaluated.

The issue that the common scores of multiple data-sets may not be in the column space of each data-set separately, and the problems this brings, was discussed earlier for multiblock PLS models [23], [24]. In the latter paper the common score was called the super score. It was shown that deflation of information from the separate blocks using the super score introduces variation that was never present in the block: when information which is not present in a data-set is subtracted from that data-set, it is actually (negatively) introduced.

Model selection

Both the orthogonalities and the explained variances touch the heart of exactly what is common and what is distinctive, and the three methods all differ in this respect. All three methods however can only decompose the data-sets if the optimal numbers of common and distinctive components for the final model are known. It is important that the selected model is appropriate for the data-sets that are analysed, and each method has its own strategy for selecting the appropriate model.

Model selection in DISCO is a two-step process. In the first step the total number of components ($c_t$) is selected based on the proportion of variance accounted for by the simultaneous components for each individual data block. The second step finds the "best" performing model from all possible combinations of common ($c_c$) and distinctive ($c_{d_k}$) components by minimizing the cross-over parts of each data-set.

In JIVE the configuration of the model is based on the analysis of permuted versions of the original matrix. For the common components, complete rows of each data-set are permuted. This removes the link between the objects of the different data-sets, but does not remove the correlation structure inside each block. The eigenvalues of a large number of permuted matrices are determined. The number of common components is defined as the number for which the eigenvalues of the original matrix (X) are (still) larger than the permuted ones (at a certain α). For the distinct components per data-set $X_k$, the rows of each variable in that data-set are permuted to disturb the variable-object relationship. Again the eigenvalues of the original data-set are compared to the eigenvalues of the permuted data-sets to find the optimal number of distinct components for each $X_k$. These settings are used as input for a new round of estimation of the number of components. This process is repeated until the numbers of common and distinctive components converge.
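A condensed sketch of one round of this permutation test, for the common rank only: rows of each block are permuted independently to break the object link, and the number of common components is the count of leading singular values of the real concatenated matrix that exceed the permutation-based threshold. The α level, the number of permutations and the use of singular values rather than eigenvalues are illustrative choices, not the exact JIVE implementation.

```python
import numpy as np

def n_common_by_permutation(X1, X2, n_perm=100, alpha=0.05, seed=None):
    """Estimate the number of common components by permuting whole rows of each block."""
    rng = np.random.default_rng(seed)
    sv_real = np.linalg.svd(np.hstack([X1, X2]), compute_uv=False)
    sv_perm = np.empty((n_perm, sv_real.size))
    for b in range(n_perm):
        # Permute rows per block independently: breaks the link between objects,
        # but keeps the correlation structure within each block.
        Xp = np.hstack([X1[rng.permutation(X1.shape[0])],
                        X2[rng.permutation(X2.shape[0])]])
        sv_perm[b] = np.linalg.svd(Xp, compute_uv=False)
    threshold = np.quantile(sv_perm, 1 - alpha, axis=0)
    above = sv_real > threshold
    # Number of leading singular values still larger than the permutation threshold
    return int(np.argmin(above)) if not np.all(above) else sv_real.size
```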

The papers describing O2-PLS [16], [17] are not explicit about which model selection procedure to follow. We have adopted the strategy of first selecting the number of common components based on the covariance matrix, followed by an estimation of the number of individual components per data-set using PCA cross-validation after the common parts have been removed from the data-sets using an OPLS approach.

Experimental

To test the three methods in different conditions we use simulated data. We will keep the model itself small with only 1 common component and 1 (or 2) individual component(s) per data-set. To generate the data we use the score and loading structure from Eqs. 2 and 3.

$$X_1 = T_c P_{c_1}^t + T_{d_1} P_{d_1}^t + E_1, \qquad X_2 = T_c P_{c_2}^t + T_{d_2} P_{d_2}^t + E_2$$

The scores $T_c$, $T_{d_1}$ and $T_{d_2}$ are drawn from a standard normal distribution in such a way that they are orthogonal to each other. Each score vector is then scaled to length 1. The error terms $E_1$ and $E_2$ are based on pseudo-random numbers drawn from a standard normal distribution. The data-sets have 70 observations each (I = 70); $X_1$ contains 100 variables ($J_1$ = 100) and $X_2$ 50 variables ($J_2$ = 50). Each data-set is column-centered and each block is scaled to unit variance. In our examples we have chosen a set of spectral loadings for illustrative purposes. In functional genomics data-sets, e.g. transcriptomics or metabolomics data, a similar situation can be envisioned when features within functional groups are expected to be highly correlated. The latent components then describe structured variation of the functional groups over the objects.
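A sketch of this simulation setup (orthonormal score vectors, chosen loadings, normal noise). The loading vectors and the noise level are placeholders: the smooth spectral profiles of Fig. 2 and the exact variance contributions of the two scenarios are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
I, J1, J2 = 70, 100, 50

# Orthonormal scores for the common and the two distinctive components
T, _ = np.linalg.qr(rng.standard_normal((I, 3)))
t_c, t_d1, t_d2 = T[:, 0], T[:, 1], T[:, 2]

# Placeholder loading vectors (the paper uses smooth spectral-like profiles)
p_c1, p_d1 = rng.standard_normal(J1), rng.standard_normal(J1)
p_c2, p_d2 = rng.standard_normal(J2), rng.standard_normal(J2)

# Build the blocks according to Eqs. 2/3 and add normal noise (0.1 is an arbitrary level)
X1 = np.outer(t_c, p_c1) + np.outer(t_d1, p_d1) + 0.1 * rng.standard_normal((I, J1))
X2 = np.outer(t_c, p_c2) + np.outer(t_d2, p_d2) + 0.1 * rng.standard_normal((I, J2))

# Column-centre each block and scale each block so its elements have unit variance overall
for X in (X1, X2):
    X -= X.mean(axis=0)
    X /= np.sqrt(np.sum(X ** 2) / X.size)
```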

The three methods will be evaluated using the model settings that were suggested by the original model estimation procedure of each method respectively and if different from the actual model, with the real model settings as well. Two different scenarios are evaluated in which two different situations are simulated for the two data-sets:

  1. Scenario 1: abundant variation in common loadings, almost orthogonal loadings

  2. Scenario 2: low abundant variation in common loadings, almost orthogonal loadings

Figure 2 shows the loadings that are used to generate the data of the two blocks for both scenarios. The contributions of the distinctive and common parts for the different scenarios are listed in Additional file 1: Table S1 and Table 2 (scenario 1: 0.66c1/0.28d1 and 0.85c2/0.13d2; scenario 2: 0.11c1/0.88d1 and 0.62c2/0.36d2). The first scenario should give insight into the performance of the methods under conditions well suited to finding the common variation. The second scenario should reveal issues for more realistic data, as encountered for example in the detection and removal of batch effects.

Fig. 2 The loadings that were used to generate the data for both scenarios

Table 2 Summary table of explained variances by the different methods in the second scenario using the real model settings (1,2,2)

The three methods will also be applied to experimental data from GlioBlastoma Multiform (GBM) brain tumors available at The Cancer Genome Atlas (TCGA). The mRNA (234 × 23293) and miRNA (234 × 534) data-sets describe the messenger RNA and small RNA profiles of 234 subjects suffering from different kinds of brain tumors. The same data was already analysed with JIVE in its original paper [12]. Here we use it to compare JIVE with the other two methods.

Results

Scenario 1, abundant common variance, almost orthogonal loadings

The data sets in the 1st scenario did not lead to any problems. All three methods properly select the model of common and distinctive components (i.e. 1 common, and 1 distinctive component for each data-set). The results of DISCO, JIVE and O2-PLS almost exactly match the simulated scores and loadings, which from a mathematical point of view is also expected (see Appendix, “Observations on JIVE, SCA and covariance”). The loadings are plotted in Additional file 1: Figure S3. The correlation of the fitted scores with the original scores is 1 for all methods.

Additional file 1: Table S1 summarizes the explained variances for the fitted results of the different models. The different methods decompose the two data-sets into the same common and distinctive parts. As discussed earlier, the errors for JIVE and O2-PLS are not orthogonal to the common parts and therefore cannot, in general, be calculated as the difference between $X_k$ and the combined common and distinctive variance ($C_k + D_k$). In this case however the data was constructed with orthogonal common and distinctive scores and we were able to calculate the error as that difference. Furthermore, $\|C_k C_k^{+} E_k\|^2 \ll \|E_k\|^2$, which implies that the projection of $E_k$ on $C_k$ is indeed very small.

Scenario 2, low abundant common variance, almost orthogonal loadings

In the second scenario the model was made more complex, with less abundant common variance and more distinctive components per data-set. The difference between the methods already becomes apparent in the model selection. Additional file 1: Table S2 shows the model settings (numbers of components) estimated by the different methods. Each of the three methods selects a different 'best' model. With the O2-PLS cross-validation the 'real' model is selected; both JIVE and DISCO select 0 common components.

For completeness, the loading plots and score assessments of the decompositions of JIVE and DISCO with the suggested model settings are included in the Additional file 1. The estimated common and distinctive loadings for the methods with the real model settings (1,2,2) are shown in Fig. 3.

Fig. 3 Real and fitted common and distinctive parts for each method when there is low abundant common variation (scenario 2). The real common component is shown with a blue cross, the first and second distinctive components with an orange and a yellow cross. The fits by the methods are indicated by lines in their respective colors

The DISCO results with the 'real' model settings show a perfect decomposition in loadings and scores for both data-sets. The JIVE results show that all three components of the first data-set are fitted perfectly, but that the common component is identified incorrectly; the component with the largest variance is identified as common. Because of the orthogonality restrictions $C_1^t D_1 = 0$ and $C_1^t D_2 = 0$, the real common component in data-set 2 can no longer be selected, which results in a score vector of zero (the blue line). The two remaining distinctive components are used to fit the two loadings with the largest variation.

In JIVE the first step is to select the allocated number of common components. At this stage the selection is determined only by the largest variance, regardless of whether this variance is 'really' common. If the part with the largest variance happens to be a distinctive part, this 'real' distinctive part is designated as common variance. In these cases the JIVE algorithm is not able to recover the real common part, even after all the iterations. This behavior was investigated further by generating different data-sets with increasing variation in the common component. For each data-set the JIVE decomposition was run and the proper identification of the common and distinctive components was recorded (see Additional file 1). Only when the total common variation is larger than the variation of the largest distinctive component is the proper common component identified.

The O2-PLS method suggested the real model complexity and the decomposition in loadings and scores shows a good fit to the original data. The loading profiles show a good fit for the first data-set, but for the second data-set the smallest individual component is under-estimated. This is also reflected in the amount of explained distinctive variation for the second data-set. Table 2 summarizes the explained variation for the fitted blocks for the different models. All methods steer towards a maximum amount of explained variation. Again, the residuals were determined as differences with the original data because the data was generated with orthogonal scores and $\|C_k C_k^{+} E_k\|^2 \ll \|E_k\|^2$.

GlioBlastoma

The mRNA and miRNA measurements of Glioblastoma cells were used in the JIVE paper to introduce the method, and we use the data here to compare JIVE with DISCO and O2-PLS. We adopted the model settings that were found by the permutation approach (i.e. 5 common components, 33 distinctive components for mRNA and 13 for miRNA). For completeness, the optimal number of components was also estimated again with each model selection method; the results are shown in Additional file 1: Table S4. The data were mean-centered for each feature and each data-set was normalized to unit sum of squares. The data concern different types of brain tumor cells.

As an example, the O2-PLS score plots for both mRNA and miRNA for the common and distinctive parts are presented in Fig. 4. The common part shows a much clearer separation between the groups than the distinctive parts. The explained common and distinctive variation of the methods are listed in Table 3. With exactly the same model settings, the JIVE method is able to explain approximately 5 % more of the combined distinctive and common variation than DISCO and O2-PLS ($\|C_k C_k^{+} E_k\|^2 \ll \|E_k\|^2$ for both data-sets). In comparison to DISCO and O2-PLS, JIVE describes less common variation but more distinctive variation. This phenomenon can possibly be accounted for by the iterative behavior of JIVE. By iteratively estimating the common and distinctive parts from only a selected part of the variation in the data, the common part seems less affected by overfitting. This phenomenon is discussed further in Additional file 1.

Fig. 4 Score plots of the common (top row) and distinctive (bottom row) parts of the mRNA data-set (left column) and miRNA data-set (right column) after O2-PLS decomposition

Table 3 Summary table of fitted explained variation by the different methods using the real mRNA and miRNA data-sets

To study the overlap between the three methods, the percentages of explained variation in the common part and in the distinctive part per gene are plotted against each other in Fig. 5 for mRNA and miRNA. On the left side the results for the common parts are given. The explained variation per gene in the common part is strongly correlated between O2-PLS and DISCO. The explained variation per gene using JIVE is clearly different: the common part in JIVE describes a lower amount of explained variation than in the other methods. The distinctive part (on the right-hand side) shows the same phenomenon. Again the explained variation for the distinctive part is similar for O2-PLS and DISCO, while JIVE now describes a higher amount of explained variation. The figures on the diagonal show the distribution of explained variation for each of the three methods, which is very similar across methods. What is striking however is the difference in the distribution of explained variation between the common and distinctive parts. In the common part, most genes are hardly explained while a small number of genes is explained well. For the distinctive part no such preference is observed and a normal-like distribution of explained variation is obtained.

Fig. 5 Scatter plots of the % variance of the original variables explained in the common parts (left) and in the distinctive parts (right) of the mRNA data-set (top row) and the miRNA data-set (bottom row), compared between the different algorithms. DISCO and O2-PLS look very similar. JIVE shows more genes of which more variance is used in the distinctive part, which coincides with the increased amount of distinct variance explained by JIVE in comparison to DISCO and O2-PLS

For the miRNA, the situation is similar to the mRNA data. Again JIVE has a lower explained variation for each miRNA in the common part and a higher explained variation in the distinctive part compared to DISCO and O2-PLS. The distribution of the explained variation of the distinctive part is clearly different from that of the mRNA: for miRNA, many features are still not well described. This could be related to a lower amount of systematic variation in the miRNAs and consequently a lower correlation between the different miRNAs. Therefore, each component only describes a few miRNA features.

One explanation for DISCO is that the orthogonality restrictions prohibit optimal fitting and as a result the cross-over variation (i.e. the variation in miRNA explained by the distinctive scores of the mRNA) is significant. For miRNA this was 13 % and for mRNA 4 % of the total variance. The amount of cross-over variation is much larger for miRNA than for mRNA because all 33 distinctive mRNA components add to the cross-over variation of miRNA, compared to only 13 components vice versa.

In the O2-PLS method the initial common scores ($T_{c_k}$) are estimated from the initial loadings ($P_{c_k}$) and the original data ($X_k$). The distinctive components are removed from the remainder $R_k$ ($R_k = X_k - T_{c_k} P_{c_k}^t$) and $X_k$ is updated. However, in the final step (step 12 of the O2-PLS algorithm, see Appendix), the common part is recalculated from the updated $X_k$. This recalculation gives a lower amount of variation for the common part than before $X_k$ was deflated with the distinct components. This variation can no longer be described by either the distinctive or the common part of the model. Large discrepancies indicate that the estimate of the initial common part contained larger amounts of orthogonal variation. After $T_{c_k}$ has been re-estimated, the distinctive part is not recalculated anymore. Perhaps more total variance could have been accounted for if O2-PLS had used a fully iterative procedure like JIVE.

The score plots of the common and distinctive parts for the different methods all reveal a better separation of the classes in the common part of the miRNA data-set. To quantify the quality of class separation we adopted the standardized subtype within sums of squares (SWISS) from the original JIVE paper. This represents the variability within subtypes (across all rows) as a proportion of the total variability; a lower score indicates better class separation. Table 4 shows the SWISS scores for both data-sets using all three methods. The SWISS score for the common parts is compared to the SWISS score of a 5-component PCA solution of both sets, to see whether removal of the distinctive information provides a better set of common scores than the normal PCA scores. For O2-PLS we see a slight improvement to a SWISS of 0.65, while the JIVE SWISS score is worse (0.74). We also see that the distinctive parts of the data have lost their discriminative power. Note that the SWISS for the common parts of both data-sets is exactly the same for DISCO and JIVE, as the common scores are the same for those methods.

Table 4 Summary of the SWISS scores for the common and distinctive parts identified by the different models during the analysis of the mRNA/miRNA GlioBlastoma data
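A sketch of the SWISS computation as described above: the within-subtype sum of squares of the scores divided by the total sum of squares around the overall mean, with lower values indicating better subtype separation. This follows the verbal description here, not the original SWISS code from the JIVE paper.

```python
import numpy as np

def swiss(scores, subtypes):
    """Standardized within-subtype sum of squares: within-class SS / total SS (lower = better)."""
    scores = np.asarray(scores, dtype=float)
    subtypes = np.asarray(subtypes)
    scores = scores - scores.mean(axis=0)          # centre around the overall mean
    total_ss = np.sum(scores ** 2)
    within_ss = 0.0
    for g in np.unique(subtypes):
        grp = scores[subtypes == g]
        within_ss += np.sum((grp - grp.mean(axis=0)) ** 2)
    return within_ss / total_ss
```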

The high correspondence in explained variation for each mRNA and miRNA feature between DISCO and O2-PLS is corroborated by their scores. Table 5 shows the RV matrix correlations [25] between the scores of the different methods. Again a high correlation between the O2-PLS and DISCO scores is observed for the common part; for the distinctive part this is not observed.

Table 5 RV modified coefficients of the common and distinctive scores for GlioBlastoma data-sets
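For reference, a sketch of the modified RV coefficient of Smilde et al. [25] used in Table 5: the diagonals of the configuration matrices XX' are removed before computing the matrix correlation, which avoids the inflation of the ordinary RV coefficient for high-dimensional data. This is written from the cited definition and should be treated as an illustration, not the exact code used for the table.

```python
import numpy as np

def rv_modified(X, Y):
    """Modified RV matrix correlation between two column-centred matrices with matched rows."""
    XX = X @ X.T
    YY = Y @ Y.T
    np.fill_diagonal(XX, 0.0)     # removing the diagonals gives the 'modified' RV
    np.fill_diagonal(YY, 0.0)
    return np.sum(XX * YY) / np.sqrt(np.sum(XX ** 2) * np.sum(YY ** 2))
```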

Discussion and conclusions

The three methods discussed in this paper to separate common from distinct information all use different approaches, which lead to slightly different models of the data. What is exactly common variation and what is distinctive depends on the different orthogonality constraints applied and the algorithms used to estimate these different parts. When the common variation is abundant, all methods are able to find the correct solution. With real data however, complexities in the data are treated differently by the three methods.

Due to the fewer orthogonality constraints imposed by JIVE and O2-PLS, there is more freedom to select the scores and loadings for the two data-sets. This freedom is not present in DISCO, which has the most severe orthogonality restrictions. In the two scenarios shown in this paper, all scores and loadings were chosen orthogonal. Therefore DISCO was able to find the correct scores and loadings, while JIVE and O2-PLS found variations thereof that still obeyed their orthogonality assumptions. In case of less abundant common variation, both JIVE and DISCO failed to detect the proper number of common components, which can be understood from the methods themselves. Not knowing the real model however can give rise to unexpected results when decomposing the data into common and distinctive components.

Even with the optimal model settings selected, the JIVE method is the most susceptible to identifying the wrong common components. Due to the SCA of the concatenated matrix, JIVE has problems finding common components, especially when they are smaller than a distinctive component in one of the blocks. If the common and distinctive variation are approximately of the same magnitude, JIVE is able to identify them properly due to its iterative nature. JIVE re-estimates the common and distinctive parts until they converge, while O2-PLS, which re-estimates the common part only once, seems to get stuck in a sub-optimal solution for the distinctive part.

When small data-sets with a low number of features ($J_k < I$) are used, these data-sets may not be well represented by the common scores in JIVE and, even worse, the common scores may present information that is not even present in these blocks. This may lead to misinterpretation of both the common and distinctive scores of such a block [24]. The O2-PLS algorithm is the most flexible one and allows the common and distinctive parts to be determined using block scores instead of super scores. This way no information is transferred from one data-set to the other. The distinctive parts however are also limited by orthogonality constraints and therefore have a biased interpretability.

In the real data example the three methods all selected a smaller number of common than distinct components. In contrast to the simulations, O2-PLS suggested a smaller number of common components than JIVE and DISCO, which could possibly indicate an overestimation of the number of common components by DISCO and JIVE. It was shown that the lack of structure in the raw miRNA data-set was replaced by an apparent structure in the common part. The combination of the data-sets revealed a subset of miRNAs that can mathematically be linked to the mRNAs by all three methods. Because the methods are not supervised, the appearing structure gives rise to further biological interpretation of not only the common parts but also the distinctive parts. In situations like these, DISCO, JIVE and O2-PLS can be considered to act as pre-processing (i.e. filtering) steps.

In summary, all three methods have their own approach to estimate common and distinctive variation, with their specific strengths and weaknesses. Due to their orthogonality properties and the algorithms they use, their views on the data differ slightly. By assuming orthogonality between common and distinctive variation, true natural or biological phenomena that may not be orthogonal at all might be misinterpreted.

Appendix

List of used symbols

Table 6

Algorithms

DISCO

  1. Define a target loading matrix (P*) of zeros and ones based on the model that was defined by the common and distinctive components ($c_c$, $c_1$ and $c_2$):

$$P^* = \begin{bmatrix} 1_{J_1 \times c_1} & 0_{J_1 \times c_2} & 1_{J_1 \times c_c} \\ 0_{J_2 \times c_1} & 1_{J_2 \times c_2} & 1_{J_2 \times c_c} \end{bmatrix}$$

  2. Define the weight matrix as W = 1 − P*, where 1 is a matrix of ones.

  3. $X = [X_1 | X_2]$

  4. $X = U_{c_t} S_{c_t} V_{c_t}^t$

  5. $T_{sca} = U_{c_t}$

  6. $P_{sca} = V_{c_t} S_{c_t}$

  7. Randomly initialize $B_0$ subject to $B_0^t B_0 = I = B_0 B_0^t$

  8. Initialize iteration index l = 1

  9. $Y = P_{sca} B_{l-1} + W \odot (P^* - P_{sca} B_{l-1})$

  10. $[U_r, S_r, V_r] = \mathrm{svd}(Y^t P_{sca})$

  11. $B_l = V_r U_r^t$

  12. Compute $h(B_l) = \| W \odot (P_{sca} B_l - P^*) \|^2$

  13. Repeat steps 9-12 until $|h(B_l) - h(B_{l-1})| < \tau$ or $l > l_{max}$

After convergence, and because B is subject to $B^t B = I$, the rotated scores and loadings can be calculated via $T_r = T_{sca} B$ and $P_r = P_{sca} B$. The common and individual scores ($T_c$ and $T_{d_i}$) and loadings ($P_{c_i}$ and $P_{d_i}$) are separated according to the target matrix (P*). The loadings can then be determined as specific subsets of the rotated loadings to calculate the terms of Eq. 1:

$$C_1 = T_c P_{c_1}^t, \quad C_2 = T_c P_{c_2}^t, \quad D_1 = T_{d_1} P_{d_1}^t \quad \text{and} \quad D_2 = T_{d_2} P_{d_2}^t$$

JIVE

  1. $X = [X_1 | X_2]$

  2. $X = U_{c_c} S_{c_c} V_{c_c}^t$

  3. $T_c = U_{c_c}$ and $P_c = V_{c_c} S_{c_c}$

  4. $C = T_c P_c^t$

  5. $R_k = X_k - C_k$

  6. $R_k - T_c T_c^t R_k = U_{d_k(c_k)} S_{d_k(c_k)} V_{d_k(c_k)}^t$

  7. $T_{d_k} = U_{d_k}$

  8. $P_{d_k} = V_{d_k} S_{d_k}$

  9. $D_k = T_{d_k} P_{d_k}^t$

  10. $X = X - [D_1 | D_2]$

  11. Repeat steps 2 through 10 until convergence of C + D, where $C = [C_1 | C_2]$ and $D = [D_1 | D_2]$

O2-PLS

Slightly different implementations have been published, which leaves room for different interpretations. In our implementation we followed the pseudo-code described by Löfstedt et al. [20], [26], [27] and made sure that our O2-PLS results corresponded to the two data-set OnPLS results.

  1. $X_2^t X_1 = P_{c_1(c_c)} D_{c_c} P_{c_2(c_c)}^t$

  2. Initialize iteration index l = 1

  3. $T_{c_k} = X_k P_{c_k}$

  4. $R_k = X_k - T_{c_k} P_{c_k}^t$

  5. $R_k^t T_{c_k} = u_{d_k,1}\, s_{d_k,1}\, v_{d_k,1}^t$

  6. $t_{d_k,l} = X_k u_{d_k,1}$

  7. $p_{d_k,l} = (t_{d_k,l}^t t_{d_k,l})^{-1} X_k^t t_{d_k,l}$

  8. $X_k = X_k - t_{d_k,l}\, p_{d_k,l}^t$

  9. Repeat steps 3 through 8 for the number of distinctive components per data-set ($l = 1..c_k$) and for both data-sets ($k = 1..2$).

  10. $T_{d_k} = [t_{d_k,1} | t_{d_k,2} | \ldots | t_{d_k,c_k}]$

  11. $P_{d_k} = [p_{d_k,1} | p_{d_k,2} | \ldots | p_{d_k,c_k}]$

  12. $T_{c_k} = X_k P_{c_k}$

After these steps have been performed the elements of Eq. 1 can be calculated via:

$$C_1 = T_{c_1} P_{c_1}^t, \quad C_2 = T_{c_2} P_{c_2}^t, \quad D_1 = T_{d_1} P_{d_1}^t \quad \text{and} \quad D_2 = T_{d_2} P_{d_2}^t \tag{9}$$

Observations on JIVE, SCA and covariance

If there is no distinctive information, SCA and O2-PLS give the same result:

  1. $X_1 = T_c P_1^t + E_1$ and $X_2 = T_c P_2^t + E_2$

Without noise these equations reduce to:

  2. $X_1 = T_c P_1^t$ and $X_2 = T_c P_2^t$

  3. $X_2^t X_1 = (T_c P_2^t)^t (T_c P_1^t) = P_2 (T_c^t T_c) P_1^t$

$T_c$ can be chosen such that $T_c^t T_c = I$, so consequently:

  4. $X_2^t X_1 = P_2 P_1^t$

The analysis of the covariance matrix will therefore generate the same result if and only if there is no distinctive variation. This principle can likely be extended (no proof given) to those cases where the common variation is larger than the distinctive variation.

If there is a distinctive part, it can be shown that an SVD of the covariance matrix is less susceptible to large distinctive parts and will identify the common variation better than the SCA approach used in JIVE:

  1. $X_1 = C_1 + D_1$ and $X_2 = C_2 + D_2$

  2. $X_1^t X_2 = (C_1 + D_1)^t (C_2 + D_2) = C_1^t C_2 + D_1^t D_2$

Because the common variation is expected to occur in both data-sets, its cross product is expected to be larger than the cross product of the distinctive parts, which by definition should show less correlation between the subjects of both data-sets. The cross products between the distinctive and common parts can be neglected in comparison because of the orthogonality constraints (in O2-PLS).
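A quick numerical illustration of this observation: when the blocks share only a common component, the SVD of $X_2^t X_1$ and the SCA of the concatenation recover the same common loading, while a large distinctive component perturbs the SCA solution but not the covariance-based one. The construction below (random loadings, noiseless blocks, a single large distinctive component in $X_1$) is an illustration of the argument, not a proof.

```python
import numpy as np

rng = np.random.default_rng(0)
I, J1, J2 = 70, 40, 30
t_c, t_d = np.linalg.qr(rng.standard_normal((I, 2)))[0].T   # orthonormal common/distinctive scores
p1, p2 = rng.standard_normal(J1), rng.standard_normal(J2)   # common loadings per block
pd1 = rng.standard_normal(J1)                                # distinctive loading for block 1

cos = lambda a, b: abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

for d_size in (0.0, 5.0):                                    # no / large distinctive part in X1
    X1 = np.outer(t_c, p1) + d_size * np.outer(t_d, pd1)
    X2 = np.outer(t_c, p2)
    # covariance-based common loading for X1: first right singular vector of X2' X1
    v_cov = np.linalg.svd(X2.T @ X1)[2][0]
    # SCA-based loading for X1: X1 part of the first right singular vector of [X1|X2]
    v_sca = np.linalg.svd(np.hstack([X1, X2]))[2][0][:J1]
    print(f"d={d_size}: |cos(cov, true)|={cos(v_cov, p1):.3f}, |cos(SCA, true)|={cos(v_sca, p1):.3f}")
```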

Additional file

References

  1. Garg N, Kapono CA, Lim YW, Koyama N, Vermeij MJA, Conrad D, Rohwer F, Dorrestein PC. Mass spectral similarity for untargeted metabolomics data analysis of complex mixtures. Int J Mass Spectrom. 2014;377:719-27.

  2. Hastie T, Tibshirani R, Friedman J. The Elements of Statistical Learning. 2nd ed. New York: Springer; 2008.

  3. Bollen KA. Latent variables in psychology and the social sciences. Annu Rev Psychol. 2002;53:605-34. doi:10.1146/annurev.psych.53.100901.135239.

  4. Joliffe I, Morgan B. Principal component analysis and exploratory factor analysis. Stat Methods Med Res. 1992;1:69-95. doi:10.1177/096228029200100105.

  5. De Roover K, Ceulemans E, Timmerman ME, Vansteelandt K, Stouten J, Onghena P. Clusterwise simultaneous component analysis for analyzing structural differences in multivariate multiblock data. Psychol Methods. 2012;17:100-19. doi:10.1037/a0025385.

  6. Tan CS, Salim A, Ploner A, Lehtiö J, Chia KS, Pawitan Y. Correlating gene and protein expression data using correlated factor analysis. BMC Bioinformatics. 2009;10:272. doi:10.1186/1471-2105-10-272.

  7. Xiao X, Moreno-Moral A, Rotival M, Bottolo L, Petretto E. Multi-tissue analysis of co-expression networks by higher-order generalized singular value decomposition identifies functionally coherent transcriptional modules. PLoS Genet. 2014;10:e1004006. doi:10.1371/journal.pgen.1004006.

  8. Berger JA, Hautaniemi S, Mitra SK, Astola J. Jointly analyzing gene expression and copy number data in breast cancer using data reduction models. IEEE/ACM Trans Comput Biol Bioinforma. 2006;3:2-16. doi:10.1109/TCBB.2006.10.

  9. Ponnapalli SP, Saunders MA, van Loan CF, Alter O. A higher-order generalized singular value decomposition for comparison of global mRNA expression from multiple organisms. PLoS One. 2011;6:e28072. doi:10.1371/journal.pone.0028072.

  10. Alter O, Brown PO, Botstein D. Generalized singular value decomposition for comparative analysis of genome-scale expression data sets of two different organisms. Proc Natl Acad Sci. 2003;100:3351-6. doi:10.1073/pnas.0530258100.

  11. Van Deun K, Van Mechelen I, Thorrez L, Schouteden M, De Moor B. DISCO-SCA and properly applied GSVD as swinging methods to find common and distinctive processes. PLoS One. 2012;7:e37840. doi:10.1371/journal.pone.0037840.

  12. Lock EF, Hoadley KA, Nobel AB. Supplement to: Joint and individual variation explained (JIVE) for integrated analysis of multiple data types. Ann Appl Stat. 2013;7(Suppl):1-11.

  13. Lock EF, Hoadley KA, Marron JS, Nobel AB. Joint and individual variation explained (JIVE) for integrated analysis of multiple data types. Ann Appl Stat. 2013;7:523-42. doi:10.1214/12-AOAS597.

  14. Schouteden M, Van Deun K, Wilderjans TF, Van Mechelen I. Performing DISCO-SCA to search for distinctive and common information in linked data. Behav Res Methods. 2013;46:576-87.

  15. Van Deun K, Smilde AK, Thorrez L, Kiers HAL, Van Mechelen I. Identifying common and distinctive processes underlying multiset data. Chemom Intell Lab Syst. 2013;129:40-51. doi:10.1016/j.chemolab.2013.07.005.

  16. Trygg J, Wold S. O2-PLS, a two-block (X-Y) latent variable regression (LVR) method with an integral OSC filter. J Chemom. 2003;17:53-64. doi:10.1002/cem.775.

  17. Trygg J. O2-PLS for qualitative and quantitative analysis in multivariate calibration. J Chemom. 2002;16:283-93. doi:10.1002/cem.724.

  18. The Cancer Genome Atlas. http://cancergenome.nih.gov

  19. Mathworks Inc. Matlab. 2013.

  20. Löfstedt T, Trygg J. OnPLS - a novel multiblock method for the modelling of predictive and orthogonal variation. J Chemom. 2011;25:441-55.

  21. Bylesjö M, Eriksson D, Kusano M, Moritz T, Trygg J. Data integration in plant biology: the O2PLS method for combined modeling of transcript and metabolite data. Plant J. 2007;52:1181-91. doi:10.1111/j.1365-313X.2007.03293.x.

  22. Stanimirova I, Michalik K, Drzazga Z, Trzeciak H, Wentzell PD, Walczak B. Interpretation of analysis of variance models using principal component analysis to assess the effect of a maternal anticancer treatment on the mineralization of rat bones. Anal Chim Acta. 2011;689:1-7. doi:10.1016/j.aca.2011.01.008.

  23. Hassani S, Hanafi M, Qannari EM, Kohler A. Deflation strategies for multi-block principal component analysis revisited. Chemom Intell Lab Syst. 2013;120:154-68. doi:10.1016/j.chemolab.2012.08.011.

  24. Westerhuis JA, Smilde AK. Deflation in multiblock PLS. J Chemom. 2001;15:485-93. doi:10.1002/cem.652.

  25. Smilde AK, Kiers HAL, Bijlsma S, Rubingh CM, van Erk MJ. Matrix correlations for high-dimensional data: the modified RV-coefficient. Bioinformatics. 2009;25:401-5. doi:10.1093/bioinformatics/btn634.

  26. Löfstedt T. OnPLS: Orthogonal projections to latent structures in multiblock and path model data analysis. PhD thesis. Umeå universitet; 2012.

  27. Löfstedt T, Hoffman D, Trygg J. Global, local and unique decompositions in OnPLS for multiblock data analysis. Anal Chim Acta. 2013;791:13-24. doi:10.1016/j.aca.2013.06.026.


Acknowledgements

This work was funded by STATegra, the Seventh Framework Programme [FP7/2007-2013], under grant agreement №306000.

Declarations

Publication charges for this work were funded by STATegra, the Seventh Framework Programme [FP7/2007-2013], under grant agreement №306000.

This article has been published as part of BMC Bioinformatics Volume 17 Supplement 5, 2016: Selected articles from Statistical Methods for Omics Data Integration and Analysis 2014. The full contents of the supplement are available online at http://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-17-supplement-5.

Author information


Corresponding author

Correspondence to Johan A. Westerhuis.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

FK and PBL contributed to the software development in Matlab (and R). FK performed all calculations and analyses. FK and JW wrote the manuscript. AC focused on the biological interpretation and AS on the statistical interpretation. All authors read and approved the final manuscript.

Electronic supplementary material


Additional file 1: Supplementary data. This file contains supplementary Tables S1-S4 and Figures S1-S7. (DOCX 735 kb)


Rights and permissions

Open Access  This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.

The Creative Commons Public Domain Dedication waiver (https://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

van der Kloet, F.M., Sebastián-León, P., Conesa, A. et al. Separating common from distinctive variation. BMC Bioinformatics 17 (Suppl 5), S195 (2016). https://doi.org/10.1186/s12859-016-1037-2
