Comprehensive evaluation of matrix factorization methods for the analysis of DNA microarray gene expression data
BMC Bioinformatics volume 12, Article number: S8 (2011)
Abstract
Background
Clustering-based methods for gene-expression analysis have been shown to be useful in biomedical applications such as cancer subtype discovery. Among them, matrix factorization (MF) is advantageous for clustering gene expression patterns from DNA microarray experiments, as it efficiently reduces the dimension of gene expression data. Although several MF methods have been proposed for clustering gene expression patterns, a systematic evaluation has not been reported yet.
Results
Here we evaluated the clustering performance of orthogonal and non-orthogonal MFs with a total of nine performance measurements on four gene expression datasets and one well-known dataset for clustering. Specifically, we employed a non-orthogonal MF algorithm, BSNMF (Bidirectional Sparse Nonnegative Matrix Factorization), which applies bidirectional sparseness constraints superimposed on nonnegative constraints, covering a few dominantly co-expressed genes and samples together. Non-orthogonal MFs tended to show better clustering-quality and prediction-accuracy indices than orthogonal MFs and a traditional method, K-means. Moreover, BSNMF showed improved performance in these measurements. Non-orthogonal MFs, including BSNMF, also showed good performance in the functional enrichment test using Gene Ontology terms and biological pathways.
Conclusions
In conclusion, the clustering performance of orthogonal and non-orthogonal MFs was appropriately evaluated for clustering microarray data using comprehensive measurements. This study showed that non-orthogonal MFs have better performance than orthogonal MFs and K-means for clustering microarray data.
Background
DNA microarrays can simultaneously measure the expression levels of thousands of genes. Increasingly, the challenge is to interpret such data to reveal molecular biological processes and the mechanisms of human diseases. One of the main goals of expression data analysis is to identify the changing and unchanging genes and to correlate these changes across genes with similar expression profiles. Another major challenge for gene expression analysis is the reduction of dimension. Gene expression data typically have high dimensionality, with tens of thousands of genes, whereas the number of observations or experiments is usually under a hundred. Because the number of variables easily exceeds the number of experiments, dimension reduction is clearly required for gene expression analysis. This task can be cast as a matrix factorization problem.
Matrix factorization (MF) methods applied to microarray data can extract distinct patterns from the data [1–5]. Principal Component Analysis (PCA) and Singular Value Decomposition (SVD) are popular analysis methods, and they have been applied to classification problems with satisfactory results [1, 5]. However, because of the holistic nature of PCA and SVD, it is difficult to provide a biologically intuitive interpretation of the data from the obtained components. To overcome this limitation, Paatero and Tapper [6] and Lee and Seung [7] proposed that nonnegative matrix factorization (NMF) can learn parts-based representations that provide an obvious interpretation. The nonnegativity constraints make the representation purely additive (allowing no subtractions), in contrast to many other linear representations such as PCA and Independent Component Analysis (ICA) [8]. Their work was applied to signal processing and text mining. Brunet et al. [9] applied NMF to describe the gene expression profiles of all genes in terms of a small number of metagenes in order to derive meaningful biological information from cancer expression datasets. They clustered the samples into distinct subtypes according to metagene expression patterns.
Gene expression patterns can be sparsely encoded by metagenes, implying a few significantly co-expressed genes. Several groups have proposed NMF formulations that enforce the sparseness of the decomposition. Li et al. [10] proposed local NMF (LNMF), which adds constraints to enforce sparseness in the NMF. Hoyer [11, 12] also proposed an NMF formulation that can find parts-based representations by explicitly incorporating the concept of sparseness. Wang et al. [13] demonstrated Fisher nonnegative matrix factorization (FNMF), which learns localized features by imposing Fisher constraints. Gao and Church [14], unlike the other methods, attempted to control sparseness by penalizing the number of nonzero entries.
Sample-based clustering, however, is not the only concern in microarray data analysis. Gene-based clustering provides informative sets of tightly co-regulated genes. While sample-based clustering relies on metagenes, gene-based clustering relies on metasamples. The two processes can be viewed as bidirectionally constrained by each other: good metagenes may support good sample-based clusters and vice versa. When only the sample dimension is optimized, the sparseness of the gene dimension relatively decreases as the sparseness of the sample dimension increases. As a result, the minimization problem, in the convex formulation subsequently described by others [11, 12, 14, 15], yields a matrix that cannot support gene-based clusters well. Therefore, optimizing both the sample and gene dimensions together may be appropriate for clustering microarray data. Here, we employed a novel non-orthogonal MF algorithm, Bidirectional Sparse Nonnegative Matrix Factorization (BSNMF), with bidirectional sparseness constraints superimposed on nonnegative constraints, covering a few dominantly co-expressed genes and samples together. The bidirectional optimization process may provide quality clustering with improved biological relevance that may not be achieved by applying MFs to each dimension separately.
Many clustering-based methods have been developed to transform a large matrix of gene expression levels into a more informative set of clusters whose member genes are likely to share biological properties. Although clustering-based algorithms for microarray data analysis have been extensively studied, most works have not focused on the systematic comparison and validation of clustering results.
Different algorithms tend to lead to different clustering solutions on the same data, while the same algorithm often leads to different results for different parameter settings. Since there is no consensus on choosing among them, applicable measures should be used to assess the quality of a clustering solution in different situations. For example, when the true solution is known and another solution can be compared to it, the Minkowski measure [16] or the Jaccard coefficient [17] is applicable. In contrast, when the true solution is not known, there is no agreed-upon method for validating the quality of a suggested solution. Several methods evaluate clustering solutions based on intra-cluster homogeneity or inter-cluster separation [18, 19]. Meanwhile, predicting the correct number of clusters is a basic problem in unsupervised classification. To address this problem, a number of cluster validity indices assessing the quality of a clustering partition have been proposed.
In the present paper, we systematically evaluate various MFs applied to gene-expression data analysis. We compare six MFs, including two orthogonal MFs (i.e. PCA and SVD) and four non-orthogonal MFs (i.e. ICA, NMF, NMF with sparseness constraints (SNMF) and BSNMF), and a well-known unsupervised clustering method, the K-means algorithm. All were evaluated by seven cluster-evaluation indices. We evaluated them in terms of three basic categories: (1) traditional clustering, (2) orthogonal MFs and (3) non-orthogonal MFs. Predictive power and consistency of the methods were evaluated using the adjusted Rand Index and an accuracy index when the class labels of the data were available. To evaluate the biological relevance of the resulting clusters from different algorithms, we assessed the significance of the biological enrichment of the clusters using Gene Ontology (GO) and biological pathway annotations.
Results
Evaluation of each clustering-based method
In our study, we applied the K-means algorithm and six MFs, comprising two orthogonal (i.e. SVD and PCA) and four non-orthogonal (i.e. ICA, NMF, SNMF and BSNMF) algorithms, to the five benchmarking datasets. We evaluated the seven methods using nine measures, including seven cluster evaluation indices and two prediction power measures. Fig. 1 exhibits the results from the seven cluster-quality measures. We repeatedly applied the clustering (or MF) algorithms 20 times for each dataset and for each number of clusters, i.e. K = 2 to 4 (for the Iris dataset) or 2 to 5 (for the rest). The values in Fig. 1 represent the averages.
Among the measures, the GAP statistic is optimized when it decreases (Fig. 1(g)), while the others are optimized when they increase (Fig. 1(a) – (f)). The homogeneity, separation, Dunn Index, average silhouette width and Hubert correlation (i.e. Hubert’s gamma) tend to be higher for non-orthogonal MFs than for orthogonal MFs and the K-means algorithm. The GAP statistic is lower for non-orthogonal MFs than for orthogonal MFs and K-means. However, the Pearson correlation of cophenetic distance has the highest value for SVD (Fig. 1(e)). Overall, non-orthogonal MFs showed the best clustering quality.
We compared homogeneity with separation at the same time (Additional File 1). Results for each dataset were plotted together. Results from NMF, SNMF and BSNMF showed a higher slope; that is, their homogeneity and separation are better optimized than those of the other methods. When we compared one of the measures, the Hubert correlation of cophenetic distance, between MFs at each number of clusters (Additional File 2), NMF, SNMF and BSNMF showed better performance than the others in four datasets, the exception being the Leukemia dataset, for which ICA had the highest value. Overall, nonnegative MFs showed the best clustering quality.
Three of the datasets, the Leukemia, Medulloblastoma and Iris datasets, have known class labels as ‘gold standards’. For these three datasets, we measured accuracy or predictive power using the adjusted Rand Index and prediction accuracy. Fig. 2 shows the adjusted Rand Index for the correct classification of the three datasets with the seven methods (i.e. six MFs and the K-means method). The Leukemia dataset was evaluated at both the two-class (i.e. AML vs. ALL, Fig. 2(a)) and three-class (i.e. AML vs. T-cell type vs. B-cell type, Fig. 2(b)) levels. Fig. 2 demonstrates that BSNMF, SNMF and NMF have the highest adjusted Rand Index in most of the evaluations.
Fig. 3 shows the results for prediction accuracy. SNMF and BSNMF tend to show the best accuracy measures. We also included a voting scheme that simply combines all the results from the various algorithms and returns the best consensus. Voting showed results comparable to SNMF and BSNMF.
Detailed class prediction results for the Leukemia dataset are shown in Table 1. Class assignment is optimized for each dataset when accuracy reaches its highest value. All methods were tested at both K=2 and K=3. At the K=2 level, one AML sample (AML_12) was incorrectly assigned to ALL by SNMF and BSNMF; this result is the same as that of Gao et al. [14]. The error count for NMF was two (ALL_7092_B cell and ALL_14749_B cell). Overall, non-orthogonal MFs such as BSNMF, SNMF, NMF and ICA showed higher prediction accuracy than orthogonal MFs and the K-means algorithm. At the K=3 level, BSNMF showed the best results with only one mistake (AML_13 was incorrectly assigned to ALL), while SNMF made two mistakes (AML_13 and ALL_21302_B cell). Table 2 shows the results for the Medulloblastoma dataset at K=2. BSNMF showed the best result with 11 mistakes, while SNMF and NMF made 13 each and ICA made 14.
Evaluation of biological relevance
To evaluate the biological relevance of the clustering results, we created clusters of genes and assigned them to the corresponding sample-wise clusters. For MFs, we clustered genes using the coefficient matrix of genes. For instance, in the Leukemia dataset factorized by NMF at K=2, we clustered genes into two groups using the coefficient matrix of genes, W, from NMF. Given such a factorization, the matrix W can be used to determine gene cluster membership: a gene i is placed in cluster j if w_ij is the largest entry in row i. For the K-means algorithm, we clustered genes using the original gene expression data matrix. We then labelled each gene cluster according to the label of the corresponding sample cluster.
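As a minimal sketch of this assignment rule (assuming W is held as a NumPy array of shape genes × metagenes; the function name is ours, not from the original analysis):

```python
import numpy as np

def gene_clusters_from_W(W):
    """Assign gene i to cluster j if W[i, j] is the largest entry in row i.

    W: hypothetical NumPy array of shape (n_genes, n_metagenes),
    e.g. the gene coefficient matrix returned by an NMF decomposition.
    """
    return np.argmax(W, axis=1)  # one cluster label per gene

# Example: labels = gene_clusters_from_W(W); genes with label 0 form cluster 0, etc.
```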
Gene-wise clusters were annotated with GO terms and biological pathways. We measured the significance of each GO term (or pathway) assignment using the hypergeometric distribution. Here we briefly refer to each GO term and biological pathway as a term. Table 3 shows the numbers of significantly enriched terms for the corresponding clusters at p < 0.05. For the Leukemia dataset, BSNMF (N=535) and NMF (N=532) have the highest numbers of significantly enriched terms in ALL. BSNMF has the highest numbers in AML (N=280) and in total (N=815) (Table 3(a)). Table 3(b) shows the results for the Medulloblastoma dataset. In cluster 1, BSNMF (N=599) and K-means (N=517) have the most significantly enriched terms. In cluster 2, SVD (N=361) and NMF (N=335) have the most terms. The total number of significant terms is largest with BSNMF (N=805). Table 3(c) demonstrates that, for the fibroblast dataset, the total number of significant terms is largest with BSNMF (N=504). Table 3(d) exhibits the results for the mouse dataset. In cluster 1, BSNMF (N=690) and SNMF (N=686) have the most significantly enriched terms. In cluster 2, ICA (N=114) has the most terms. The total number of significant terms is largest with BSNMF (N=746). Overall, the numbers of significantly enriched terms resulting from the non-orthogonal MFs, BSNMF, SNMF, NMF and ICA, are larger than those of orthogonal MFs and the K-means algorithm.
Dueck et al. [20] summarized GO terms significantly associated with the resulting clusters from various clustering algorithms using two representations: the proportion of factors that are significantly enriched for at least one functional category at α=0.05, and the mean log_10 (p-value). We combined the two representations by calculating weighted p-values, i.e. the proportion of significant GO terms multiplied by the negative log_10 (p-value). Fig. 4 shows the weighted p-values of the GO terms significantly annotated to the corresponding clusters for the Leukemia and Medulloblastoma datasets. Weighted p-values are more significant when they are higher. For simplicity, we plotted the top 50 terms. Plots for the other datasets can be found on the supplementary web site (http://www.snubi.org/software/BSNMF/). For the Leukemia dataset, BSNMF and K-means were shown to have annotated terms with the highest significance in AML, and BSNMF and SNMF in ALL (Fig. 4(a), (b)). Overall, BSNMF and SNMF showed the highest significance for the whole Leukemia dataset (Fig. 4(c)). In the Medulloblastoma dataset, BSNMF and K-means for the first cluster and BSNMF and SVD for the second cluster had higher weighted p-values than the other methods. Overall, BSNMF showed the best results (Fig. 4(d) - (f)). Therefore, genes in the clusters created by BSNMF appear to be more biologically associated, in terms of GO term annotations, than those created by other methods.
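A small sketch of this weighting under our reading of the description (the function and variable names are ours, not from the original analysis):

```python
import numpy as np

def weighted_pvalues(pvals, alpha=0.05):
    """Combine the two summaries used by Dueck et al.: the fraction of terms
    significant at alpha, and -log10(p) for each term."""
    pvals = np.asarray(pvals, dtype=float)
    frac_significant = np.mean(pvals < alpha)
    return frac_significant * -np.log10(pvals)  # one weighted value per term
```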
The p-values were calculated for each GO category and for each pathway resource (Fig. 5). A GO term (or pathway) annotation with a lower p-value indicates that the corresponding cluster, in terms of shared GO terms (or pathways), is more biologically relevant. Only the results for K-means and BSNMF in the AML cluster are shown; the other results can be found on the supplementary web site. Overall, non-orthogonal MFs tend to create more enriched clusters.
The top-ranked genes with the largest coefficients in the W matrix of BSNMF may be the most explanatory for each cluster (Additional File 3). The top-ranked 20 genes for the ALL cluster are enriched in significant GO terms such as response to external stimulus, immune response and cell growth. Genes for the AML cluster are enriched in response to external stimulus, immune response and membrane genes. The gene functions in PubMed indicated that the two sets of 20 genes are enriched in chemokines and tumor suppressor genes. Genes for the first cluster of the Medulloblastoma dataset were related to cytoplasm, cell motility and cell growth and/or maintenance, and those for the second cluster to cytoplasm, biosynthesis and protein metabolism genes. Gene sets for the other datasets can be found on the supplementary web site.
The mean expression profiles of the gene-wise clusters from the fibroblast dataset were extracted (Additional File 4). When we applied MFs, we clustered genes using the coefficient matrix of genes. The coefficient matrix of genes (the W matrix) can be used to determine the cluster membership of genes: gene i belongs to cluster j if w_ij is the largest entry in row i. For the K-means algorithm, we clustered genes using the original gene expression data matrix. We then labelled each gene cluster according to the label of the corresponding sample cluster. Following this procedure, gene-wise clusters were created by the seven methods. The number of gene-wise clusters was set to five because Xu et al. [21] and Sharan et al. [18] suggested that the optimal number of clusters for the fibroblast dataset is five. While K-means, SVD and PCA tend to produce a few clusters with dominant profiles and leave the remaining clusters with relatively flat profiles, non-orthogonal MFs tend to create clusters with more even dominance. For example, the SVD result shows one major peak whereas the BSNMF result shows many more peaks. Non-orthogonal MFs thus seem to be more effective in discovering significant patterns.
Discussion
Various clustering-based methods have been proposed by many researchers, and these methods have become a major tool for gene expression data analysis. Different clustering-based methods usually produce different solutions, and one or a few preferred solutions among them should be selected. However, a systematic evaluation study of these methods has not been reported. Therefore, we evaluated orthogonal (i.e. PCA and SVD) and non-orthogonal (i.e. ICA, NMF and SNMF) MFs and a traditional clustering algorithm (i.e. K-means) using seven clustering-quality measures (i.e. homogeneity, separation, Dunn index, average silhouette width, Pearson correlation of cophenetic distance, Hubert correlation of cophenetic distance and the GAP statistic) and two prediction-accuracy measures (i.e. the adjusted Rand index and prediction accuracy) applied to five published datasets. We also included an improved non-orthogonal MF, BSNMF, in the evaluation study.
As a result, we observed that the clustering-quality and prediction-accuracy indices of non-orthogonal MFs are better than those of orthogonal MFs and K-means. With respect to homogeneity, separation, the Dunn index, average silhouette width and the Hubert correlation of cophenetic distance, non-orthogonal MFs had higher values than orthogonal MFs and K-means. The GAP statistic was lower for non-orthogonal MFs than for orthogonal MFs and K-means. When we tested predictive accuracy for the three datasets with known class labels, we also observed better performance for non-orthogonal MFs than for the rest. We also investigated the biological significance of the gene clusters, because discovering biologically relevant patterns and interpreting them biologically is important in the analysis of DNA microarray gene expression data. When we performed enrichment analysis with GO terms and biological pathways, we obtained more significantly enriched GO terms and pathways for non-orthogonal MFs than for orthogonal MFs and K-means. We also identified genes that may be dominantly involved in each subtype. BSNMF showed improved performance in the prediction-accuracy and biological-enrichment measures, outperforming the other non-orthogonal MFs as well as orthogonal MFs and the K-means algorithm.
The clustering evaluation indices mentioned here have limitations. Because their results vary across datasets, they cannot by themselves identify which clustering-based method is the best, and improved cluster validation indices are needed to overcome this. We suggested a simple voting scheme that combines all the results from the various algorithms and returns the best consensus. Better evaluation could also be achieved through the integration of results from various evaluation indices using unifying rules.
Conclusions
In conclusion, the clustering performance of orthogonal and non-orthogonal MFs was appropriately compared for clustering microarray data using various measurements. We clearly showed that non-orthogonal MFs have better performance than orthogonal MFs and K-means for clustering microarray data. The characteristic difference among non-orthogonal MFs, orthogonal MFs and the K-means algorithm implies that non-orthogonal MFs divide the whole dataset into distinct patterns more evenly than orthogonal MFs and K-means. This study should help in suitably evaluating diverse clustering methods on other genome-wide data as well as microarray data.
Methods
Datasets
For the evaluation study, we used five published datasets. The Leukemia dataset [22] has 38 bone marrow samples and 5000 genes after the filtering process applied by Brunet et al. [9]. Acute myelogenous leukemia (AML) and acute lymphoblastic leukemia (ALL) are distinguished, and ALL can be further divided into T- and B-cell subtypes. The second is the Medulloblastoma dataset, which contains gene expression profiles from childhood brain tumors. Although the pathogenesis of the tumor is not well understood, it can be categorized into two known histological subclasses: classic and desmoplastic. Pomeroy et al. [23] demonstrated the correlation of gene expression profiles with the two histological classes. The dataset has 34 samples and over 5800 genes. The third is the gene expression dataset from Zhang et al. (2004, http://hugheslab.med.utoronto.ca/Zhang). This dataset contains gene expression profiles of over 40000 known and predicted genes in 55 mouse tissues, organs and cell types. We used over 8200 genes after filtering out those with low variance. The fourth is the human fibroblast gene-expression dataset from Iyer et al. [24] with 18 samples and 517 genes. The last is the well-known Iris dataset [25]. This famous dataset gives the measurements in centimetres of the length and width of the sepal and petal, respectively, for 50 flowers from each of three species of Iris (i.e. Iris setosa, versicolor and virginica). Among the datasets, the Leukemia, Medulloblastoma and Iris datasets have known class labels for the samples, while the rest do not.
Non-orthogonal matrix factorization for gene expression analysis
The gene-expression data are typically represented as an N-by-M matrix A. In the matrix, each row represents the expression values of a gene across all samples, and each column represents the expression values of all genes in a sample. NMF can decompose gene expression data and derive a parts-based representation of the whole data. It factorizes the matrix A into the product of two matrices with nonnegative entries, formulated as A = WH, where W and H are N-by-K and K-by-M matrices, respectively, and K is much smaller than M. Each column of W can be regarded as a metagene, consisting of elements w_ij, where each element represents the coefficient of gene i in metagene j. The columns of the matrix H represent the metagene expression pattern of the corresponding sample, and each element h_ij indicates the expression value of metagene i in sample j. The cluster membership can be determined from the factor matrix H: sample j belongs to cluster i if h_ij is the largest entry in column j.
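As an illustrative sketch, not the authors' implementation, this factorization and the column-wise argmax rule can be expressed with scikit-learn's NMF; the random matrix below is only a placeholder standing in for a real expression dataset:

```python
import numpy as np
from sklearn.decomposition import NMF

# A: nonnegative expression matrix of shape (N genes, M samples); K metagenes.
A = np.random.rand(1000, 38)                  # placeholder data for illustration
K = 2
model = NMF(n_components=K, init='random', random_state=0, max_iter=500)
W = model.fit_transform(A)                    # N-by-K metagene coefficient matrix
H = model.components_                         # K-by-M metagene expression matrix

# Sample j belongs to cluster i if h_ij is the largest entry in column j.
sample_clusters = np.argmax(H, axis=0)
```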
Brunet et al. [9] interpreted parts as metagenes, which represent sets of genes that tend to be co-expressed across samples. Here, parts mean sets of elements, i.e. the building blocks of the whole. These metagenes can overlap, indicating that a single gene may be involved in a number of pathways or biological processes; therefore, sparseness constraints are needed. NMF with sparseness constraints has been proposed by several groups. Gao and Church [14] proposed a method to enforce sparseness of the H matrix by penalizing the number of nonzero entries. This method enforces sparseness by combining the goal of minimizing reconstruction error with that of sparseness [14]. Specifically, they adopt the point-count regularization approach, which enforces sparseness of H by penalizing the number of nonzero entries rather than the sum of entries ∑h_ij in H [11, 12, 15]. The sparseness is controlled by a parameter, and a larger parameter makes the H matrix more and more sparse; the optimization leads the resulting H matrix to contain as many zero entries as possible. Gao’s method enforces sparseness on the H matrix only. We applied the sparseness constraints bidirectionally to both W and H, because microarray gene expression data analysis involves clustering by genes as well as by samples. In microarray data analysis, sample-based clustering can be used to classify samples with similar appearance, while gene-based clustering can provide informative sets of tightly co-regulated genes and information about the activity of genes. While sample-based clustering relies on metagenes, gene-based clustering relies on metasamples. The two processes can be viewed as bidirectionally constrained by each other: good metagenes may support good sample-based clusters and vice versa. When only the sample dimension is optimized, the sparseness of the gene dimension relatively decreases as the sparseness of the sample dimension increases. As a result, the minimization problem, in the convex formulation subsequently described by others [11, 12, 14, 15], yields a matrix that cannot support gene-based clusters well. Therefore, optimizing both the sample and gene dimensions together may be appropriate for clustering microarray data, and this method can optimize both sample and gene clustering. In this paper, we focus in particular on BSNMF (Bidirectional Sparse Nonnegative Matrix Factorization). The definition and algorithm are described below.
Definition: Bidirectional Sparse Nonnegative Matrix Factorization (BSNMF)
Given nonnegative gene expression data V of size N-by-M, find nonnegative matrices W and H of size N-by-C and C-by-M, respectively, such that
E(W, H) = ‖V − WH‖²
is minimized, under the optional constraints:
Sparseness(w_i) = λ_1
Sparseness(h_i) = λ_2,
where w_i is the i-th column of W and h_i is the i-th row of H. Here, C denotes the number of components (metagenes), and λ_1 and λ_2 are the desired sparseness of W and H, respectively. These three parameters are set by the experimenter.
Algorithm: Bidirectional Sparse Nonnegative Matrix Factorization (BSNMF)

1. Initialize W and H to random positive matrices of dimension N-by-C and C-by-M, respectively.
2. Rescale the columns of W and the rows of H to unit norm.
3. Iterate until convergence or until the maximum number of allowed iterations is reached:
(1) If sparseness constraints on H apply:
a. Solve W_ia ← W_ia (VH^T)_ia / (WHH^T)_ia
b. Rescale the columns of W to unit norm
c. Solve, for each j: min { 1/2 ‖V_j − WH_j‖² + 1/2 λ_1 ‖H_j‖² }
d. If H_ij < 0, then set H_ij = 0
(2) If sparseness constraints on W apply:
a. Solve H_ia ← H_ia (W^T V)_ia / (W^T WH)_ia
b. Rescale the rows of H to unit norm
c. Solve, for each i: min { 1/2 ‖V_i − W_i H‖² + 1/2 λ_2 ‖W_i‖² }
d. If W_ij < 0, then set W_ij = 0
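The following NumPy sketch illustrates one possible reading of the update scheme above; it is not the authors' code. Both branches are executed every iteration, since BSNMF imposes sparseness on both W and H, and the penalized least-squares steps (1)c and (2)c are solved in closed form followed by a projection to nonnegative values, as in steps (1)d and (2)d. Parameter names (lam1, lam2, n_iter) are ours.

```python
import numpy as np

def bsnmf(V, k, lam1=0.1, lam2=0.1, n_iter=200, seed=0):
    """Minimal sketch of BSNMF-style bidirectional sparse updates.

    V: nonnegative (N genes x M samples) matrix; k: number of metagenes.
    lam1, lam2: penalty weights on H and W, following steps (1)c and (2)c.
    """
    rng = np.random.default_rng(seed)
    N, M = V.shape
    W = rng.random((N, k)) + 1e-4
    H = rng.random((k, M)) + 1e-4
    eps = 1e-9
    for _ in range(n_iter):
        # (1) multiplicative update for W, then penalized least squares for H
        W *= (V @ H.T) / (W @ H @ H.T + eps)
        W /= np.linalg.norm(W, axis=0, keepdims=True) + eps   # unit-norm columns
        A = W.T @ W + lam1 * np.eye(k)                        # normal equations of (1)c
        H = np.linalg.solve(A, W.T @ V)
        H[H < 0] = 0.0                                        # projection, step (1)d
        # (2) multiplicative update for H, then penalized least squares for W
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        H /= np.linalg.norm(H, axis=1, keepdims=True) + eps   # unit-norm rows
        B = H @ H.T + lam2 * np.eye(k)                        # normal equations of (2)c
        W = np.linalg.solve(B, H @ V.T).T
        W[W < 0] = 0.0                                        # projection, step (2)d
    return W, H
```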
Measures of clustering evaluation
In this study, we evaluate the various MFs using cluster evaluation indices. Here, we briefly introduce the cluster evaluation indices we used.
Compactness The first measure estimates cluster compactness or homogeneity from intra-cluster variance. Many variants of the within-cluster homogeneity measure estimate average or maximum pairwise within-cluster distances, average or maximum centroid-based similarities, or use graph-based approaches [26]. For this purpose, we used the homogeneity index of Sharan and Shamir, defined as:
Homogeneity = (1/N) Σ_x Corr(C(x), C(M))
In this equation, C is a cluster, C(M) is the cluster centroid and C(x) is each data item. Corr(C(x), C(M)) is the correlation coefficient between each data item and the centroid of its cluster, and N is the number of data items.
Separation The second index quantifies the degree of separation between individual clusters. For example, the average weighted between-cluster distances define an overall rating for a partitioning, where the distance between individual clusters can be calculated as the distance between cluster centroids or as the minimum distance between data items belonging to different clusters. Here, we estimated the separation of a partitioning as the minimum separation observed between individual clusters:
Separation = min_{k≠l} dist(C_k, C_l)
where dist(C_k, C_l) is the minimum distance between a pair of data items, i and j, with i ∈ C_k and j ∈ C_l.
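A rough sketch of these two indices under our reading of the definitions above (correlation-based homogeneity, minimum pairwise Euclidean distance for separation; the function names are ours):

```python
import numpy as np

def homogeneity(X, labels):
    """Average correlation of each item with the centroid of its cluster."""
    vals = []
    for k in np.unique(labels):
        members = X[labels == k]
        centroid = members.mean(axis=0)
        for x in members:
            vals.append(np.corrcoef(x, centroid)[0, 1])
    return np.mean(vals)

def separation(X, labels):
    """Minimum distance between items belonging to different clusters."""
    best = np.inf
    ks = np.unique(labels)
    for a in range(len(ks)):
        for b in range(a + 1, len(ks)):
            Xa, Xb = X[labels == ks[a]], X[labels == ks[b]]
            d = np.linalg.norm(Xa[:, None, :] - Xb[None, :, :], axis=-1)
            best = min(best, d.min())
    return best
```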
Combinations There are a number of enhanced approaches combining the different types of cluster evaluation indices. Several methods estimate both within-cluster homogeneity and between-cluster separation and compute a resulting score by combining the two measures linearly or nonlinearly. A well-known linear combination is the SD-validity Index [27], and nonlinear combinations include the Dunn Index [28], Dunn-like indices [26], the Davies-Bouldin Index [29] and the silhouette width [30]. We used the Dunn Index and the average silhouette width. The Dunn Index measures the ratio between the smallest between-cluster distance and the largest intra-cluster diameter in a partitioning. It is defined as:
Dunn = min_{k≠l} dist(C_k, C_l) / max_m diam(C_m)
where diam(C_m) is the maximum intra-cluster distance within cluster C_m and dist(C_k, C_l) is the minimum distance between pairs of data items, i and j, with i ∈ C_k and j ∈ C_l. The Dunn Index lies in the interval [0, +∞) and should be maximized.
The silhouette width for a partitioning is computed as the average silhouette value over all data items [30]. For each observation i, the silhouette width s(i) is defined as
s(i) = (b(i) − a(i)) / max{a(i), b(i)}
where a(i) is the average dissimilarity between i and all other points of the cluster to which i belongs, and b(i) is the average dissimilarity between i and all observations in its nearest neighbouring cluster. A large s(i) means that the data item is “well clustered”.
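For illustration, both indices can be computed as follows (a sketch; the Dunn index helper is ours, while the silhouette comes from scikit-learn):

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.metrics import silhouette_score

def dunn_index(X, labels):
    """Smallest between-cluster distance divided by the largest intra-cluster diameter."""
    ks = np.unique(labels)
    diameters = [cdist(X[labels == k], X[labels == k]).max() for k in ks]
    min_between = min(
        cdist(X[labels == ks[a]], X[labels == ks[b]]).min()
        for a in range(len(ks)) for b in range(a + 1, len(ks))
    )
    return min_between / max(diameters)

def average_silhouette_width(X, labels):
    """Average silhouette value s(i) over all data items."""
    return silhouette_score(X, labels)
```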
Compliance between partitioning and distance information An alternative way of estimating cluster validity is to directly assess the degree to which distance information in the original data is consistent with a partitioning. For this purpose, a partitioning can be represented by its cophenetic matrix [31], of which each entry C(i, j) indicates whether the two elements i and j are assigned to the same cluster or not. In hierarchical clustering, the cophenetic distance between two observations is defined as the inter-group dissimilarity at which the two observations are first joined in the same cluster. The cophenetic matrix can be compared with the original dissimilarity matrix using Hubert’s correlation, the normalized gamma statistic, or a measure of correlation such as Pearson’s [32] or Spearman’s rank correlation [33]. We used Hubert’s and Pearson correlations. Hubert’s correlation is given by the equation:
Γ = (1/M) Σ_{i=1}^{N−1} Σ_{j=i+1}^{N} P(i, j) Q(i, j)
where M = N(N−1)/2, P is the proximity matrix of the data set and Q is an N-by-N matrix whose (i, j) element represents the distance between the representative points of the clusters to which the objects x_i and x_j belong.
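A sketch of this statistic, assuming each cluster is represented by its centroid (the function and variable names are ours):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def hubert_gamma(X, centroids, labels):
    """Average product of the data proximity matrix P and the matrix Q of
    distances between the representative points (centroids) of the clusters
    containing each pair of items, over all item pairs."""
    P = squareform(pdist(X))                      # pairwise distances between items
    Q = squareform(pdist(centroids[labels]))      # distances between per-item cluster centroids
    iu = np.triu_indices_from(P, k=1)             # each pair (i, j), i < j, counted once
    return np.mean(P[iu] * Q[iu])
```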
Number of clusters Most of the internal measures discussed above can be used to assess the number of clusters. If both the clustering algorithm employed and the internal measure are satisfactory for the dataset under consideration, the best number of clusters can be identified by a knee in the resulting performance curve. To determine whether the ‘optimal’ number of clusters is found, we used the GAP statistic [34], where K is the total number of clusters giving the within-cluster dispersion measures W_k, k = 1, 2, …, K. As used here, the GAP statistic should be minimized to find the ‘optimal’ number of clusters.
Predictive power and accuracy A number of indices can assess agreement between a partitioning and the gold standard by examining the contingency table of the pairwise assignment of data items. The best-known index is the Rand Index [35], which determines the similarity between two partitions by penalizing false positives and false negatives. There are a number of variations of the Rand Index. In particular, the adjusted Rand Index [36] introduces a statistically induced normalization to yield values close to zero for random partitions. Other related indices are the Jaccard coefficient [37] and the Minkowski score [38]. We used the adjusted Rand Index to estimate the similarity between the clustering results and the known class labels. The adjusted Rand Index is defined as:
ARI = [ Σ_{l,k} C(n_lk, 2) − Σ_l C(n_l., 2) Σ_k C(n_.k, 2) / C(n, 2) ] / [ ½ ( Σ_l C(n_l., 2) + Σ_k C(n_.k, 2) ) − Σ_l C(n_l., 2) Σ_k C(n_.k, 2) / C(n, 2) ]
where C(a, 2) denotes the binomial coefficient “a choose 2”, n_lk denotes the number of data items assigned to both cluster l and cluster k, n_l. and n_.k are the corresponding row and column sums, and n is the total number of data items. The adjusted Rand Index has a value in the interval [0, 1] and is to be maximized.
The accuracy of clustering is measured by the following formula [39]:
Accuracy = (1/n) Σ_{i=1}^{n} I(j_i)
where I(j_i) is 1 if the cluster assignment is correct for sample j_i and 0 otherwise, and n is the number of samples.
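A sketch of both measures (the adjusted Rand index from scikit-learn plus the simple accuracy count; the example labels below are hypothetical):

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score

true_labels = np.array([0, 0, 1, 1, 1])         # hypothetical gold-standard classes
pred_labels = np.array([0, 0, 1, 0, 1])         # hypothetical cluster assignments

ari = adjusted_rand_score(true_labels, pred_labels)

# Accuracy assumes cluster labels have already been matched to the classes
# (e.g. by choosing the label permutation that maximizes the count of matches).
accuracy = np.mean(true_labels == pred_labels)  # fraction of correctly assigned samples
```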
Biological enrichment analysis
We applied biological enrichment analysis to the clustering results in order to assess whether functionally related genes are grouped together. The resulting genes from clustering are subdivided into functional categories for biological interpretation. This functional categorization was accomplished using GO terms and biological pathways. We used DAVID 2.1 (http://david.abcc.ncifcrf.gov/) for GO term enrichment analysis and ArrayXPath [40, 41] for pathway annotation. A modified Fisher’s exact test is performed to determine whether the proportion of genes falling into a given category differs between the cluster and the background, treating membership in the cluster and annotation with the category as the two mutually exclusive groupings of the contingency table. Therefore, a lower p-value indicates a stronger association among cluster members.
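For a single term, the hypergeometric enrichment p-value mentioned in the Results can be computed as below (a sketch; the study itself relied on DAVID and ArrayXPath for the actual annotation, and the function name is ours):

```python
from scipy.stats import hypergeom

def enrichment_pvalue(N, K, n, k):
    """P(X >= k) under the hypergeometric distribution.

    N: genes on the array, K: genes annotated with the term,
    n: genes in the cluster, k: cluster genes annotated with the term.
    """
    return hypergeom.sf(k - 1, N, K, n)

# Example: enrichment_pvalue(5000, 120, 200, 15)
```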
References
 1.
Bicciato S, Luchini A, Di Bello C: PCA disjoint models for multiclass cancer analysis using gene expression data. Bioinformatics 2003, 19(5):571–578. 10.1093/bioinformatics/btg051
 2.
Qi Q, Zhao Y, Li M, Simon R: Nonnegative matrix factorization of gene expression profiles: a plugin for BRBArrayTools. Bioinformatics 2009, 25(4):545–547. 10.1093/bioinformatics/btp009
 3.
Schachtner R, Lutter D, Knollmuller P, Tome AM, Theis FJ, Schmitz G, Stetter M, Vilda PG, Lang EW: Knowledgebased gene expression classification via matrix factorization. Bioinformatics 2008, 24(15):1688–1697. 10.1093/bioinformatics/btn245
 4.
Tan Y, Shi L, Tong W, Hwang GT, Wang C: Multiclass tumor classification by discriminant partial least squares using microarray gene expression data and assessment of classification models. Comput Biol Chem 2004, 28(3):235–244. 10.1016/j.compbiolchem.2004.05.002
 5.
Ma S, Dai Y: Principal component analysis based methods in bioinformatics studies. Brief Bioinform 2011.
 6.
Paatero P, Tapper U: Positive matrix factorization: A non-negative factor model with optimal utilization of error estimates of data values. Environmetrics 1994, 5: 111–126. 10.1002/env.3170050203
 7.
Lee DD, Seung HS: Learning the parts of objects by nonnegative matrix factorization. Nature 1999, 401(6755):788–791. 10.1038/44565
 8.
Hyvarinen A, Karhunen J, Oja E: Independent Component Analysis. Wiley-Interscience; 2001.
 9.
Brunet JP, Tamayo P, Golub TR, Mesirov JP: Metagenes and molecular pattern discovery using matrix factorization. Proc Natl Acad Sci U S A 2004, 101(12):4164–4169. 10.1073/pnas.0308531101
 10.
Li SZ, Hou XW, Zhang HJ, Cheng QS: Learning spatially localized, parts-based representation. Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition: December 2001, 207–212.
 11.
Hoyer PO: Nonnegative sparse coding. Neural Networks for Signal Processing XII: 2002; Martigny, Switzerland 2002, 557–565.
 12.
Hoyer PO, Dayan P: Nonnegative Matrix Factorization with sparseness constraints. Journal of Machine Learning Research 2004, 5: 1457–1469.
 13.
Wang Y, Jia Y, Hu C, Turk M: Fisher nonnegative matrix factorization for learning local features. In Asian Conference on Computer Vision. Jeju, Korea; 2004:806–811.
 14.
Gao Y, Church G: Improving molecular cancer class discovery through sparse nonnegative matrix factorization. Bioinformatics 2005, 21(21):3970–3975. 10.1093/bioinformatics/bti653
 15.
Pauca P, Shahnaz F, Berry M, Plemmons R: Text Mining using NonNegative Matrix Factorizations. Proceedings of the Fourth SIAM International Conference on Data Mining: April 22–24 2004; Lake Buena Vista, Florida, USA 2004.
 16.
Sokal RR: Clustering and classification: background and current directions. In Classification and Clustering. Edited by: Van Ryzin J. Academic Press, London; 1977:1–15.
 17.
Everitt BS: Cluster Analysis. Edward Arnold; 1993.
 18.
Sharan R, MaronKatz A, Shamir R: CLICK and EXPANDER: a system for clustering and visualizing gene expression data. Bioinformatics 2003, 19(14):1787–1799. 10.1093/bioinformatics/btg232
 19.
Yeung KY, Haynor DR, Ruzzo WL: Validating clustering for gene expression data. Bioinformatics 2001, 17(4):309–318. 10.1093/bioinformatics/17.4.309
 20.
Dueck D, Morris QD, Frey BJ: Multiway clustering of microarray data using probabilistic sparse matrix factorization. Bioinformatics 2005, 21(Suppl 1):i144–151. 10.1093/bioinformatics/bti1041
 21.
Xu Y, Olman V, Xu D: Clustering gene expression data using a graphtheoretic approach: an application of minimum spanning trees. Bioinformatics 2002, 18(4):536–545. 10.1093/bioinformatics/18.4.536
 22.
Golub TR, Slonim DK, Tamayo P, Huard C, Gaasenbeek M, Mesirov JP, Coller H, Loh ML, Downing JR, Caligiuri MA, et al.: Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science 1999, 286(5439):531–537. 10.1126/science.286.5439.531
 23.
Pomeroy SL, Tamayo P, Gaasenbeek M, Sturla LM, Angelo M, McLaughlin ME, Kim JY, Goumnerova LC, Black PM, Lau C, et al.: Prediction of central nervous system embryonal tumour outcome based on gene expression. Nature 2002, 415(6870):436–442. 10.1038/415436a
 24.
Iyer VR, Eisen MB, Ross DT, Schuler G, Moore T, Lee JC, Trent JM, Staudt LM, Hudson J Jr., Boguski MS, et al.: The transcriptional program in the response of human fibroblasts to serum. Science 1999, 283(5398):83–87. 10.1126/science.283.5398.83
 25.
Fisher R: The use of multiple measurements in taxonomic problems. Ann Eugenics 1936, 7: 179–188. 10.1111/j.1469-1809.1936.tb02137.x
 26.
Bezdek J, Pal N: Some new indexes of cluster validity. IEEE Trans Syst Man Cybernet 1998, 28: 301–315. 10.1109/3477.678624
 27.
Halkidi M, Batistakis Y, Vazirgiannis M: On clustering validation techniques. Journal of Intelligent Information Systems 2001, 17: 107–145. 10.1023/A:1012801612483
 28.
Dunn J: Well separated clusters and optimal fuzzy partitions. Journal of Cybernetics 1974, 95–104.
 29.
Davies DL, Bouldin DW: A cluster separation measure. IEEE Trans Pattern Anal Machine Intell 1979, 1: 224–227.
 30.
Rousseeuw PJ: Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. J Comput Appl Math 1987, 20: 53–65. 10.1016/0377-0427(87)90125-7
 31.
Romesburg HC: Cluster Analysis for Researchers. Belmont, Calif.: Lifetime Learning Publications; 1984.
 32.
Edwards AL: The Correlation Coefficient. In An Introduction to Linear Regression and Correlation. San Francisco, CA: W. H. Freeman; 1976.
 33.
Lehmann EL, D'Abrera HJM: Nonparametrics: Statistical Methods Based on Ranks. Upper Saddle River, N.J. : Prentice Hall; 1998.
 34.
Tibshirani R, Walther G, Hastie T: Estimating the number of clusters in a data set via the gap statistic. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 2001, 63(2):411–423. 10.1111/1467-9868.00293
 35.
Rand WM: Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association 1971, 66(336):846–850. 10.2307/2284239
 36.
Hubert L, Arabie P: Comparing partitions. J Classif 1985, 2: 193–218. 10.1007/BF01908075
 37.
Jaccard P: Nouvelles recherches sur la distribution florale. Bull Soc Vaudoise des Sci Nat 1908, 44: 223–270.
 38.
Jardine N, Sibson R: Mathematical Taxonomy. Wiley, London; 1971.
 39.
Xu W, Liu X, Gong Y: Document clustering based on non-negative matrix factorization. Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval: July 28 – August 1 2003; Toronto, Canada 2003, 267–273.
 40.
Chung HJ, Kim M, Park CH, Kim J, Kim JH: ArrayXPath: mapping and visualizing microarray geneexpression data with integrated biological pathway resources using Scalable Vector Graphics. Nucleic Acids Res 2004, 32(Web Server issue):W460–464.
 41.
Chung HJ, Park CH, Han MR, Lee S, Ohn JH, Kim J, Kim JH: ArrayXPath II: mapping and visualizing microarray geneexpression data with biomedical ontologies and integrated biological pathway resources using Scalable Vector Graphics. Nucleic Acids Res 2005, 33(Web Server issue):W621–626.
Acknowledgements
This work was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (No.20110018257), Systems Biomedical Informatics National Core Research Center and a grant of the Korea Healthcare technology R&D Project, Ministry of Health & Welfare, Republic of Korea (A070001).
This article has been published as part of BMC Bioinformatics Volume 12 Supplement 13, 2011: Tenth International Conference on Bioinformatics – First ISCB Asia Joint Conference 2011 (InCoB/ISCB-Asia 2011): Bioinformatics. The full contents of the supplement are available online at http://www.biomedcentral.com/1471-2105/12?issue=S13.
Additional information
Authors' contributions
MHK conceived of the study and implemented the algorithms. HJS revised the manuscript and improved the methodological understanding. JGJ improved the manuscript writing. JHK supervised the study.
Competing interests
The authors declare that they have no competing interests.
Electronic supplementary material
Additional file 1: Illustration of separation vs. homogeneity. Results for each dataset are gathered together; each color denotes a method. Results from NMF, SNMF and BSNMF have a higher slope; that is, their homogeneity and separation are better optimized. (DOCX 62 KB)
Additional file 2: Illustration of Hubert's gamma, a measure of compliance between partitioning and distance information. Each plot shows the result for one dataset at rank K=2, 3, 4 (for the Iris dataset) or K=2, 3, 4 and 5 (for the rest). (a) Leukemia dataset (b) Medulloblastoma dataset (c) Iris dataset (d) fibroblast dataset (e) mouse dataset. (DOCX 52 KB)
Additional file 3: The twenty common genes in each leukemia subtype. (DOCX 18 KB)
Additional file 4: Patterns of mean expression level for each cluster for the fibroblast dataset. (a) K-means, (b) SVD, (c) PCA, (d) ICA, (e) NMF, (f) SNMF and (g) BSNMF. Each line represents a cluster. (DOCX 257 KB)
Rights and permissions
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Kim, M.H., Seo, H.J., Joung, JG. et al. Comprehensive evaluation of matrix factorization methods for the analysis of DNA microarray gene expression data. BMC Bioinformatics 12, S8 (2011). https://doi.org/10.1186/1471-2105-12-S13-S8
Keywords
 Gene Ontology
 Acute Lymphoblastic Leukemia
 Singular Value Decomposition
 Independent Component Analysis
 Acute Myelogenous Leukemia