- Methodology article
- Open Access
Non-negative matrix factorization by maximizing correntropy for cancer clustering
© Wang et al.; licensee BioMed Central Ltd. 2013
- Received: 16 February 2012
- Accepted: 8 March 2013
- Published: 24 March 2013
Non-negative matrix factorization (NMF) has been shown to be a powerful tool for clustering gene expression data, which are widely used to classify cancers. NMF aims to find two non-negative matrices whose product closely approximates the original matrix. Traditional NMF methods minimize either the l2 norm or the Kullback-Leibler distance between the product of the two matrices and the original matrix. Correntropy was recently shown to be an effective similarity measure owing to its robustness to outliers and noise.
We propose a maximum correntropy criterion (MCC)-based NMF method (NMF-MCC) for gene expression data-based cancer clustering. Instead of minimizing the l2 norm or the Kullback-Leibler distance, NMF-MCC maximizes the correntropy between the product of the two matrices and the original matrix. The optimization problem can be solved by an expectation conditional maximization algorithm.
Extensive experiments on six cancer benchmark sets demonstrate that the proposed method is significantly more accurate than the state-of-the-art methods in cancer clustering.
- Nonnegative Matrix Factorization
- Cancer Cluster
- Expectation Conditional Maximization
- Convex Conjugate Function
- Expectation Conditional Maximization Algorithm
Because cancer has been a leading cause of death worldwide for several decades, the classification of cancers is becoming increasingly important for cancer treatment and prognosis [1, 2]. With advances in DNA microarray technology, it is now possible to monitor the expression levels of a large number of genes simultaneously. There have been a variety of studies on analyzing DNA microarray data for cancer class discovery [3-5]. Such methods have been shown to outperform the traditional cancer classification methods based on morphological appearance. In these studies, different cancer classes are discriminated by their corresponding gene expression profiles.
Several clustering algorithms have been used to identify groups of similarly expressed genes. Non-negative matrix factorization (NMF) was recently introduced to analyze gene expression data and demonstrated superior performance in terms of both accuracy and stability [6-8]. Gao and Church reported an effective unsupervised method for cancer clustering with gene expression profiles via sparse NMF (SNMF). Carmona et al. presented a methodology based on non-smooth non-negative matrix factorization (nsNMF) that clusters closely related genes and conditions in sub-portions of the data and is able to identify localized patterns in large datasets. Zheng et al. [5, 7] applied penalized matrix decomposition (PMD) to extract meta-samples from gene expression data, which could capture the inherent structures of samples belonging to the same class.
NMF approximates a given gene expression data matrix, X, as a product of two low-rank nonnegative matrices, H and W, i.e., X ≈ HW. This is usually formulated as an optimization problem whose objective is to minimize either the l2 norm or the Kullback-Leibler (KL) distance between X and HW. Most of the improved NMF algorithms are also based on the minimization of these two distances while adding a sparseness term, a graph regularization term, etc. Sandler and Lindenbaum argued that measuring the dissimilarity of X and HW by either the l2 norm or the KL distance, even with additional bias terms, was inappropriate in computer vision applications because of the nature of errors in images. They instead proposed an NMF with the earth mover's distance (EMD) metric, which minimizes the EMD error between X and HW. The resulting NMF-EMD algorithm demonstrated significantly improved performance in two challenging computer vision tasks, i.e., texture classification and face recognition. Liu et al. tested a family of NMF algorithms using α-divergence with different α values as the dissimilarity between X and HW for clustering cancer gene expression data.
It is widely acknowledged that DNA microarray data contain many types of noise, especially experimental noise. Recently, correntropy was shown to be an effective similarity measure in information theory owing to its robustness to outliers and noise. However, it has not been used in the analysis of microarray data. In this paper, we propose a novel form of NMF that maximizes the correntropy. We introduce a new NMF algorithm with a maximum correntropy criterion (MCC) for the gene expression data-based cancer clustering problem, which we call NMF-MCC. The goal of NMF-MCC is to find a meta-sample matrix, H, and a coding matrix, W, such that the correntropy between the gene expression data matrix, X, and the product of H and W is maximized.
He et al. recently developed a face recognition algorithm, correntropy-based sparse representation (CESR), based on MCC. CESR tries to find a group of sparse combination coefficients that maximize the correntropy between a facial image vector and a linear combination of the faces in the database. He et al. demonstrated that CESR was much more effective in dealing with the occlusion and corruption problems of face recognition than the state-of-the-art methods. However, CESR learns only the combination coefficients, while the basis faces (the faces in the database) are fixed. Compared to CESR, NMF-MCC learns both the combination coefficients and the basis vectors jointly, which allows the algorithm to obtain basis vectors that better represent the data points. Zafeiriou and Petrou addressed the problem of NMF with kernel functions instead of inner products and proposed the projected gradient kernel nonnegative matrix factorization (PGK-NMF) algorithm. Both NMF-MCC and PGK-NMF employ kernel functions to map the linear data space to a non-linear space. However, as we show later, NMF-MCC computes a different kernel for each feature, whereas PGK-NMF computes a single kernel for the whole feature vector. NMF-MCC can therefore assign different weights to different features and emphasize the discriminant features with high weights, thus achieving feature selection. In contrast, like most kernel-based methods, PGK-NMF simply replaces the inner product with the kernel function and treats all features equally, and thus performs no feature selection.
In this section, we first briefly introduce the traditional NMF method. We then propose our novel NMF-MCC algorithm by maximizing the correntropy in NMF. We further propose an expectation conditional maximization-based approach to solve the optimization problem.
Nonnegative matrix factorization
Given a nonnegative gene expression data matrix X of size D × N, NMF seeks two low-rank nonnegative matrices, a meta-sample matrix H of size D × K and a coding matrix W of size K × N, such that X ≈ HW (1). The factorization is quantified by an objective function that minimizes some distance measure, such as:
l2 norm distance: One simple measure is the square of the l2 norm distance (also known as the Frobenius norm or the Euclidean distance) between two matrices, which is defined as:

$\|X - HW\|^2 = \sum_{d=1}^{D}\sum_{n=1}^{N}\Big(x_{dn} - \sum_{k=1}^{K} h_{dk} w_{kn}\Big)^2$ (2)
Kullback-Leibler (KL) divergence: The second measure is the divergence between two matrices, which is defined as:

$D_{KL}(X \,\|\, HW) = \sum_{d=1}^{D}\sum_{n=1}^{N}\Big(x_{dn}\log\frac{x_{dn}}{(HW)_{dn}} - x_{dn} + (HW)_{dn}\Big)$ (3)
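To make the two objectives concrete, the following is a minimal sketch in Python (NumPy) that evaluates both distances for a given factorization. The variable names X, H, and W follow the text; the function names and the small epsilon added for numerical stability are our own.

```python
import numpy as np

def l2_objective(X, H, W):
    """Squared l2 (Frobenius) distance between X and HW, as in (2)."""
    R = X - H @ W
    return np.sum(R ** 2)

def kl_objective(X, H, W, eps=1e-12):
    """Kullback-Leibler divergence between X and HW, as in (3)."""
    V = H @ W
    return np.sum(X * np.log((X + eps) / (V + eps)) - X + V)

# Example: a random D x N gene expression matrix factorized with K meta-samples.
rng = np.random.default_rng(0)
D, N, K = 100, 20, 3
X = rng.random((D, N))
H = rng.random((D, K))   # meta-sample (basis) matrix
W = rng.random((K, N))   # coding matrix
print(l2_objective(X, H, W), kl_objective(X, H, W))
```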
Maximum correntropy criterion for NMF
Correntropy is a generalized similarity measure between two arbitrary random variables x and y. Its definition is not tied to the Gaussian kernel; in general it is defined as V_σ(x, y) = E[k_σ(x − y)], where k_σ is a kernel that satisfies Mercer's theorem and E[·] denotes the expectation. One common choice of k_σ is the Gaussian kernel, k_σ(x − y) = exp(−(x − y)²/(2σ²)) (up to a normalizing constant).
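As an illustration of why this measure is robust, the following is a minimal sketch (our own, not code from the paper) of the empirical correntropy between two vectors under the Gaussian kernel; the sample average replaces the expectation E[·].

```python
import numpy as np

def gaussian_kernel(t, sigma):
    """Gaussian kernel k_sigma(t) = exp(-t^2 / (2 sigma^2)), normalizing constant omitted."""
    return np.exp(-(t ** 2) / (2.0 * sigma ** 2))

def correntropy(x, y, sigma=1.0):
    """Empirical correntropy: the sample mean of k_sigma(x_i - y_i)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.mean(gaussian_kernel(x - y, sigma))

# A large outlier barely changes the correntropy, unlike the squared error.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.0, 3.2, 100.0])   # last entry is an outlier
print(correntropy(x, y, sigma=1.0))    # close to the value without the outlier
print(np.sum((x - y) ** 2))            # dominated by the outlier
```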
We can see that, in this formulation, the kernel is applied to the entire feature vector, x, and each feature x_d, d = 1, ⋯, D, is treated equally with the same kernel parameter. In (7), by contrast, a separate kernel function is applied to each feature. This allows the algorithm to learn different kernel parameters, as we will introduce later. In this way, we can assign different weights to different features and thus implement feature selection.
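The contrast can be made concrete with a small sketch (our own illustration, not code from the paper): per-feature Gaussian kernels yield one weight per feature, so badly reconstructed features are down-weighted individually, whereas a single kernel on the whole vector yields only one global weight.

```python
import numpy as np

def per_feature_weights(x, x_hat, sigma):
    """One Gaussian kernel value per feature: features with large errors get small weights."""
    err = (x - x_hat) ** 2
    return np.exp(-err / (2.0 * sigma ** 2))      # vector of D weights

def single_kernel_weight(x, x_hat, sigma):
    """One Gaussian kernel value for the whole feature vector: no per-feature distinction."""
    err = np.sum((x - x_hat) ** 2)
    return np.exp(-err / (2.0 * sigma ** 2))      # a single scalar weight

x     = np.array([0.9, 2.1, 5.0])
x_hat = np.array([1.0, 2.0, 0.0])                 # third feature is badly reconstructed
print(per_feature_weights(x, x_hat, sigma=1.0))   # third weight is near zero
print(single_kernel_weight(x, x_hat, sigma=1.0))  # one scalar shared by all features
```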
We should note the significant difference between NMF-MCC and CESR. As a supervised learning algorithm, CESR represents a test data point, x_t, as a linear combination of all the training data points, x_t ≈ X w_t, where w_t = [w_{1t}, ⋯, w_{Nt}]ᵀ is the combination coefficient vector. CESR aims to find the optimal w_t to maximize the correntropy between x_t and X w_t. Similarly, NMF-MCC represents a data point x_n as a linear combination of some basis vectors, x_n ≈ H w_n, where w_n = [w_{1n}, ⋯, w_{Kn}]ᵀ is the combination coefficient vector. Unlike CESR, NMF-MCC aims to find not only the optimal w_n but also the basis vectors in H that maximize the correntropy between x_n and H w_n, n = 1, ⋯, N. The essential difference between NMF-MCC and CESR thus lies in whether the basis vectors are learned.
To solve the optimization problem, we apply the expectation conditional maximization (ECM) method. Based on the theory of convex conjugate functions, we can derive the following proposition, which forms the basis for solving the optimization problem in (9):
and for a fixed z, the supremum is reached at ϱ=−g(z,σ).
where φ is the convex conjugate function of g(z) defined in Proposition 1, and ρ = [ρ_1, ⋯, ρ_D]ᵀ are the auxiliary variables.
That is, maximizing F(H, W) is equivalent to maximizing the augmented objective function in (13).
The NMF-MCC Algorithm
- 1. E-step: Compute ρ, given the current estimates of the meta-sample matrix H and the coding matrix W, as in (14), where t denotes the t-th iteration. In this study, the kernel size (bandwidth) σ_t² is computed by (15),
where Θ is a parameter that controls the sparseness of ρ.
- 2. CM-steps: In the CM-step, given ρ, we optimize the following function with respect to H and W, as given in (16),
where diag(·) is an operator that converts the vector ρ into a diagonal matrix. By introducing a dual objective function (17), the optimization problem in (16) can be reformulated as the dual problem in (18). Let ϕ_{dk} and ψ_{kn} be the Lagrange multipliers for the constraints h_{dk} ≥ 0 and w_{kn} ≥ 0, respectively, and let Φ = [ϕ_{dk}] and Ψ = [ψ_{kn}]. The Lagrangian is given in (19), and its partial derivatives with respect to H and W are given in (20) and (21). Using the Karush-Kuhn-Tucker optimality conditions, i.e., ϕ_{dk} h_{dk} = 0 and ψ_{kn} w_{kn} = 0, we obtain equations (22) and (23) for h_{dk} and w_{kn}. These equations lead to the following updating rules that maximize the expectation in (13).
The meta-sample matrix H is updated conditioned on the coding matrix W, as in (24). The coding matrix W is then updated conditioned on the newly estimated meta-sample matrix H^{t+1}, as in (25).
We should note that if we exchange the numerators and denominators in (24) and (25), new update formulas are obtained. These new update rules are dual to (24) and (25), and our experimental results show that the dual update rules achieve clustering performance similar to that of (24) and (25).
Algorithm 1 summarizes the optimization procedure.
Algorithm 1 NMF-MCC Algorithm
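To illustrate the overall structure of Algorithm 1, the following Python sketch implements a correntropy-weighted ECM loop under explicit assumptions of ours: the auxiliary variables are used as nonnegative per-feature weights p_d = exp(−e_d/(2σ²)) computed from the per-feature residuals (E-step), the bandwidth is set by a simple mean-residual heuristic rather than the rule in (15) with parameter Θ, and the CM-step uses the standard weighted multiplicative NMF updates with diag(p), which match the form of (24) and (25) only up to these assumptions; it is a sketch, not the paper's exact rules.

```python
import numpy as np

def nmf_mcc_sketch(X, K, n_iter=200, seed=0):
    """Illustrative ECM loop for correntropy-weighted NMF (assumed forms, not the paper's exact rules).

    E-step : per-feature Gaussian weights from the current reconstruction error.
    CM-step: weighted multiplicative updates of H (meta-samples) and W (codings).
    """
    rng = np.random.default_rng(seed)
    D, N = X.shape
    H = rng.random((D, K)) + 1e-3
    W = rng.random((K, N)) + 1e-3
    eps = 1e-12
    for _ in range(n_iter):
        # E-step: per-feature squared residuals and Gaussian weights (assumed form).
        E = X - H @ W
        e = np.sum(E ** 2, axis=1)                  # one residual per feature (gene)
        sigma2 = np.mean(e) + eps                   # simple bandwidth heuristic, not eq. (15)
        p = np.exp(-e / (2.0 * sigma2))             # nonnegative feature weights
        P = p[:, None]                              # acts as diag(p) on the rows of a matrix
        # CM-step: weighted multiplicative updates (standard weighted-NMF form).
        H *= (P * X) @ W.T / ((P * (H @ W)) @ W.T + eps)
        W *= H.T @ (P * X) / (H.T @ (P * (H @ W)) + eps)
    return H, W, p

# Samples can then be clustered from the coding matrix, e.g., by the largest coefficient:
# H, W, p = nmf_mcc_sketch(X, K=3); labels = W.argmax(axis=0)
```

In this sketch the weights p also indicate which genes the factorization emphasizes, mirroring the feature-weighting behavior described earlier.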
Proof of convergence
In this section, we will prove that the objective function in (16) is nonincreasing under the updating rules in (24) and (25).
The objective function in (16) is nonincreasing under the update rules (24) and (25).
To prove the above theorem, we first define an auxiliary function.
The auxiliary function is quite useful because of the following lemma:
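The following is the standard auxiliary-function definition and descent lemma of Lee and Seung, which we assume coincide with the statements used here:

```latex
% Standard auxiliary-function definition and descent lemma (Lee & Seung),
% assumed to match the omitted Definition and Lemma.
G(w, w') \text{ is an auxiliary function for } F(w) \text{ if }
G(w, w') \ge F(w) \text{ and } G(w, w) = F(w).
\text{If } G \text{ is an auxiliary function for } F, \text{ then } F
\text{ is nonincreasing under the update } w^{t+1} = \arg\min_{w} G(w, w^{t}),
\text{ because } F(w^{t+1}) \le G(w^{t+1}, w^{t}) \le G(w^{t}, w^{t}) = F(w^{t}).
```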
Since the updating rules are essentially element-wise, it suffices to show that each F_{kn} is nonincreasing under the update step in (25). The function defined in (30) is an auxiliary function for F_{kn}, which depends only on w_{kn}.
Thus, (32) holds and . □
We can now demonstrate the convergence stated in Theorem 1.
Proof of Theorem 1
Since (30) is an auxiliary function for F_{kn}, F_{kn} is nonincreasing under the update rule in (25).
Similarly, we can also show that O is nonincreasing under the updating steps in (24).
Summary of the six cancer gene expression datasets used to test the NMF-MCC algorithm. Each dataset is characterized by its number of samples (N), number of genes (D), and number of cancer classes (K). The datasets are:
Acute myelogenous leukemia
5 human brain tumor types
4 lung cancer types and normal tissues
9 various human tumor types
Small, round blue cell tumors
Diffuse large B-cell lymphomas
where I(A,B) returns 1 if A=B and 0 otherwise.
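A minimal sketch of this evaluation is shown below, assuming the standard clustering accuracy measure Acc = (1/N) Σ_n I(c_n, map(l_n)), where c_n and l_n are the true class label and the obtained cluster label of sample n and map(·) is the best mapping between cluster labels and class labels. The sketch realizes the mapping with the Hungarian algorithm (scipy's linear_sum_assignment), which is one common implementation choice; the paper does not specify its own.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(true_labels, cluster_labels):
    """Accuracy after optimally mapping cluster labels to class labels."""
    true_labels = np.asarray(true_labels)
    cluster_labels = np.asarray(cluster_labels)
    classes = np.unique(true_labels)
    clusters = np.unique(cluster_labels)
    # Contingency table: counts of samples shared by each (cluster, class) pair.
    cost = np.zeros((len(clusters), len(classes)), dtype=int)
    for i, cl in enumerate(clusters):
        for j, c in enumerate(classes):
            cost[i, j] = np.sum((cluster_labels == cl) & (true_labels == c))
    # The Hungarian algorithm maximizes the total matched count.
    row, col = linear_sum_assignment(-cost)
    return cost[row, col].sum() / len(true_labels)

print(clustering_accuracy([0, 0, 1, 1, 2, 2], [2, 2, 0, 0, 1, 1]))  # 1.0
```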
We first compared the MCC with other loss functions between X and HW for the NMF algorithm on the cancer clustering problem, including the l2 norm distance, the KL distance, the α-divergence, and the earth mover's distance (EMD). We further compared the proposed NMF-MCC algorithm with other NMF-based algorithms, including the penalized matrix decomposition (PMD) algorithm, the original NMF algorithm, the sparse non-negative matrix factorization (SNMF) algorithm, the non-smooth non-negative matrix factorization (nsNMF) algorithm, and the projected gradient kernel nonnegative matrix factorization (PGK-NMF) algorithm.
Traditional unsupervised learning techniques select features with feature selection algorithms and then cluster the data using the selected features. The NMF-MCC algorithm proposed here achieves both goals simultaneously. The learned gene weight vector reflects the importance of the genes in the clustering task, and the coding matrix encodes the clustering results for the samples.
Our experimental results demonstrate that the improvement of NMF-MCC over the other methods increases as the number of genes increases. This shows the ability of the proposed algorithm to effectively select the important genes and cluster the samples. This is an important property because high-dimensional data analysis has become increasingly frequent and important in diverse fields of science, engineering, and the social sciences, ranging from genomics and health sciences to economics, finance, and machine learning. For instance, in genome-wide association studies, hundreds of thousands of SNPs are potential covariates for phenotypes such as cholesterol level or height. The large number of features presents an intrinsic challenge to many classical problems, where the usual low-dimensional methods no longer apply. The NMF-MCC algorithm has been demonstrated to work well on datasets with small numbers of samples but large numbers of features. It can therefore provide a powerful tool for studying high-dimensional problems, such as genome-wide association studies.
We have proposed a novel NMF-MCC algorithm for gene expression data-based cancer clustering. Experiments demonstrate that correntropy is a better measure than the traditional l2 norm and KL distances for this task, and the proposed algorithm significantly outperforms the existing methods.
The study was supported by a grant from King Abdullah University of Science and Technology, Saudi Arabia. We would like to thank Dr. Ran He for the discussion about the maximum correntropy criterion at the ICPR 2012 conference.
- Shi F, Leckie C, MacIntyre G, Haviv I, Boussioutas A, Kowalczyk A: A bi-ordering approach to linking gene expression with clinical annotations in gastric cancer. BMC Bioinformatics. 2010, 11: 477. 10.1186/1471-2105-11-477.
- de Souto MCP, Costa IG, de Araujo DSA, Ludermir TB, Schliep A: Clustering cancer gene expression data: a comparative study. BMC Bioinformatics. 2008, 9: 497. 10.1186/1471-2105-9-497.
- Gao Y, Church G: Improving molecular cancer class discovery through sparse non-negative matrix factorization. Bioinformatics. 2005, 21 (21): 3970-3975.
- Liu W, Yuan K, Ye D: On alpha-divergence based nonnegative matrix factorization for clustering cancer gene expression data. Artif Intell Med. 2008, 44 (1): 1-5. 10.1016/j.artmed.2008.05.001.
- Zheng CH, Ng TY, Zhang L, Shiu CK, Wang HQ: Tumor classification based on non-negative matrix factorization using gene expression data. IEEE Trans Nanobioscience. 2011, 10 (2): 86-93.
- Kim MH, Seo HJ, Joung JG, Kim JH: Comprehensive evaluation of matrix factorization methods for the analysis of DNA microarray gene expression data. BMC Bioinformatics. 2011, 12 (Suppl 13): S8. 10.1186/1471-2105-12-S13-S8.
- Zheng CH, Zhang L, Ng VTY, Shiu SCK, Huang DS: Molecular pattern discovery based on penalized matrix decomposition. IEEE/ACM Trans Comput Biol Bioinform. 2011, 8 (6): 1592-1603.
- Tjioe E, Berry M, Homayouni R, Heinrich K: Using a literature-based NMF model for discovering gene functional relationships. BMC Bioinformatics. 2008, 9 (7): P1.
- Carmona-Saez P, Pascual-Marqui R, Tirado F, Carazo J, Pascual-Montano A: Biclustering of gene expression data by non-smooth non-negative matrix factorization. BMC Bioinformatics. 2006, 7: 78. 10.1186/1471-2105-7-78.
- Venkatesan R, Plastino A: Deformed statistics Kullback-Leibler divergence minimization within a scaled Bregman framework. Phys Lett A. 2011, 375 (48): 4237-4243. 10.1016/j.physleta.2011.09.021.
- Cai D, He X, Han J, Huang TS: Graph regularized nonnegative matrix factorization for data representation. IEEE Trans Pattern Anal Mach Intell. 2011, 33 (8): 1548-1560.
- Sandler R, Lindenbaum M: Nonnegative matrix factorization with earth mover's distance metric for image analysis. IEEE Trans Pattern Anal Mach Intell. 2011, 33 (8): 1590-1602.
- He R, Zheng WS, Hu BG: Maximum correntropy criterion for robust face recognition. IEEE Trans Pattern Anal Mach Intell. 2011, 33 (8): 1561-1576.
- Zafeiriou S, Petrou M: Nonlinear nonnegative component analysis. Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Miami: IEEE, 2852-2857.
- Yan H, Yuan X, Yan S, Yang J: Correntropy based feature selection using binary projection. Pattern Recognit. 2011, 44 (12): 2834-2842. 10.1016/j.patcog.2011.04.014.
- He R, Hu BG, Zheng WS, Kong XW: Robust principal component analysis based on maximum correntropy criterion. IEEE Trans Image Process. 2011, 20 (6): 1485-1494.
- Chalasani R, Principe JC: Self organizing maps with the correntropy induced metric. Proceedings of the 2010 International Joint Conference on Neural Networks (IJCNN 2010). 2010, Barcelona, Spain, 1-6.
- Liu W, Pokharel PP, Principe JC: Correntropy: properties and applications in non-Gaussian signal processing. IEEE Trans Signal Process. 2007, 55 (11): 5286-5298.
- Horaud R, Forbes F, Yguel M, Dewaele G, Zhang J: Rigid and articulated point registration with expectation conditional maximization. IEEE Trans Pattern Anal Mach Intell. 2011, 33 (3): 587-602.
- Beer G: Conjugate convex functions and the epi-distance topology. Proc Am Math Soc. 1990, 108 (1): 117-126. 10.1090/S0002-9939-1990-0982400-8.
- Qi Y, Ye P, Bader J: Genetic interaction motif finding by expectation maximization - a novel statistical model for inferring gene modules from synthetic lethality. BMC Bioinformatics. 2005, 6: 288. 10.1186/1471-2105-6-288.
- Lee DD, Seung HS: Algorithms for non-negative matrix factorization. Adv Neural Inf Process Syst. 2001, 13: 556-562.
- Statnikov A, Aliferis C, Tsamardinos I, Hardin D, Levy S: A comprehensive evaluation of multicategory classification methods for microarray gene expression cancer diagnosis. Bioinformatics. 2005, 21 (5): 631-643. 10.1093/bioinformatics/bti033.
- Shipp M, Ross K, Tamayo P, Weng A, Kutok J, Aguiar R, Gaasenbeek M, Angelo M, Reich M, Pinkus G, Ray T, Koval M, Last K, Norton A, Lister T, Mesirov J, Neuberg D, Lander E, Aster J, Golub T: Diffuse large B-cell lymphoma outcome prediction by gene-expression profiling and supervised machine learning. Nat Med. 2002, 8 (1): 68-74. 10.1038/nm0102-68.
- Golub T, Slonim D, Tamayo P, Huard C, Gaasenbeek M, Mesirov J, Coller H, Loh M, Downing J, Caligiuri M, Bloomfield C, Lander E: Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science. 1999, 286 (5439): 531-537. 10.1126/science.286.5439.531.
- Pomeroy S, Tamayo P, Gaasenbeek M, Sturla L, Angelo M, McLaughlin M, Kim J, Goumnerova L, Black P, Lau C, Allen J, Zagzag D, Olson J, Curran T, Wetmore C, Biegel J, Poggio T, Mukherjee S, Rifkin R, Califano A, Stolovitzky G, Louis D, Mesirov J, Lander E, Golub T: Prediction of central nervous system embryonal tumour outcome based on gene expression. Nature. 2002, 415 (6870): 436-442. 10.1038/415436a.
- Bhattacharjee A, Richards W, Staunton J, Li C, Monti S, Vasa P, Ladd C, Beheshti J, Bueno R, Gillette M, Loda M, Weber G, Mark E, Lander E, Wong W, Johnson B, Golub T, Sugarbaker D, Meyerson M: Classification of human lung carcinomas by mRNA expression profiling reveals distinct adenocarcinoma subclasses. Proc Natl Acad Sci. 2001, 98 (24): 13790-13795. 10.1073/pnas.191502998.
- Staunton J, Slonim D, Coller H, Tamayo P, Angelo M, Park J, Scherf U, Lee J, Reinhold W, Weinstein J, Mesirov J, Lander E, Golub T: Chemosensitivity prediction by transcriptional profiling. Proc Natl Acad Sci. 2001, 98 (19): 10787-10792. 10.1073/pnas.191368598.
- Khan J, Wei J, Ringner M, Saal L, Ladanyi M, Westermann F, Berthold F, Schwab M, Antonescu C, Peterson C, Meltzer P: Classification and diagnostic prediction of cancers using gene expression profiling and artificial neural networks. Nat Med. 2001, 7 (6): 673-679. 10.1038/89044.