- Proceedings
- Open Access
Molecular cancer classification using a meta-sample-based regularized robust coding method
- Shu-Lin Wang^{1, 3}Email author,
- Liuchao Sun^{1} and
- Jianwen Fang^{2, 3}Email author
https://doi.org/10.1186/1471-2105-15-S15-S2
© Wang et al.; licensee BioMed Central Ltd. 2014
Published: 3 December 2014
Abstract
Motivation
Previous studies have demonstrated that machine learning based molecular cancer classification using gene expression profiling (GEP) data is promising for the clinical diagnosis and treatment of cancer. Novel classification methods with high efficiency and prediction accuracy are still needed to deal with the high dimensionality and small sample size typical of GEP data. Recently the sparse representation (SR) method has been successfully applied to cancer classification. Nevertheless, its efficiency needs to be improved when analyzing large-scale GEP data.
Results
In this paper we present the meta-sample-based regularized robust coding classification (MRRCC), a novel and effective cancer classification technique that combines the idea of meta-sample-based clustering with the regularized robust coding (RRC) method. It assumes that the coding residual and the coding coefficient are each independent and identically distributed. Similar to meta-sample-based SR classification (MSRC), MRRCC extracts a set of meta-samples from the training samples and then encodes a testing sample as a sparse linear combination of these meta-samples. The representation fidelity is measured by the l_{2}-norm or l_{1}-norm of the coding residual.
Conclusions
Extensive experiments on publicly available GEP datasets demonstrate that the proposed method is more efficient while its prediction accuracy is equivalent to existing MSRC-based methods and better than other state-of-the-art dimension reduction based methods.
Keywords
- Singular Value Decomposition
- Feature Extraction Method
- Cancer Dataset
- Cancer Classification
- Generalized Gaussian Distribution
Introduction
With the advance of DNA microarray and next-generation sequencing (NGS) technology [1], a large amount of gene expression profiling (GEP) data has rapidly accumulated, requiring novel analysis methods to mine and interpret these data and gain insight into the mechanisms of tumor development. Since Golub et al. used gene expression profiling data, obtained with DNA microarray technology, to classify acute myeloid leukemia (AML) and acute lymphocytic leukemia (ALL) [2], a great number of GEP-based cancer classification methods have been proposed for classifying cancer types or subtypes [3–6]. It has become increasingly clear that common machine learning methods such as support vector machines (SVM) [7, 8] and artificial neural networks (ANN) [5, 9] may not perform very well because of the curse of dimensionality, as the number of features (genes) is usually much higher than the number of samples in most GEP experiments. Therefore, a key task in GEP-based cancer classification is the design of dimension reduction methods that dramatically decrease the number of features in GEP data before classification models are constructed.
Dimension reduction methods can be grouped into two categories: feature selection and feature extraction approaches. Feature selection methods [10], such as the heuristic breadth-first search algorithm, find as many optimal gene subsets as possible and further rank these genes to discover important cancer-related genes [11]. Feature extraction methods instead derive new variables from the data; for example, independent component analysis has been employed to model gene expression data [12, 13]. Gene selection methods do not alter the original representation of each gene, while feature extraction methods, which are based on projection, yield new variables that may reflect the intrinsic characteristics of the original features. Other feature extraction methods such as principal component analysis (PCA) [14], linear discriminant analysis (LDA) [15], locally linear discriminant embedding (LLDE) [16], and partial least squares (PLS) [17] have also been extensively applied to the dimensionality reduction of GEP. These methods can generally achieve satisfactory classification performance with only a few extracted features. Both feature selection and feature extraction methods have their own advantages and disadvantages. The main advantage of gene selection methods is that the selected genes may be related to the underlying mechanisms of cancer development; however, different gene selection methods may select significantly different genes, which can make the results difficult to interpret. Feature extraction methods can obtain a small number of dimensions by integrating the original features, but it is difficult to precisely interpret the biomedical meaning of the derived features.
Machine learning based methods are also often called model-based methods because a predictive model is built to predict the label of a test sample. Model selection is a complex training procedure that easily leads to over-fitting and decreased prediction performance. Recently, sparse representation (SR), a powerful data processing method that does not require model selection, has been extensively applied to face recognition [18, 19] and has been further extended to cancer classification [20–22]. For example, Hang et al. proposed an SR-based classification (SRC) method using ${l}_{1}$-norm minimization to classify cancer test samples. The approach models a classification problem as finding a sparse representation of test samples with respect to training samples [22]. They applied the proposed method to six cancer gene expression datasets, and their experimental results demonstrated that its performance was comparable to or better than that of SVMs. Notably, the proposed method does not involve model selection and is robust to noise, outliers and even incomplete measurements. Zheng et al. further presented a new SR-based method for GEP-based cancer classification, termed meta-sample-based SR classification (MSRC), in which a set of meta-samples is extracted from the training samples and a testing sample is then represented as a linear combination of these meta-samples by the ${l}_{1}$-regularized least squares method [20]. Their experiments on publicly available GEP datasets showed that MSRC is efficient for cancer classification and can achieve higher accuracy than many existing representative schemes such as SVM, SRC and the least absolute shrinkage and selection operator (LASSO) algorithm. In addition, Gan et al. proposed a new classifier, the meta-sample-based robust sparse representation classifier (MRSRC), based on the MSRC method [21]. Their experiments show that these methods are efficient and robust.
Previous SR-based models assume that the coding residual follows a Gaussian or Laplacian distribution, which may not effectively describe the coding residual in practical GEP datasets; moreover, the sparsity constraint on the coding coefficients leads to the high computational cost of the SRC method. To deal with these problems, Yang et al. proposed a new coding model, namely regularized robust coding (RRC), for face recognition [23]. Here, we present the meta-sample-based regularized robust coding classification (MRRCC) method, a novel and effective cancer classification technique combining the ideas of the meta-sample-based and RRC methods. A meta-sample can be represented as a linear combination of a set of training samples, which may capture the intrinsic structures of the data. The coefficient vector of a meta-sample may have only a few nonzero elements, and the expression patterns over the meta-samples can reflect the gene expression patterns. Test samples belonging to the same subclass will have similar sparse representations, while samples from different subclasses will have different sparse representations [22]. Our extensive experiments on cancer datasets show that MRRCC can achieve higher classification accuracy with lower time complexity than other SR-based methods and dimension reduction-based methods.
Methods
Description of SR-based problem
The SR-based problem can be formulated as the ${l}_{1}$-minimization problem

$$\underset{\alpha}{\mathrm{min}}\phantom{\rule{0.25em}{0ex}}{\left|\left|\alpha \right|\right|}_{1}\phantom{\rule{1em}{0ex}}\text{s.t.}\phantom{\rule{0.5em}{0ex}}{\left|\left|y-A\alpha \right|\right|}_{2}\le \epsilon \phantom{\rule{2em}{0ex}}\left(1\right)$$

where $y$ is a given test sample, $A$ represents all training samples, $\alpha $ is the coding vector of $y$ with respect to $A$, and $\epsilon $ is a small positive constant. By coding the test sample $y$ as a sparse linear combination of the training samples via Eq. (1), the SR-based classifier assigns a label to the test sample $y$ according to which subclass produces the least reconstruction error.
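As a minimal sketch of this decision rule (function names are our own, and we substitute an ${l}_{2}$-regularized least-squares solve for the ${l}_{1}$ program in Eq. (1) to keep the example dependency-free; the per-subclass residual comparison is the same):

```python
import numpy as np

def src_predict(y, A, labels, lam=0.01):
    """Assign y to the subclass whose training columns best reconstruct it.

    Stand-in for Eq. (1): the coding vector is obtained by l2-regularized
    least squares instead of l1 minimization, preserving the decision rule
    (least per-subclass reconstruction error wins).
    """
    m = A.shape[1]
    # Ridge solution: alpha = (A^T A + lam I)^{-1} A^T y
    alpha = np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ y)
    residuals = {}
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        alpha_c = np.where(mask, alpha, 0.0)   # keep only class-c coefficients
        residuals[c] = np.linalg.norm(y - A @ alpha_c)
    return min(residuals, key=residuals.get)
```

Any test sample lying close to the span of one subclass's training columns yields a small residual for that subclass and a large one for the others.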
Analysis flowchart of cancer GEP data
- 1)
The whole sample set is randomly split into two disjoint parts, a training set and a test set, and the meta-samples are then extracted only from the training set using singular value decomposition (SVD).
- 2)
The weight of each gene is calculated according to a weight function, and the genes with lower weights are removed from the test sample ${T}_{o}$ and all meta-samples.
- 3)
The test sample ${T}_{o}$ is represented as a linear combination of all meta-samples, and the coding coefficient of the test sample ${T}_{o}$ can be obtained by using RRC.
- 4)
We reconstruct the test sample for each subclass using the meta-samples and the coding coefficient of the original test sample ${T}_{o}$; the reconstructed test samples (test sample 1, test sample 2, ..., test sample $k$) are denoted by ${T}_{1},{T}_{2},\dots ,{T}_{k}$, where $k$ denotes the number of subclasses in the original dataset.
- 5)
The distance between the processed test sample and each reconstructed test sample ${T}_{i},1\le i\le k$ is calculated, and the original test sample ${T}_{o}$ is assigned to the subclass with the minimal distance.
Construct meta-samples
Applying SVD to the training samples of the $i$-th subclass yields the decomposition

$${A}_{i}={M}_{i}{V}_{i}$$

where matrix ${A}_{i}$ is of size $n\times {m}_{i}$, matrix ${M}_{i}$ is of size $n\times {q}_{i}$ and matrix ${V}_{i}$ is of size ${q}_{i}\times {m}_{i}$. Each of the ${q}_{i}$ columns of matrix ${M}_{i}$ is defined as a meta-sample of the $i$-th subclass. Each of the ${m}_{i}$ columns of matrix ${V}_{i}$ represents the meta-sample expression pattern of the corresponding sample. $D=\left[{M}_{1},\dots ,{M}_{i},\dots ,{M}_{k}\right]$ denotes the constructed meta-sample set.
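The per-subclass meta-sample extraction can be sketched with a truncated SVD (a sketch, assuming the singular values are folded into ${M}_{i}$; whether they go into ${M}_{i}$ or ${V}_{i}$ is an implementation choice, and the function names are our own):

```python
import numpy as np

def extract_meta_samples(A_i, q):
    """Truncated SVD of one subclass matrix A_i (genes x samples).

    Returns M_i (n x q, the meta-samples) and V_i (q x m_i, the meta-sample
    expression patterns), so that A_i ~= M_i V_i (exact when q equals the
    rank of A_i).
    """
    U, s, Vt = np.linalg.svd(A_i, full_matrices=False)
    M_i = U[:, :q] * s[:q]   # n x q meta-samples (singular values folded in)
    V_i = Vt[:q, :]          # q x m_i expression patterns
    return M_i, V_i

def build_dictionary(class_matrices, q):
    """Concatenate per-subclass meta-samples into D = [M_1, ..., M_k]."""
    return np.hstack([extract_meta_samples(A_i, q)[0] for A_i in class_matrices])
```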
Calculating coding coefficient using RRC
The RRC model seeks the coding vector by solving

$$\hat{\alpha}=\underset{\alpha}{\mathrm{arg\,min}}\left\{\sum _{i=1}^{n}{\rho}_{\theta}\left({e}_{i}\right)+\sum _{j=1}^{m}{\rho}_{o}\left({\alpha}_{j}\right)\right\}$$

where ${\rho}_{\theta}\left(e\right)=-\mathrm{ln}{f}_{\theta}\left(e\right)$ and ${\rho}_{o}\left(\alpha \right)=-\mathrm{ln}{f}_{o}\left(\alpha \right)$. The elements of the coding residual $e=y-D\alpha =\left[{e}_{1};{e}_{2};\dots ;{e}_{n}\right]$ have the probability density function (PDF) ${f}_{\theta}\left({e}_{i}\right)$, and the elements of the coding vector $\alpha =\left[{\alpha}_{1};{\alpha}_{2};\dots ;{\alpha}_{m}\right]$ have the PDF ${f}_{o}\left({\alpha}_{j}\right)$. Generally, we assume that the unknown PDF ${f}_{\theta}\left(e\right)$ is symmetric, differentiable and monotonic. Therefore, ${\rho}_{\theta}\left(e\right)$ has the following properties: (1) ${\rho}_{\theta}\left(0\right)$ is the global minimum of ${\rho}_{\theta}\left(z\right)$; (2) ${\rho}_{\theta}\left(z\right)={\rho}_{\theta}\left(-z\right)$; (3) if $\left|{z}_{1}\right|<\left|{z}_{2}\right|$, then ${\rho}_{\theta}\left({z}_{1}\right)<{\rho}_{\theta}\left({z}_{2}\right)$. Without loss of generality, we can let ${\rho}_{\theta}\left(0\right)=0$.
By approximating ${\rho}_{\theta}$ with its first-order Taylor expansion, the RRC model can be rewritten as the weighted regularized coding problem

$$\hat{\alpha}=\underset{\alpha}{\mathrm{arg\,min}}\left\{{\left|\left|{W}^{1/2}\left(y-D\alpha \right)\right|\right|}_{2}^{2}+\lambda {\left|\left|\alpha \right|\right|}_{\beta}^{\beta}\right\}$$

where $W$ is a diagonal matrix and ${W}_{i,i}$ is the weight value of each gene. Thus the minimization problem of the RRC model can be transformed into calculating the diagonal weight matrix $W$.
If the coding residual is assumed to follow a generalized Gaussian distribution (GGD), its PDF is

$${f}_{\theta}\left(e\right)=\frac{\beta}{2\sigma \Gamma \left(1/\beta \right)}\mathrm{exp}\left(-{\left(\left|e\right|/\sigma \right)}^{\beta}\right)$$

where $\Gamma $ is the gamma function.
The RRC model has two vital cases when $\phantom{\rule{0.25em}{0ex}}\beta $ is set as two specific values [23].
Iteratively reweighted regularized robust coding algorithm
The iteratively reweighted regularized robust coding (IR^{3}C) algorithm was designed by Yang et al. to solve the RRC model efficiently [23]. The overall procedure of the algorithm is as follows.
Input: normalized test sample $y$ with unit ${l}_{2}$-norm; meta-sample set $D$ extracted from the original training samples; initial coding vector ${\alpha}^{\left(1\right)}$.
Output: $\phantom{\rule{0.25em}{0ex}}\alpha $
- 1.
Compute the gene residual ${e}^{\left(t\right)}=y-D{\alpha}^{\left(t\right)}$
- 2.
Estimate the weight value of each gene as ${\omega}_{\theta}\left({e}_{i}^{\left(t\right)}\right)=1/\left(1+\mathrm{exp}\left(\mu {\left({e}_{i}^{\left(t\right)}\right)}^{2}-\mu \delta \right)\right)$
- 3.
Compute the weighted regularized robust coding coefficient:

$${\alpha}^{*}=\underset{\alpha}{\mathrm{arg\,min}}\left\{{\left|\left|{\left({W}^{\left(t\right)}\right)}^{1/2}\left(y-D\alpha \right)\right|\right|}_{2}^{2}+\lambda {\left|\left|\alpha \right|\right|}_{\beta}^{\beta}\right\}$$
- 4.
Update the robust coding coefficients.
If $t=1$, ${\alpha}^{\left(t\right)}={\alpha}^{*}$; otherwise ${\alpha}^{\left(t\right)}={\alpha}^{\left(t-1\right)}+{\eta}^{\left(t\right)}\left({\alpha}^{*}-{\alpha}^{\left(t-1\right)}\right)$, where ${\eta}^{\left(t\right)}$ is the step size [23].
- 5.
Reconstruct the test sample as ${y}_{rec}^{\left(t\right)}=D{\alpha}^{\left(t\right)}$ from the coding coefficient and all meta-samples
- 6.
Return to step 1 until the convergence condition $\left|\right|{W}^{\left(t\right)}-{W}^{\left(t-1\right)}|{|}_{2}/|\left|{W}^{\left(t-1\right)}\right|{|}_{2}<\phi $ (where $\phi $ is a small positive scalar) is met, or the maximum number of iterations is reached.
Algorithm end.
The final label is assigned by $\mathrm{label}\left(y\right)=\underset{d}{\mathrm{arg\,min}}\left\{{l}_{d}\right\}$, where ${l}_{d}=\left|\right|{W}_{final}^{\frac{1}{2}}\left(y-{D}_{d}{\hat{\alpha}}_{d}\right)|{|}_{2}$, ${D}_{d}$ is the meta-sample set associated with the $d$-th subclass, ${\hat{\alpha}}_{d}$ is the final coding vector associated with the $d$-th subclass, and ${W}_{final}$ is the final weight matrix.
When $\beta =1$, the time complexity of IR^{3}C is $O\left(t{m}^{2}n\right)$, where $n$ is the number of genes, $m$ is the number of meta-samples, and $t$ is the number of iterations. When $\beta =2$, the time complexity of IR^{3}C is $O\left(t{k}_{1}mn\right)$, where ${k}_{1}$ is the number of iterations of the conjugate gradient solver. In either case the time complexity of IR^{3}C is much lower than that of SRC, whose time complexity is $O\left({m}^{2}{n}^{1.5}\right)$ [23].
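The iteration and the final decision rule can be sketched as follows (a simplified sketch, assuming the $\beta =2$ case: the weighted subproblem is solved in closed form rather than by conjugate gradients, the step-size damping of the coefficient update is omitted, and $\delta $ is taken as the $\tau $-quantile of the squared residuals; function names and defaults are our own):

```python
import numpy as np

def ir3c_l2(y, D, lam=0.1, s=8.0, tau=0.5, max_iter=20, phi=1e-3):
    """Simplified IR^3C iteration for the l2-regularized case (MRRCC2-style)."""
    n, m = D.shape
    alpha = np.full(m, 1.0 / m)        # simple initial coding vector
    w_prev = np.ones(n)
    for _ in range(max_iter):
        e = y - D @ alpha              # step 1: coding residual
        e2 = e ** 2
        delta = np.quantile(e2, tau) + 1e-12   # outlier threshold estimate
        mu = s / delta
        # step 2: logistic gene weights (clipped to avoid overflow in exp)
        w = 1.0 / (1.0 + np.exp(np.clip(mu * e2 - mu * delta, -50, 50)))
        # step 3: weighted l2-regularized solve in closed form
        Dw = D * w[:, None]
        alpha = np.linalg.solve(D.T @ Dw + lam * np.eye(m), Dw.T @ y)
        # step 6: convergence on the weight matrix
        if np.linalg.norm(w - w_prev) / np.linalg.norm(w_prev) < phi:
            break
        w_prev = w
    return alpha, w

def classify(y, D, class_slices, alpha, w):
    """Final decision: minimal weighted residual over per-subclass meta-samples."""
    sw = np.sqrt(w)
    dists = [np.linalg.norm(sw * (y - D[:, sl] @ alpha[sl])) for sl in class_slices]
    return int(np.argmin(dists))
```

`class_slices` records which columns of $D$ came from each subclass, mirroring ${D}_{d}$ and ${\hat{\alpha}}_{d}$ in the decision rule.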
In [23] the RRC model with $\beta =1$ is called RRC_L1 and the RRC model with $\beta =2$ is called RRC_L2. However, in our method the input $D$ of IR^{3}C is a set of meta-samples extracted by SVD from the original training set, so we call our methods MRRCC1 (meta-sample-based regularized robust coding classification 1) and MRRCC2 (meta-sample-based regularized robust coding classification 2), corresponding to RRC_L1 and RRC_L2, respectively.
Experiments
Cancer datasets
The summary of the nine cancer datasets.

Types | Datasets | #Samples | #Genes | #Subclasses (K)
---|---|---|---|---
Microarray | DLBCL | 77 | 7,129 | 2
 | ALL | 248 | 12,626 | 6
 | GCM | 190 | 16,063 | 14
 | Lung | 203 | 12,601 | 5
 | MLL | 72 | 7,129 | 3
NGS | BRCACancer | 216 | 20,531 | 2
 | KIRCCancer | 130 | 20,531 | 2
 | LUADCancer | 110 | 20,531 | 2
 | THCACancer | 112 | 20,531 | 2
The four NGS datasets were downloaded from The Cancer Genome Atlas (TCGA) web site (http://cancergenome.nih.gov/). They are breast invasive carcinoma (referred to as BRCACancer), kidney renal clear cell carcinoma (KIRCCancer), lung adenocarcinoma (LUADCancer), and thyroid carcinoma (THCACancer). All samples are matched cancer and normal tissue samples.
Parameter selection
The outlier threshold is estimated as

$$\delta ={\gamma}_{\tau}\left(e\right)$$

where, for the vector $e\in {R}^{n}$, ${\gamma}_{\tau}\left(e\right)$ denotes the $q$-th largest element of the set $\left\{{e}_{j}^{2},j=1,\dots ,n\right\}$ with $q=⌊\tau n⌋$. Parameter $\mu $ is used to control the decreasing rate of the weight ${W}_{i,i}$. We simply set $\mu =s/\delta $, where $s=8$ is a constant. The $\delta $ value, estimated from $\tau $, is therefore a very important parameter for distinguishing outlier genes. The selection of the parameter $\tau $ is determined by our experiments.
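The parameter rule above amounts to an order statistic of the squared residuals (a small sketch; the function name and the choice $q=⌊\tau n⌋$ follow the description above):

```python
import numpy as np

def estimate_delta_mu(e, tau=0.5, s=8.0):
    """Estimate the outlier threshold delta and the rate parameter mu.

    delta is the q-th largest squared residual with q = floor(tau * n),
    so roughly a (1 - tau) fraction of genes fall above the threshold;
    mu = s / delta with s = 8 as in the paper.
    """
    e2 = np.sort(e ** 2)[::-1]      # squared residuals, descending
    q = max(1, int(tau * e2.size))
    delta = e2[q - 1]               # q-th largest element
    mu = s / delta
    return delta, mu
```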
Comparison with other SR-based methods
The classification accuracy obtained by five SR-based methods on the nine cancer datasets.

Types | Datasets | SRC | MSRC | MRSRC | MRRCC1 | MRRCC2
---|---|---|---|---|---|---
Microarray | DLBCL | 94.75 | 96.10 | 94.81 | 97.40 | 94.81
 | ALL | 97.70 | 97.18 | 97.81 | 96.77 | 97.18
 | GCM | 82.93 | 82.32 | 78.79 | 79.80 | 78.79
 | Lung | 94.53 | 95.57 | 96.55 | 96.55 | 96.55
 | MLL | 96.31 | 95.83 | 98.61 | 97.22 | 98.61
NGS | BRCACancer | 96.76 | 95.83 | 99.07 | 99.07 | 99.07
 | KIRCCancer | 95.92 | 95.38 | 96.92 | 96.92 | 96.92
 | LUADCancer | 94.91 | 99.09 | 99.09 | 100 | 99.09
 | THCACancer | 93.30 | 87.50 | 95.54 | 92.86 | 95.54
Comparison with dimension reduction-based methods
A two-stage method can be used to reduce the dimensionality of a dataset before classification. The first stage adopts a gene filter method such as KWRST (Kruskal-Wallis rank sum test) [33] or Relief-F [34] to initially select a small set of differentially expressed genes. The second stage adopts a feature extraction method to further reduce the dimensionality of the dataset. Our previous studies have shown that the prediction accuracy of the two-stage method is influenced by many factors, such as the normalization method, gene filter method, feature extraction method, classification method, the number of genes selected and the number of features extracted, as well as the particular division into training and test sets [35]. In our experiments, training sets and test sets are normalized by sample using z-score normalization. KWRST is used to filter genes, and the 300 top-ranked genes are initially selected. Five feature extraction methods (PCA, LDA, ICA, LLDE, and PLS) are used to reduce the dimensionality of the dataset. K-nearest neighbor (KNN), one of the simplest classification methods, with correlation distance is used to classify cancer samples (here 5 nearest neighbors are used). For the LDA method on datasets with two subclasses, Euclidean distance is used because only one feature is extracted. To avoid over-fitting, before classification we extract only 5 features with each feature extraction method except LDA, for which the number of extracted features is $K-1$. We call these methods PCAKNN, LDAKNN, ICAKNN, LLDEKNN, and PLSKNN, respectively.
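The two-stage pipeline can be sketched as follows (a sketch under our own naming and defaults, using `scipy.stats.kruskal` for the KWRST filter and a plain SVD-based PCA as the example extraction method; the KNN step is omitted for brevity):

```python
import numpy as np
from scipy.stats import kruskal

def two_stage_reduce(X_train, y_train, X_test, n_genes=300, n_feats=5):
    """Two-stage reduction: KWRST gene filter, then PCA.

    X_* are samples x genes matrices, assumed z-score normalized per
    sample beforehand; y_train holds subclass labels.
    """
    # Stage 1: rank genes by the Kruskal-Wallis statistic across subclasses
    # and keep the n_genes top-ranked genes.
    classes = np.unique(y_train)
    stats = np.array([
        kruskal(*[X_train[y_train == c, j] for c in classes]).statistic
        for j in range(X_train.shape[1])
    ])
    top = np.argsort(stats)[::-1][:n_genes]
    Xtr, Xte = X_train[:, top], X_test[:, top]
    # Stage 2: PCA via SVD of the centered training matrix; project both sets.
    mean = Xtr.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xtr - mean, full_matrices=False)
    P = Vt[:n_feats].T
    return (Xtr - mean) @ P, (Xte - mean) @ P
```

The extracted training features would then be fed to a KNN classifier (with correlation distance, as above) to label the projected test samples.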
Conclusions
With the development of microarray and NGS technologies, a huge amount of GEP data is rapidly accumulating, demanding more efficient methods to analyze these data. In this paper we present a novel meta-sample-based regularized robust coding method for cancer classification (MRRCC) that first represents each test sample as a linear combination of all meta-samples, which are extracted from the original training set using SVD. The coefficient vector is then obtained by ${l}_{2}$-regularized least squares, which is as powerful as ${l}_{1}$-norm regularization but has much lower computational cost [23]. The experimental results demonstrate that MRRCC can achieve higher classification accuracy with lower computational cost than previous state-of-the-art solutions such as SRC, MSRC and MRSRC, as well as many dimension reduction based classification methods.
Declarations
Acknowledgements
This work was funded by the National Natural Science Foundation of China, under projects on finding tumor-related driver pathways with a comprehensive analysis method based on next-generation sequencing data and on the dimension reduction of gene expression data based on heuristic methods (grant nos. 61474267, 60973153 and 61133010), and by National Institutes of Health (NIH) grant P01 AG12993 (PI: E. Michaelis).
This article has been published as part of BMC Bioinformatics Volume 15 Supplement 15, 2014: Proceedings of the 2013 International Conference on Intelligent Computing (ICIC 2013). The full contents of the supplement are available online at http://www.biomedcentral.com/bmcbioinformatics/supplements/15/S15.
Authors’ Affiliations
References
- Desai AN, Jere A: Next-generation sequencing: ready for the clinics? Clin Genet. 2012, 81 (6): 503-510. 10.1111/j.1399-0004.2012.01865.x.
- Golub TR, Slonim DK, Tamayo P, Huard C, Gaasenbeek M, Mesirov JP, Coller H, Loh ML, Downing JR, Caligiuri MA: Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science. 1999, 286 (5439): 531-537. 10.1126/science.286.5439.531.
- Wang SL, Fang YP, Fang JW: Diagnostic prediction of complex diseases using phase-only correlation based on virtual sample template. BMC Bioinformatics. 2013, 14.
- Wang SL, Zhu YH, Jia W, Huang DS: Robust classification method of tumor subtype by using correlation filters. IEEE/ACM Transactions on Computational Biology and Bioinformatics. 2012, 9 (2): 580-591.
- Wang SL, Li XL, Zhang SW, Gui J, Huang DS: Tumor classification by combining PNN classifier ensemble with neighborhood rough set based gene reduction. Computers in Biology and Medicine. 2010, 40 (2): 179-189. 10.1016/j.compbiomed.2009.11.014.
- Zheng CH, Huang DS, Zhang L, Kong XZ: Tumor clustering using nonnegative matrix factorization with gene selection. IEEE Transactions on Information Technology in Biomedicine. 2009, 13 (4): 599-607.
- Guyon I, Weston J, Vapnik V: Gene selection for cancer classification using support vector machines. Machine Learning. 2002, 46 (1-3): 389-422.
- Furey TS, Cristianini N, Duffy N, Bednarski DW, Schummer M, Haussler D: Support vector machine classification and validation of cancer tissue samples using microarray expression data. Bioinformatics. 2000, 16 (10): 906-914. 10.1093/bioinformatics/16.10.906.
- Xu Y, Selaru FM, Yin J, Zou TT, Shustova V, Mori Y, Sato F, Liu TC, Olaru A, Wang S: Artificial neural networks and gene filtering distinguish between global gene expression profiles of Barrett's esophagus and esophageal cancer. Cancer Research. 2002, 62 (12): 3493-3497.
- Saeys Y, Inza I, Larranaga P: A review of feature selection techniques in bioinformatics. Bioinformatics. 2007, 23 (19): 2507-2517. 10.1093/bioinformatics/btm344.
- Wang SL, Li XL, Fang JW: Finding minimum gene subsets with heuristic breadth-first search algorithm for robust tumor classification. BMC Bioinformatics. 2012, 13.
- Huang DS, Zheng CH: Independent component analysis-based penalized discriminant method for tumor classification using gene expression data. Bioinformatics. 2006, 22 (15): 1855-1862. 10.1093/bioinformatics/btl190.
- Zheng CH, Chen Y, Li XX, Li YX, Zhu YP: Tumor classification based on independent component analysis. International Journal of Pattern Recognition and Artificial Intelligence. 2006, 20 (2): 297-310. 10.1142/S0218001406004673.
- Wang SL, Wang J, Chen HW, Zhang BY: SVM-based tumor classification with gene expression data. Advanced Data Mining and Applications, Proceedings. 2006, 4093: 864-870. 10.1007/11811305_94.
- Sharma A, Paliwal KK: Cancer classification by gradient LDA technique using microarray gene expression data. Data Knowl Eng. 2008, 66 (2): 338-347. 10.1016/j.datak.2008.04.004.
- Li B, Zheng CH, Huang DS, Zhang L, Han K: Gene expression data classification using locally linear discriminant embedding. Computers in Biology and Medicine. 2010, 40 (10): 802-810. 10.1016/j.compbiomed.2010.08.003.
- Nguyen DV, Rocke DM: Tumor classification by partial least squares using microarray gene expression data. Bioinformatics. 2002, 18 (1): 39-50. 10.1093/bioinformatics/18.1.39.
- Wright J, Yang AY, Ganesh A, Sastry SS, Ma Y: Robust face recognition via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2009, 31 (2): 210-227.
- Ma P, Yang D, Ge YX, Zhang XH, Qu Y, Huang S, Lu JW: Robust face recognition via gradient-based sparse representation. J Electron Imaging. 2013, 22 (1).
- Zheng CH, Zhang L, Ng TY, Shiu SC, Huang DS: Metasample-based sparse representation for tumor classification. IEEE/ACM Trans Comput Biol Bioinform. 2011, 8 (5): 1273-1282.
- Gan B, Zheng CH, Liu JX: Metasample-based robust sparse representation for tumor classification. Engineering. 2013, 5: 78-83.
- Hang XY, Wu FX: Sparse representation for classification of tumors using gene expression data. J Biomed Biotechnol. 2009.
- Yang M, Zhang L, Yang J, Zhang D: Regularized robust coding for face recognition. IEEE Trans Image Process. 2013, 22 (5): 1753-1766.
- Liebermeister W: Linear modes of gene expression determined by independent component analysis. Bioinformatics. 2002, 18 (1): 51-60. 10.1093/bioinformatics/18.1.51.
- Alter O, Brown PO, Botstein D: Singular value decomposition for genome-wide expression data processing and modeling. Proceedings of the National Academy of Sciences of the United States of America. 2000, 97 (18): 10101-10106. 10.1073/pnas.97.18.10101.
- Ramsay J: The elements of statistical learning: data mining, inference, and prediction. Psychometrika. 2003, 68 (4): 611-612. 10.1007/BF02295616.
- Hiriart-Urruty JB, Lemaréchal C: Convex analysis and minimization algorithms. 1996, Berlin; New York: Springer-Verlag, 2.
- Shipp MA, Ross KN, Tamayo P, Weng AP, Kutok JL, Aguiar RCT, Gaasenbeek M, Angelo M, Reich M, Pinkus GS: Diffuse large B-cell lymphoma outcome prediction by gene-expression profiling and supervised machine learning. Nature Medicine. 2002, 8 (1): 68-74. 10.1038/nm0102-68.
- Yeoh EJ, Ross ME, Shurtleff SA, Williams WK, Patel D, Mahfouz R, Behm FG, Raimondi SC, Relling MV, Patel A, Cheng C, Campana D, Wilkins D, Zhou X, Li J, Liu H, Pui CH, Evans WE, Naeve C, Wong L, Downing JR: Classification, subtype discovery, and prediction of outcome in pediatric acute lymphoblastic leukemia by gene expression profiling. Cancer Cell. 2002, 1 (2): 133-143. 10.1016/S1535-6108(02)00032-6.
- Ramaswamy S, Tamayo P, Rifkin R, Mukherjee S, Yeang CH, Angelo M, Ladd C, Reich M, Latulippe E, Mesirov JP: Multiclass cancer diagnosis using tumor gene expression signatures. Proceedings of the National Academy of Sciences of the United States of America. 2001, 98 (26): 15149-15154. 10.1073/pnas.211566398.
- Bhattacharjee A, Richards WG, Staunton J, Li C, Monti S, Vasa P, Ladd C, Beheshti J, Bueno R, Gillette M: Classification of human lung carcinomas by mRNA expression profiling reveals distinct adenocarcinoma subclasses. Proc Natl Acad Sci USA. 2001, 98 (24): 13790-13795. 10.1073/pnas.191502998.
- Armstrong SA, Staunton JE, Silverman LB, Pieters R, de Boer ML, Minden MD, Sallan SE, Lander ES, Golub TR, Korsmeyer SJ: MLL translocations specify a distinct gene expression profile that distinguishes a unique leukemia. Nature Genetics. 2002, 30 (1): 41-47. 10.1038/ng765.
- Kruskal WH, Wallis WA: Use of ranks in one-criterion variance analysis. Journal of the American Statistical Association. 1952, 47 (260): 583-621. 10.1080/01621459.1952.10483441.
- Kononenko I: Estimating attributes: analysis and extensions of Relief. European Conference on Machine Learning, Springer-Verlag, Catania, Italy. 1994, 171-182.
- Wang SL, You HZ, Lei YK, Li XL: Performance comparison of tumor classification based on linear and non-linear dimensionality reduction methods. Advanced Intelligent Computing Theories and Applications. 2010, 6215: 291-300. 10.1007/978-3-642-14922-1_37.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.