
Multiclass classification of microarray data samples with a reduced number of genes

Abstract

Background

Multiclass classification of microarray data samples with a reduced number of genes is a rich and challenging problem in Bioinformatics research. The problem gets harder as the number of classes increases. In addition, the performance of most classifiers is tightly linked to the effectiveness of mandatory gene selection methods. Critical to gene selection is the availability of estimates about the maximum number of genes that can be handled by any classification algorithm. Lack of such estimates may lead either to computationally demanding explorations of a search space with thousands of dimensions or to classification models based on gene sets of unrestricted size. In the former case, unbiased but possibly overfitted classification models may arise. In the latter case, biased classification models unable to support statistically significant findings may be obtained.

Results

A novel bound on the maximum number of genes that can be handled by binary classifiers in binary mediated multiclass classification algorithms of microarray data samples is presented. The bound suggests that high-dimensional binary output domains might favor the existence of accurate and sparse binary mediated multiclass classifiers for microarray data samples.

Conclusions

A comprehensive experimental work shows that the bound is indeed useful to induce accurate and sparse multiclass classifiers for microarray data samples.

Background

A number of multiclass classification methods for microarray data have been developed in recent years [1, 2]. However, their ability to scale well with the number of classes and to provide accurate and sparse multiclass classification models essentially free of model-selection bias remains a challenging issue [3, 4]. Sparse multiclass classification models of microarray data samples are useful; they involve a reduced number of input genes and thus are easy to compute with and to interpret [5].

In this paper, we present a new gene selection method, valid for binary mediated multiclass classification approaches to microarray data samples, that implicitly models a gene selection sparsity constraint. We rely on output coding [6] methods allowing the reduction of M-multiclass classification into n binary classification tasks. We assume a model of independent genes, independent binary classifiers and a principle of information content equipartition among binary classifiers to derive a bound on the maximum number of genes that can be handled by binary classifiers in binary mediated multiclass classification of microarray data samples. The derived bound scales with the inverse of n, thus providing a way to tackle the computational complexity of finding accurate and sparse multiclass classification models of microarray data samples: simply increase the number n of binary classifiers and perform bounded optimum gene selection on the lists of predictive genes for individual binary classifiers. In other words, the blessing face of dimensionality might be a solution to the problem of accurate and sparse multiclass classification of microarray data samples; we just need to guarantee the induction of a large number n of independent binary classifiers. However, the induction of a large number n of independent binary classifiers by means of output coding methods may be hard to achieve when training data is scarce, as in microarray data analysis. Hence, we may be forced to accept the best n with regard to the key independence factor [7, 8] of general output coding methods. Provided the best n is sufficiently large, the design of accurate and sparse multiclass classifiers of microarray data samples becomes feasible.

Output coding embodies the design of the well-known One Against All (OAA) [9] multiclass classifiers, which divide an M-multiclass classification problem into n = M binary classification tasks, each binary task dealing with the problem of discriminating a given class against the others. A further generalization of OAA classifiers leads to the design of Error Correcting Output Coding (ECOC) classifiers [10, 11], which divide an M-multiclass classification problem into n binary classification tasks, n being determined by the size of some error correcting code. ECOC classifiers can then be used to explore the feasibility of accurate and sparse multiclass classifiers of microarray data samples by letting n approach infinity. In this paper, the recently introduced [12] class of ECOC classifiers based on LDPC codes [13] is considered. Hence, ECOC classifiers based on LDPC codes of size n up to $\lceil 15 \cdot \log_2 M \rceil$ and OAA classifiers of size n = M are evaluated. For OAA as well as ECOC classifiers, binary linear Support Vector Machine (SVM) [14] classifiers are assumed. For the purpose of selecting the most important genes at core SVMs, univariate ranking information [15] based on the widely used S2N metric [16–18] is assumed. Using the above setting, a complete experimental protocol is presented for the design of accurate and sparse multiclass classifiers for microarray data samples essentially free of model-selection bias [19–22]. Our approach is evaluated on 8 benchmark microarray datasets. Experimental results confirm the feasibility of our proposed method.

Results and Discussion

An upper bound on the number of genes per binary classifier

How much information can a set of p independent genes convey about a set of M phenotypes? Being aware of such a fundamental limitation could be crucial in the design of accurate and sparse multiclass classifiers of microarray data samples. Let S be a microarray dataset comprising q samples from M ≥ 3 classes, each sample defined by the gene expression measurements of p genes (p ≫ q). Hence, the average information content per class sample in S can be upper bounded by $H_M = \log_2 M$.

In addition, let us assume that genes behave as a collection of p independent identically distributed binary random variables, i.e., a kind of probabilistic Boolean model of gene expression is considered [23]. Hence, each gene is in state 1 (expressed) with probability f and in state 0 (not expressed) with probability 1 - f, each state representing gene activity above or below some threshold for an effect. Thus, in this model of gene expression, each gene conveys on average $H(f) = -f \cdot \log_2 f - (1 - f) \cdot \log_2(1 - f)$ bits of information. Furthermore, let us assume an output coding strategy over S able to induce n independent binary datasets and corresponding binary classifiers. Hence, under a principle of information content equipartition, $H_b = H_M / n = \frac{\log_2 M}{n}$ bits of information will be available at each binary classifier. Finally, let us assume that each binary classifier is allowed to select a fraction Q of the complete set of genes. Hence, after the selection of Q · p genes, at most Q · p · H(f) bits of information will be available at each binary classifier, and this quantity cannot exceed $H_b$:

$$Q \cdot p \cdot H(f) \le \frac{\log_2 M}{n}$$
(1)

Eq. 1 nicely estimates the maximum fraction of genes ($Q_{max}$) that can be selected by any binary classifier in terms of the main parameters characterizing any binary mediated multiclass classification problem plus an unknown parameter f. To estimate f, we turn to the problem of estimating the probability f that a biased coin will come up heads in a sequence of q independent coin tosses, provided k heads have been observed. The maximum likelihood estimate of f, i.e., the value of f with the largest probability for the observed data, is given by k/q. To obtain k, we just need to count the number of expressed genes across the collection of q samples. However, aiming at a more general bound, we would like to avoid overwhelming data-dependent counts. If we further assume that averages of gene expression over a sufficiently large population of individuals are equal to averages over many genes, i.e., an ergodic behavior of genes [24] is considered, the fraction f should equal the fraction of genes k*/p that are expressed in any individual. Assuming that k*/p < 0.5 (otherwise the non-expressed genes can be considered) and recalling that H(f) is a monotonically increasing function in [0, 0.5], we get $H(k/q) \approx H(k^*/p) \ge H(1/p)$ and the following upper bound on Q ($Q_{max}$) can be derived

$$Q_{max} \approx \frac{\log_2 M}{p \cdot n \cdot H(1/p)}$$
(2)

Overall, Eq. 2 suggests that the computational complexity of finding sparse multiclass classifiers of microarray data samples could be overcome with the induction of a large number n of independent binary classifiers, a requirement which gets easier to satisfy as the number of training samples increases. The evolution of $Q_{max}$ with respect to n on the benchmark microarray datasets used in this paper is shown in Figure 1. Before moving on to the next subsection, we note that a more formal derivation of $Q_{max}$ is given in the Appendix.
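To make the bound concrete, the following minimal Python sketch (an illustration, not part of the original protocol) evaluates Eq. 2 over the admissible range of n; the figures M = 14 and p = 16063 are the GCM dataset parameters given in the Methods section.

```python
import numpy as np

def binary_entropy(f: float) -> float:
    """H(f) = -f*log2(f) - (1 - f)*log2(1 - f): the average information
    content of a gene expressed with probability f."""
    return -f * np.log2(f) - (1.0 - f) * np.log2(1.0 - f)

def q_max(M: int, p: int, n: int) -> float:
    """Eq. 2: the maximum fraction of genes per binary classifier."""
    return np.log2(M) / (p * n * binary_entropy(1.0 / p))

# GCM dataset: M = 14 classes, p = 16063 genes (see Methods).
M, p = 14, 16063
n_min = int(np.ceil(np.log2(M))) + 2          # smallest n explored
n_max = int(np.ceil(15 * np.log2(M)))         # largest n explored
for n in (n_min, 2 * n_min, n_max):
    print(f"n = {n:3d}  Q_max = {q_max(M, p, n):.2e}  "
          f"max genes per classifier = {int(p * q_max(M, p, n))}")
```

As expected from the 1/n scaling, the per-classifier gene budget shrinks as more binary classifiers are induced.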

Figure 1

The maximum fraction of genes per binary classifier. The maximum fraction of genes $Q_{max}$ that can be handled by core binary classifiers in binary mediated multiclass classification of microarray data samples involving p genes and q samples, p ≫ q. Multiclass classifiers for M ≥ 3 classes built from n binary classifiers, $n \ge \lceil \log_2 M \rceil + 2$, are considered. $Q_{max}$ is evaluated on the 8 benchmark microarray datasets used in this paper (Lymphoma, SRBCT, Brain, NCI60, Staunton, Su, GCMRM and GCM).

Bounded optimum S2N gene selection

For a fixed n, we now face the problem of finding the optimum number of genes in the list of the $p \cdot Q_{max}(n)$ most discriminative genes for each binary classifier. Such an optimum follows from a partial search scheme, and thus we provide no guarantee of identifying the optimal gene set [25]. But as n increases, finding such an optimum amounts to finding a sparse representation of a high-dimensional feature space from a small number of training samples. Because sparsity is a key structural property of most genomic studies involving disease classification, we conjecture that the proposed gene selection method could indeed be a solution to the problem of designing accurate and sparse multiclass classifiers of microarray data samples.

Letting n approach infinity cannot be realized in practice. Hence, some bounded exploration of the n dimension must be assumed in advance. In this paper, the exploration of the n dimension from $n_{min} = \lceil \log_2 M \rceil + 2$ up to $n_{max} = \lceil 15 \cdot \log_2 M \rceil$ is considered. Notice that $n = \lceil 15 \cdot \log_2 M \rceil + 1$ is not considered; it would entail the use of parity codes only able to detect (but not correct) binary classifier errors. For practical n ranges, the exhaustive exploration of the $p \cdot Q_{max}(n)$ most important genes for each binary classifier may still be too computationally demanding. Thus, a multi-scale resolution approach for the Q dimension was devised (see the sketch below). Firstly, the Q dimension was coarsely quantized with a base-10 logarithmic scale, i.e., segment endpoints Q ∈ {0.001, 0.01, 0.1, 1} were assumed. Secondly, each logarithmic segment, except the last one, was linearly quantized into 10 equal parts; the last logarithmic segment was quantized into 100 equal parts. Finally, genes at each binary classifier were ranked according to their S2N value (see Methods for details) with respect to the response variable and mapped to the formerly quantized Q dimension for further selection. As a result, for a fixed computational budget, more computational effort can be put into the exploration of highly discriminative genes, i.e., top ranking genes, than into those of poor discriminative power.
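A minimal sketch of the multi-scale Q grid just described (Python; illustrative only, with the candidate list clipped by the bound of Eq. 2):

```python
import numpy as np

def q_grid() -> np.ndarray:
    """Multi-scale quantization of the Q dimension: base-10 logarithmic
    segments [0.001, 0.01], [0.01, 0.1] and [0.1, 1]; the first two are
    split into 10 equal parts, the last one into 100."""
    grid = []
    for lo, hi, parts in [(0.001, 0.01, 10), (0.01, 0.1, 10), (0.1, 1.0, 100)]:
        grid.extend(np.linspace(lo, hi, parts, endpoint=False))
    grid.append(1.0)
    return np.array(grid)

def candidate_fractions(Q_max: float) -> np.ndarray:
    """Candidate Q values explored for a fixed n: grid points in (0, Q_max]."""
    g = q_grid()
    return g[g <= Q_max]
```

Because $Q_{max}$ shrinks with n, the clipped grid concentrates the search on the top-ranked genes, as intended.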

Results on Real Data

We first note that the application of the Shapiro-Wilk test to the empirical distributions of performance measures (classification error, overall fraction of selected genes and gene selection stability) of either ECOC or OAA classifiers frequently rejected the null hypothesis of normally distributed data at the α = 0.05 level of significance, thus justifying the use of the more conservative Kolmogorov-Smirnov (KS) and Mann-Whitney (MW) U tests.

Table 1 shows the classification performance of OAA and ECOC classifiers of size n up to $\lceil \eta \cdot \log_2 M \rceil$ (η = 5, 10, 15) over 200 Montecarlo 4:1 train-test partitions. Regardless of the choice of η, ECOC and OAA classifiers attain comparable classification performance in 5 out of 8 datasets (p > 0.3, two-sided KS tests). Stochastic orderings favorable to OAA classifiers are observed in the SRBCT, NCI60 and GCM datasets (p < 0.05, one-sided KS tests; one-sided MW tests consistent). In particular, OAA classifiers perform remarkably well on the hard NCI60 and GCM datasets.

Table 1 The classification performance of OAA and ECOC classifiers

Table 2 shows the overall number of genes selected by OAA and ECOC classifiers of size n up to $\lceil \eta \cdot \log_2 M \rceil$ (η = 5, 10, 15) under bounded optimum S2N gene selection over 200 Montecarlo 4:1 train-test partitions. Moving from η = 5 to η = 15 gradually reduces the dimensionality of ECOC classifiers. The strongest reduction effect occurs when moving from η = 5 to η = 10, suggesting η = 10 as a practical upper limit for the exploration of the n dimension with ECOC classifiers. However, the extent of the ECOC dimensionality reductions is insufficient to improve on naive OAA classifiers. Regardless of the choice of η, significant differences in the number of genes selected by ECOC and OAA classifiers are observed in all datasets (p < 0.05, two-sided KS tests). Stochastic orderings favorable to ECOC classifiers are observed in the Lymphoma and NCI60 datasets (p > 0.2, one-sided KS tests; p < 0.01, one-sided MW tests).

Table 2 The overall number of genes selected by OAA and ECOC classifiers

Table 3 shows the stability of gene selection attained by OAA and ECOC classifiers of size up to $\lceil \eta \cdot \log_2 M \rceil$ (η = 5, 10, 15) under bounded optimum S2N gene selection over 200 Montecarlo 4:1 train-test partitions. Regardless of the choice of η, significant differences in the stability of gene selection attained by ECOC and OAA classifiers are observed (p < 2.2e-16, two-sided KS tests). Stochastic orderings favorable to ECOC classifiers are observed in the Lymphoma, SRBCT and Su datasets (p > 0.9, one-sided KS tests; p < 2.2e-16, one-sided MW tests); ambiguous orderings are observed in the Brain, GCMRM and GCM datasets. Remarkably, the stability of gene selection attained by ECOC classifiers is only slightly reduced when moving from η = 5 to η = 15.

Table 3 The stability of gene selection attained by OAA and ECOC classifiers

For the sake of completeness, we also report the performance of OAA and ECOC classifiers of size at most $\lceil \eta \cdot \log_2 M \rceil$ (η = 5, 10, 15) on two benchmark microarray datasets with a public train-test partition (see Table 4). Results agree with the trends of the classification error observed in the Montecarlo evaluations. Although both ECOC and OAA classifiers seem to be highly effective on the GCMRM dataset, suggesting that ECOC classifiers may be worth exploring in such a case, only OAA classifiers perform well on the GCM dataset. Since the GCMRM dataset is just a subsample of the GCM dataset to which a more robust preprocessing protocol has been applied, so that fewer samples, fewer classes and fewer genes than in the original dataset are involved, these results raise the question of to what extent specific preprocessing protocols could be affecting the strength of gene selection attainable with ECOC classifiers.

Table 4 The performance of OAA and ECOC classifiers on train-test partitions

Conclusions

The divide and conquer approach to the design of multiclass classifiers for microarray data samples which we have presented offers the hope that accurate and sparse multiclass classifiers can be constructed without incurring undesirable forms of gene selection bias hidden in the selection of optimal gene subsets of restricted or unrestricted size [26]. Generalized binary reductions of M-multiclass classification problems into n binary classification tasks and bounded explorations of the resulting gene spaces are advised to accomplish this objective. At each binary classifier, the maximum number of genes that can be selected scales with the inverse of n, thus providing a way to accomplish optimum gene selection at affordable computational cost, provided n is sufficiently large.

In this paper, the power of OAA and ECOC binary reductions in the design of accurate and sparse multiclass classifiers for microarray data samples has been evaluated. Without loss of generality, we have restricted ourselves to the class of ECOC classifiers based on LDPC codes, linear SVM binary classifiers and univariate S2N gene selection. Experimental results show that dimensionality exchange between the input and output domains of binary mediated multiclass classifiers of microarray data samples is indeed possible: the larger the size of candidate ECOC classifiers, the greater the chance of selecting smaller sets of genes. Although promising, the dimensionality reduction performance exhibited by ECOC (LDPC) classifiers is not enough to decisively improve on naive OAA classifiers, which remain the best practical option.

From an overall view, experimental results suggest that improving the dimensionality reduction ratio of OAA classifiers with ECOC classifiers may not be as easy as it seems. We note, however, that a consensus approach to gene selection and classification on a set of diverse ECOC classifiers under bounded optimum gene selection could finally boost their dimensionality reduction factor beyond that of OAA classifiers. Briefly, provided individual ECOC solutions are good enough compared to OAA classifiers, a consensus approach to gene selection on a set of diverse ECOC classifiers should preserve most relevant genes and reject a great proportion of irrelevant ones. Since ECOC classifiers based on LDPC codes seem to be closely related neighbors of their OAA counterparts, this hypothesis will be the focus of future research. Finally, further dimensionality reduction improvements may still be attainable with more elaborate forms of gene selection like SVM-RFE [27].

Overall, our results provide evidence that bounded optimum gene selection in high-dimensional binary output domains induced by either OAA or ECOC classifiers may be a solution to the problem of accurate multiclass classification of microarray data samples based on a reduced number of genes.

Methods

To keep the paper self-contained, in this section we briefly review the design of ECOC classifiers based on LDPC codes. We then proceed to describe the benchmark microarray data and the main points of our experimental protocol. The introduction of error correcting codes in the design of ECOC classifiers aims at the automatic recovery from binary classifier errors leading to erroneous multiclass predictions. For this purpose, an ECOC code must first be defined. An ECOC code is a binary matrix of size M by n, the i-th row defining the binary encoding for the i-th class label, i = 1,..., M, and the j-th column defining the binary split to be learned by the j-th core binary classifier, j = 1,..., n. Since codewords of length $n = \lceil \log_2 M \rceil / R$, 0 < R < 1, are required for redundantly encoding $k = \lceil \log_2 M \rceil$ bits of useful class label information, ECOC classifiers entail output designs of logarithmic complexity with respect to M, which can be an advantage when M is rather large [28]. As noted by [29], ECOC classifiers based on random ECOC codes are asymptotically Bayes optimal, i.e., they approximate the minimum possible misclassification error, provided the core binary classifiers are Bayes classifiers themselves. As noted by [30], the SVM paradigm efficiently approximates the Bayes classification rule. Hence, core binary classifiers were implemented with linear SVMs, a class of binary classifiers that find the hyperplane that best separates training samples having different class memberships [31], the trade-off between model complexity and empirical error being determined by the constant complexity hyperparameter C > 0. However, regarding the construction of the ECOC coding matrix, we decided to use LDPC codes instead of random codes.

Figure 2

The architecture of an ECOC-LDPC classifier under bounded gene selection. Right squares represent constituent ECOC classifiers induced from simple parity codes, left squares represent practical binary classifiers, rectangles represent "channel" functions, ellipses represent binary predictions, and small circles represent gene expression measurements. Edges are put between constituent ECOC classifiers and ideal binary predictions $c_i$ taking care that the connectivity profile remains sparse (j ≪ m, u ≪ n). Ideal binary predictions $c_i$ and practical ones $r_i$ are constrained by "channel" functions $f_i$ modeling prior statistical knowledge about binary classifier errors, i = 1,..., n. Each practical binary classifier $L_i$ selects a subset of v genes from a pool of p genes taking care that no more than $\frac{\lceil \log_2 M \rceil}{n \cdot H(1/p)}$ genes get selected.

A key problem with conventional ECOC classifiers based on random codes is that randomness inhibits the systematic control of the independence between binary classifiers as n approaches infinity. A possible way to overcome this problem is to construct large ECOC classifiers from a number of small ECOC classifiers connected via shared binary classifiers. Small constituent ECOC classifiers able to locally control the key independence factor regardless of the size n of the overall ECOC classifier can be easily designed, for example with simple parity codes. Provided the connectivity profile of constituent ECOC classifiers and binary classifiers remains sparse, the overall ECOC design can be nicely interpreted in terms of the design of LDPC codes.

Briefly, LDPC codes are linear block codes obtained from sparse random bipartite graphs subject to sparsity constraints allowing a divide and conquer interpretation of the generated ECOC classifiers [12]. Let G be a bipartite graph with n left nodes (called message nodes) and m right nodes (called check nodes). If the n message nodes are associated with the n coordinates of codewords c, defined as those vectors (c_1,..., c_n) satisfying the constraint that, for every check node, the sum of its neighboring message-node positions is zero, then G models a linear code of size n which can protect at least k = n - m bits of information and whose structure can be dissected into m simple parity codes. In addition, if the connectivity profile of G is sparse, i.e., each codeword bit is constrained by j ≪ m parity codes and each parity code constrains u ≪ n codeword bits, then the corresponding linear code turns out to be an LDPC code. The sparsity of the graph structure is a key property in the design of efficient LDPC decoding algorithms for a variety of channel models. A channel model subsumes our prior knowledge about the statistics of binary errors. In this paper, the iterative message passing decoding algorithm described in [13] for the Additive White Gaussian Noise channel is used. A factor graph [32] model of a typical LDPC code is shown in Figure 2. The construction of ECOC classifiers based on LDPC codes is straightforward once the bipartite graph model of the underlying LDPC code is given. In factor graph terms, we just need to associate message nodes with ideal binary classifier predictions $c_i$ and check nodes with constituent ECOC classifiers constructed from simple parity codes. To complete the factor graph model of an ECOC-LDPC classifier, message nodes $r_i$ modeling practical binary classifier predictions and check nodes $f_i$ modeling prior statistical knowledge about the pairs ($c_i$, $r_i$) ("channel" functions) must be introduced. A request for an ECOC prediction on a set of input features x starts with the computation of a corrupted codeword r(x) by the set of n binary classifiers. Assuming a suitable channel model specified by the check nodes $f_i$, the corrupted codeword r(x) is given to an iterative message passing decoding algorithm for the computation of a hopefully good estimate $\hat{c}(x)$ of the unknown codeword c(x) encoding the unknown class label y associated with x. Remarkably, the computation of $\hat{c}(x)$ can be fully described as a message passing algorithm over the ECOC-LDPC factor graph. In addition to the convenient graphical computation of $\hat{c}(x)$, ECOC-LDPC factor graphs also allow for the seamless integration of general bounded gene selection strategies. We just need to add message nodes $x_k$, k = 1,..., p, modeling gene expression behavior, check nodes $L_i$, i = 1,..., n, modeling practical binary classifiers, and a sparse connectivity profile ensuring that at each $L_i$ the number v of incident edges (selected genes) is no more than $p \cdot Q_{max} \approx \frac{\log_2 M}{n \cdot H(1/p)}$, in agreement with Eq. 2.
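As an illustration of the coding-matrix side of this construction only (the iterative decoder of [13] is not shown), the following Python toy builds a small sparse parity-check matrix, enumerates its codewords by brute force, and draws M of them as rows of an ECOC coding matrix. This is a sketch under simplifying assumptions, not the recursive construction of [12]; real LDPC designs use structured ensembles and message passing decoding.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def sparse_parity_check(n: int, m: int, u: int) -> np.ndarray:
    """A toy sparse parity-check matrix H: m parity codes (rows), each
    constraining u << n randomly chosen codeword bits (columns)."""
    H = np.zeros((m, n), dtype=int)
    for row in H:
        row[rng.choice(n, size=u, replace=False)] = 1
    return H

def codewords(H: np.ndarray) -> np.ndarray:
    """All length-n binary vectors c with H @ c = 0 (mod 2).
    Brute force, so only viable for the small n of this illustration."""
    n = H.shape[1]
    candidates = np.array(list(itertools.product([0, 1], repeat=n)))
    return candidates[((candidates @ H.T) % 2 == 0).all(axis=1)]

# m = 4 parity checks on n = 8 bits protect k >= n - m = 4 information bits,
# enough to encode M = 14 <= 2**4 classes. Each row of the coding matrix is
# the codeword of one class; each column defines one binary learning task.
H = sparse_parity_check(n=8, m=4, u=3)
C = codewords(H)                      # at least 2**(n - m) = 16 codewords
coding_matrix = C[rng.choice(len(C), size=14, replace=False)]
print(coding_matrix.shape)            # (14, 8)
```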

Microarray Datasets

Eight cancer microarray datasets were used in the evaluation of binary mediated multiclass classification with bounded optimum S2N gene selection. The Lymphoma dataset [33] consists of 62 samples of a specialized cDNA chip spanning M = 3 lymphoma subtypes, each sample defined by the expression of p = 4026 genes. Samples in the Lymphoma dataset are highly imbalanced: 42 samples of diffuse large B-cell lymphoma, 9 of follicular lymphoma and 11 of chronic lymphocytic leukemia. The original data is available at http://llmpp.nih.gov/lymphoma/data/figure1. In this study, a preprocessed dataset version compiled by [34] based on [35] was used.

The Small Round Blue Cell Tumors (SRBCT) dataset [36] consists of 63 samples of a specialized cDNA chip spanning M = 4 subtypes of small round blue cell tumors of childhood, each sample defined by the expression of p = 2308 genes. Samples are distributed as follows: 12 samples of neuroblastoma, 20 samples of rhabdomyosarcoma, 8 samples of non-Hodgkin lymphoma and 23 samples of the Ewing family of tumors. In this study, a preprocessed dataset version available at http://research.nhgri.nih.gov/microarray/Supplement/index.html was used.

The Brain dataset [37] consists of 42 samples of the Affymetrix HuGeneFL chip spanning M = 5 tumor classes of the central nervous system, each sample defined by the expression of p = 5597 genes. Samples are distributed as follows: 10 medulloblastomas, 10 malignant gliomas, 10 atypical teratoid/rhabdoid tumors (AT/RTs), 8 primitive neuro-ectodermal tumors (PNETs) and 4 human cerebella. In this study, the original dataset version (Dataset A) was used. Expression values based on average difference units were computed using the Affymetrix GENECHIP MAS 4.0 analysis software. This dataset is available at http://www.broadinstitute.org/mpr/CNS/.

The NCI60 dataset [35] consists of 61 samples of a specialized cDNA chip spanning M = 8 tumor classes, each sample defined by the expression of p = 5244 genes. Samples are distributed as follows: 7 breast, 5 central nervous system, 7 colon, 6 leukemia, 8 melanoma, 9 non-small cell lung carcinoma, 6 ovarian and 9 renal tumors. Original data is available at http://genome-www.stanford.edu/nci60. In this study, a preprocessed dataset version compiled by [34] based on [35] was used.

The Staunton dataset [38] consists of 60 samples of the Affymetrix Hu6800 chip spanning M = 9 classes of tumors, each sample defined by the expression of p = 5726 genes. Expression values based on average difference units were computed using the Affymetrix GENECHIP MAS 4.0 analysis software. In this study, a preprocessed dataset version compiled by [1], involving the rescaling of gene expression measurements to the interval [0, 1], was used. This dataset is available at http://www.gems-system.org/.

The Su dataset [39] consists of 174 samples of the Affymetrix U95a chip spanning M = 11 classes of tumors, each sample defined by the expression values of p = 12533 genes. Expression values based on average difference units were computed using the Affymetrix GENECHIP MAS 4.0 analysis software. In this study, a preprocessed dataset version compiled by [1], involving the rescaling of gene expression values to the interval [0, 1], was used. This dataset is available at http://www.gems-system.org/.

The GCM dataset [18] consists of 190 samples of the Affymetrix Hu6800 and Hu35K chips spanning M = 14 classes of primary tumors, each sample defined by the expression values of p = 16063 genes. Expression values based on average difference units were computed using the Affymetrix GENECHIP MAS 4.0 analysis software. This dataset, which comes with a public train-test partition involving q = 144 samples for training and 46 for testing, is available at http://www.broadinstitute.org/cgi-bin/cancer/datasets.cgi.

The GCMRM dataset [40] consists of 123 samples of the Affymetrix Hu6800 chip spanning M = 11 classes of tumors, each sample defined by the expression values of p = 7129 genes. This dataset was derived from the GCM dataset with the purpose of improving multiclass classification with variability estimates of repeated gene expression measurements. Hence, expression values were computed with the more robust log-scale multi-array analysis (RMA) measure. This dataset, which comes with a public train-test partition involving q = 96 samples for training and 27 for testing, is available at http://expression.washington.edu/publications/kayee/shrunken_centroid/.

Experimental Protocol

Bounded optimum gene selection over OAA and ECOC multiclass classifiers based on linear SVMs was evaluated on 8 publicly available microarray datasets (M ∈ {3, 4, 5, 8, 9, 11, 14}). Aiming at a systematic evaluation of the n dimension, we restricted ourselves to the class of ECOC classifiers based on LDPC codes. For both OAA and ECOC classifiers, binary classifier decisions were fused by means of soft-decoding techniques. Hence, OAA classifiers based on hinge loss decoding of SVM outputs and ECOC classifiers based on LDPC codes able to perform soft iterative decoding of SVM outputs were used. Owing to the constraint p ≫ q, which severely limits the diversity between the induced binary classifiers, just one iterative decoding loop was allowed. The Java Weka library version 3.4.10 [41] was used to provide the implementations of the OAA multiclass and binary linear SVM classifiers. An extension of the Weka library was developed to implement ECOC classifiers based on LDPC codes and bounded optimum gene selection for both OAA and ECOC classifiers.
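The original implementation was a Weka extension in Java; purely as a sketch of the hinge loss decoding rule (with an OAA coding matrix for concreteness), each class codeword is scored against the real-valued SVM outputs and the lowest-loss row wins:

```python
import numpy as np

def hinge_loss_decode(coding_matrix: np.ndarray, svm_outputs: np.ndarray) -> int:
    """Soft decoding of binary SVM outputs against a {-1, +1} coding matrix.
    coding_matrix: M x n, one codeword row per class, one column per task.
    svm_outputs: n real-valued SVM decision values for one test sample.
    Returns the index of the class with the smallest accumulated hinge loss."""
    losses = np.maximum(0.0, 1.0 - coding_matrix * svm_outputs).sum(axis=1)
    return int(np.argmin(losses))

# OAA coding matrix for M = 4 classes: +1 on the diagonal, -1 elsewhere.
M = 4
oaa = 2.0 * np.eye(M) - np.ones((M, M))
print(hinge_loss_decode(oaa, np.array([-0.7, 1.3, -0.2, -1.1])))  # -> class 1
```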

Assessing the classification performance

The classification performance of the OAA and ECOC multiclass classifiers was evaluated by means of a randomized strategy. Based on [42] and [35], 200 Montecarlo 4:1 (4/5 for training and 1/5 for testing) partitions of the available data were considered. For those datasets with a public train-test partition, the specific train-test evaluation was additionally performed. The following performance metrics were considered: the test error rate, the number of binary classifiers, the number of genes per binary classifier, the overall number of selected genes and the stability of gene selection. Briefly, the stability of gene selection measures how closely multiple classification models resemble one another; models may be close to each other in terms of error, but can be distant in terms of their forms (the identity of the selected genes) [43]. Thus, stability of gene selection is an important requirement for ensuring reliable conclusions in microarray data analysis [44, 45]. Stability of gene selection with respect to changes in the training data was measured by means of Salton's cosine coefficient [46]. Let $A_i$ and $A_j$ respectively denote the sets of genes selected by classifier A in partitions i and j, i ≠ j. The similarity between the sets $A_i$ and $A_j$ according to Salton's coefficient is then given by $\frac{|A_i \cap A_j|}{\sqrt{|A_i| \cdot |A_j|}}$. Using 200 random train-test partitions leads to 200 · 199/2 pairwise similarity measurements, from which the mean stability of gene selection can be reported.
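A minimal Python sketch of this stability measure (illustrative; gene selections are assumed to be given as Python sets of gene identifiers):

```python
import itertools
import math

def salton(a: set, b: set) -> float:
    """Salton's cosine coefficient between two selected gene sets:
    |A_i & A_j| / sqrt(|A_i| * |A_j|)."""
    if not a or not b:
        return 0.0
    return len(a & b) / math.sqrt(len(a) * len(b))

def mean_stability(gene_sets: list) -> float:
    """Mean pairwise Salton similarity over all train-test partitions;
    200 partitions yield 200 * 199 / 2 pairwise comparisons."""
    pairs = list(itertools.combinations(gene_sets, 2))
    return sum(salton(a, b) for a, b in pairs) / len(pairs)
```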

Searching the best parameters

Regarding the honest computation [47] of the best n and Q(n) parameters for ECOC classifiers, a two-stage optimization approach based on nested 10-fold CV loops was performed. At each train-test partition, the constant complexity hyperparameter C of the binary linear SVM classifiers was set to 1 and the best (n, Q(n)) pair was estimated by a nested 10-fold CV error minimization loop on the current training dataset over the grid $[n_{min}, n_{max}] \times (0, Q_{max}]$, $n_{min} = \lceil \log_2 M \rceil + 2$, $n_{max} = \lceil \eta \cdot \log_2 M \rceil$, η = 5, 10, 15. Regarding the exploration of the Q dimension, the S2N metric was used for inducing ordered lists of genes at each binary classifier. Briefly, the class discrimination ability of the j-th gene at each binary classifier under the S2N metric, denoted S2N(j), is defined as follows

$$S2N(j) = \frac{\mu(j)^+ - \mu(j)^-}{\sigma(j)^+ + \sigma(j)^-}$$
(3)

where $\mu(j)^+$, $\mu(j)^-$ and $\sigma(j)^+$, $\sigma(j)^-$ denote the means and standard deviations of the j-th gene over the positive and negative examples in the current (binary) training set. The g most important genes under the S2N metric are defined as the first g/2 and the last g/2 genes in the ranked list of genes. For a fixed number n of binary classifiers, bounded optimum gene selection requires the estimation of the optimum number of genes g(n), or its fractional equivalent $Q(n) = g(n)/p$, in the list of the $p \cdot Q_{max}(n)$ most important genes. Such a threshold can be estimated by a nested 10-fold CV loop on the current training set using the multi-scale resolution approach described in the Results section (a sketch of the ranking step is given below). The process must be repeated for each candidate n in the range $[n_{min}, n_{max}]$. Afterwards, the best performing (n, Q(n)) pair can be reported. In case of multiple solutions, the one involving the largest n, i.e., the smallest Q(n), is selected.
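A minimal sketch of the S2N ranking and of the selection of the g most important genes (Python; illustrative, with a small epsilon added to guard against zero-variance genes, a case Eq. 3 leaves undefined):

```python
import numpy as np

def s2n(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Eq. 3: per-gene signal-to-noise score for a binary response.
    X: q x p expression matrix; y: length-q label vector in {+1, -1}."""
    mu_pos, mu_neg = X[y == 1].mean(axis=0), X[y == -1].mean(axis=0)
    sd_pos, sd_neg = X[y == 1].std(axis=0), X[y == -1].std(axis=0)
    return (mu_pos - mu_neg) / (sd_pos + sd_neg + 1e-12)

def top_g_genes(scores: np.ndarray, g: int) -> np.ndarray:
    """The g most important genes: the first g/2 and the last g/2 entries
    of the gene list ranked by decreasing S2N."""
    order = np.argsort(scores)[::-1]
    return np.concatenate([order[: g // 2], order[-(g // 2):]])
```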

An additional nested 10-fold CV loop was performed to optimize the constant complexity hyperparameter C of the linear SVMs. Although it would have been better to jointly optimize (n, Q(n), C), this would have been computationally prohibitive. Alternatively, the two-step optimization strategy described in [9] was used. Hence, we first set (n, Q(n)) to the best pair of values found at C = 1, and then decreased and increased C until no improvement was observed for three consecutive steps in the nested 10-fold CV loops (a sketch is given below). The best performing C, along with the best performing (n, Q(n)) pair at C = 1, was then used as the input parameters for the construction of the best ECOC classifier on the current training set and its subsequent evaluation on the testing set. Notice that the final performance estimate obtained by this procedure is free of selection bias because each original testing set is used only once, to estimate the performance of a single classification model built using training data exclusively. Except for the preselection of n = M, a similar approach was used to estimate the best Q(M) and the best C for the OAA classifiers. Table 5 shows the central tendency and the variation of the best C for ECOC and OAA classifiers over 200 Montecarlo 4:1 train-test partitions. Results suggest that C = 1 is indeed a reasonable initial guess.
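A sketch of the two-step C search (Python; the geometric step factor and the `cv_error` callable wrapping the nested 10-fold CV loop are illustrative assumptions, since the paper does not specify the step size):

```python
def tune_C(cv_error, C0: float = 1.0, factor: float = 2.0, patience: int = 3) -> float:
    """Starting from C0 (with (n, Q(n)) already fixed at their best values
    for C = 1), first decrease and then increase C geometrically until no
    improvement is observed for `patience` consecutive steps.
    cv_error: callable mapping a C value to its nested 10-fold CV error."""
    best_C, best_err = C0, cv_error(C0)
    for step in (1.0 / factor, factor):   # downward sweep, then upward
        C, stalled = C0, 0
        while stalled < patience:
            C *= step
            err = cv_error(C)
            if err < best_err:
                best_C, best_err, stalled = C, err, 0
            else:
                stalled += 1
    return best_C
```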

Table 5 The best C for ECOC and OAA classifiers based on linear SVMs

Assessing the statistical significance of results

To assess the statistical significance of the observed differences between the performance measures of ECOC and OAA classifiers, we invoke the concept of first-order stochastic dominance [48], developed in the context of international economics [49]. Let F and G denote the cumulative distribution functions of two comparison groups regarding the study of some performance measure, e.g., the gene selection stability of ECOC and OAA classifiers. First-order stochastic dominance of F with respect to G is defined as F(z) - G(z) ≤ 0 uniformly in z ∈ ℝ, with strict inequality for some z. Since this considers all moments of the distributions, it is a stricter test of stability differences than just comparing mean levels of stability. In order to implement first-order stochastic dominance analysis, nonparametric two-sided and one-sided Kolmogorov-Smirnov (KS) tests [50] are used. The KS test looks for differences between two distributions, both in terms of shape and location. Although the KS test has good power for testing general differences in distributions, and not just in their central tendencies, it is less sensitive than the t-test when data are normal. Considering this issue, normality of the distributions was analyzed first by means of the Shapiro-Wilk test [50, 51]. The two-sided KS statistic tests the hypothesis that both distributions are identical; the null and alternative hypotheses can be expressed as:

$$H_0: F(z) - G(z) = 0 \;\; \forall z \in \mathbb{R} \quad \text{vs} \quad H_1: F(z) - G(z) \ne 0 \text{ for some } z \in \mathbb{R}$$
(4)

By contrast, the one-sided test of stochastic dominance of F over G (the distribution associated with F lies to the right of that associated with G) can be formulated as:

$$H_0: G(z) - F(z) \ge 0 \;\; \forall z \in \mathbb{R} \quad \text{vs} \quad H_1: G(z) - F(z) < 0 \text{ for some } z \in \mathbb{R}$$
(5)

Similarly, the one-sided test of stochastic dominance of G over F (the distribution associated with F lies to the left of that associated with G) can be formulated as:

$$H_0: F(z) - G(z) \ge 0 \;\; \forall z \in \mathbb{R} \quad \text{vs} \quad H_1: F(z) - G(z) < 0 \text{ for some } z \in \mathbb{R}$$
(6)

Hence, in order to conclude that F (G) stochastically dominates G (F), we need to reject the null hypothesis for the two-sided test, but not reject the null for the corresponding one-sided test. The test statistics for the two-sided and one-sided tests are, respectively:

$$D = \max_{1 \le i \le N} |F_u(z_i) - G_v(z_i)|$$
(7)
$$D^+ = \max_{1 \le i \le N} \{G_v(z_i) - F_u(z_i)\}$$
(8)
$$D^- = \max_{1 \le i \le N} \{F_u(z_i) - G_v(z_i)\}$$
(9)

where u and v respectively denote the sample sizes of the empirical distributions $F_u$ and $G_v$, and N = u + v.

Hence, to test whether ECOC classifiers can attain better classification performance than OAA classifiers, the two-sided D (Eq. 7) and the one-sided D⁻ (Eq. 9) statistics were used (the alternative parameter of the ks.test function in the stats R package set to "two.sided" and "less", respectively). A similar approach was used to assess the statistical significance of the differences between the overall fractions of genes selected by ECOC and OAA classifiers. Finally, to assess the statistical significance of the stability differences between ECOC and OAA classifiers, the D (Eq. 7) and D⁺ (Eq. 8) statistics were used (the alternative parameter of the ks.test function in the stats R package set to "two.sided" and "greater", respectively). One-sided KS tests were supplemented with one-sided Mann-Whitney (MW) U tests for analyzing the difference between the medians of the two groups. A criterion alpha level of 0.05 was used for all statistical tests.
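For readers working in Python rather than R, a sketch of the same battery of tests using SciPy (an illustrative assumption; the original analysis used the R stats package, and the one-sided `alternative` conventions should be checked against R's ks.test before drawing dominance conclusions):

```python
from scipy import stats

def dominance_tests(x, y):
    """Two-sided and one-sided KS tests plus a one-sided Mann-Whitney U
    test for two samples of a performance measure (e.g., the 200 per-
    partition error rates of ECOC vs. OAA classifiers)."""
    two_sided = stats.ks_2samp(x, y, alternative='two-sided')   # Eq. 7
    less = stats.ks_2samp(x, y, alternative='less')       # CDF of x below y's
    greater = stats.ks_2samp(x, y, alternative='greater') # CDF of x above y's
    mw_less = stats.mannwhitneyu(x, y, alternative='less')
    return two_sided, less, greater, mw_less
```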

Appendix

A more formal derivation of an upper bound for the number of genes per binary classifier

We consider the problem of designing accurate and sparse binary mediated multiclass classifiers for microarray data samples. In this context, accuracy is mainly determined by the power of the error correcting code defining the multiclass-to-binary mapping, and sparsity is mainly determined by the efficacy of the gene selection algorithms used at the binary classification level. A natural question that arises in this system is what amount of information genes can transfer to the multiclass classifier output as the number p of genes grows. Knowing such a limitation may play a crucial role in the design of effective gene selection algorithms, which could significantly reduce their search spaces. Shannon's Information Theory [52] can provide some useful insights into this fundamental question. In particular, the concept of mutual information (MI) can be used to evaluate the information content of a subset of genes with regard to individual binary output classes and the information content of a set of binary output classes with regard to the target multiclass output class. The use of MI for general multiclass classification problems can be motivated by Fano's inequality [53], which gives a lower bound on the probability of error $p_e$ when estimating a discrete random variable $y \in \{c_1,..., c_M\}$ from another random variable $x \in \mathbb{R}^p$ as a function of their MI $I(y; x)$

$$p_e \ge \frac{H(y) - I(y; x) - 1}{\log_2 M}$$
(10)

where $p_e = P(\hat{y} \ne y)$, H(y) is the Shannon entropy of y, $\hat{y} = g(x)$ is a discrete random variable used to estimate y, and $y \to x \to \hat{y}$ is the Markov chain modeling the overall classification process. Let us now consider Markov chains $y \to x \xrightarrow{T_i} v_i \xrightarrow{L_i} r_i$ and $y \to x \to r$ modeling the prediction of a target output class $y \in \{c_1,..., c_M\}$ from genes $x \in \{0, 1\}^p$ by the mediation of binary output classes $r = (r_i)$, each $r_i$ modeling the binary output class of a classifier $L_i$ on a subset of genes $v_i \in \{0, 1\}^g$, g < p, extracted by a gene selection algorithm $T_i$ from the genes x, i = 1,..., n. By Fano's inequality, minimizing $p_e$ requires the maximization of $I(y; x) = H(y) - H(y | x)$. Since y is fixed, we have $I(y; x) \le H(y) \le \log_2 M$. On the other hand, by the data processing inequality [54], we have $I(y; r) \le I(y; x)$. In other words, the maximization of $I(y; x)$ requires the choice of an error correcting output code such that $I(y; r)$ is maximized. In addition, let r be a set of n i.i.d. random variables $r_i$. Thus, we have $I(y; r) = \sum_i I(y; r_i)$ and $I(y; r_i) \le \frac{\log_2 M}{n}$. Again by the data processing inequality, we have $I(v_i; r_i) \le I(y; r_i)$. If we further assume that $T_i$ is a gene selection algorithm able to select just the genes relevant to $r_i$, i.e., $H(v_i | r_i) = 0$, we have $I(v_i; r_i) = H(v_i) - H(v_i | r_i) = H(v_i)$. Finally, let the genes in $v_i$ be a set of g i.i.d. binary random variables. Thus, we have $H(v_i) = H(T_i(x)) = Q \cdot p \cdot H(f)$, where Q is the fraction of genes relevant to $r_i$ and H(f) is the binary entropy function measuring the information content of a generic gene which is expressed with probability f and not expressed with probability 1 - f. Hence, the following upper bound on the fraction of genes Q that can be handled by any binary classifier in a binary mediated multiclass classifier for microarray data samples is obtained

$$Q \le \frac{\log_2 M}{p \cdot n \cdot H(f)}$$
(11)
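Collecting the steps above, the derivation compresses into a single chain of (in)equalities, shown here for clarity under the stated independence and ideal-selection assumptions:

```latex
Q \cdot p \cdot H(f) \;=\; H(v_i) \;=\; I(v_i; r_i)
\;\le\; I(y; r_i) \;=\; \tfrac{1}{n}\, I(y; r)
\;\le\; \tfrac{1}{n}\, I(y; x) \;\le\; \tfrac{1}{n}\, H(y)
\;\le\; \tfrac{\log_2 M}{n}
```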

References

  1. Statnikov A, Aliferis C, Tsamardinos I, Hardin D, Levy S: A comprehensive evaluation of multicategory classification methods for microarray gene expression cancer diagnosis. Bioinformatics 2005, 21: 631–643. 10.1093/bioinformatics/bti033


  2. Liu KH, Xu CG: A genetic programming-based approach to the classification of multiclass microarray datasets. Bioinformatics 2009, 25: 331–337. 10.1093/bioinformatics/btn644


  3. Li T, Zhang C, Ogihara M: A comparative study of feature selection and multiclass classification methods for tissue classification based on gene expression. Bioinformatics 2004, 20(15):2429–2437. 10.1093/bioinformatics/bth267


  4. Statnikov A, Wang L, Aliferis C: A comprehensive comparison of random forests and support vector machines for microarray-based cancer classification. BMC Bioinformatics 2008, 9: 319. 10.1186/1471-2105-9-319


  5. Fan J, Fan Y: High dimensional classification using features annealed independence rules. Ann Statist 2008.


  6. Allwein EL, Schapire RE, Singer Y: Reducing Multiclass to Binary: A Unifying Approach for Margin classifiers. In ICML '00: Proceedings of the Seventeenth International Conference on Machine Learning. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc; 2000:9–16.


  7. Guruswami V, Sahai A: Multiclass learning, boosting, and error-correcting codes. In COLT '99: Proceedings of the twelfth annual conference on Computational learning theory. USA: ACM Press; 1999:145–155.


  8. Masulli F, Valentini G: Dependence among Codeword Bits Errors in ECOC Learning Machines: An Experimental Analysis. Multiple classifier Systems 2001, 158–167.


  9. Rifkin R, Klautau A: In Defense of One-Vs-All classification. Journal of Machine Learning Research 2004, 5: 101–141.


  10. Dietterich TG, Bakiri G: Error-correcting output codes: a general method for improving multiclass inductive learning programs. In Proceedings of the Ninth AAAI National Conference on Artificial Intelligence. Edited by: Dean TL, Mckeown K. Menlo Park, CA: AAAI Press; 1991:572–577.


  11. Rifkin R: Everything old is new again: A fresh look at historical approaches in machine learning. PhD thesis. Massachusetts Institute of Technology; 2002.


  12. Tapia E, Bulacio P, Angelone L: Recursive ECOC classification. Pattern Recognition Letters 2010, 31(3):210–215. 10.1016/j.patrec.2009.09.031


  13. Mackay DJC: Good error-correcting codes based on very sparse matrices. Information Theory, IEEE Transactions on 1999, 45(2):399–431. 10.1109/18.748992


  14. Vapnik V: The nature of statistical learning theory (Information Science and Statistics). Springer; 1999.


  15. Saeys Y, Inza I, Larranaga P: A review of feature selection techniques in bioinformatics. Bioinformatics 2007, 23(19):2507–2517. 10.1093/bioinformatics/btm344


  16. Slonim DK, Tamayo P, Mesirov JP, Golub TR, Lander ES: Class prediction and discovery using gene expression data. Recomb 2000, 263–272. full_text


  17. Furey T, Cristianini N, Duffy N, Bednarski D, Schummer M, Haussler D: Support vector machine classification and validation of cancer tissue samples using microarray expression data. Bioinformatics 2000, 16(10):906–914. 10.1093/bioinformatics/16.10.906


  18. Yeang CH, Ramaswamy S, Tamayo P, Mukherjee S, Rifkin RM, Angelo M, Reich M, Lander E, Mesirov J, Golub T: Molecular classification of multiple tumor types. Bioinformatics 2001, 17(Suppl 1):S316–S322.

  19. Furlanello C, Serafini M, Merler S, Jurman G: Entropy-based gene ranking without selection bias for the predictive classification of microarray data. BMC Bioinformatics 2003, 4: 54. 10.1186/1471-2105-4-54


  20. Dupuy A, Simon R: Critical review of published microarray studies for cancer outcome and guidelines on statistical analysis and reporting. J Natl Cancer Inst 2007, 99: 147–157. 10.1093/jnci/djk018


  21. Lee S: Mistakes in validating the accuracy of a prediction classifier in high-dimensional but small-sample microarray data. Stat Methods Med Res 2008, 17: 635–642. 10.1177/0962280207084839


  22. Aliferis CF, Statnikov A, Tsamardinos I, Schildcrout JS, Shepherd BE, Harrell FE: Factors influencing the statistical power of complex data analysis protocols for molecular signature development from microarray data. PLoS ONE 2009, 4: 3:e4922. 10.1371/journal.pone.0004922


  23. Shmulevich I, Dougherty ER, Kim S, Zhang W: Probabilistic Boolean networks: a rule-based uncertainty model for gene regulatory networks. Bioinformatics 2002, 18(2):261–274. 10.1093/bioinformatics/18.2.261


  24. Huang S: Non-genetic heterogeneity of cells in development: more than just noise. Development 2009, 136(23):3853–3862. 10.1242/dev.035139


  25. Tsamardinos I, Aliferis CF: Towards Principled Feature Selection: Relevancy, Filters and Wrappers. in Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics 2003.


  26. Zhu J, McLachlan G, Jones LBT, Wood I: On selection biases with prediction rules formed from gene expression data. Journal of Statistical Planning and Inference 2008, 138(2):374–386. 10.1016/j.jspi.2007.06.003


  27. Guyon I, Weston J, Barnhill S, Vapnik V: Gene Selection for Cancer Classification using Support Vector Machines. Machine Learning 2002, 46(1–3):389–422. 10.1023/A:1012487302797


  28. Berger A: Error-Correcting Output Coding for Text classification. In Proceedings of IJCAI-99 Workshop on Machine Learning for Information Filtering 1999.


  29. James G, Hastie T: The Error Coding Method and PICTs. Journal of Computational and Graphical Statistics 1998, 7(3):377–387. 10.2307/1390710


  30. Lin Y: Support vector machines and the Bayes rule in classification. Data Mining and Knowledge Discovery 2002, 6: 259–275. 10.1023/A:1015469627679


  31. Cristianini N, Shawe-Taylor J: An introduction to support vector machines: and other kernel-based learning methods. 1st edition. Cambridge University Press; 2000.


  32. Kschischang FR, Frey BJ, Loeliger HA: Factor graphs and the sum-product algorithm. Information Theory, IEEE Transactions on 2001, 47(2):498–519. 10.1109/18.910572


  33. Alizadeh A, Eisen M, Davis R, Ma C, Lossos I, Rosenwald A, Boldrick J, Sabet H, Tran T, Yu X, Powell J, Yang L, Marti G, Moore T, Hudson J Jr, Lu L, Lewis D, Tibshirani R, Sherlock G, Chan W, Greiner T, Weisenburger D, Armitage J, Warnke R, Levy R, Wilson W, Grever M, Byrd J, Botstein D, Brown P, Staudt L: Distinct types of diffuse large B-cell lymphoma identified by gene expression profiling. Nature 2000, 403(6769):503–11. 10.1038/35000501


  34. Dettling M: BagBoosting for tumor classification with gene expression data. Bioinformatics 2003, 20(18):3583. 10.1093/bioinformatics/bth447


  35. Dudoit S, Fridlyand J, Speed TP: Comparison of Discrimination Methods for the Classification of Tumors Using Gene Expression Data. Journal of the American Statistical Association 2002, 97(457):77–87. 10.1198/016214502753479248


  36. Khan J, Wei J, Ringner M, Saal L, Ladanyi M, Westermann F, Berthold F, Schwab M, Antonescu C, Peterson C, Meltzer P: Classification and diagnostic prediction of cancers using gene expression profiling and artificial neural networks. Nat Med 2001, 7(6):673–679. 10.1038/89044


  37. Pomeroy S, Tamayo P, Gaasenbeek M, Sturla L, Angelo M, Mclaughlin M, Kim J, Goumnerova L, Black P, Lau C, Allen J, Zagzag D, Olson J, Curran T, Wetmore C, Biegel J, Poggio T, Mukherjee S, Rifkin R, Califano A, Stolovitzky G, Louis D, Mesirov J, Lander E, Golub T: Prediction of central nervous system embryonal tumour outcome based on gene expression. Nature 2002, 415(6870):436–442. 10.1038/415436a


  38. Staunton JE, Slonim DK, Coller HA, Tamayo P, Angelo MJ, Park J, Scherf U, Lee JK, Reinhold WO, Weinstein JN, Mesirov JP, Lander ES, Golub TR: Chemosensitivity prediction by transcriptional profiling. Proc Natl Acad Sci USA 2001, 98: 10787–10792. 10.1073/pnas.191368598


  39. Su AI, Welsh JB, Sapinoso LM, Kern SG, Dimitrov P, Lapp H, Schultz PG, Powell SM, Moskaluk CA, Frierson HF, Hampton GM: Molecular Classification of Human Carcinomas by Use of Gene Expression Signatures. Cancer Res 2001, 61(20):7388–7393.


  40. Yeung K, Bumgarner R: Multiclass classification of microarray data with repeated measurements: application to cancer. Genome Biol 2003, 4(12):R83. 10.1186/gb-2003-4-12-r83


  41. Witten I, Frank E: Data mining: Practical machine learning tools and techniques with Java implementations. Morgan Kaufmann; 1999.


  42. Azuaje F: Genomic data sampling and its effect on classification performance assessment. BMC Bioinformatics 2003, 4: 5. 10.1186/1471-2105-4-5


  43. Breiman L: Statistical Modeling: The Two Cultures. Statistical Science 2001, 16(3):199–215. 10.1214/ss/1009213726


  44. Abeel T, Helleputte T, Van de Peer Y, Dupont P, Saeys Y: Robust biomarker identification for cancer diagnosis with ensemble feature selection methods. Bioinformatics 2010, 26(3):392–398. 10.1093/bioinformatics/btp630


  45. Qiu X, Xiao Y, Gordon A, Yakovlev A: Assessing stability of gene selection in microarray data analysis. BMC Bioinformatics 2006, 7: 50. 10.1186/1471-2105-7-50


  46. Salton G: Automatic text processing: the transformation, analysis, and retrieval of information by computer. USA: Addison-Wesley Longman Publishing Co., Inc; 1989.


  47. Ambroise C, Mclachlan GJ: Selection bias in gene extraction on the basis of microarray gene-expression data. Proc Natl Acad Sci USA 2002, 99(10):6562–6566. 10.1073/pnas.102102699


  48. Hadar J, Russell WR: Rules for Ordering Uncertain Prospects. American Economic Review 1969, 59: 25–34.


  49. Delgado MA, Farinas JC, Ruano S: Firm productivity and export markets: a non-parametric approach. Journal of International Economics 2002, 57(2):397–422. 10.1016/S0022-1996(01)00154-4


  50. Hollander M, Wolfe DA: Nonparametric Statistical Methods. 2nd edition. Wiley-Interscience; 1999.


  51. Shapiro SS, Wilk MB: An analysis of variance test for normality (complete samples). Biometrika 1965, 52(3–4):591–611.

  52. Shannon CE: A Mathematical Theory of Communication. The Bell System Technical Journal 1948, 27:379–423, 623–656.


  53. Fano RM: Transmission of information: a statistical theory of communications. M.I.T. Press & Wiley, London; 1961.


  54. Cover TM, Thomas JA: Elements of Information Theory. Wiley-Interscience; 1991.



Acknowledgements

The authors would like to thank Javier De Las Rivas, member of the CIC, CSIC/USAL, Spain, for providing initial access to computational resources. The authors would also like to thank anonymous reviewers for their helpful comments. ET's, LO's, PB's and LA's work was supported by projects PICT No. 02226, SECYT, Argentina and Red Sudamericana e Iberoamericana de Bioinformática, PROSUL CNPq 011/2008, Brasil.

Author information

Correspondence to Elizabeth Tapia.


Authors' contributions

ET devised the study, set up and performed simulation experiments, and drafted the manuscript. LO contributed to the design of simulation experiments, to the statistical analysis of experimental results and to the manuscript. PB and LA contributed to the design of simulation experiments, to organize experimental results and to the manuscript. All authors read and approved the final manuscript.


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Tapia, E., Ornella, L., Bulacio, P. et al. Multiclass classification of microarray data samples with a reduced number of genes. BMC Bioinformatics 12, 59 (2011). https://doi.org/10.1186/1471-2105-12-59

