Sparse logistic regression with a L1/2 penalty for gene selection in cancer classification

Abstract

Background

Microarray technology is widely used in cancer diagnosis. Successfully identifying gene biomarkers will significantly help to classify different cancer types and improve prediction accuracy. The regularization approach is one of the effective methods for gene selection in microarray data, which generally contain a large number of genes and a small number of samples. In recent years, various approaches have been developed for gene selection from microarray data. Generally, they are divided into three categories: filter, wrapper and embedded methods. Regularization methods are an important embedded technique and perform continuous shrinkage and automatic gene selection simultaneously. Recently, there has been growing interest in applying regularization techniques to gene selection. A popular regularization technique is the Lasso (L1), and many L1-type regularization terms have been proposed in recent years. Theoretically, Lq-type regularization with a lower value of q leads to sparser solutions. Moreover, the L1/2 regularization can be taken as a representative of the Lq (0 < q < 1) regularizations and has been shown to possess many attractive properties.

Results

In this work, we investigate sparse logistic regression with the L1/2 penalty for gene selection in cancer classification problems, and propose a coordinate descent algorithm with a new univariate half thresholding operator to solve the L1/2 penalized logistic regression. Experimental results on artificial and microarray data demonstrate the effectiveness of the proposed approach compared with other regularization methods. In particular, for 4 publicly available gene expression datasets, the L1/2 regularization method achieved its success using only about 2 to 14 predictors (genes), compared to about 6 to 38 genes for the ordinary L1 and elastic net regularization approaches.

Conclusions

From our evaluations, it is clear that sparse logistic regression with the L1/2 penalty achieves higher classification accuracy than the ordinary L1 and elastic net regularization approaches, while selecting fewer but more informative genes. This is an important consideration for screening and diagnostic applications, where the goal is often to develop an accurate test using as few features as possible in order to control cost. Therefore, sparse logistic regression with the L1/2 penalty is an effective technique for gene selection in real classification problems.

Background

With the development of DNA microarray technology, biological researchers can analyze the expression levels of thousands of genes simultaneously. Many studies have demonstrated that microarray data are useful for the classification of many cancers. However, from the biological perspective, only a small subset of genes is strongly indicative of a targeted disease, and most genes are irrelevant to cancer classification. The irrelevant genes may introduce noise and decrease classification accuracy. Moreover, from the machine learning perspective, too many genes may lead to overfitting and can negatively influence classification performance. Due to the significance of these problems, effective gene selection methods are needed to help classify different cancer types and improve prediction accuracy.

In recent years, various approaches have been developed for gene selection from microarray data. Generally, they are divided into three categories: filter, wrapper and embedded methods. Filter methods evaluate a gene based on its discriminative power without considering its correlations with other genes [1-4]. The drawback of filter methods is that they examine each gene independently, ignoring the possibility that groups of genes may have a combined effect that is not necessarily reflected by the individual performance of the genes in the group. This is a common issue with statistical methods such as the t-test, which examine each gene in isolation.

Wrapper methods utilize a particular learning method as the feature evaluation measure to select gene subsets in terms of the estimated classification errors and then build the final classifier. Wrapper approaches can obtain a small subset of relevant genes and can significantly improve classification accuracy [5, 6]. For example, Guyon et al. [7] proposed a gene selection approach utilizing support vector machines (SVM) based on recursive feature elimination. However, wrapper methods require extensive computation time.

The third group of gene selection procedures comprises embedded methods, which perform variable selection as part of the statistical learning procedure. They are much more computationally efficient than wrapper methods with similar performance, and are also less prone to overfitting [8]. Embedded methods have drawn much attention recently in the literature.

Regularization methods are an important embedded technique and perform continuous shrinkage and automatic gene selection simultaneously. Recently, there has been growing interest in applying regularization techniques to logistic regression models. Logistic regression is a powerful discriminative method with a direct probabilistic interpretation, which provides classification probabilities in addition to the class label information. In order to extract key features in classification problems, a series of regularized logistic regression methods have been proposed. For example, Shevade and Keerthi [9] proposed sparse logistic regression based on the Lasso regularization [10] and Gauss-Seidel methods. Glmnet is a general approach for L1-type regularized (including Lasso and elastic net) linear models using a coordinate descent algorithm [11, 12]. In a similar spirit, Cawley and Talbot [13] investigated sparse logistic regression with Bayesian regularization. Inspired by the aforementioned methods, we investigate the sparse logistic regression model with an L1/2 penalty, in particular for gene selection in cancer classification. The L1/2 penalty can be taken as a representative of the Lq (0 < q < 1) penalties and has been shown to possess many attractive properties, such as unbiasedness, sparsity and oracle properties [14].

In this paper, we develop a coordinate descent algorithm for L1/2 regularization in the sparse logistic regression framework. The approach is applicable to biological data with high dimensions and low sample sizes. Empirical comparisons with sparse logistic regression under the L1 penalty and the elastic net penalty demonstrate the effectiveness of the proposed L1/2 penalized logistic regression for gene selection in cancer classification problems.

Methods

Sparse logistic regression with the L1/2 penalty

In this paper, we focus on a general binary classification problem. Suppose we have n samples, D = {(X_1, y_1), (X_2, y_2), …, (X_n, y_n)}, where X_i = (x_i1, x_i2, …, x_ip) is the ith input pattern with dimensionality p and y_i is the corresponding response variable taking a value of 0 or 1: y_i = 0 indicates that the ith sample is in Class 1 and y_i = 1 indicates that it is in Class 2. The vector X_i contains the p features (for all p genes) of the ith sample, and x_ij denotes the value of gene j for the ith sample. Define the classifier f(x) = e^x/(1 + e^x) such that for any input x with class label y, f(x) predicts y correctly. The logistic regression model is expressed as:

$$P(Y_i = 1 \mid X_i) = f(X_i\beta) = \frac{\exp(X_i\beta)}{1 + \exp(X_i\beta)}$$
(1)

where β = (β_0, β_1, …, β_p) are the coefficients to be estimated; note that β_0 is the intercept. The log-likelihood is:

$$l(\beta \mid D) = -\sum_{i=1}^{n}\left\{ y_i \log f(X_i\beta) + (1 - y_i)\log\left[1 - f(X_i\beta)\right]\right\}$$
(2)

We can obtain β by minimizing the log-likelihood (2). In high-dimensional applications with p ≫ n, directly solving the logistic model (2) is ill-posed and may lead to overfitting. Therefore, regularization approaches are applied to address the overfitting problem. Adding a regularization term to (2), the sparse logistic regression can be modelled as:

$$\hat{\beta} = \arg\min\left\{ l(\beta \mid D) + \lambda\sum_{j=1}^{p} P(\beta_j)\right\}$$
(3)

where λ > 0 is a tuning parameter and P(β_j) is the regularization term. The most popular regularization technique is the Lasso (L1) [10], which uses the regularization term P(β) = ∑_j |β_j|. Many L1-type regularization terms have been proposed in recent years, such as SCAD [15], elastic net [16], and MC+ [17].

Theoretically, Lq-type regularization P(β) = ∑_j |β_j|^q with a lower value of q leads to sparser solutions. However, when q is very close to zero, difficulties with convergence arise. Therefore, Xu et al. [14] further explored the properties of Lq (0 < q < 1) regularization and revealed the extreme importance and special role of the L1/2 regularization. They showed that when 1/2 ≤ q < 1, the L1/2 regularization yields the sparsest results and its difficulty with convergence is not much higher than that of the L1 regularization, while when 0 < q < 1/2, the performance of the Lq penalties makes no significant difference and solving the L1/2 regularization is much simpler than solving the L0 regularization. Hence, the L1/2 regularization can be taken as a representative of the Lq (0 < q < 1) regularizations. In this paper, we apply the L1/2 penalty to the logistic regression model. The sparse logistic regression model based on the L1/2 penalty has the form:

$$\hat{\beta}^{(1/2)} = \arg\min\left\{ l(\beta \mid D) + \lambda\sum_{j=1}^{p} |\beta_j|^{1/2}\right\}$$
(4)

The L1/2 regularization has been shown to possess many attractive properties, such as unbiasedness, sparsity and oracle properties. Theoretical and experimental analyses show that the L1/2 regularization is a competitive approach. Our work in this paper also reveals the effectiveness of the L1/2 regularization in solving nonlinear logistic regression problems with a small number of predictive features (genes).
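
As a concrete illustration of the objective in (4), the following Python sketch (our own illustrative code, not part of the original work; the function name l12_objective and the numerical guard are our choices) evaluates the negative log-likelihood (2) plus the L1/2 penalty for a given coefficient vector:

```python
import numpy as np

def l12_objective(beta0, beta, X, y, lam):
    """Negative log-likelihood (2) plus the L1/2 penalty of equation (4)."""
    eta = beta0 + X @ beta                      # linear predictor X_i * beta
    p = 1.0 / (1.0 + np.exp(-eta))              # f(X_i * beta) from equation (1)
    eps = 1e-12                                 # guard against log(0)
    nll = -np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    penalty = lam * np.sum(np.sqrt(np.abs(beta)))  # intercept is not penalized
    return nll + penalty
```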

A coordinate descent algorithm for the L1/2 penalized logistic regression

The coordinate descent algorithm [11, 12] is a “one-at-a-time” approach, and its basic procedure can be described as follows: for each coefficient β_j (j = 1, 2, …, p), partially optimize the target function with respect to β_j with the remaining elements of β fixed at their most recently updated values.

Before introducing the coordinate descent algorithm for the nonlinear logistic regularization, we first consider a linear regularization case. Suppose the dataset D has n samples, D = {(X_1, y_1), (X_2, y_2), …, (X_n, y_n)}, where X_i = (x_i1, x_i2, …, x_ip) is the ith input vector with dimensionality p and y_i is the corresponding response variable. The variables are standardized so that $\sum_{i=1}^{n} x_{ij}^2 = 1$ and $\sum_{i=1}^{n} y_i = 0$. The linear regression with a regularization term can then be expressed as:

$$R(\beta) = \arg\min\left\{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - X_i\beta\right)^2 + \lambda\sum_{j=1}^{p} P(\beta_j)\right\}$$
(5)

where P(β_j) is the regularization term. The coordinate descent algorithm solves for β_j while the other coefficients β_k (k ≠ j, i.e., the parameters that remain after the jth element is removed) are held fixed. Equation (5) can be rewritten as:

$$R(\beta) = \arg\min\left\{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \sum_{k \neq j} x_{ik}\beta_k - x_{ij}\beta_j\right)^2 + \lambda\sum_{k \neq j} P(\beta_k) + \lambda P(\beta_j)\right\}$$
(6)

The first-order derivative with respect to β_j can be estimated as:

$$\frac{\partial R}{\partial \beta_j} = \sum_{i=1}^{n} -x_{ij}\left(y_i - \sum_{k \neq j} x_{ik}\beta_k - x_{ij}\beta_j\right) + \lambda P'(\beta_j) = 0$$
(7)

Define $\tilde{y}_i^{(j)} = \sum_{k \neq j} x_{ik}\beta_k$ as the partial fit used to form the partial residual $y_i - \tilde{y}_i^{(j)}$ for fitting β_j, and let $\omega_j = \sum_{i=1}^{n} x_{ij}\left(y_i - \tilde{y}_i^{(j)}\right)$. The univariate soft thresholding operator of the coordinate descent algorithm [11] for the L1 regularization (Lasso) can then be defined as:

$$\beta_j = S(\omega_j, \lambda) = \begin{cases} \omega_j + \lambda & \text{if } \omega_j < -\lambda \\ \omega_j - \lambda & \text{if } \omega_j > \lambda \\ 0 & \text{if } |\omega_j| \le \lambda \end{cases}$$
(8)

Similarly, for the L0 regularization, the thresholding operator of the coordinate descent algorithm can be defined as:

$$\beta_j = \mathrm{Hard}(\omega_j, \lambda) = \omega_j\, I\left(|\omega_j| > \lambda\right)$$
(9)

where I(·) is the indicator function. This formula is equivalent to the hard thresholding operator [17].
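
For illustration, the operators in equations (8) and (9) can be written as two short Python functions (a minimal sketch of our own, using NumPy; the function names are our choices):

```python
import numpy as np

def soft_threshold(omega, lam):
    """Univariate soft thresholding operator of equation (8) (Lasso / L1)."""
    return np.sign(omega) * np.maximum(np.abs(omega) - lam, 0.0)

def hard_threshold(omega, lam):
    """Hard thresholding operator of equation (9) (L0): keep omega only if |omega| > lam."""
    return omega * (np.abs(omega) > lam)
```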

Equations (8) and (9) show that different penalties are associated with different thresholding operators. Xu et al. [18] proposed a half thresholding operator to solve the L1/2 regularization for the linear regression model. It is an iterative algorithm and can be seen as a multivariate half thresholding approach. In this paper, we propose a univariate half thresholding operator within the coordinate descent algorithm for the L1/2 regularization. Based on equation (7), the gradient of the L1/2 regularized objective at β_j can be expressed as:

$$\frac{\partial R}{\partial \beta_j} = \beta_j - \omega_j + \frac{\lambda\,\mathrm{sign}(\beta_j)}{4\sqrt{|\beta_j|}} = 0$$
(10)

First, we consider the case β_j > 0, and let $\sqrt{\beta_j} = \mu$, i.e., β_j = μ². When β_j > 0, equation (10) can be rewritten as:

$$\mu^3 - \omega_j\mu + \frac{\lambda}{4} = 0$$
(11)

There are three cases: $\omega_j < 0$, $0 < \omega_j \le \frac{3}{4}\lambda^{2/3}$, and $\omega_j > \frac{3}{4}\lambda^{2/3}$.

(i) If $\omega_j < 0$, the three roots of equation (11) can be expressed as $\mu_1 = -2r\sin\left(\frac{\varphi}{3}\right)$, $\mu_2 = r\sin\left(\frac{\varphi}{3}\right) + i\sqrt{3}\,r\cos\left(\frac{\varphi}{3}\right)$ and $\mu_3 = r\sin\left(\frac{\varphi}{3}\right) - i\sqrt{3}\,r\cos\left(\frac{\varphi}{3}\right)$, where $r = \sqrt{|\omega_j|/3}$ and $\varphi = \arccos\left(\frac{\lambda}{8r^3}\right)$. When r > 0, none of the roots satisfies μ > 0. Thus, there is no solution to equation (11) when $\omega_j < 0$.

(ii) If $0 < \omega_j \le \frac{3}{4}\lambda^{2/3}$, the three roots of equation (11) are $\mu_1 = -2r\cos\left(\frac{\varphi}{3}\right)$, $\mu_2 = r\cos\left(\frac{\varphi}{3}\right) + i\sqrt{3}\,r\sin\left(\frac{\varphi}{3}\right)$ and $\mu_3 = r\cos\left(\frac{\varphi}{3}\right) - i\sqrt{3}\,r\sin\left(\frac{\varphi}{3}\right)$.

There is still no solution to equation (11) in this case.

(iii) If $\omega_j > \frac{3}{4}\lambda^{2/3}$, the three roots of equation (11) are given by $\mu_1 = -2r\cos\left(\frac{\varphi}{3}\right)$, $\mu_2 = 2r\cos\left(\frac{\pi - \varphi}{3}\right)$ and $\mu_3 = 2r\cos\left(\frac{\pi + \varphi}{3}\right)$.

In this case, μ_2 is the unique solution of equation (10). Thus, equation (11) has a non-zero root only when $\omega_j > \frac{3}{4}\lambda^{2/3}$, and the unique solution of equation (10) is:

$$\beta_j = \mu_2^2 = \frac{2}{3}\omega_j\left(1 + \cos\left(\frac{2\left(\pi - \varphi(\omega_j)\right)}{3}\right)\right)$$

On the other hand, for the case β_j < 0, we let $\sqrt{|\beta_j|} = \mu$, i.e., β_j = -μ². Equation (10) can then be transformed into:

$$\mu^3 - \omega_j\mu - \frac{\lambda}{4} = 0$$
(12)

Equation (12) also has a unique solution when $\omega_j < -\frac{3}{4}\lambda^{2/3}$:

$$\mu_2 = 2r\cos\left(\frac{\pi - \varphi}{3}\right), \qquad \beta_j = -\mu_2^2 = -\frac{2}{3}|\omega_j|\left(1 + \cos\left(\frac{2\left(\pi - \varphi(\omega_j)\right)}{3}\right)\right)$$

In conclusion, the univariate half thresholding operator can be expressed as:

$$\beta_j = \mathrm{Half}(\omega_j, \lambda) = \begin{cases} \frac{2}{3}\omega_j\left(1 + \cos\left(\frac{2\left(\pi - \varphi_\lambda(\omega_j)\right)}{3}\right)\right) & \text{if } |\omega_j| > \frac{3}{4}\lambda^{2/3} \\ 0 & \text{otherwise} \end{cases}$$
(13)

where $\varphi_\lambda(\omega)$ satisfies:

$$\cos\varphi_\lambda(\omega) = \frac{\lambda}{8}\left(\frac{|\omega|}{3}\right)^{-3/2}$$

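The operator in equation (13) is straightforward to implement. The following Python sketch (our own illustration, based on the reconstruction of equation (13) and the expression for φ_λ(ω) above; the function name is our choice) computes the half thresholded value of a single coordinate:

```python
import numpy as np

def half_threshold(omega, lam):
    """Univariate half thresholding operator of equation (13) for the L1/2 penalty."""
    threshold = 0.75 * lam ** (2.0 / 3.0)
    if np.abs(omega) <= threshold:
        return 0.0
    # phi_lambda(omega) from cos(phi) = (lam / 8) * (|omega| / 3) ** (-3/2)
    phi = np.arccos((lam / 8.0) * (np.abs(omega) / 3.0) ** (-1.5))
    return (2.0 / 3.0) * omega * (1.0 + np.cos(2.0 * (np.pi - phi) / 3.0))
```
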
The coordinate descent algorithm for the L1/2 regularization makes repeated use of the univariate half thresholding operator; the details of the algorithm are given below. This coordinate descent algorithm for linear regularization can be extended to the sparse logistic regression model. Based on the objective function (3) of the sparse logistic regression, a one-term Taylor series expansion of l(β) has the form:

$$L(\beta, \lambda) \approx \frac{1}{2n}\sum_{i=1}^{n}\left(Z_i - X_i\beta\right)' W_i \left(Z_i - X_i\beta\right) + \lambda\sum_{j=1}^{p} P(\beta_j)$$
(14)

where $Z_i = X_i\tilde{\beta} + \frac{Y_i - f(X_i\tilde{\beta})}{f(X_i\tilde{\beta})\left(1 - f(X_i\tilde{\beta})\right)}$ is the estimated (working) response, $W_i = f(X_i\tilde{\beta})\left(1 - f(X_i\tilde{\beta})\right)$ is a weight, and $f(X_i\tilde{\beta}) = \exp(X_i\tilde{\beta})/\left(1 + \exp(X_i\tilde{\beta})\right)$ is evaluated at the current parameters. Redefining the partial fit for the current $\tilde{\beta}_j$ as $\tilde{Z}_i^{(j)} = \sum_{k \neq j} x_{ik}\tilde{\beta}_k$ and $\omega_j = \sum_{i=1}^{n} W_i x_{ij}\left(Z_i - \tilde{Z}_i^{(j)}\right)$, we can directly apply the coordinate descent algorithm with the L1/2 penalty to sparse logistic regression. The details are given as follows:

Algorithm: The coordinate descent algorithm for sparse logistic regression with the L1/2 penalty
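
The algorithm box itself is not reproduced here. The Python sketch below is a minimal reconstruction of the procedure described in the surrounding text, assuming standardized columns of X; the function names, the unpenalized intercept update, the weight clipping and the convergence rule are our own choices rather than details taken from the paper:

```python
import numpy as np

def half_threshold(omega, lam):
    # Univariate half thresholding operator of equation (13).
    if np.abs(omega) <= 0.75 * lam ** (2.0 / 3.0):
        return 0.0
    phi = np.arccos((lam / 8.0) * (np.abs(omega) / 3.0) ** (-1.5))
    return (2.0 / 3.0) * omega * (1.0 + np.cos(2.0 * (np.pi - phi) / 3.0))

def cd_logistic_l12(X, y, lam, max_iter=100, tol=1e-4):
    """Illustrative coordinate descent for L1/2-penalized logistic regression."""
    n, p = X.shape
    beta0, beta = 0.0, np.zeros(p)
    for _ in range(max_iter):
        beta_old = beta.copy()
        eta = beta0 + X @ beta
        f = 1.0 / (1.0 + np.exp(-eta))               # f(X_i beta~) of equation (1)
        w = np.clip(f * (1.0 - f), 1e-5, None)       # weights W_i of equation (14)
        z = eta + (y - f) / w                        # working response Z_i
        beta0 = np.average(z - X @ beta, weights=w)  # unpenalized intercept update
        for j in range(p):
            # partial residual: remove the contribution of all coordinates except j
            r_j = z - beta0 - (X @ beta - X[:, j] * beta[j])
            # omega_j as defined in the text; no rescaling by sum_i w_i x_ij^2,
            # since the columns of X are assumed standardized
            omega_j = np.sum(w * X[:, j] * r_j)
            beta[j] = half_threshold(omega_j, lam)
        if np.max(np.abs(beta - beta_old)) < tol:
            break
    return beta0, beta
```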

The coordinate descent algorithm for the L1/2 penalized logistic regression works well on sparse problems, because the procedure does not need to update the many irrelevant parameters or recalculate their partial residuals at each update step.

Results

Analyses of simulated data

In this section, we evaluate the performance of sparse logistic regression with the L1/2 penalty in a simulation study. We generate high-dimensional, low-sample-size data containing many irrelevant features. Two methods are compared with our proposed approach: sparse logistic regression with the elastic net penalty (LEN) and sparse logistic regression with the Lasso penalty (L1).

We generated the vectors $\gamma_{i0}, \gamma_{i1}, \ldots, \gamma_{ip}$ (i = 1, …, n) independently from the standard normal distribution, and the predictor vectors (i = 1, …, n) are generated by $x_{ij} = \gamma_{ij}\sqrt{1 - \rho} + \gamma_{i0}\sqrt{\rho}$ (j = 1, …, p), where ρ is the correlation coefficient of the predictor vectors [19]. The simulated data are generated from the logistic model:

$$\log\left(\frac{Y_i}{1 - Y_i}\right) = \beta_0 + \sum_{j=1}^{p} x_{ij}\beta_j + \sigma\cdot\epsilon$$
(15)

where ϵ is independent random error generated from N(0, 1) and σ is a parameter that controls the signal-to-noise ratio. In every simulation, the dimension p of the predictor vector is 1000, and the first five true coefficients are nonzero: β_1 = 1, β_2 = 1, β_3 = -1, β_4 = -1, β_5 = 1, and β_j = 0 (6 ≤ j ≤ 1000).
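
A minimal sketch of this simulation design (our own code, not the authors'; the intercept β_0 is taken as zero since its value is not specified above, and class labels are drawn from the implied Bernoulli model):

```python
import numpy as np

def simulate(n, p=1000, rho=0.1, sigma=0.2, seed=0):
    rng = np.random.default_rng(seed)
    gamma0 = rng.standard_normal((n, 1))                     # shared component gamma_i0
    gamma = rng.standard_normal((n, p))                      # independent components gamma_ij
    X = gamma * np.sqrt(1.0 - rho) + gamma0 * np.sqrt(rho)   # pairwise correlation rho
    beta = np.zeros(p)
    beta[:5] = [1.0, 1.0, -1.0, -1.0, 1.0]                   # the five true nonzero coefficients
    eta = X @ beta + sigma * rng.standard_normal(n)          # logit of model (15), beta_0 = 0
    prob = 1.0 / (1.0 + np.exp(-eta))
    y = rng.binomial(1, prob)                                # binary class labels
    return X, y
```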

The optimal tuning parameter λ in the sparse logistic regression models can be estimated in many ways and is often chosen by k-fold cross-validation (CV). Note that the choice of k will depend on the size of the training set. In our experiments, we use 10-fold cross-validation (k = 10). The elastic net method has two tuning parameters, so we need to cross-validate on a two-dimensional surface [16].

We consider the cases with training sample sizes n = 50, 80, 100, correlation coefficients ρ = 0.1, 0.4 and noise control parameters σ = 0.2, 0.6, respectively. Each classifier was evaluated on a test data set of 100 samples. The experiments were repeated 30 times and the average test errors are reported in Table 1. As shown in Table 1, when the sample size n increases, the prediction performances of all three methods improve. For example, when ρ = 0.1 and σ = 0.2, the average test errors of the L1/2 method are 28.2%, 10.7% and 8.1% for sample sizes n = 50, 80, and 100, respectively. When the correlation parameter ρ and the noise parameter σ increase, the prediction performances of all three methods degrade. For example, when ρ = 0.4 and n = 100, the average test error of the L1/2 method increased from 9.1% to 15.1% as σ increased from 0.2 to 0.6. When σ = 0.6 and n = 80, the average test error of the L1/2 method increased from 18.4% to 20.5% as ρ increased from 0.1 to 0.4. Moreover, in our simulation, the influence of the noise on prediction performance appears to be larger than that of the variable correlation for all three methods. On the other hand, under the same parameter settings, the prediction performance of the L1/2 method is consistently better than that of the LEN and L1 methods. For example, when ρ = 0.1, σ = 0.2 and n = 100, the predictive error of the L1/2 method is 8.1%, much better than the 16.9% and 15.7% obtained by the LEN and L1 methods, respectively.

Table 1 The average errors (%) for the test data sets obtained by the sparse logistic regressions with the L1/2, LEN and L1 penalties in 30 runs

Table 2 shows the average number of variables selected in 30 runs by each method. Since the simulation datasets have five relevant features (x1-x5), the ideal average number of variables selected by each method is 5. In Table 2, the results obtained by the L1/2 penalized method are obviously close to 5, and are 3-10 times smaller than those of the LEN and L1 penalties under the same parameter settings. For example, when ρ = 0.1, σ = 0.2 and n = 100, the average numbers from the LEN and L1 methods are 49.7 and 45.7 respectively, while the result of the L1/2 method is 8.9. Moreover, when the sample size n, the correlation parameter ρ and the noise parameter σ increase, the average numbers from all three methods increase, but the values of the LEN and L1 methods increase faster than those of the L1/2 method. This means that the L1/2 penalized method consistently outperforms the other two methods in terms of variable selection.

Table 2 The average number of variables selected by the sparse logistic regressions with the L1/2, LEN and L1 penalties in 30 runs

To further evaluate the performance of the L1/2 penalized method, we report in Table 3 the frequency with which each relevant variable was selected among the 30 runs for each method. When the sample size is small (n = 50), the L1/2 penalty selects the relevant variables slightly less frequently than the other two methods, and all three methods have difficulty selecting the true nonzero coefficients, especially when ρ and σ are relatively large. For example, when ρ = 0.4, σ = 0.6, n = 50, and for β_5, the selection frequencies of the L1/2, LEN and L1 methods are 12, 14 and 13 respectively in 30 runs. As n increases, all three methods tend to select the true nonzero coefficients more accurately, and the L1/2 penalty method performs slightly better, in terms of variable selection frequencies, than the other two methods under the different parameter settings of ρ and σ. To sum up, Tables 1, 2 and 3 clearly show that the L1/2 method is the winner among the competitors in terms of both prediction accuracy and variable selection under the different variable correlation and noise situations.

Table 3 The frequencies of the relevant variables obtained by the sparse logistic regressions with the L1/2, LEN and L1 penalties in 30 runs

Analyses on microarray data

In this section, we compare our proposed L1/2 penalized method with the LEN and L1 methods on 4 publicly available gene expression datasets: Leukaemia, Prostate, Colon and DLBCL. A brief description of these datasets is given below and summarized in Table 4.

Table 4 Four publicly available gene expression datasets used in the experiments

Leukaemia dataset

The original dataset was provided by Golub et al. [7], and contains the expression profiles of 7,129 genes for 47 patients with acute lymphoblastic leukaemia (ALL) and 25 patients with acute myeloid leukaemia (AML). For data preprocessing, we followed the protocol detailed in the supplementary information to Dudoit et al. [1]. After thresholding, filtering, applying a logarithmic transformation and standardizing each expression profile to zero mean and unit variance, a dataset comprising 3,571 genes remained.

Prostate dataset

This original dataset contains the expression profiles of 12,600 genes for 50 normal tissues and 52 prostate tumor tissues. For data preprocessing, we adopt the pretreatment method of [20] to obtain a dataset with 102 samples, each containing 5,966 genes.

Colon dataset

The colon microarray data set in Alon et al. [21] has 2,000 genes per sample and 62 samples, consisting of 22 normal tissues and 40 cancer tissues. The Colon dataset is available at http://microarray.princeton.edu/oncology.

DLBCL dataset

This dataset contains 77 microarray gene expression profiles of the 2 most prevalent adult lymphoid malignancies: 58 samples of diffuse large B-cell lymphomas (DLBCL) and 19 observations of follicular lymphoma (FL). Each sample contains 7,129 gene expression values. More information on these data can be found in Shipp MA et al. [22]. For data preprocessing, we followed the protocol detailed in the supplementary information to Dudoit et al. [1], and a dataset comprising 6,285 genes remained.

We evaluate the prediction accuracy of the three penalized logistic regression models using random partitions: we divide each dataset at random such that approximately 70-80% of the data become training samples and the remaining 20-30% test samples. More information on these partitions is given in Table 5. For selecting the tuning parameter λ, we employ the ten-fold cross validation scheme on the training set. We repeat this procedure 30 times and report the averaged misclassification errors in Table 6. Here the denominators of the ten-fold cross validation errors and the test errors give the sample sizes of the training and test sets respectively, and the numerators and the numbers of genes selected are the corresponding averages over the 30 runs rounded to integers. As shown in Table 6, for the Leukaemia dataset, the classifier with the L1/2 penalty gives an average ten-fold cross validation error of 2/50 and an average test error of 1/22 with about 2 genes selected. The classifiers with the LEN and L1 methods give average ten-fold cross validation errors of 1/50 and average test errors of 1/22 with about 9 and 6 genes selected respectively. This means that all three methods can be successfully applied to high-dimensional classification problems and classify the Leukaemia dataset with similar accuracies. Note, however, that the L1/2 method achieved its success using only about 2 predictors (genes), compared to about 9 and 6 for the LEN and L1 methods. For the Prostate and Colon datasets, it can be seen that the L1/2 method achieves the best classification performance with the highest accuracy rates using far fewer genes than the LEN and L1 methods. For the DLBCL dataset, the L1/2 logistic regression achieves better classification performance than the L1 method and slightly worse performance than the LEN method. However, as with the other datasets, the L1/2 method achieved its success using far fewer predictors (about 14 genes), compared to about 38 and 23 for the LEN and L1 methods. This is an important consideration for screening and diagnostic applications, where the goal is often to develop an accurate test using as few features as possible in order to control cost.
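
A minimal sketch of this repeated random-partition protocol (our own code; scikit-learn's L1-penalized logistic regression is used here only as a stand-in classifier, since an L1/2 solver is not available in scikit-learn, and the 75/25 split fraction is an assumption within the 70-80% range stated above):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def repeated_random_partition_error(X, y, n_repeats=30, test_size=0.25):
    """Average test misclassification error over repeated random train/test splits."""
    errors = []
    for r in range(n_repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=test_size, stratify=y, random_state=r)
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
        clf.fit(X_tr, y_tr)
        errors.append(1.0 - accuracy_score(y_te, clf.predict(X_te)))
    return np.mean(errors)
```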

Table 5 Detailed information on the 4 microarray datasets used in the experiments
Table 6 The classification performances of different methods for 4 gene expression datasets

Figures 1, 2 and 3 display the solution paths and the gene selection results of the three methods for the Prostate dataset in one sample run. Here the x-axis displays the number of running steps, the y-axis in the left sub-figure shows the coefficients, which measure gene importance, and the y-axis in the right sub-figure shows the misclassification errors based on ten-fold cross validation. The optimal results of the three methods are shown as vertical dotted lines. Figure 1 indicates that the number of nonzero coefficients (selected genes) of the optimal result obtained by the L1/2 method is 5. In contrast, Figures 2 and 3 indicate that the numbers of nonzero coefficients (selected genes) of the optimal results obtained by the LEN and L1 methods are 37 and 26 respectively. Generally speaking, the penalized logistic regression methods can be successfully applied to cancer classification problems with high-dimensional, low-sample-size microarray data, and our proposed L1/2 method achieves better performance especially in gene selection.

Figure 1

The results of the sparse logistic regression with the L1/2 penalty on the Prostate dataset. The solution paths and the gene selection results of the sparse logistic regression with the L1/2 penalty for the Prostate dataset in one sample run.

Figure 2

The results of the sparse logistic regression with the LEN penalty on the Prostate dataset. The solution paths and the gene selection results of the sparse logistic regression with the elastic net penalty for the Prostate dataset in one sample run.

Figure 3

The results of the sparse logistic regression with the L1 penalty on the Prostate dataset. The solution paths and the gene selection results of the sparse logistic regression with the L1 penalty for the Prostate dataset in one sample run.

Brief biological analyses of the selected genes

The summaries of the 10 top-ranked informative genes found by the three sparse logistic regression methods for the 4 gene expression datasets are shown in Tables 7, 8, 9 and 10 respectively. The genes marked with a star (*) are the most frequently selected genes used to construct the classifiers according to the last column of Table 6, and the common genes obtained by each classifier are emphasized in bold. Biological experiments have shown that some genes included in the frequently selected gene sets that produce high classification accuracy are functionally related to carcinogenesis or tumor histogenesis. For example, in Table 7, the most frequently selected gene set of each sparse logistic method for leukaemia classification includes the cystatin C (CST3) and myeloperoxidase (MPO) genes, which achieve high classification accuracy with the L1/2 method and have been experimentally shown to be correlated with ALL or AML leukaemia. The cystatin C gene product is located in the extracellular region of the cell and plays a role in the invasiveness of human glioblastoma cells; a decrease of cystatin C in the CSF might contribute to the process of metastasis and the spread of cancer cells in the leptomeningeal tissues [23]. The myeloperoxidase gene plays a role in anti-apoptosis, the process by which cancer cells escape programmed cell death [24]. For the colon dataset (Table 9), the most frequently selected gene set of each sparse logistic method includes genes such as guanylate cyclase activator 2B (GUCA2B), myosin light chain 6 (alkali, smooth muscle and non-muscle, MYL6) and human desmin (DES). These genes are the top 3 significant informative genes ranked by our proposed L1/2 method and were also selected by Ben-Dor et al. [25], Yang and Song [26] and Li et al. [27]. At the top of these gene lists is the guanylate cyclase activator 2B (GUCA2B) gene. Notterman et al. [28] showed that a reduction of uroguanylin might be an indication of colon tumors, and Shailubhai et al. [29] reported that treatment with uroguanylin has a positive therapeutic significance for the reduction of pre-cancerous colon polyps.

Table 7 The 10 top-ranked informative genes found by the three sparse logistic regression methods from the Leukaemia dataset
Table 8 The 10 top-ranked informative genes found by the three sparse logistic regression methods from the Prostate dataset
Table 9 The 10 top-ranked informative genes found by the three sparse logistic regression methods from the colon dataset
Table 10 The 10 top-ranked informative genes found by the three sparse logistic regression methods from the DLBCL dataset

In Tables 7, 8, 9 and 10, some genes are frequently selected only by the L1/2 method and are not discovered by the LEN and L1 methods. Evidence from the literature shows that they are cancer-related genes. For example, for the colon dataset, the genes cholinergic receptor, nicotinic, delta polypeptide (CHRND) and platelet/endothelial cell adhesion molecule-1 (PECAM1) were also selected by Maglietta R. et al. [30], Wiese A.H. et al. [31], Wang S. L. et al. [32], and Dai J. H. and Xu Q. [33]. These genes can significantly discriminate between non-dissected tumors and micro-dissected invasive tumor cells. It is also remarkable that, to our knowledge, some of the discovered genes have not been reported in any past studies.

On the other hand, from Tables 7, 8, 9 and 10, we found that the most frequently selected genes and their ranking orders are very similar between the LEN and L1 methods, but differ from those of the L1/2 method. The main reasons are that the classification hypothesis need not be unique, since the samples in gene expression data lie in a high-dimensional space, and that both the LEN and L1 methods are based on L1-type penalties.

Constructing KNN classifiers with the most frequently selected relevant genes

In this section, to further evaluate the performance and prediction generality of sparse logistic regression with the L1/2 penalty, we constructed KNN (k = 3, 5) classifiers using the relevant genes that were most frequently selected by the L1/2 penalized logistic regression method. In this experiment, we use leave-one-out cross validation (LOOCV) to evaluate the predictive ability, repeated over 50 runs.
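
A minimal sketch of this evaluation (our own code; X_selected is assumed to be the expression matrix restricted to the most frequently selected genes), using scikit-learn's KNN classifier and leave-one-out utilities:

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

def knn_loocv_accuracy(X_selected, y, k=3):
    """LOOCV classification accuracy of a KNN classifier on the selected genes."""
    clf = KNeighborsClassifier(n_neighbors=k)
    scores = cross_val_score(clf, X_selected, y, cv=LeaveOneOut())
    return scores.mean()
```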

Table 11 summarizes the classification accuracies of the KNN classifiers built on the genes selected by our proposed method for the four datasets. From Table 11, we can see that all the classification accuracies are higher than 90%; in particular, the classification accuracy on the Leukaemia dataset is 98.3%. The KNN classifiers built with the relevant genes selected by sparse logistic regression with the L1/2 penalty can thus achieve high classification accuracy. These results indicate that sparse logistic regression with the L1/2 penalty can select genes with strong discriminative power.

Table 11 Summary of the results of KNN classifiers using the genes most frequently selected by our proposed L1/2 penalized logistic regression method

Conclusions

In cancer classification applications based on microarray data, only a small subset of genes is strongly indicative of a targeted disease. Thus, feature selection methods play an important role in cancer classification. In this paper, we propose a sparse logistic regression model with the L1/2 penalty and develop the corresponding coordinate descent algorithm as a novel gene selection approach. The proposed method utilizes a novel univariate half thresholding operator to update the estimated coefficients.

Both the simulation and the microarray data studies show that sparse logistic regression with the L1/2 penalty achieves higher classification accuracy than the ordinary L1 and elastic net regularization approaches, while selecting fewer but more informative genes. Therefore, sparse logistic regression with the L1/2 penalty is an effective technique for gene selection in real classification problems.

In this paper, we use the proposed method to solve binary cancer classification problems. However, many cancer classification problems involve multi-category microarray data. We plan to extend our proposed method to multinomial penalized logistic regression for multiclass cancer classification in future work.

References

  1. Dudoit S, Fridlyand J, Speed TP: Comparison of discrimination methods for the classification of tumors using gene expression data. J Am Stat Assoc. 2002, 97 (457): 77-87. 10.1198/016214502753479248.

  2. Li T, Zhang C, Ogihara M: A comparative study of feature selection and multiclass classification methods for tissue classification based on gene expression. Bioinformatics. 2004, 20: 2429-2437. 10.1093/bioinformatics/bth267.

  3. Lee JW, Lee JB, Park M, Song SH: An extensive evaluation of recent classification tools applied to microarray data. Com Stat Data Anal. 2005, 48: 869-885. 10.1016/j.csda.2004.03.017.

  4. Ding C, Peng H: Minimum redundancy feature selection from microarray gene expression data. J. Bioinform. Comput. 2005, 3 (2): 185-205. 10.1142/S0219720005001004.

  5. Monari G, Dreyfus G: Withdrawing an example from the training set: an analytic estimation of its effect on a nonlinear parameterized model. Neurocomputing Letters. 2000, 35: 195-201. 10.1016/S0925-2312(00)00325-8.

  6. Rivals I, Personnaz L: MLPs (mono-layer polynomials and multi-layer perceptrons) for nonlinear modeling. J Mach Learning Res. 2003, 3: 1383-1398.

  7. Golub T, Slonim D, Tamayo P, Huard C, Gaasenbeek M, Mesirov J, Coller H, Loh M, Downing J, Caligiuri M, Bloomfield C, Lander E: Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science. 1999, 286: 531-537. 10.1126/science.286.5439.531.

  8. Guyon I, Elisseff A: An Introduction to variable and feature selection. J Mach Learning Res. 2003, 3: 1157-1182.

  9. Shevade SK, Keerthi SS: A simple and efficient algorithm for gene selection using sparse logistic regression. Bioinformatics. 2003, 19: 2246-2253. 10.1093/bioinformatics/btg308.

  10. Tibshirani R: Regression shrinkage and selection via the lasso. J. R. Statist. Soc. B. 1996, 58: 267-288.

  11. Friedman J, Hastie T, Hofling H, Tibshirani R: Pathwise coordinate optimization. Ann. Appl. Statist. 2007, 1: 302-332. 10.1214/07-AOAS131.

  12. Friedman J, Hastie T, Hofling H, Tibshirani R: Regularization paths for generalized linear models via coordinate descent. J. Statist. Softw. 2010, 33: 1-22.

  13. Cawley GC, Talbot NLC: Gene selection in cancer classification using sparse logistic regression with Bayesian regularization. Bioinformatics. 2006, 22: 2348-2355. 10.1093/bioinformatics/btl386.

  14. Xu ZB, Zhang H, Wang Y, Chang XY, Liang Y: L1/2 regularization. Sci China Series F. 2010, 40 (3): 1-11.

  15. Fan J, Li R: Variable selection via nonconcave penalized likelihood and its oracle properties. J. Amer. Statist. Assoc. 2001, 96: 1348-1361. 10.1198/016214501753382273.

  16. Zou H, Hastie T: Regularization and variable selection via the elastic net. J Royal Stat Soc Series B. 2005, 67 (2): 301-320. 10.1111/j.1467-9868.2005.00503.x.

  17. Zhang CH: Nearly unbiased variable selection under minimax concave penalty. Ann. Statist. 2010, 38: 894-942. 10.1214/09-AOS729.

  18. Xu ZB, Chang XY, Xu FM, Zhang H: L1/2 Regularization: a thresholding representation theory and a fast solver. IEEE Transact Neural Networks Learn Syst. 2012, 23 (7): 1013-1027.

  19. Sohn I, Kim J, Jung SH, Park C: Gradient lasso for Cox proportional hazards model. Bioinformatics. 2009, 25 (14): 1775-1781. 10.1093/bioinformatics/btp322.

  20. Yang K, Cai ZP, Li JZ, Lin GH: A stable gene selection in microarray data analysis. BMC Bioinformatics. 2006, 7: 228-10.1186/1471-2105-7-228.

  21. Alon U, Barkai N, Notterman D, Gish K, Ybarra S, Mack D, Levine A: Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proc Nat Acad Sci USA. 1999, 96 (12): 6745-6750. 10.1073/pnas.96.12.6745.

  22. Shipp MA, Ross KN, Tamayo P, Weng AP, Kutok JL, Aguiar RC, Gaasenbeek M, Amgel M, Reich M, Pinkus GS, Ray TS, Kovall MA, Last KW, Norton A, Lister TA, Mesirov J, Neuberg DS, Lander ES, Aster JC, Golub TR: Diffuse large B-cell lymphoma outcome prediction by gene expression profiling and supervised machine learning. Nat Med. 2002, 8: 68-74. 10.1038/nm0102-68.

  23. Nagai A, Terashima M, Harada T, Shimode K, Takeuchi H, Murakawa Y, et al: Cathepsin B and H activities and cystatin C concentrations in cerebrospinal fluid from patients with leptomeningeal metastasis. Clin Chim Acta. 2003, 329: 53-60. 10.1016/S0009-8981(03)00023-8.

  24. Moroz C, Traub L, Maymon R, Zahalka MA: A novel human ferritin subunit from placenta with immunosuppressive activity. J Biol Chem. 2002, 277: 12901-12905. 10.1074/jbc.M200956200.

  25. Ben-Dor A, et al: Tissue classification with gene expression profiles. J Comput Biol. 2000, 7: 559-583. 10.1089/106652700750050943.

  26. Yang AJ, Song XY: Bayesian variable selection for disease classification using gene expression data. Bioinformatics. 2010, 26: 215-222. 10.1093/bioinformatics/btp638.

  27. Li HD, Xu QS, Liang YZ: Random frog: an efficient reversible jump Markov chain Monte Carlo-like approach for variable selection with applications to gene selection and disease classification. Anal Chim Acta. 2012, 740: 20-26.

  28. Notterman DA, Alon U, Sierk AJ, Levine AJ: Transcriptional gene expression profiles of colorectal adenoma, adenocarcinoma, and normal tissue examined by oligonucleotide arrays. Cancer Res. 2001, 61: 3124-3130.

  29. Shailubhai K, Yu H, Karunanandaa K, Wang J, Eber S, Wang Y, Joo N, Kim H, Miedema B, Abbas S, Boddupalli S, Currie M, Forte L: Uroguanylin treatment suppresses polyp formation in the Apc(Min/+) mouse and induces apoptosis in human colon adenocarcinoma cells via cyclic GMP. Cancer Res. 2000, 60: 5151-5157.

  30. Maglietta R, Addabbo A, Piepoli A, Perri F, Liuni S, Pesole G, Ancona N: Selection of relevant genes in cancer diagnosis based on their prediction accuracy. Art Intell Med. 2007, 40: 29-44. 10.1016/j.artmed.2006.06.002.

  31. Wiese AH J, Lassmann S, Nahrig J, Rosenberg R, Hofler H, Ruger R, Werner M: Identification of gene signatures for invasive colorectal tumor cells. Cancer Detect Prev. 2007, 31: 282-295. 10.1016/j.cdp.2007.07.003.

  32. Wang SL, Li XL, Zhang SW, Gui J, Huang DS: Tumor classification by combining PNN classifier ensemble with neighborhood rough set based gene reduction. Comp Biol Med. 2010, 40: 179-189. 10.1016/j.compbiomed.2009.11.014.

  33. Dai JH, Xu Q: Attribute selection based on information gain ratio in fuzzy rough set theory with application to tumor classification. App Soft Comp. 2013, 13: 211-221. 10.1016/j.asoc.2012.07.029.

Acknowledgements

This research was supported by the Macau Science and Technology Development Fund (Grant No. 017/2010/A2) of the Macau SAR of China and the National Natural Science Foundation of China (Grant Nos. 2013CB329404, 11131006, 61075054, and 11171272).

Author information

Corresponding author

Correspondence to Yong Liang.

Additional information

Competing interests

All authors declare they have no competing interests.

Authors’ contributions

YL, CL and XZL developed the gene selection methodology, designed and carried out the comparative study, wrote the code, and drafted the manuscript. KSL, TMC, ZBX and HZ brought up the biological problem that prompted the methodological development and verified and provided discussion on the methodology, and co-authored the manuscript. The authors read and approved the manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Liang, Y., Liu, C., Luan, XZ. et al. Sparse logistic regression with a L1/2 penalty for gene selection in cancer classification. BMC Bioinformatics 14, 198 (2013). https://doi.org/10.1186/1471-2105-14-198


Keywords