
Kernel based methods for accelerated failure time model with ultra-high dimensional data

Abstract

Background

Most genomic data have ultra-high dimensions, with more than 10,000 genes (probes). Regularization methods with L1 and Lp penalties have been extensively studied for survival analysis with high-dimensional genomic data. However, when the sample size n ≪ m (the number of genes), directly identifying a small subset of genes from ultra-high dimensional (m > 10,000) data is time-consuming and not computationally efficient. In current microarray analysis, the common practice is to select a few thousand (or a few hundred) genes using univariate analysis or statistical tests, and then apply a LASSO-type penalty to further reduce the number of disease-associated genes. This two-step procedure may introduce bias and inaccuracy, and may cause biologically important genes to be missed.

Results

The accelerated failure time (AFT) model is a linear regression model and a useful alternative to the Cox model for survival analysis. In this paper, we propose a nonlinear kernel-based AFT model and an efficient variable selection method with adaptive kernel ridge regression. The proposed variable selection method works on the kernel matrix in the dual problem, which involves a much smaller n × n matrix. It is therefore very efficient when the number of unknown variables (genes) is much larger than the number of samples. Moreover, the primal variables are explicitly updated and the sparsity in the solution is exploited.

Conclusions

Our proposed methods can simultaneously identify survival-associated prognostic factors and predict survival outcomes with ultra-high dimensional genomic data. We have demonstrated the performance of our methods with both simulated and real data; in these limited computational studies, the proposed method performs very well.

Background

Survival prediction and prognostic factor identification play a very important role in medical research. Survival data normally include a censoring variable indicating whether the outcome under observation (such as death or recurrence of a disease) has occurred within a specific follow-up time. Modeling procedures must take such censoring into account, which makes it considerably harder to develop a proper statistical learning method for survival prediction.

Several models for survival prediction have been proposed in the statistical literature. The most popular is the Cox proportional hazards model [1-3], in which model parameters are estimated by maximizing the partial log likelihood. Another is the accelerated failure time (AFT) model [4-6]. The AFT model is a linear regression model in which the response variable is the logarithm, or a known monotone transformation, of a failure (death) time. There are mainly two approaches in the literature for fitting an AFT model. One is the Buckley-James estimator, which adjusts censored observations using the Kaplan-Meier estimator [7, 8]; the other is semiparametric estimation of the AFT model with an unspecified error distribution [9-11]. However, the semiparametric Bayesian approach based on complex MCMC procedures is computationally intensive and tends to give inaccurate results, and Stute's weighted least squares (LS) estimator only implicitly accounts for the censored times. The model has not been widely used in practice due to the difficulties in computing the model parameters [12], and there is no nonlinear AFT model in the literature.

Kernel-based methods such as support vector machines (SVM) have been extensively studied in the framework of classification and regression [13] in pattern recognition and statistical learning. The concept of a kernel, formulated as an inner product in a feature space, allows us to build nonlinear extensions of many linear models [14]. Kernel methods would be a natural alternative for the AFT model were it not for the complexity of censoring. Moreover, the LASSO-type penalty and its generalized versions have been proposed for gene (variable) selection with high-dimensional genomic profiles and censored survival outcomes [15-18]. However, since the sample size n ≪ m (the number of variables), methods based on the primal formulation with a huge m (m > 40,000) are not efficient. Consequently, in current microarray analysis, the common practice is to select a few thousand (or a few hundred) genes using filter-based methods (such as the t-test) and then apply a LASSO-type penalty to further reduce the number of disease-associated genes. This two-step procedure can miss biologically important genes and introduce bias. The dual solution with a kernel proposed in this article resolves these inadequacies by working with a much smaller n × n matrix.

In this paper, we propose a nonlinear kernel ridge regression for censored survival outcome prediction under the framework of the AFT model. We also develop an efficient dual solution with adaptive kernel ridge regression for ultra-high dimensional genomic data analysis.

Unlike the weighted least squares method, our model explicitly accounts for censoring. The proposed models are evaluated with simulated and real data using the prediction error on test data.

Results and Discussion

Simulation Data

Simulation studies are conducted to evaluate the performance of the proposed methods under different assumptions. The following procedure generates input data with censored survival outcomes that emulate the mechanisms present in actual data.

  1. Sample 12-dimensional input data x, with 100 training and 100 test samples, from a multivariate normal distribution with mean zero and variance-covariance matrix Σ. The pairwise correlation between the ith and jth input variables in Σ is r^{|i-j|}; different correlations (r = 0.2, 0.4, 0.6, and 0.8) are chosen to assess the performance of the proposed method.

  2. Choose the model parameters w = [1, 1, 1, 1, 1, 1, -1, -1, -1, -1, -1, -1]^t, and generate the event time from log T = w^t x^k + ε, where ε ~ N(0, σ²) and σ is determined by the signal-to-noise ratio (SNR = μ_surv/σ). For instance, with a mean log survival time of 3 and SNR = 3:1, we have σ = 1. SNR = 3:1 is used in all of our simulations. The exponent k denotes the kth power of the input variables, so for k > 1 the log survival time is associated with the input variables nonlinearly.

  3. The censoring variables are generated as uniformly distributed and independent of the events. Letting d_i = (rand + C)·T_i, the censoring status is δ_i = I(T_i < d_i). Different values of C give different proportions of censored data; roughly 40%-60% of the data are censored in our simulations (a sketch of this generator is given after the list).
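The following is a minimal NumPy sketch of this generator. Two details are assumptions on our part: rand is taken to be a Uniform(0, 1) draw (as in MATLAB), and the stated mean log survival time is modeled as an intercept, since the text does not specify how it enters.

```python
import numpy as np

def simulate_censored(n=100, r=0.4, k=1, C=0.5, mean_log_surv=3.0, snr=3.0,
                      seed=0):
    """Generate 12-dimensional inputs with censored log survival times."""
    rng = np.random.default_rng(seed)
    m = 12
    idx = np.arange(m)
    Sigma = r ** np.abs(idx[:, None] - idx[None, :])  # Sigma[i, j] = r^|i-j|
    X = rng.multivariate_normal(np.zeros(m), Sigma, size=n)
    w = np.array([1.0] * 6 + [-1.0] * 6)
    sigma = mean_log_surv / snr              # SNR = mu_surv / sigma, so 3:1 -> sigma = 1
    # ASSUMPTION: the mean log survival time enters as an intercept
    logT = mean_log_surv + (X ** k) @ w + rng.normal(0.0, sigma, size=n)
    T = np.exp(logT)
    d = (rng.uniform(size=n) + C) * T        # d_i = (rand + C) * T_i
    delta = (T < d).astype(int)              # 1 = event observed, 0 = censored
    T_obs = np.minimum(T, d)                 # observed time: event or censoring
    return X, np.log(T_obs), delta
```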

We analyze the simulated data with the proposed DKRR algorithm, build the model with the training data, and evaluate its performance with the test data. The performance of the DKRR algorithm with different kernels and different correlation structures is shown in Figure 1. As shown in the upper panels of Figure 1, when the survival data are simulated with k = 1 and the true model is linear, the linear model has the best performance, with an average relative root mean squared error (RRMSE) around 0.1. Models with the radial basis function (rbf) kernel have the second-best performance across the different correlation structures (r = 0.2-0.8). Models with the third-order polynomial kernel have the worst performance, with a mean RRMSE around 0.4. On the other hand, when the survival data are generated from a quadratic model with k = 2, as shown in the lower panels of Figure 1, models with the second-order polynomial kernel and the rbf kernel are the two top performers, with average test RRMSE around 0.2, while the linear model performs worst, with the largest average test RRMSE around 0.6. These results indicate that model specification is very important: a misspecified model may lead to inaccurate predictions. Finally, there are no statistically significant differences for input variables with different correlations (r = 0.2-0.8).

Figure 1

Test RRMSE with Different Correlation Structures. Test relative root mean squared error (RRMSE) with different models and different correlation structures: L - linear; p2 - second-order polynomial kernel; p3 - third-order polynomial kernel; rbf - radial basis function kernel. The upper panels show the performance with the linear model (k = 1) and the lower panels with the quadratic model (k = 2).

To evaluate the performance of the AKRR method, survival data are generated from the linear model with r = 0.4 and different values of w. The generated input data have dimensions of 100, 1,000, 10,000, 50,000, and 100,000, but only 12 variables, at positions 1, 11, 21, ..., 101, 111, are nonzero, with values [w1, w11, w21, ..., w101, w111]^t = [1, 1, 1, 1, 1, 1, -1, -1, -1, -1, -1, -1]^t, [0.2, 0.2, 0.2, 0.2, 0.2, 0.2, -0.2, -0.2, -0.2, -0.2, -0.2, -0.2]^t, or [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1]^t, respectively. The remaining coefficients are set to 0. The random noise and the remaining variables are generated from N(0, σ²), with σ determined by the mean survival time and the signal-to-noise ratio (SNR = 3:1). The test RRMSEs with different input dimensions are shown in Figure 2: they do not change significantly as the input dimension increases from 100 to 100,000, which indicates that the AKRR method performs well even with a huge number of variables.

The frequencies with which the first 12 component variables are selected, out of 100 random simulations with different w, are given in Table 1. AKRR correctly identifies the survival-associated variables with high accuracy: when |w_i| = 1, it identifies all 12 variables in over 88% of the simulations and 10 of the 12 variables in over 96%. The performance is still very impressive when the associations between survival time and covariates are weak: AKRR identifies 10 of the 12 variables in over 95% and 94% of the simulations when |w_i| = 0.2 and |w_i| = 0.1, respectively. Table 2 gives more details on the average number of variables selected and the rates of correct detection, over-fitting, and under-fitting. The optimal parameters in parentheses are decided by 10-fold cross-validation with the training data only. p* is chosen from the values 0.6, 0.7, 0.8, 0.9, 1, since our computational experiments show that AKRR converges to the same solution when p ≥ 0.6 with different initializations for the same data set; λ* is chosen from the grid 0:0.001:1. The average number of selected variables varies from 11.43 to 12.61, around the true number of 12. AKRR identifies exactly the true 12 variables at rates of 75%, 54%, and 52% for |w_i| = 1, 0.2, and 0.1, respectively. In all three cases, AKRR chooses between 11 and 13 variables in over 90% of the simulations.

Table 1 Frequencies of Correctly Identified variables with Different Parameters Out of 100 Simulations
Table 2 Model performance with Simulation Data and Different Parameter Values
Figure 2

RRMSE with Different Input Dimensions. Test relative root mean squared error (RRMSE) with different input dimensions, varying from 100 to 100,000.

For comparison purposes, we also implement the primal version of LASSO for the AFT model, using the Gauss-Seidel method to optimize w directly. The computational times for different input dimensions are listed in Table 3. AKRR is so computationally efficient that it takes only 17.5 seconds per run to identify variables from 100,000 candidate variables, while LASSO may take days. With 50,000 variables, AKRR needs only 7.5 seconds on average to converge, while LASSO fails to converge after 2 hours. When the input dimension is large, AKRR is much more efficient. This is expected, since the computational time of AKRR is mainly determined by the sample size through the dual variables; the method remains fast even with ultra-high dimensional input as long as the sample size is small, which is common in genomic data analysis.

Table 3 Computational Time (in Seconds): AKRR vs LASSO

Diffuse Large B-cell Lymphoma Data

We now consider a diffuse large B-cell lymphoma (DLBCL) data set [19] evaluating gene expression profiles associated with patient survival. In this study, tumor-biopsy specimens and clinical data were obtained retrospectively from 240 patients with untreated diffuse large-B-cell lymphoma who had no previous history of lymphoma, according to a protocol approved by the National Cancer Institute institutional review board. The median follow-up time was 2.8 years overall (7.3 years for survivors), and 57 percent of patients died during this period. The median age of the patients was 63 years, and 56 percent were men. cDNA microarray data with 7,399 probes were collected. We divide the data into two equal parts, with 120 training and 120 test samples, and use a two-fold cross-validation scheme to choose the optimal λ and evaluate our method: we randomly split the data into two roughly equal-sized subsets, build the model with one subset, and test it with the other. To avoid bias arising from a particular partition, the procedure is repeated 100 times; each time we split the data randomly into two folds and cross-validate. The relevance count records how many times a gene is selected in the cross-validation, so the maximum relevance count for a gene is 200 with two-fold cross-validation repeated 100 times. The optimal λ* is in the range 0.26-0.3, and the optimal p* is set to 0.7 in all the experiments. The test RRMSE is 0.07 on average, which is better than the average test RRMSE (0.101) of the LASSO-based primal model. This is reasonable, since AKRR has one additional parameter p. Genes associated with survival time are shown in Table 4. We identify 23 probes with over 100 relevance counts; these 23 probes correspond to 21 known genes. All of the selected genes play important roles in apoptotic processes and/or in the development and progression of various cancers, and 17 of the 21 genes are associated with different lymphomas according to PubMed. For example, BMP6, the top gene in the 'other' category, is associated with poor outcome, and HLA-C is a major histocompatibility complex (MHC) gene; both were also identified by Rosenwald et al. 2002. Moreover, CD86, CD79a, and CD19 are well-known antigens and MHC class II signatures associated with favorable survival outcomes. We then perform pathway analysis using DAVID (david.abcc.ncifcrf.gov) and identify 5 lymphoma-associated pathways: NOD-like Receptor Signaling Pathway, Pathways in Cancer, Allograft Rejection, Focal Adhesion, and Graft-versus-host Disease. Four of the 5 pathways (all except the NOD-like Receptor Signaling Pathway) are known from PubMed to be associated with DLBCL.

Table 4 Genes Associated with Survival Time for DLBCL Data

Follicular Lymphoma (FL) Data

Follicular lymphoma is a common type of non-Hodgkin lymphoma (NHL). It is a slow-growing lymphoma that arises from B-cells, a type of white blood cell. It is also called an "indolent" or "low-grade" lymphoma because of its slow nature, both in terms of its behavior and how it looks under the microscope. A study was conducted to predict the survival of patients from gene expression profiles of tumors at diagnosis [20]. Fresh-frozen tumor biopsy specimens and clinical data were obtained from 191 untreated patients who had received a diagnosis of follicular lymphoma between 1974 and 2001. The median age at diagnosis was 51 years (range 23-81) and the median follow-up time was 6.6 years (range less than 1.0 to 28.2); the median follow-up time among patients alive was 8.1 years. Four records with missing survival information were excluded from the analysis. Affymetrix U133A and U133B microarray GeneChips were used to measure gene expression levels from RNA samples, and a log2 transformation was applied to the Affymetrix measurements. The detailed experimental protocol can be found in the original paper. There are a total of 42,928 probes, so it is time-consuming to apply standard LASSO methods directly to this problem without an initial reduction of dimension; our method takes less than 10 seconds for one run. A similar two-fold cross-validation scheme with 100 random partitions is applied to these data. The optimal λ* is in the range 0.1-0.12 with the optimal p* = 0.6. The test RRMSE is 0.09. The final results are shown in Table 5.

Table 5 Genes Associated with Survival Time for FL Data

Thirteen probes with over 100 relevance counts are identified; they correspond to 11 known genes associated with lymphoma and related diseases. For instance, gene C4A localizes to the major histocompatibility complex (MHC) class III region on chromosome 6. It plays a central role in the activation of the classical pathway of the complement system. C4A anaphylatoxin is a mediator of local inflammatory processes: it induces the contraction of smooth muscle, increases vascular permeability, and causes histamine release from mast cells and basophilic leukocytes. C4A is on the pathway of systemic lupus erythematosus (SLE), and patients with SLE have an increased risk of certain cancers, including non-Hodgkin's lymphoma. We find that C4A is negatively associated with survival time, according to its estimated coefficient. ALDH2 is another well-studied gene, significantly associated with acetaldehyde-induced micronuclei and alcohol-induced facial flushing. Defects in ALDH2 are a cause of acute alcohol sensitivity and alcohol-induced cancers, and there is accumulating evidence that ALDH2-deficient individuals are at much higher risk of esophageal cancer and malignant lymphoma. Our study indicates that up-regulated ALDH2 is positively associated with patient survival outcomes. Six other genes are also associated with different cancers, including follicular lymphoma.

Conclusions

We have proposed kernel-based methods for the nonlinear AFT model and for variable selection with ultra-high dimensional data. Our evaluations with simulated and real data illustrate that the proposed methods can effectively reduce the dimension of the covariates with sound prediction accuracy. In many studies, both clinical and genomic data are available; due to the ultra-high dimension of genomic data, directly applying LASSO-based methods is usually not feasible, and our proposed method provides an efficient solution. Kernel-based nonparametric methods have been well studied in statistical learning, but there are few such studies in survival analysis. This paper provides a basis for further explorations in this field.

Methods

To formulate the model, consider a set of $n$ independent observations $\{T_i, \delta_i, \mathbf{x}_i\}_{i=1}^{n}$, where $\delta_i$ is the censoring indicator, $T_i$ is the survival (event) time if $\delta_i = 1$ or the censoring time if $\delta_i = 0$, and $\mathbf{x}_i = (x_{i1}, x_{i2}, \ldots, x_{im})^t$ is the $m$-dimensional input vector of the $i$th sample. Letting $\mathbf{w} = (w_1, w_2, \ldots, w_m)^t$ be a vector of regression coefficients and $\phi(\mathbf{x}_i)$ the nonlinear transform of $\mathbf{x}_i$ in feature space, the AFT model is defined as

$$M(\mathbf{x}_i) = \mathbf{w}^t \phi(\mathbf{x}_i), \quad i = 1, \ldots, n, \tag{1}$$

where $M(\mathbf{x}_i) > \log T_i$ if $\delta_i = 0$ and $M(\mathbf{x}_i) = \log T_i$ if $\delta_i = 1$. Because the model involves both equality and inequality constraints, new methods need to be developed.

Kernel Ridge Regression (KRR)

The kernel ridge regression for right censored survival data is as follows:

$$\min \; \frac{1}{2n}\sum_{i=1}^{n}\xi_i^2 + \frac{\lambda}{2}\mathbf{w}^t\mathbf{w} \qquad \text{s.t.} \quad \begin{cases} |\mathbf{w}^t\phi(\mathbf{x}_i) - \log T_i| < \xi_i, & \text{if } \delta_i = 1; \\ \mathbf{w}^t\phi(\mathbf{x}_i) > \log T_i - \xi_i, & \text{if } \delta_i = 0; \\ \xi_i \ge 0, & 1 \le i \le n. \end{cases} \tag{2}$$

When ties in the event times are present, the variables associated with each tied time appear in the constraints independently. We can define an index function $I(\delta_i)$ with $I(\delta_i) = 1$ if $\delta_i = 1$; for censored data with $\delta_i = 0$, $I(\delta_i) = 1$ if $\log T_i \ge \mathbf{w}^t\phi(\mathbf{x}_i)$ and $I(\delta_i) = 0$ otherwise. Then equation (2) is equivalent to minimizing the following quadratic function:

$$J(\mathbf{w}) = \frac{1}{2n}\sum_{i=1}^{n} I(\delta_i)\{\mathbf{w}^t\phi(\mathbf{x}_i) - \log T_i\}^2 + \frac{\lambda}{2}\mathbf{w}^t\mathbf{w}, \tag{3}$$

where $\lambda \ge 0$. If we set the gradient of $J(\mathbf{w})$ with respect to $\mathbf{w}$ to zero, then the solution for $\mathbf{w}$ is a linear combination of the vectors $\phi(\mathbf{x}_i)$:

$$\mathbf{w} = -\frac{1}{n\lambda}\sum_{i=1}^{n} I(\delta_i)\{\mathbf{w}^t\phi(\mathbf{x}_i) - \log T_i\}\,\phi(\mathbf{x}_i) = \sum_{i=1}^{n} a_i\,\phi(\mathbf{x}_i) = \Phi^t \mathbf{a}, \tag{4}$$

where $\Phi$ is the design matrix, whose $i$th row is given by $\phi(\mathbf{x}_i)^t$, and $\mathbf{a} = (a_1, a_2, \ldots, a_n)^t$ are the dual variables, defined by

$$a_i = -\frac{I(\delta_i)}{n\lambda}\{\mathbf{w}^t\phi(\mathbf{x}_i) - \log T_i\}. \tag{5}$$

Substituting $\mathbf{w} = \Phi^t\mathbf{a}$ into $a_i$, we obtain

$$a_i = -\frac{I(\delta_i)}{n\lambda}\{\phi^t(\mathbf{x}_i)\Phi^t\mathbf{a} - \log T_i\} = -\frac{I(\delta_i)}{n\lambda}\{K(\mathbf{x}_i, \cdot)\,\mathbf{a} - \log T_i\}, \tag{6}$$

where $K = (K(\mathbf{x}_i, \mathbf{x}_j))_{n \times n} = (\phi(\mathbf{x}_i)^t\phi(\mathbf{x}_j))_{n \times n}$ is a kernel matrix that can be defined explicitly, and $K(\mathbf{x}_i, \cdot) = \phi(\mathbf{x}_i)^t\Phi^t$ is the $i$th row of the kernel matrix. Popular kernels include the following (a computational sketch is given after the list):

  • Linear kernel: $K(\mathbf{x}_i, \mathbf{x}_j) = \mathbf{x}_i^t\mathbf{x}_j$;

  • Radial basis function (Gaussian) kernel: $K(\mathbf{x}_i, \mathbf{x}_j) = \exp(-\|\mathbf{x}_i - \mathbf{x}_j\|^2/\sigma^2)$;

  • Polynomial kernel: $K(\mathbf{x}_i, \mathbf{x}_j) = (\mathbf{x}_i^t\mathbf{x}_j + p_2)^{p_1}$;

  • Sigmoid kernel: $K(\mathbf{x}_i, \mathbf{x}_j) = \tanh(\beta\,\mathbf{x}_i^t\mathbf{x}_j)$.
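The following is a small NumPy sketch of these four kernels as Gram-matrix computations; the function name gram and the default parameter values are ours, chosen only for illustration.

```python
import numpy as np

def gram(X, Z, kernel="linear", sigma=1.0, p1=2, p2=1.0, beta=1.0):
    """Return the matrix K with K[i, j] = K(x_i, z_j) for rows of X and Z."""
    lin = X @ Z.T
    if kernel == "linear":
        return lin
    if kernel == "rbf":
        # squared Euclidean distances |x_i - z_j|^2, computed without loops
        sq = (X ** 2).sum(1)[:, None] + (Z ** 2).sum(1)[None, :] - 2 * lin
        return np.exp(-sq / sigma ** 2)
    if kernel == "poly":
        return (lin + p2) ** p1
    if kernel == "sigmoid":
        return np.tanh(beta * lin)
    raise ValueError(f"unknown kernel: {kernel}")
```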

Our kernel ridge regression algorithm based on the dual equation (6), the dual kernel ridge regression (DKRR) algorithm, is as follows:

The Dual Kernel Ridge Regression (DKRR) Algorithm

Given $\lambda$, training data $\{\mathbf{x}_i, \log T_i, \delta_i\}_{i=1}^{n}$, test data $\{\mathbf{x}_k, \log T_k, \delta_k\}_{k=1}^{n_k}$, and a small $\varepsilon$.

Calculate the kernel matrices $K = [K(\mathbf{x}_i, \mathbf{x}_j)]_{n \times n}$ and $K_{te} = [K(\mathbf{x}_k, \mathbf{x}_i)]_{n_k \times n}$.

Center the kernels and the survival times: $K \leftarrow (I_n - \frac{1}{n}\mathbf{1}_n\mathbf{1}_n^t)\,K\,(I_n - \frac{1}{n}\mathbf{1}_n\mathbf{1}_n^t)$ and $K_{te} \leftarrow (K_{te} - \frac{1}{n}\mathbf{1}_{n_k}\mathbf{1}_n^t K)(I_n - \frac{1}{n}\mathbf{1}_n\mathbf{1}_n^t)$, where $\mathbf{1}_n$ is a vector of $n$ ones and $I_n$ is the $n \times n$ identity matrix, and $\log T_k \leftarrow \log T_k - \frac{1}{n_k}\sum_{k=1}^{n_k}\log T_k$. Let $\mathbf{a}^0 = [0, \ldots, 0]^t$ and $j = 0$.

WHILE 1,

  • FOR $i = 1$ to $n$,

      $I(\delta_i) = \begin{cases} 1 & \text{if } \delta_i = 1, \\ 1 & \text{if } \delta_i = 0 \text{ and } K(\mathbf{x}_i, \cdot)\,\mathbf{a}^j \le \log T_i, \\ 0 & \text{otherwise}. \end{cases}$

      $a_i^{j+1} = -\frac{I(\delta_i)}{n\lambda}\{K(\mathbf{x}_i, \cdot)\,\mathbf{a}^j - \log T_i\}$

      $\mathbf{a}^{j+1} = [a_1^{j+1}, \ldots, a_i^{j+1}, a_{i+1}^{j}, \ldots, a_n^{j}]^t$

    END

  • $j = j + 1$

  • IF $|\mathbf{a}^{j+1} - \mathbf{a}^{j}| < \varepsilon$, BREAK.

  • $\mathbf{a}^{j} = \mathbf{a}^{j+1}$

END

This dual kernel ridge regression (DKRR) algorithm, designed for a quadratic error function with linear constraints, minimizes a convex function under convex constraints. Theoretically the algorithm always converges, and the global optimal solution is guaranteed regardless of the initial value $\mathbf{a}^0$. In our computational experiments, the differences between the estimated parameters obtained with different initial values are very small (less than 0.01 in the infinity norm).
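For concreteness, the following is a minimal NumPy sketch of the DKRR iteration (6), including a kernel-centering helper. The function names are ours, and the fixed-iteration convergence loop is an illustrative simplification of the algorithm above.

```python
import numpy as np

def center_kernel(K):
    """Double-center an n x n training kernel, as in the algorithm."""
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    return J @ K @ J

def dkrr(K, logT, delta, lam, eps=1e-6, max_iter=1000):
    """Coordinate-wise dual iteration (6); K and logT are assumed centered."""
    n = len(logT)
    a = np.zeros(n)
    for _ in range(max_iter):
        a_old = a.copy()
        for i in range(n):
            pred = K[i] @ a
            # I(delta_i) = 1 for events, and for censored samples only when
            # the current prediction falls below the censoring time
            if delta[i] == 1 or pred <= logT[i]:
                a[i] = -(pred - logT[i]) / (n * lam)
            else:
                a[i] = 0.0
        if np.max(np.abs(a - a_old)) < eps:
            break
    return a
```

Predictions on the centered test kernel are then obtained as K_te @ a.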

Adaptive Kernel Ridge Regression

When the number of variables $m$ is greater than the sample size $n$, regularization is needed to obtain a stable estimator of $\mathbf{w}$. We propose an $L_p = \sum_{i=1}^{m} |w_i|^p$ ($p < 1$) penalty for simultaneous variable selection and estimation. Unlike LASSO, the $L_p$ penalty and its various approximation schemes (e.g., adaptive LASSO) possess the oracle property [21, 22]. Here the oracle property means that the method correctly identifies the nonzero coefficients with probability converging to one, and that the estimators of the nonzero coefficients are asymptotically normal, with the same means and covariances as they would have if the zero coefficients were known in advance. We therefore propose the following penalized AFT model:

$$J(\mathbf{w}) = \frac{1}{2n}\sum_{i=1}^{n} I(\delta_i)\{\mathbf{w}^t\phi(\mathbf{x}_i) - \log T_i\}^2 + \frac{\lambda}{2}\sum_{i=1}^{m} |w_i|^p = \frac{1}{2n}\sum_{i=1}^{n} I(\delta_i)\{\mathbf{w}^t\phi(\mathbf{x}_i) - \log T_i\}^2 + \frac{\lambda}{2}\sum_{i=1}^{m} \frac{|w_i|^2}{|w_i|^{2-p}}, \tag{7}$$

where $\lambda \ge 0$. With $n \ll m$, the linear kernel is more appropriate, since the model with a linear kernel is less prone to over-fitting. We therefore take $\phi(\mathbf{x}_i) = \mathbf{x}_i$, introduce an auxiliary (latent) variable vector $\mathbf{u} = [u_1, u_2, \ldots, u_m]^t$, and develop an adaptive procedure based on equation (7), which can be rewritten as:

$$J(\mathbf{w}, \mathbf{u}) = \frac{1}{2n}\sum_{i=1}^{n} I(\delta_i)\{\mathbf{w}^t\mathbf{x}_i - \log T_i\}^2 + \frac{\lambda}{2}\sum_{i=1}^{m} \frac{|w_i|^2}{|u_i|^{2-p}}, \tag{8}$$

$$\mathbf{u} = \mathbf{w}. \tag{9}$$

With equations (8) and (9), we take the first-order derivative with respect to $\mathbf{w}$ for a fixed $\mathbf{u}$ and then update $\mathbf{u} = \mathbf{w}$. Setting the derivative to zero gives:

$$\mathbf{w} = -\frac{1}{n\lambda}\sum_{i=1}^{n} I(\delta_i)\{\mathbf{w}^t\mathbf{x}_i - \log T_i\}\,(\mathbf{x}_i \circ |\mathbf{u}|^{2-p}) = \sum_{i=1}^{n} a_i\,(\mathbf{x}_i \circ |\mathbf{u}|^{2-p}) = X_u^t\mathbf{a}, \tag{10}$$

where $\circ$ represents the componentwise product of two vectors,

$$X_u = \begin{pmatrix} \mathbf{x}_1^t \circ (|\mathbf{u}|^{2-p})^t \\ \vdots \\ \mathbf{x}_n^t \circ (|\mathbf{u}|^{2-p})^t \end{pmatrix},$$

and

$$a_i = -\frac{I(\delta_i)}{n\lambda}\{\mathbf{w}^t\mathbf{x}_i - \log T_i\}. \tag{11}$$

We substitute $\mathbf{w} = X_u^t\mathbf{a}$ and define a new kernel function $K_u = XX_u^t$. Then $K_u(\mathbf{x}_i, \cdot) = \mathbf{x}_i^t X_u^t$, which is the $i$th row of $K_u$. So,

$$a_i = -\frac{I(\delta_i)}{n\lambda}\{\mathbf{x}_i^t X_u^t\mathbf{a} - \log T_i\} = -\frac{I(\delta_i)}{n\lambda}\{K_u(\mathbf{x}_i, \cdot)\,\mathbf{a} - \log T_i\}. \tag{12}$$

The adaptive kernel ridge regression algorithm, based on the dual variables $\mathbf{a}$ and equations (10) and (12), is as follows:

Adaptive Kernel Ridge Regression (AKRR) Algorithm

Given $\lambda$, $p \in (0, 1]$, training data $\{\mathbf{x}_i, \log T_i, \delta_i\}_{i=1}^{n}$, and small $\varepsilon$ and $\eta$.

Initialize $\mathbf{w} = \mathbf{u} = \mathrm{rand}(m, 1)$ and $\mathbf{a} = [0, \ldots, 0]^t$. Set $u_i = 10^{-5}$ for any $u_i = 0$, and $j = 1$.

WHILE $|\mathbf{w} - \mathbf{u}| > \varepsilon$,

  • $\mathbf{u} = \mathbf{w}$

  • $K_u = XX_u^t$

  • FOR $i = 1$ to $n$,

      $I(\delta_i) = \begin{cases} 1 & \text{if } \delta_i = 1, \\ 1 & \text{if } \delta_i = 0 \text{ and } K_u(\mathbf{x}_i, \cdot)\,\mathbf{a}^j \le \log T_i, \\ 0 & \text{otherwise}. \end{cases}$

      $a_i^{j+1} = -\frac{I(\delta_i)}{n\lambda}\{K_u(\mathbf{x}_i, \cdot)\,\mathbf{a}^j - \log T_i\}$

      $\mathbf{a}^{j+1} = [a_1^{j+1}, \ldots, a_i^{j+1}, a_{i+1}^{j}, \ldots, a_n^{j}]^t$

    END

  • $j = j + 1$

  • $\mathbf{w} = X_u^t\mathbf{a}$

END

Set $w_i = 0$ for all $i$ with $|w_i| < \eta$.
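The following is a minimal NumPy sketch of the AKRR iteration with the linear kernel, alternating the dual update (12) with the primal update (10); the function name and default parameter values are ours, for illustration only.

```python
import numpy as np

def akrr(X, logT, delta, lam, p=0.7, eps=1e-4, eta=1e-3, max_outer=200, seed=0):
    """Alternate the dual update (12) with the primal update (10)."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    w = rng.random(m)                       # random initialization
    a = np.zeros(n)
    for _ in range(max_outer):
        u = np.where(w == 0.0, 1e-5, w)     # guard against zero weights
        Xu = X * np.abs(u) ** (2 - p)       # rows x_i o |u|^(2-p)
        Ku = X @ Xu.T                       # adaptive kernel K_u = X X_u^t
        for i in range(n):                  # coordinate-wise dual updates
            pred = Ku[i] @ a
            if delta[i] == 1 or pred <= logT[i]:
                a[i] = -(pred - logT[i]) / (n * lam)
            else:
                a[i] = 0.0
        w_new = Xu.T @ a                    # explicit primal update
        done = np.max(np.abs(w_new - u)) < eps
        w = w_new
        if done:
            break
    w[np.abs(w) < eta] = 0.0                # truncate small coefficients to zero
    return w, a
```

Because only the n-dimensional dual vector a is iterated, the per-iteration cost is driven by n rather than m, which is the source of the efficiency discussed below.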

Unlike other LASSO-based methods, which seek the optimal $\mathbf{w}$ directly, the AKRR algorithm updates the $m$-dimensional $\mathbf{w}$ through the much smaller $n$-dimensional dual variable $\mathbf{a}$. The method is therefore computationally highly efficient when $n \ll m$, which is common in genomic data. Although the proposed method is based on the dual problem, the primal variable $\mathbf{w}$ is explicitly updated in the computation. Theoretically, the AKRR algorithm always converges to the global optimal solution when $p = 1$, regardless of the initial values of $\mathbf{w}$, $\mathbf{u}$, and $\mathbf{a}$, since the error function is convex under the $L_1$ penalty; only a local optimal solution is guaranteed when $p < 1$. However, in our computational experiments with simulated and real data, different initializations lead to different optimal solutions only when $p \le 0.5$, and even then most selected features are the same across runs; AKRR reached the same optimal solution in all of our experiments when $p \ge 0.6$. One possible explanation is that the error function may still be nearly convex, or convex almost everywhere, when $p$ is large. It may therefore be possible to enjoy both the oracle property, with less bias, and the global optimal solution with a larger $p$ ($0.6 \le p < 1$); a theoretical study of the nearly convex error function, however, is beyond the scope of this paper. To prevent the results from sticking to a local optimal solution when $p \le 0.5$, we run AKRR 30 times and choose the best solution from the run with the smallest test error. Even though AKRR may choose different variables with different values of $p$, a small subset ($\ge 5$) of the most important genes is always selected in our experiments.

The model performance can be evaluated with cross-validation and the relative root mean squared error, $\mathrm{RRMSE} = \sqrt{\frac{1}{n}\sum_i \left((y_i - \hat{y}_i)/y_i\right)^2}$, on the test data. There are two parameters, $p$ and $\lambda$, for the adaptive kernel ridge regression (AKRR) algorithm. One efficient approach is to set $p = 0.1, 0.2, \ldots, 1$ in turn, and then search for the best $\lambda$ through cross-validation. The range of $\lambda$ can be determined from the path of the optimal solution: $\lambda_{\min} = 0$, and $\lambda_{\max}$ is set to the smallest value at which all estimated parameters are zero, found by multiple trials. We search for the optimal $\lambda$ in $\lambda \in (0, 1]$ in this paper. Usually $\lambda_{\max}$ is larger for $p = 1$ and smaller for smaller $p$.
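The following sketch illustrates this parameter search, assuming the akrr helper from the sketch above; scoring only the uncensored test samples with RRMSE is our assumption, since the text does not state how censored samples enter the test error.

```python
import numpy as np

def rrmse(y, y_hat):
    """Relative root mean squared error, as defined in the text."""
    return np.sqrt(np.mean(((y - y_hat) / y) ** 2))

def select_lambda(X, logT, delta, p, lams, n_folds=10, seed=0):
    """Pick lambda by k-fold cross-validation for a fixed p."""
    rng = np.random.default_rng(seed)
    n = len(logT)
    folds = np.array_split(rng.permutation(n), n_folds)
    best_lam, best_err = None, np.inf
    for lam in lams:
        errs = []
        for f in folds:
            tr = np.setdiff1d(np.arange(n), f)
            w, _ = akrr(X[tr], logT[tr], delta[tr], lam, p=p)
            te = f[delta[f] == 1]     # ASSUMPTION: score uncensored samples only
            if te.size:
                errs.append(rrmse(logT[te], X[te] @ w))
        err = np.mean(errs) if errs else np.inf
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam, best_err

# Example: fix p on a grid and search lambda in (0, 1]
# for p in (0.6, 0.7, 0.8, 0.9, 1.0):
#     lam, err = select_lambda(X, logT, delta, p, np.linspace(0.01, 1.0, 100))
```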

References

  1. Cox D: Regression models and life-tables (with discussion). Journal of the Royal Statistical Society, Series B 1972, 34: 187–220.

  2. Gui J, Li H: Variable selection via non-concave penalized likelihood and its oracle properties. Journal of the American Statistical Association, Theory and Methods 2001, 96: 456.

  3. Van Houwelingen H, Bruinsma T, Hart A, Van't Veer L, Wessels L: Cross-validated Cox regression on microarray gene expression data. Stat Med 2006, 25: 3201–3216. 10.1002/sim.2353

  4. Kalbfleisch J, Prentice R: The Statistical Analysis of Failure Time Data. New York: John Wiley; 1980.

  5. Wei L: The accelerated failure time model: a useful alternative to the Cox regression model in survival analysis. Statistics in Medicine 1992, 11: 1871–1879. 10.1002/sim.4780111409

  6. Ying Z: A large sample study of rank estimation for censored regression data. Annals of Statistics 1993, 21: 76–99. 10.1214/aos/1176349016

  7. Stute W, Wang J: The strong law under random censorship. Annals of Statistics 1993, 14: 1351–1365.

  8. Stute W: Distributional convergence under random censorship when covariables are present. Scandinavian Journal of Statistics 1996, 23: 461–471.

  9. Christensen R, Johnson W: Modelling accelerated failure time with a Dirichlet process. Biometrika 1988, 75: 693–704. 10.1093/biomet/75.4.693

  10. Kuo L, Mallick B: Bayesian semiparametric inference for the accelerated failure time model. Canadian Journal of Statistics 1997, 25: 457–472. 10.2307/3315341

  11. Bedrick E, Christensen R, Johnson W: Bayesian accelerated failure time analysis with application to veterinary epidemiology. Stat Med 2000, 19: 221–237. 10.1002/(SICI)1097-0258(20000130)19:2<221::AID-SIM328>3.0.CO;2-C

  12. Jin Z, Lin D, Wei L, Ying Z: Rank-based inference for the accelerated failure time model. Biometrika 2003, 90: 341–353. 10.1093/biomet/90.2.341

  13. Vapnik V: Statistical Learning Theory. New York: Wiley and Sons; 1998.

  14. Shawe-Taylor J, Cristianini N: Kernel Methods for Pattern Analysis. Cambridge: Cambridge University Press; 2004.

  15. Ma S, Huang J: Additive risk survival model with microarray data. BMC Bioinformatics 2007, 8: 192. 10.1186/1471-2105-8-192

  16. Sha N, Tadesse M, Vannucci M: Bayesian variable selection for the analysis of microarray data with censored outcomes. Bioinformatics 2006, 22(18): 2262–2268. 10.1093/bioinformatics/btl362

  17. Liu Z, Gartenhaus R, Chen X, Howell C, Tan M: Survival prediction and gene identification with penalized global AUC maximization. Journal of Computational Biology 2009, 16(12): 1661–1670. 10.1089/cmb.2008.0188

  18. Liu Z, Jiang F: Gene identification and survival prediction with Lp Cox regression and novel similarity measure. Int J Data Min Bioinform 2009, 3(4): 398–408. 10.1504/IJDMB.2009.029203

  19. Rosenwald A, Wright G, Chan W, Connors J, Campo E, Fisher R, Gascoyne R, Muller-Hermelink H, Smeland E, Giltnane J, Hurt E, Zhao H, Averett L, Yang L, Wilson W, Jaffe E, Simon R, Klausner R, Powell J, Duffey P, Longo D, Greiner T, Weisenburger DD, Sanger WG, Ott G, Kvaloy S, Delabie J, Holte H, Krajci P, Stokke T, Staudt L: The use of molecular profiling to predict survival after chemotherapy for diffuse large-B-cell lymphoma. The New England Journal of Medicine 2002, 346: 1937–1947. 10.1056/NEJMoa012914

  20. Dave S, Wright G, Tan B, Rosenwald A, Gascoyne R, Chan W, Fisher R, Braziel R, Rimsza L, Grogan T, Miller T, LeBlanc M, Greiner T, Weisenburger D, Lynch J, Vose J, Armitage J, Smeland E, Kvaloy S, Holte H, Delabie J, Connors J, Lansdorp P, Ouyang Q, Lister T, Davies A, Norton A, Muller-Hermelink H, Ott G, Campo E, Montserrat E, Wilson W, Jaffe E, Simon R, Yang L, Powell J, Zhao H, Goldschmidt N, Chiorazzi M, Staudt L: Prediction of survival in follicular lymphoma based on molecular features of tumor-infiltrating immune cells. N Engl J Med 2004, 351(21): 2159–2169. 10.1056/NEJMoa041869

  21. Knight K, Fu W: Asymptotics for Lasso-type estimators. Annals of Statistics 2000, 28: 1356–1378. 10.1214/aos/1015957397

  22. Fan J, Peng H: On nonconcave penalized likelihood with diverging number of parameters. The Annals of Statistics 2004, 32: 928–961. 10.1214/009053604000000256


Acknowledgements

We thank the Associate Editor and the two anonymous referees for their constructive comments, which improved this manuscript significantly. This work was partially supported by grant 1R03CA133899 from the National Cancer Institute and by National Science Foundation grant CCF-0729080.

Author information


Corresponding author

Correspondence to Zhenqiu Liu.

Additional information

Authors' contributions

ZL conceptualized and designed the method, developed the software, and wrote the manuscript. FJ and RG analyzed and interpreted the data with respect to its biological content. DC and MT helped in method design and manuscript writing, and revised the manuscript critically. All authors read and approved the final manuscript.


Rights and permissions

Open Access This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Liu, Z., Chen, D., Tan, M. et al. Kernel based methods for accelerated failure time model with ultra-high dimensional data. BMC Bioinformatics 11, 606 (2010). https://doi.org/10.1186/1471-2105-11-606
