Boosting the discriminatory power of sparse survival models via optimization of the concordance index and stability selection

Background: When constructing new biomarker or gene signature scores for time-to-event outcomes, the underlying aims are to develop a discrimination model that helps to predict whether patients have a poor or good prognosis and to identify the most influential variables for this task. In practice, this is often done by fitting Cox models. Those are, however, not necessarily optimal with respect to the resulting discriminatory power and are based on restrictive assumptions. We present a combined approach to automatically select and fit sparse discrimination models for potentially high-dimensional survival data based on boosting a smooth version of the concordance index (C-index). Due to this objective function, the resulting prediction models are optimal with respect to their ability to discriminate between patients with longer and shorter survival times. The gradient boosting algorithm is combined with the stability selection approach to enhance and control its variable selection properties.

Results: The resulting algorithm fits prediction models based on the rankings of the survival times and automatically selects only the most stable predictors. The performance of the approach, which works best for small numbers of informative predictors, is demonstrated in a large-scale simulation study: C-index boosting in combination with stability selection is able to identify a small subset of informative predictors from a much larger set of non-informative ones while controlling the per-family error rate. In an application to discover biomarkers for breast cancer patients based on gene expression data, stability selection yielded sparser models and the resulting discriminatory power was higher than with lasso-penalized Cox regression models.

Conclusion: The combination of stability selection and C-index boosting can be used to select small numbers of informative biomarkers and to derive new prediction rules that are optimal with respect to their discriminatory power. Stability selection controls the per-family error rate, which makes the new approach also appealing from an inferential point of view, as it provides an alternative to classical hypothesis tests for single predictor effects. Due to the shrinkage and variable selection properties of statistical boosting algorithms, the latter tests are typically unfeasible for prediction models fitted by boosting.

Electronic supplementary material: The online version of this article (doi:10.1186/s12859-016-1149-8) contains supplementary material, which is available to authorized users.


The complete algorithm for boosting the C-index
The aim of the algorithm is to optimize a prediction model η with respect to the concordance index via the estimator proposed by Uno et al. [1],

Ĉ_Uno(T, η̂) = Σ_{i,j} Δ_i (Ĝ(T̃_i))^{-2} I(T̃_i < T̃_j) I(η̂_i > η̂_j) / Σ_{i,j} Δ_i (Ĝ(T̃_i))^{-2} I(T̃_i < T̃_j),    (1)

where T̃_i denotes the observed (possibly censored) survival time of patient i, Δ_i the corresponding event indicator (Δ_i = 1 if an event was observed, Δ_i = 0 if the observation was censored), and Ĝ(·) the Kaplan-Meier estimator of the censoring distribution.
Directly using −Ĉ_Uno(T, η̂) as loss function for gradient boosting, however, is unfeasible because Ĉ_Uno(T, η̂) is not differentiable with respect to η̂_i.
We therefore follow the approach of Ma and Huang [2] and approximate the indicator function I(η̂_i > η̂_j) in (1) by the sigmoid function

K(u) = 1 / (1 + exp(−u/σ)),    (2)

where σ is a smoothing parameter that controls the accuracy of the approximation. Replacing the indicator function in (1) by its smoothed version K(η̂_i − η̂_j) results in the smoothed estimator

Ĉ_smooth(T, η̂) = Σ_{i,j} w_{i,j} · 1 / (1 + exp(−(η̂_i − η̂_j)/σ))    (3)

with weights

w_{i,j} = Δ_i (Ĝ(T̃_i))^{-2} I(T̃_i < T̃_j) / Σ_{k,l} Δ_k (Ĝ(T̃_k))^{-2} I(T̃_k < T̃_l).

Now, the smoothed estimator Ĉ_smooth(T, η̂) is differentiable with respect to the predictor η̂_i. Its derivative is given by

∂ Ĉ_smooth(T, η̂) / ∂ η̂_i = Σ_j w_{i,j} · exp(−(η̂_i − η̂_j)/σ) / ( σ · [1 + exp(−(η̂_i − η̂_j)/σ)]^2 ).    (4)

In order to use −Ĉ_smooth(T, η̂) as the loss function to be minimized, the algorithm fits the base-learners to the negative gradient of −Ĉ_smooth(T, η̂), which is therefore the derivative given in (4).
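To make the computation of (3) and (4) concrete, the following R sketch evaluates the smoothed C-index and its gradient directly from the formulas above. The helper names (censoring_km(), smooth_cindex()) are our own illustration and not part of any package; only survival::survfit() is used to obtain the Kaplan-Meier estimate of the censoring distribution.

```r
library(survival)  # survfit() and Surv() for the Kaplan-Meier estimate of G

## Kaplan-Meier estimate of the censoring distribution G(t),
## obtained by reversing the roles of events and censorings
censoring_km <- function(time, status) {
  sf <- survfit(Surv(time, 1 - status) ~ 1)
  stepfun(sf$time, c(1, sf$surv))
}

## Smoothed C-index (3) and its gradient (4):
## 'time'   observed survival times,
## 'status' event indicator (1 = event, 0 = censored),
## 'eta'    current marker combination,
## 'sigma'  smoothing parameter of the sigmoid K(u)
smooth_cindex <- function(time, status, eta, sigma = 0.1) {
  G   <- censoring_km(time, status)(time)
  ipc <- ifelse(status == 1, 1 / G^2, 0)       # Delta_i * G(T_i)^(-2)
  W   <- ipc * outer(time, time, "<")          # row i of I(T_i < T_j) scaled by ipc[i]
  W   <- W / sum(W)                            # normalized weights w_ij
  D   <- outer(eta, eta, "-")                  # eta_i - eta_j
  S   <- 1 / (1 + exp(-D / sigma))             # sigmoid approximation of I(eta_i > eta_j)
  list(value    = sum(W * S),                                 # C_smooth as in (3)
       gradient = rowSums(W * S * (1 - S)) / sigma)           # derivative (4), via K'(u) = K(u)(1 - K(u))/sigma
}
```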
The complete component-wise gradient boosting algorithm for the optimization of the smoothed C-index is given as follows (for details see [3]):

(1) Initialize the estimate of the marker combination η̂^[0] with offset values. For example, set η̂^[0] = 0, leading to β̂_l^[0] = 0 for all components l = 1, ..., p. Choose a sufficiently large maximum number of iterations m_stop and set the iteration counter m to 1.

(2) Compute the negative gradient vector by using formula (4) and evaluate it at the marker combination η̂^[m−1] of the previous iteration:

U^[m] = (U_i^[m])_{i=1,...,n} = ( ∂ Ĉ_smooth(T, η̂) / ∂ η̂_i |_{η̂ = η̂^[m−1]} )_{i=1,...,n}.

(3) Fit the negative gradient vector U^[m] separately to each of the components of X via the base-learners b_l(·), yielding the estimates b̂_l^[m](x_l) for l = 1, ..., p.

(4) Select the component l* that best fits the negative gradient vector according to the least squares criterion, i.e., select the base-learner b̂_{l*} defined by

l* = argmin_{1 ≤ l ≤ p} Σ_{i=1}^{n} ( U_i^[m] − b̂_l^[m](x_{l,i}) )^2.

(5) Update the marker combination η̂ for this component:

η̂^[m] = η̂^[m−1] + sl · b̂_{l*}^[m](x_{l*}),

where sl is a small step length (0 < sl ≪ 1). A common choice for this value is sl = 0.1; as a result, only 10% of the fit of the base-learner is added to the current model [4, 5]. As only the base-learner b̂_{l*} was selected, only the coefficient of component l* is updated, while all other coefficients β̂_l^[m], l ≠ l*, remain unchanged.

(6) Unless m = m_stop, increase m by one and go back to step (2).
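To illustrate how these steps fit together, a bare-bones version of the component-wise loop with simple (no-intercept) linear least-squares base-learners could look as follows. This is only a sketch that reuses the smooth_cindex() helper from above; boost_cindex() and all settings are illustrative, and in practice the mboost implementation described in the next section should be used.

```r
## Minimal component-wise gradient boosting for the smoothed C-index (illustration only)
boost_cindex <- function(X, time, status, mstop = 100, sl = 0.1, sigma = 0.1) {
  p    <- ncol(X)
  beta <- rep(0, p)                # step (1): offset eta^[0] = 0, all coefficients 0
  eta  <- rep(0, nrow(X))
  for (m in seq_len(mstop)) {      # iterate steps (2)-(5) until m = mstop
    U <- smooth_cindex(time, status, eta, sigma)$gradient      # step (2): negative gradient
    ## step (3): fit U separately to each component by least squares (no intercept)
    coefs <- apply(X, 2, function(x) sum(U * x) / sum(x^2))
    rss   <- sapply(seq_len(p), function(l) sum((U - coefs[l] * X[, l])^2))
    lstar <- which.min(rss)                                     # step (4): best-fitting component
    beta[lstar] <- beta[lstar] + sl * coefs[lstar]              # step (5): update only component l*
    eta <- eta + sl * coefs[lstar] * X[, lstar]
  }
  list(coefficients = beta, eta = eta)
}
```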

Implementation
The most flexible implementation of gradient boosting for statistical modelling, which is also relatively easy to extend [6], is the mboost [7] add-on package for the Open Source programming environment R [8]. For a tutorial on how to apply the package in practical data analysis, see [4].
For stability selection we apply the stabsel() function from the stabs package [9], which is also integrated with mboost for boosting models. It provides an implementation of the classical approach proposed by Meinshausen and Bühlmann [10] as well as the extended sampling scheme of Shah and Samworth [11]. To evaluate the discriminatory power of the resulting model on test data, we use the UnoC() function of the survAUC package [12].
To apply gradient boosting to fit linear statistical models that are optimal with respect to the C-index in the version of Uno et al. [1], one needs a Cindex() family implementing the smoothed C-index loss, which can then be used with the glmboost() function; for details see Mayr and Schmid [3].
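The full family listing is not reproduced here. As a rough sketch of how such a family could be set up with mboost::Family(), reusing the smooth_cindex() helper from above, one might write the following; the name Cindex_sketch is chosen purely for illustration, and observation weights are ignored in this sketch, so the complete Cindex() family described by Mayr and Schmid [3] should be preferred in practice.

```r
library(mboost)

## Sketch of a custom mboost family for the smoothed C-index, built with Family();
## the response y is a Surv object, i.e., y[, 1] holds the times and y[, 2] the event indicator
Cindex_sketch <- function(sigma = 0.1) {
  Family(
    ngradient = function(y, f, w = 1) {            # negative gradient of the loss -C_smooth
      if (length(f) == 1) f <- rep(f, nrow(y))     # f is a scalar offset in the first iteration
      smooth_cindex(y[, 1], y[, 2], f, sigma)$gradient
    },
    risk = function(y, f, w = 1) {                 # empirical risk = -C_smooth
      if (length(f) == 1) f <- rep(f, nrow(y))
      -smooth_cindex(y[, 1], y[, 2], f, sigma)$value
    },
    offset = function(y, w) 0,                     # start from eta^[0] = 0
    name = "smoothed concordance index (sketch)"
  )
}
```

Such a family object can then be passed to glmboost() via its family argument, in the same way as the Cindex() family is used in the example below.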

Example
We will briefly demonstrate how to apply the Cindex() family in practice in combination with stability selection to derive an optimal combination of biomarkers. We will use the data set of van de Vijver et al. [13], comprising 144 lymph node positive breast cancer patients. The data set is publicly available as part of the R add-on package penalized [14]. The 70-gene signature for metastasis-free survival after surgery was originally developed by van 't Veer et al. [15].
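A compact end-to-end sketch of such an analysis could look as follows, assuming the Cindex() family discussed above is available in the installed mboost version. The column layout of the nki70 data (survival information first, followed by clinical covariates and the 70 genes), the train/test split, and all tuning values (mstop, nu, q, PFER) are illustrative assumptions, not the settings used for the results reported in the paper.

```r
library(penalized)   # provides the nki70 breast cancer data [13, 14]
library(mboost)
library(stabs)
library(survAUC)
library(survival)

data(nki70)
## keep the survival outcome and use the 70 gene expression measurements as predictors
## (dropping the clinical covariates; the column positions are an assumption about the layout)
dat <- data.frame(time = nki70$time, event = nki70$event, nki70[, -(1:7)])

set.seed(1234)
train <- sample(nrow(dat), 100)                   # illustrative split into training and test data

## C-index boosting with component-wise linear base-learners
fit <- glmboost(Surv(time, event) ~ ., data = dat[train, ],
                family = Cindex(),
                control = boost_control(mstop = 500, nu = 0.1))

## stability selection: at most q variables per subsample, per-family error rate PFER <= 1
sel <- stabsel(fit, q = 10, PFER = 1)
selected(sel)                                     # names of the stable predictors

## discriminatory power on the test data, measured by Uno's C
lpnew <- as.vector(predict(fit, newdata = dat[-train, ]))
UnoC(Surv(dat$time[train], dat$event[train]),
     Surv(dat$time[-train], dat$event[-train]), lpnew)
```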