- Methodology article
- Open Access
A boosting method for maximizing the partial area under the ROC curve
© Komori and Eguchi; licensee BioMed Central Ltd. 2010
- Received: 24 February 2010
- Accepted: 10 June 2010
- Published: 10 June 2010
The receiver operating characteristic (ROC) curve is a fundamental tool for assessing the discriminant performance of not only a single marker but also a score function combining multiple markers. The area under the ROC curve (AUC) for a score function measures the intrinsic ability of the score function to discriminate between controls and cases. Recently, the partial AUC (pAUC) has received more attention than the AUC, because the analysis can be focused on a range of the false positive rate suited to the clinical situation at hand. However, existing pAUC-based methods handle only a few markers and do not take nonlinear combinations of markers into consideration.
We have developed a new statistical method that focuses on the pAUC based on a boosting technique. The markers are combined componentwise to maximize the pAUC in the boosting algorithm, using natural cubic splines or decision stumps (single-level decision trees) according to whether the marker values are continuous or discrete. We show that the resulting score plots are useful for understanding how each marker is associated with the outcome variable. We compare the performance of the proposed boosting method with those of other existing methods, and demonstrate its utility using real data sets. The proposed method achieves much better discrimination performance, in the sense of the pAUC, in both simulation studies and real data analyses.
The proposed method addresses how to combine the markers after a pAUC-based filtering procedure in a high-dimensional setting. Hence, it provides a consistent way of analyzing data based on the pAUC, from marker selection to marker combination, for discrimination problems. The method can capture not only linear but also nonlinear associations between the outcome variable and the markers; such nonlinearity is known to be necessary, in general, for maximization of the pAUC. The method also emphasizes both classification accuracy and interpretability of the association, by offering simple and smooth score plots for each marker.
- Receiver Operating Characteristic Curve
- False Positive Rate
- Score Function
- True Positive Rate
- Score Plot
The receiver operating characteristic (ROC) curve has been widely used in various scientific fields, in situations where the evaluation of discrimination performance is of great concern. The area under the ROC curve (AUC) is the most popular metric because it has a simple probabilistic interpretation and consists of two important rates used to assess classification performance: the true positive rate (TPR) and the false positive rate (FPR). The former is the probability that an affected subject is correctly judged as positive; the latter is the probability that an unaffected subject is improperly judged as positive. These two rates have been shown to be more adequate for evaluating classification accuracy than the odds ratio or relative risk. However, the AUC has been severely criticized for the inconsistency that arises between statistical significance and the corresponding clinical significance when the usefulness of a new marker is evaluated. Recently, Pencina et al. propose a criterion termed integrated discriminant improvement and show its advantage over the AUC in the assessment of a new marker. In this context, the partial AUC (pAUC) has received more attention than the AUC, especially in clinical settings where a low FPR or a high TPR is required [5–7].
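As a small illustration of these two rates and of the probabilistic interpretation of the AUC (the scores below are hypothetical toy values, not data from this study):

```python
def tpr_fpr(cases, controls, threshold):
    """TPR = P(score >= threshold | case); FPR = P(score >= threshold | control)."""
    tpr = sum(s >= threshold for s in cases) / len(cases)
    fpr = sum(s >= threshold for s in controls) / len(controls)
    return tpr, fpr

def empirical_auc(cases, controls):
    """Mann-Whitney form of the AUC: P(case score > control score), ties counted 1/2."""
    pairs = [(c, d) for c in cases for d in controls]
    wins = sum(1.0 if c > d else 0.5 if c == d else 0.0 for c, d in pairs)
    return wins / len(pairs)

cases = [0.9, 0.8, 0.7, 0.4]
controls = [0.6, 0.5, 0.3, 0.2]
print(tpr_fpr(cases, controls, 0.65))   # at threshold 0.65: TPR 0.75, FPR 0.0
```

Sweeping the threshold over all values traces out the ROC curve, and `empirical_auc` is the area beneath it.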
Dodd and Pepe propose a regression modeling framework based on the pAUC, and apply it to investigate the relationship between test results and patient characteristics. Cai and Dodd modify the estimation to improve its efficiency for the parameters, and provide graphical tools for model checking. With regard to classification problems, Pepe and Thompson propose a method for deriving a linear combination of two markers that optimizes the AUC as well as the pAUC. However, as recognized by Pepe et al., more general approaches are required when the number of markers is large. Moreover, a nonlinear combination of markers is necessary to maximize the AUC as well as the pAUC even in a simple setting in which normality is assumed for the distribution of the markers. However, the existing methods [10, 13, 14] only deal with linear combinations of markers.
In this paper, we propose a new statistical method designed to maximize the pAUC, as an extension of AUCBoost , using a boosting technique and the approximate pAUC. The approximation-based method makes it possible to nonlinearly combine more than two markers, based on basis functions of natural cubic splines as well as decision stumps. The resultant score plots for each marker enable us to observe how the markers are associated with the outcome variable in a visually apparent way. Hence, our boosting method attaches importance not only to the classification performance but also to the interpretation of the results, which is essential in clinical and medical fields.
This paper is organized as follows. In the Methods section, we present a new boosting method for the maximization of the pAUC after giving a brief review of the pAUC and the approximate pAUC. Then, we show a relationship between the pAUC and the approximate pAUC in Theorem 1, which justifies the use of the approximate pAUC in the boosting algorithm. In the Results and Discussion section, we compare the proposed method with other existing ones such as SDF , AdaBoost , LogitBoost  and GAMBoost . In addition, we demonstrate the utility of the proposed method using real data sets; one of them is breast cancer data, in which we use both clinical and genomic data. In the last section, we summarize and make concluding remarks.
pAUC and approximate pAUC
Partial area under the ROC curve
For a fixed range [α1, α2] of the FPR, the pAUC of a score function F is defined as

$$\mathrm{pAUC}(F) = \int\!\!\int_{c_2 \le F(x_0) \le c_1} H\bigl(F(x_1) - F(x_0)\bigr)\, g_0(x_0)\, g_1(x_1)\, dx_0\, dx_1,$$

where c1 and c2 are the score thresholds at which the FPR equals α1 and α2, respectively; H is the Heaviside function: H(z) = 1 if z ≥ 0 and 0 otherwise; and g0(x) and g1(x) are the probability density functions given class 0 and class 1, respectively. Note that FPR and TPR also depend on the score function F; for the sake of simplicity, we suppress this dependence when it does not cause ambiguity.
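A minimal empirical counterpart of the pAUC, for the special case where the FPR range starts at 0, can be sketched as follows (hypothetical toy scores; the paper itself works with score thresholds and a smoothed approximation rather than this plain count):

```python
def empirical_pauc(cases, controls, fpr_max):
    """Empirical pAUC over FPR in [0, fpr_max]: Heaviside comparisons are
    restricted to the highest-scoring controls (those whose thresholds give
    FPR values up to fpr_max) and normalized by n0 * n1, so the largest
    attainable value is fpr_max itself."""
    n0, n1 = len(controls), len(cases)
    H = lambda z: 1.0 if z >= 0 else 0.0        # Heaviside, as in the text
    top_controls = sorted(controls, reverse=True)[: int(fpr_max * n0)]
    return sum(H(c - d) for d in top_controls for c in cases) / (n0 * n1)
```

With perfectly separated classes the value equals `fpr_max`, its maximum; the AUC is recovered by `fpr_max = 1`.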
pAUCBoost with natural cubic splines
Start with a score function F_0(x_i) = 0, i = 1, 2, ..., n, where n = n_0 + n_1.
For t = 1, ..., T
Based on this iterative procedure, we propose the pAUCBoost algorithm after defining the objective function.
without loss of generality.
The maximum value attained by a set of functions (F_1, F_2, ..., F_p) can be increased by replacing the functions with p sets of natural cubic splines. This can be proved in the same way as the corresponding result for generalized additive models, because the penalty term is the same. Hence, the maximizer of the pAUCBoost objective function is a natural cubic spline.
Using weak classifiers f ∈ ℱ, we construct a score function F for maximization of the pAUC. Note that the coefficient β cannot be determined independently of the weak classifier, so we denote it as β(f) in the following algorithm.
For t = 1, ..., T
Update β_{t-1}(f) to β_t(f) with a one-step Newton-Raphson iteration.
The dependency of β(f) on the thresholds makes it necessary to pick the best pair (β_t(f_t), f_t) simultaneously in step 2(c). This process is quite different from that of AdaBoost, in which β_t and f_t are determined independently in Equations (3) and (4). Because of this dependency, and the difficulty of obtaining an exact solution for β_t(f_t), a one-step Newton-Raphson calculation is conducted in the boosting process. The one-step Newton-Raphson update is also employed in LogitBoost and GAMBoost. The details of the pAUCBoost algorithm are given in Additional file 1: Details of the pAUCBoost algorithm.
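To make the one-step Newton-Raphson idea concrete, here is a sketch under two simplifying assumptions that are ours, not the paper's: the full FPR range is used (no pAUC restriction of the control sum), and the Heaviside function is smoothed by a sigmoid. `score`, `weak`, and the data are hypothetical placeholders.

```python
import math

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def one_step_newton(score, weak, cases, controls, beta=0.0, sigma=1.0):
    """One-step Newton-Raphson update of the coefficient beta for a fixed weak
    classifier f, maximizing the sigmoid-smoothed AUC-type objective
    sum_{i,j} sigmoid((F(x1j) - F(x0i) + beta*(f(x1j) - f(x0i))) / sigma)."""
    g = h = 0.0
    for x1 in cases:
        for x0 in controls:
            d = weak(x1) - weak(x0)
            u = (score(x1) - score(x0) + beta * d) / sigma
            s = sigmoid(u)
            g += s * (1.0 - s) * d / sigma                             # dU/dbeta
            h += s * (1.0 - s) * (1.0 - 2.0 * s) * d * d / sigma ** 2  # d2U/dbeta2
    if h == 0.0:
        return beta + g         # curvature vanishes: fall back to a gradient step
    return beta - g / h
```

Near a maximum the curvature `h` is negative, so `beta - g / h` moves uphill; boosting would call this once per iteration for each candidate weak classifier and keep the best pair.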
where F^{(-i)} denotes a score function generated from the data without the i-th subset, and the pAUC is calculated on the i-th subset only. The optimal parameters are obtained at the maximum value of pAUCcv(λ, T) over a set of grid points (λ, T). When the values of pAUCcv(λ, T) are unstable, we calculate pAUCcv(λ, T) 10 times and take the average to determine the optimal parameters. In our subsequent discussion, we set K = 10 and explicitly demonstrate the procedure in the section on real data analysis.
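The grid search over (λ, T) by K-fold cross-validation can be sketched as follows; `fit` and `pauc` stand in for the paper's fitting and pAUC-evaluation routines and are assumptions of this sketch.

```python
import random

def pauc_cv_select(fit, pauc, data, grid, k=10, seed=0):
    """K-fold cross-validation over a grid of (lambda, T) pairs, returning the
    pair with the largest averaged held-out pAUC.  `fit(train, lam, T)` must
    return a score function and `pauc(score, test)` its pAUC on held-out data
    (both are placeholders for the paper's procedures)."""
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]          # K roughly equal subsets
    best_pair, best_val = None, float("-inf")
    for lam, T in grid:
        total = 0.0
        for fold in folds:
            held_out = set(fold)
            train = [data[i] for i in idx if i not in held_out]
            test = [data[i] for i in fold]
            total += pauc(fit(train, lam, T), test)
        if total / k > best_val:
            best_pair, best_val = (lam, T), total / k
    return best_pair
```

Averaging several runs of this search with different shuffles corresponds to the stabilization step described above.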
Relationship between pAUC and approximate pAUC
We investigate the relationship between the pAUC and the approximate pAUC, which gives a theoretical justification of the use of the approximate pAUC in the pAUCBoost algorithm.
See Additional file 2: Proof of Theorem 1 and Corollary 1 for the details. Note that Theorem 1 also holds when the pAUC is approximated by a sigmoid function, so it also justifies the AUC-based methods of Ma and Huang and Wang et al. as a special case, where α1 = 0 and α2 = 1. As proved in Eguchi and Copas and McIntosh and Pepe, the likelihood ratio Λ(x) is the optimal score function that maximizes the AUC as well as the pAUC. In general, Bayes risk consistency has been well discussed under an assumption of convexity for a variety of loss functions. Theorem 1 suggests a weak version of Bayes risk consistency for the nonconvex function in the limiting sense.
We also have the following corollary of Theorem 1.
where η is a score function and γ is a scalar. For a fixed FPR of F_γη, the TPR of F_γη becomes an increasing function of γ if and only if η = m(Λ), where m is a strictly increasing function.
See Additional file 2: Proof of Theorem 1 and Corollary 1 for the details. Note that the corollary holds for any FPR in the range (0, 1). Hence, the score function that moves all TPRs upward from their original positions is exactly the optimal score function derived from the likelihood. This fact is not derived from the Neyman-Pearson fundamental lemma, from which m(Λ) is proved to maximize the AUC as well as the pAUC. The corollary thus characterizes another property of the optimal score function m(Λ).
In the simulation study, we apply pAUCBoost with α1 = 0 and α2 = 0.1. The training data set contains 50 controls and 50 cases, and the accuracy of the performance is evaluated over 100 repetitions using test data sets of size 1000 (500 for each class).
Comparison with SDF
Note that the ROC curve is invariant to a monotone transformation of the score function.
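This invariance is easy to verify empirically; with toy scores and the Mann-Whitney estimate of the AUC:

```python
import math

def empirical_auc(cases, controls):
    """Mann-Whitney estimate of the AUC (ties counted as 1/2)."""
    pairs = [(c, d) for c in cases for d in controls]
    return sum(1.0 if c > d else 0.5 if c == d else 0.0 for c, d in pairs) / len(pairs)

cases = [0.2, 1.4, 3.0, 0.9]
controls = [0.1, 0.8, 0.5]

# A strictly increasing transformation (here exp) preserves all pairwise
# orderings of the scores, so the ROC curve -- and hence the AUC and the
# pAUC -- is unchanged.
same = empirical_auc([math.exp(s) for s in cases],
                     [math.exp(s) for s in controls]) == empirical_auc(cases, controls)
```

The same holds for any strictly increasing m, which is why only the ranking induced by a score function matters for ROC analysis.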
Although a nonlinear transformation could be applied to the data in advance, it is not practical to examine all marginal distributions and decide on appropriate transformations in general situations. Hence, it is better for the method itself to take the nonlinearity into consideration, as done here.
It is interesting to note that almost the same results are obtained by these quite different statistical methods. SDF uses the estimated values of pAUC to derive a score function; on the other hand, pAUCBoost directly uses the empirical value of the approximate pAUC in the algorithm.
Comparison with other boosting methods
We focus only on the most practical situation in disease screening, such as the second situation in Figure 1. Pepe et al. show the utility of the pAUC in selecting potential genes that are useful for discriminating between normal and cancer tissues. The point is that the value of the pAUC reflects the overlap of the two distributions of controls and cases, so we can select genes that are suitable for further investigation. For example, some overexpressed genes encourage us to investigate the corresponding protein products. However, the task of how to combine the selected genes for better discrimination remains open.
Suppose we select 50 genes by a filtering procedure, and that they are closely correlated with each other, such that the 50-dimensional gene vectors for class 0 and class 1 follow multivariate normal distributions with covariance matrices Σ0 and Σ1, respectively. The covariance matrices are designed as Σ0 = 0.95 × W0 + 0.05 × I and Σ1 = 0.95 × W1 + 0.05 × I, where W0 and W1 are 50 × 50 matrices sampled from the Wishart distribution with the identity matrix and 10 degrees of freedom at every repetition of the simulation. The identity matrix I is added to avoid singularity of the covariance matrices. These matrices are normalized to have 1's on the diagonal, in a similar way to the simulation setting of Zhao and Yu, and the range of the correlations turns out to be between about -0.8 and 0.8. Then, we randomly replace 10 percent of the samples from class 1, for each gene, with samples from another distribution, so that each gene is informative in the sense of the pAUC, as shown in the second situation of Figure 1.
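The covariance construction above can be sketched as follows (reduced to 5 genes for brevity; the Wishart draw is replaced by the equivalent W = GGᵀ with a Gaussian G, and all names here are our own, not the paper's code):

```python
import math
import random

def simulated_correlation(p=5, dof=10, seed=1):
    """Sketch of the simulation's covariance design: W ~ Wishart(I, dof) via
    W = G G^T with G a p x dof standard Gaussian matrix, then
    Sigma = 0.95 * W + 0.05 * I to avoid singularity, and finally
    normalization to unit diagonal (a correlation matrix)."""
    rng = random.Random(seed)
    g = [[rng.gauss(0.0, 1.0) for _ in range(dof)] for _ in range(p)]
    w = [[sum(g[i][k] * g[j][k] for k in range(dof)) for j in range(p)]
         for i in range(p)]
    sigma = [[0.95 * w[i][j] + (0.05 if i == j else 0.0) for j in range(p)]
             for i in range(p)]
    return [[sigma[i][j] / math.sqrt(sigma[i][i] * sigma[j][j]) for j in range(p)]
            for i in range(p)]
```

Each simulation repetition would redraw G (hence W) for each class, giving a fresh positive-definite correlation structure among the genes.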
Mainly, there are two types of weak classifiers: smoothing splines and decision stumps. Bühlmann and Yu proposed the use of smoothing splines in the L2Boost algorithm, and Tutz and Binder used B-splines in GAMBoost. However, the way weak classifiers are fitted in pAUCBoost differs from those methods. Our algorithm updates the score function with a single basis function of a natural cubic spline for one marker in Equations (9) and (10), whereas their algorithms update the score function with a whole set of basis functions for one marker. Hence, our resultant score functions tend to have simpler forms (see the illustrations of score plots in the next section), which also leads to a simpler interpretation of the association between the markers and the outcome variable. Note that there is a trade-off between this simplicity and the number of markers necessary for good discrimination performance. However, the simplicity depends on the number of basis functions used for the selected markers, so more complicated associations can be expressed by increasing the number of basis functions.
In AdaBoost and LogitBoost, decision stumps are used as weak classifiers [29, 30]. The advantage of decision stumps is that the boosting methods can be applied independently of the scale of the marker values. Hence, a decision stump-based method is resistant to outliers, which often occur in real data. However, it easily suffers from false discovery, as clearly shown in the simulation studies. This causes poor performance in a setting where non-informative genes are mixed with informative ones. We have also confirmed that pAUCBoost with decision stumps as weak classifiers performs worse than pAUCBoost with natural cubic splines. Hence, we must be careful about which weak classifiers to employ; the choice depends on the types of markers and the purpose of the analysis.
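For concreteness, a decision stump fitted by exhaustive search might look like the following (unweighted misclassification count for simplicity; inside AdaBoost or LogitBoost the counts would be replaced by example weights):

```python
def best_stump(xs, ys):
    """Exhaustive search for the decision stump ("x >= threshold" vs
    "x < threshold") minimizing the number of misclassifications.
    Labels in ys are +1 / -1."""
    best = None
    for thr in sorted(set(xs)):
        for sign in (1, -1):                     # direction of the split
            errors = sum((sign if x >= thr else -sign) != y
                         for x, y in zip(xs, ys))
            if best is None or errors < best[0]:
                best = (errors, thr, sign)
    return best  # (misclassifications, threshold, direction)
```

Because only the ordering of the marker values enters the search, any strictly increasing rescaling of a marker leaves the achievable error unchanged, which is the scale invariance noted above.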
Breast cancer data
The breast cancer data of van't Veer et al. contain not only gene expression profiles but also clinical markers: Age, age of the patient; Size, diameter of the breast cancer; Grade, tumour grade; Angi, existence or nonexistence of angioinvasion; ERp, ER expression; PRp, PR expression; and Lymp, existence or nonexistence of lymphocytic infiltrate. First, we apply AUCBoost to these clinical markers and investigate their utility. The weak classifiers we use are natural cubic splines for the continuous markers (Age and Size) and decision stumps for the discrete or categorical markers. Second, we apply pAUCBoost with α1 = 0 and α2 = 0.1 to the gene expression data after the pAUC-based filtering procedure proposed by Pepe et al. The training and test data sets are the same as those in van't Veer et al.: 44 patients with good prognosis and 34 patients with distant metastases for the training data, and 7 and 12 patients, respectively, for the test data.
The top 30 genes ranked by the probability of gene selection, and the values of the pAUC and AUC.
Ovarian cancer data
We have developed the pAUCBoost algorithm to maximize the pAUC, based on the approximate pAUC, in an additive model. The use of the approximate pAUC is justified by the relationship with the pAUC shown in Theorem 1.
The resultant score function is decomposed componentwise into functions that are useful for understanding the association between each marker and the outcome variable, as shown in the real data analysis. Natural cubic splines, which give the maximum of the pAUCBoost objective function, are used for markers taking continuous values. In addition, by using decision stumps for markers that take discrete or categorical values, the proposed method enables us to treat various kinds of markers together.
We have also provided a consistent way to analyze gene expression data in the sense of the pAUC, as shown in the analyses of the breast cancer, ovarian cancer, and leukemia data. The pAUC is shown to be useful by Pepe et al. for selecting informative genes, some of which are overexpressed or underexpressed in cancer tissues. However, how to combine the selected genes and how to discriminate cancer tissues from normal tissues had not been addressed. We nonlinearly combined the genes ranked by the pAUC to produce a score function, by which the classification of controls and cases is done. Interestingly, we found 4 genes in common with the 70 genes of van't Veer et al.: Contig63649_RC, AA555029_RC, Contig40831_RC, NM_L006931. Six of the selected 11 genes are related to protein coding. We also applied pAUCBoost to the 70 genes for comparison with the result from the 11 genes, and found that it yielded a poor result, especially in the value of the pAUC for the test data. Hence, pAUCBoost with the FPR restricted to be small should be applied to genes or markers that have gone through a pAUC-based filtering procedure beforehand. In the usual analysis setting, in which markers do not have especially high values of the pAUC, AUCBoost is preferable because of its stable performance, due to the comprehensive information it takes into the algorithm.
The authors thank Professor John Copas, who kindly gave us useful comments and suggestions on this paper. This study was supported by the Program for Promotion of Fundamental Studies in Health Sciences of the National Institute of Biomedical Innovation (NIBIO).
- Bamber D: The area above the ordinal dominance graph and the area below the receiver operating characteristic graph. Journal of Mathematical Psychology 1975, 12: 387–415. 10.1016/0022-2496(75)90001-2
- Pepe MS, Janes H, Longton G, Leisenring W, Newcomb P: Limitation of the odds ratio in gauging the performance of a diagnostic, prognostic, or screening marker. American Journal of Epidemiology 2004, 159: 882–890. 10.1093/aje/kwh101
- Cook NR: Use and misuse of the receiver operating characteristic curve in risk prediction. Circulation 2007, 115: 928–935. 10.1161/CIRCULATIONAHA.106.672402
- Pencina MJ, D'Agostino RB Sr, D'Agostino RB Jr, Vasan RS: Evaluating the added predictive ability of a new marker: From area under the ROC curve to reclassification and beyond. Statistics in Medicine 2008, 27: 157–172. 10.1002/sim.2929
- Baker SG: The central role of receiver operating characteristic (ROC) curves in evaluating tests for the early detection of cancer. Journal of the National Cancer Institute 2003, 95: 511–515. 10.1093/jnci/95.7.511
- Qi Y, Joseph ZB, Seetharaman JK: Evaluation of different biological data and computational classification methods for use in protein interaction prediction. Proteins: Structure, Function, and Bioinformatics 2006, 63: 490–500. 10.1002/prot.20865
- Hadjiiski L, Sahiner B, Chan HP, Petrick N, Helvie M: Classification of malignant and benign masses based on hybrid ART2LDA approach. IEEE Transactions on Medical Imaging 1999, 18: 1178–1187. 10.1109/42.819327
- Dodd LE, Pepe MS: Partial AUC estimation and regression. Biometrics 2003, 59: 614–623. 10.1111/1541-0420.00071
- Cai T, Dodd LE: Regression analysis for the partial area under the ROC curve. Statistica Sinica 2008, 18: 817–836.
- Pepe MS, Thompson ML: Combining diagnostic test results to increase accuracy. Biostatistics 2000, 1: 123–140. 10.1093/biostatistics/1.2.123
- Pepe MS, Cai T, Longton G: Combining predictors for classification using the area under the receiver operating characteristic curve. Biometrics 2006, 62: 221–229. 10.1111/j.1541-0420.2005.00420.x
- Komori O: A boosting method for maximization of the area under the ROC curve. Annals of the Institute of Statistical Mathematics 2009.
- Ma S, Huang J: Regularized ROC method for disease classification and biomarker selection with microarray data. Bioinformatics 2005, 21: 4356–4362. 10.1093/bioinformatics/bti724
- Wang Z, Chang YI, Ying Z, Zhu L, Yang Y: A parsimonious threshold-independent protein feature selection method through the area under receiver operating characteristic curve. Bioinformatics 2007, 23: 2788–2794. 10.1093/bioinformatics/btm442
- Freund Y, Schapire RE: A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences 1997, 55: 119–139. 10.1006/jcss.1997.1504
- Friedman J, Hastie T, Tibshirani R: Additive logistic regression: a statistical view of boosting. The Annals of Statistics 2000, 28: 337–407. 10.1214/aos/1016218223
- Tutz G, Binder H: Generalized additive modeling with implicit variable selection by likelihood-based boosting. Biometrics 2006, 62: 961–971. 10.1111/j.1541-0420.2006.00578.x
- Dodd LE: Regression methods for areas and partial areas under the ROC curve. Ph.D. thesis, University of Washington; 2001.
- Pepe MS: The Statistical Evaluation of Medical Tests for Classification and Prediction. New York: Oxford University Press; 2003.
- Eguchi S, Copas J: A class of logistic-type discriminant functions. Biometrika 2002, 89: 1–22. 10.1093/biomet/89.1.1
- Bühlmann P, Yu B: Boosting with the L2 loss: regression and classification. Journal of the American Statistical Association 2003, 98: 324–339. 10.1198/016214503000125
- Murata N, Takenouchi T, Kanamori T, Eguchi S: Information geometry of U-boost and Bregman divergence. Neural Computation 2004, 16: 1437–1481. 10.1162/089976604323057452
- Hastie T, Tibshirani R: Generalized Additive Models. Chapman & Hall; 1990.
- McIntosh MW, Pepe MS: Combining several screening tests: Optimality of the risk score. Biometrics 2002, 58: 657–664. 10.1111/j.0006-341X.2002.00657.x
- Lugosi G, Vayatis N: On the Bayes-risk consistency of regularized boosting methods. The Annals of Statistics 2004, 32: 30–55.
- Neyman J, Pearson ES: On the problem of the most efficient tests of statistical hypotheses. Philosophical Transactions of the Royal Society of London. Series A 1933, 231: 289–337. 10.1098/rsta.1933.0009
- Pepe MS, Longton G, Anderson GL, Schummer M: Selecting differentially expressed genes from microarray experiments. Biometrics 2003, 59: 133–142. 10.1111/1541-0420.00016
- Zhao P, Yu B: Stagewise Lasso. Journal of Machine Learning Research 2007, 8: 2701–2726.
- Ben-Dor A, Bruhn L, Friedman N, Nachman I, Schummer M, Yakhini Z: Tissue classification with gene expression profiles. Journal of Computational Biology 2000, 7: 559–583. 10.1089/106652700750050943
- Dettling M, Bühlmann P: Boosting for tumor classification with gene expression data. Bioinformatics 2003, 19: 1061–1069. 10.1093/bioinformatics/btf867
- van't Veer LJ, Dai H, van de Vijver MJ, He YD, Hart AAM, Mao M, Peterse HL, van der Kooy K, Marton MJ, Witteveen AT, Schreiber GJ, Kerkhoven RM, Roberts C, Linsley PS, Bernards R, Friend SH: Gene expression profiling predicts clinical outcome of breast cancer. Nature 2002, 415: 530–536. 10.1038/415530a
- Golub TR, Slonim DK, Tamayo P, Huard C, Gaasenbeek M, Mesirov JP, Coller H, Loh ML, Downing JR, Caligiuri MA, Bloomfield CD, Lander ES: Molecular classification of cancer: Class discovery and class prediction by gene expression monitoring. Science 1999, 286: 531–537. 10.1126/science.286.5439.531
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.