The C^{1}C^{2}: A framework for simultaneous model selection and assessment
 Martin Eklund^{1},
 Ola Spjuth^{1} and
 Jarl ES Wikberg^{1}
https://doi.org/10.1186/1471-2105-9-360
© Eklund et al; licensee BioMed Central Ltd. 2008
Received: 07 April 2008
Accepted: 02 September 2008
Published: 02 September 2008
Abstract
Background
There has been recent concern regarding the inability of predictive modeling approaches to generalize to new data. Some of the problems can be attributed to improper methods for model selection and assessment. Here, we address this issue by introducing a novel and general framework, the C^{1}C^{2}, for simultaneous model selection and assessment. The framework relies on a partitioning of the data in order to separate model choice from model assessment in terms of the data used for each task. Since the number of conceivable models is in general vast, it was also of interest to investigate the employment of two automatic search methods, a genetic algorithm and a brute-force method, for model choice. As a demonstration, the C^{1}C^{2} was applied to simulated and real-world datasets. A penalized linear model was assumed to reasonably approximate the true relation between the dependent and independent variables, thus reducing the model choice problem to a matter of variable selection and choice of penalizing parameter. We also studied the impact of assuming prior knowledge about the number of relevant variables on model choice and generalization error estimates. The results obtained with the C^{1}C^{2} were compared to those obtained by employing repeated K-fold cross-validation for choosing and assessing a model.
Results
The C^{1}C^{2} framework performed well at finding the true model in terms of choosing the correct variable subset and producing reasonable choices for the penalizing parameter, even in situations when the independent variables were highly correlated and when the number of observations was less than the number of variables. The C^{1}C^{2} framework was also found to give accurate estimates of the generalization error. Prior information about the number of important independent variables improved the variable subset choice but reduced the accuracy of generalization error estimates. Using the genetic algorithm worsened the model choice, but not the generalization error estimates, compared to using the brute-force method. The results obtained with repeated K-fold cross-validation were similar to those produced by the C^{1}C^{2} in terms of model choice; however, a lower accuracy of the generalization error estimates was observed.
Conclusion
The C^{1}C^{2} framework was demonstrated to work well for finding the true model within a penalized linear model class and for accurately assessing its generalization error, even for datasets with many highly correlated independent variables, a low observation-to-variable ratio, and deviations from model assumptions. A complete separation of the model choice and the model assessment in terms of the data used for each task improves the estimates of the generalization error.
Background
A common task in computational biology/bioinformatics and computational chemistry/chemometrics (hereafter abbreviated BBCC) is to model a dependent variable from a set of independent variables; this gives insight into the workings of the process being modeled and enables prediction of future observations. Typical examples include analyzing potential drug activity through proteochemometrics and quantitative structure-activity relationship (QSAR) modeling [1–3], discovering gene regulatory binding-site modules [4], and predicting clinical outcomes of cancer from gene expression data [5]. However, recent articles have indicated that predictive modeling approaches have not fully met expectations for solving real problems. This issue has, for instance, been discussed in the fields of QSAR [6] and microarray gene expression data modeling [7, 8]. While some of the problems may be attributed to incorrect use and interpretation of the models, others can be ascribed to improper model selection and assessment. Our aim here is to address the latter issue by introducing the C^{1}C^{2}, a general framework for model choice and assessment.
where D_{-hk} denotes the dataset D with the k th and h th subsets omitted. In the present work we build on and generalize this idea into the C^{1}C^{2} framework. In general, the number of models in $\mathcal{M}$ is huge, so it is infeasible to go through even a small subset of them manually. Hence, for a framework such as the C^{1}C^{2} to be useful in practice, automated methods for searching the model space $\mathcal{M}$ are necessary; in this sense the C^{1}C^{2} is similar to the automatic modeling approaches taken in, for instance, [12–14]. Here, the specific use of the C^{1}C^{2} is demonstrated by applying it together with two search methods to simulated and real-world datasets. The results are compared to those obtained by employing the function (1) for model selection and assessment. In the interest of clarity, we have restricted our attention to the study of model choice and assessment within a linear model class, ${\mathcal{M}}^{ridge}$ (defined below), for a quadratic loss function. We discuss the results of the demonstrations and the pros and cons of the generality of the C^{1}C^{2}, and set out some directions for further research.
Results
Algorithm
Let C^{1}, C^{2} ∈ C = {C_{1},..., C_{J}}, where C is a set of model assessment criteria and C^{1}, C^{2} represent two specific criteria (i.e. C^{1} = C_{i}, C^{2} = C_{j}, i, j = 1,..., J). Further, let S ∈ S, where S is a set of search methods; let L ∈ L, where L is a set of loss functions; let G denote a sequence of data processing steps (e.g. mean-centering, transformations, whitening, etc.) and let G' contain the results of G applied to D_{-k} (the roles of G and G' are exemplified in the discussion following the pseudocode below); let R be a positive integer and K an integer between 1 and n, where n is the number of observations. The C^{1}C^{2} procedure is outlined in the following pseudocode:
Initiate $\mathcal{M}$, G, L, C^{1}, C^{2}, R, and K.
for (r in 1,..., R) {
a. Partition data, D = {D_{k}}_{k = 1,..., K}.
for (k in 1,..., K) {
b. Apply G to D_{-k}. Save results in G'.
c. Search $\mathcal{M}$ using the data D_{-k} and C^{2} as objective function. Assume ${\mathcal{M}}_{l}$ is found to maximize (or minimize) C^{2}. Save ${\mathcal{M}}_{l}$.
d. Apply G to D_{k} using G'.
e. Assess ${\mathcal{M}}_{l}$ using C^{1} and D_{k}. Save assessment result.
}
}
The data partitioning in (a) separates data for the model choice from data for the model assessment. Note that the partitioning depends on the choice of C^{1} and does not necessarily need to be done in a cross-validation fashion. For instance, the choice C^{1} = ".632 estimator" [15, 16] partitions the data by independently sampling n rows from D with replacement and lets the observations not included among the sampled observations constitute the test set. The output from the C^{1}C^{2} is also dependent on the choice of C^{1}; for example, the choice C^{1} = C^{2} = Bayesian Information Criterion (BIC, see [17] and Methods) would not give a direct estimate of the generalization error, but rather an assessment of model overfitting. To clarify the roles of G and G', we give the following example: Let G contain only a processing step that scales to unit variance. In (b), G is applied to D_{-k} and the standard deviation of each column of D_{-k} is saved in G'. In (d), G' is applied to D_{k}; that is, the columns in D_{k} are scaled using the standard deviations calculated in (b). This treatment of G ensures that D_{k} indeed constitutes an independent test set. The 'for loop' over r is introduced to enable calculation of confidence intervals for estimates and, by averaging the estimates over R repetitions, it permits reduction of the variance in parameter and error (or overfitting) estimates by a factor of 1/R.
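The procedure above can be sketched in code. The following is a minimal illustration only, not the authors' implementation: the callables `search` and `assess` are hypothetical stand-ins for the search method S driven by C^{2} and for the criterion C^{1}, and G is represented by mean-centering of x fitted on D_{-k} and re-applied to D_{k}:

```python
import random

def c1c2(D, K, R, search, assess):
    """Minimal sketch of the C1C2 loop over (r, k). `search` stands in
    for S driven by C2; `assess` stands in for C1. G is represented by
    mean-centering of x, fitted on D_{-k} only and re-used on D_k."""
    results = []
    for r in range(R):
        random.shuffle(D)                       # (a) partition D into K folds
        folds = [D[k::K] for k in range(K)]
        for k in range(K):
            train = [row for j, fold in enumerate(folds) if j != k for row in fold]
            test = folds[k]
            # (b) apply G to D_{-k}; save the fitted parameter in G'
            g_prime = sum(x for x, _ in train) / len(train)
            # (c) search the model space using D_{-k} only
            model = search([(x - g_prime, y) for x, y in train])
            # (d) apply G to D_k using G'; (e) assess with C1 on D_k
            results.append(assess(model, [(x - g_prime, y) for x, y in test]))
    return results
```

The returned list holds the R·K individual assessment results, from which averages and confidence intervals can be computed as described in the text.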
Datasets used in the testing
Both the simulated and the real data used for evaluating a new method or algorithm should reflect typical dataset properties found in real-world application domains. Examples of such properties in BBCC are multicollinearity, a large number of independent variables relative to the number of observations, and binary and categorical independent variables.
Simulated data
We simulated datasets as follows:
Let Δ = (δ_{i})_{i = 1,..., p}, where ${\delta}_{i}=\begin{cases}1 & \text{if } x_i \text{ is in the model}\\ 0 & \text{otherwise,}\end{cases}$ represent a subset of X_{n}, and let β_{p}(Δ) = (β_{i}(δ_{i}))_{i = 1,..., p}, where ${\beta}_{i}({\delta}_{i})=\begin{cases}{\beta}_{i}\ne 0 & \text{if } {\delta}_{i}=1\\ 0 & \text{otherwise,}\end{cases}$ be a vector of regression coefficients. The data matrix X_{n} was sampled from a twenty-dimensional multivariate normal distribution. Thus, x_{i} ~ N_{20}(0_{20}, Σ_{20}), i = 1,..., n, where 0_{20} is a twenty-dimensional vector of zeroes and Σ_{20} is either the identity matrix I_{20} or a covariance matrix, S_{20}, estimated from a real in-house QSAR dataset that originated from HIV protease inhibitors. The HIV QSAR dataset contains highly correlated independent variables, resulting in an S_{20} with many large absolute values in the off-diagonal elements. Three δ_{i} in Δ were chosen to be nonzero and equal to one; the positions were chosen at random to be 11, 14, and 18 (but remained fixed throughout the experiment for evaluation purposes). The corresponding regression coefficients, β_{11}(δ_{11}), β_{14}(δ_{14}), and β_{18}(δ_{18}), were, for simplicity, all set equal to 1. The variables 11, 14, and 18 were slightly correlated, with an estimated covariance matrix ${\Sigma}_{(11,14,18)}=\left(\begin{array}{ccc}4.07 & 15.28 & 0.01\\ 15.28 & 3423.20 & 0.33\\ 0.01 & 0.33 & 0.01\end{array}\right).$
The four simulated datasets correspond to the following combinations of n and Σ_{20}:
 1. n = 15, Σ_{20} = I_{20}
 2. n = 200, Σ_{20} = I_{20}
 3. n = 15, Σ_{20} = S_{20}
 4. n = 200, Σ_{20} = S_{20}
The simulated datasets are available in CSV format from Additional files 1, 2, 3, 4.
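For concreteness, the orthogonal setting (Σ_{20} = I_{20}, n = 200) could be generated along the following lines. This is an illustrative sketch: the random seed and the noise standard deviation are assumptions, as the error variance is not stated in this passage.

```python
import numpy as np

# Illustrative simulation of one dataset; seed and noise sd are assumptions.
rng = np.random.default_rng(0)

p, n = 20, 200
Sigma = np.eye(p)                    # the orthogonal case, Sigma_20 = I_20
beta = np.zeros(p)
beta[[10, 13, 17]] = 1.0             # variables 11, 14, and 18 (0-indexed here)

# x_i ~ N_20(0_20, Sigma_20), stacked into the n x p data matrix X_n
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
y = X @ beta + rng.normal(0.0, 1.0, size=n)   # additive Gaussian noise
```

The correlated case would replace `Sigma` with an estimate S_{20} from real QSAR data, which is not reproduced here.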
The Selwood dataset
This is a real dataset, made available from a website [18] and originally published in 1990 [19]. It is a widely studied dataset (see [20, 21] and references therein). It contains one dependent variable, 53 independent variables, and 31 observations. The 53 independent variables correspond to numerical descriptions of molecules (antifilarial antimycin analogues) designed to capture their physicochemical properties. The dependent variable is the in vitro antifilarial activity of the molecules. This dataset exhibits extremely strong correlations between the independent variables and contains real-valued, binary, and categorical independent variables. It is known from previous studies that this dataset contains nonlinearities, but that decent models can be found using linear methods.
Testing
To demonstrate the use of the C^{1}C^{2}, it was applied to the simulated and real-world datasets described above (hereafter referred to as "the datasets"). Below we describe the choices for R, K, $\mathcal{M}$, G, L, C^{1}, C^{2}, and S and the motivation for each selection.
Choice of R and K
The larger the choice of R, the higher the accuracy in the estimate of the generalization error; the choice of R is thus constrained by time and is dependent on the size of the dataset and the computational complexity of the choices of $\mathcal{M}$, G, L, C^{1}, C^{2}, and S. The choice of K is a tradeoff between bias and variance; the larger the K, the more variance and the less bias in the estimates of the generalization error [22]. R was here set to 12 and K to 5.
Choice of $\mathcal{M}$
We make the assumption that a normal linear model forms a reasonable approximation of the data. This model is given by: y_{n} = X_{n}β_{p} + ε_{n}, where the subscripts denote the number of rows in a matrix, β_{p} are regression coefficients, and ε_{n} ~ N(0, σ^{2}I). Further, because n < p and the columns in X_{n} are highly correlated in some of the datasets, we decide to use the ridge estimator, ${\widehat{\beta}}_{p}^{ridge}$ (see Methods), of the regression coefficients β_{p}. Let ${\mathcal{M}}^{ridge}$ be the linear class of models given by: y_{n} = X_{n}β_{p} + ε_{n}, where β_{p} is estimated by ${\widehat{\beta}}_{p}^{ridge}$. We thus choose $\mathcal{M}={\mathcal{M}}^{ridge}$. The problem of model choice within ${\mathcal{M}}^{ridge}$ reduces to the problem of variable selection, i.e. choosing which subset of the p columns in X_{n} to include in the model, and the problem of choosing the ridge penalizing parameter λ (see Methods). Hence, letting Δ = (δ_{i})_{i = 1,..., p} (see simulated data above) represent a subset of X_{n}, we want to choose Δ and λ using the C^{1}C^{2} framework. A choice of Δ and λ for given values of r and k will be termed "an estimate" of Δ and λ, respectively, and be denoted $\widehat{\Delta}$ and $\widehat{\lambda}$. Averages of estimates over the K folds and the R repeats in the C^{1}C^{2} are denoted $\overline{\widehat{\Delta}}$ and $\overline{\widehat{\lambda}}$, respectively.
Choice of G
As the columns in the Selwood dataset are measured in different units using different scales, we choose to make G contain mean centering and scaling to unit variance processing steps.
Choice of L
As stated above, the loss function L is chosen to be the quadratic loss.
Choice of C^{1} and C^{2}
m = 1,..., M, where M is the number of models in ${\mathcal{M}}^{ridge}$. df in (5) is the number of free parameters in the model ${\mathcal{M}}_{m}^{ridge}$ (note that this is not equal to the number of parameters in the model ${\mathcal{M}}_{m}^{ridge}$; see for instance [16]). The choice of C^{1} is motivated by the fact that we wish to obtain a direct estimate, ${\widehat{\epsilon}}_{gen}$, of the generalization error, ε_{gen}, of our model choice. Provided that the assumptions behind BIC are fulfilled, the choice C^{2} = BIC has several advantages over C^{2} = cross-validation, including: a reduction of bias in parameter estimates [22], a reduction of variance by the Rao-Blackwell type relation derived in [23], and a drastic reduction of the computational cost of the procedure.
Choice of S
A genetic algorithm (GA) was chosen as a search method because it is very easy to adapt to different situations and in general effective for non-deterministic polynomial-time hard (NP-hard) combinatorial problems, such as the problem of estimating Δ [24]. A trial solution in the GA is here a varying-length chromosome that contains a real-valued number representing λ and a vector of integers representing the indices of the δ_{i} in Δ that are nonzero. The fitness function is our choice of C^{2}. For the simulated datasets, we also chose to run the C^{1}C^{2} with a brute-force search method: for each λ ∈ {0, 0.01, 0.02,..., 10}, an exhaustive search in the Δ space (i.e. an all-subset regression) was performed. This enabled comparisons between the GA method and a search method guaranteed to find the optimal model (given a specific objective function and the resolution and limits of λ).
Some remarks regarding the demonstration
To enable comparisons with the estimates of Δ, λ, and ε_{gen} obtained with repeated K-fold cross-validation, the demonstration described above was repeated with the function (1) used as a criterion for model choice and for assessing the model. Note that, since the C^{1}C^{2} includes the 'for loop' over r, (1) was repeated R = 12 times, each time with a new, independent data partitioning. This was done to facilitate an impartial comparison between the two methods.
The demonstration of the C^{1}C^{2} framework can be compared with the work of, for instance, Nicolotti and Carotti [20], where a genetic algorithm was employed to estimate Δ. In contrast to that approach, the C^{1}C^{2} framework completely separates model choice and assessment, whereby more accurate generalization error estimates are in general achieved. Further, the use of specific ad hoc objective functions is avoided by choosing C^{2} to be a formally derived model selection criterion, and simultaneous estimation of model parameters other than Δ (for example, the ridge parameter λ in the demonstration) can be afforded. Typically, in works that have employed a search method for estimating Δ, a given number of nonzero δ_{i} in Δ is assumed (see for instance [20, 25]). It was therefore of interest to investigate the effect of this assumption on producing good estimates of Δ and ε_{gen}. This can be tested for the simulated datasets in the demonstration, where the number of nonzero δ_{i} is known. The C^{1}C^{2} was therefore applied to the simulated datasets both with and without an assumption about the number of nonzero δ_{i}. For simplicity (albeit somewhat unrealistically), we assumed the correct number of nonzero δ_{i}.
Results of the testing
Simulated datasets
The four simulated datasets in combination with the use of either the C^{1}C^{2} or repeated K-fold cross-validation for model choice and assessment, the GA or the brute-force search method, and either with or without the assumption of prior knowledge of the number of nonzero δ_{i}, constitute a two-level, five-factor, full factorial experimental design. The C^{1}C^{2} and the repeated K-fold cross-validation were applied four times to each factor combination, thus providing four replicates of the whole demonstration for the simulated data. The design can be analyzed within the normal linear model

w_{iv} = γ_{0} + γ_{1}z_{1i} + γ_{2}z_{2i} + γ_{3}z_{3i} + γ_{4}z_{4i} + γ_{5}z_{5i} + η_{i}, i = 1,..., 128, (6)
where z_{ji}, j = 1, 2, 3, 4, 5, are factors corresponding to: C^{1}C^{2} or repeated K-fold cross-validation for model choice and assessment; brute-force or GA search; Σ_{20} = I_{20} or Σ_{20} = S_{20} in the multivariate normal distribution from which the data was sampled; 200 or 15 observations; and assuming three nonzero δ_{i} or no such assumption, respectively. The index i runs from 1 to 128 in (6) as there are 32 factor combinations in four replicates. w_{iv}, v = 1, 2, 3, are response variables defined as follows: the Euclidean norm w_{i1} = ${\Vert \overline{\widehat{\Delta}}-\Delta \Vert}_{i}$ was used to measure how well Δ was estimated on average; w_{i2} = ${\overline{\widehat{\lambda}}}_{i}$ was used as a response variable in the λ case (as the correct choice of λ is not known); and w_{i3} = ${\overline{|{\widehat{\epsilon}}_{gen}-{\tilde{\epsilon}}_{gen}|}}_{i}$ was used to measure how well the generalization error was estimated on average. Here ${\widehat{\epsilon}}_{gen}$ denotes the estimate of ε_{gen} for given values of r and k; ${\tilde{\epsilon}}_{gen}=\frac{1}{N}{\sum}_{j=1}^{N}{({y}_{j,ext}-{\widehat{y}}_{j,ext}({x}_{j,ext},{\mathcal{M}}_{l}))}^{2}$ denotes the generalization error estimate obtained by using the corresponding choice of model, ${\mathcal{M}}_{l}$, to predict the response values in a large (N = 500,000) external test set, generated in the same way as the dataset used for choosing the model and estimating ${\widehat{\epsilon}}_{gen}$; and the bar denotes the average over the R·K individual estimates.

The generalization error can be decomposed into three parts: one irreducible error (corresponding to the error added when simulating the data), the squared bias, and the variance. The latter two are dependent on the model choice, and consequently the generalization error is dependent on the model choice.
We here assume that the large-sample estimate of the generalization error, ${\tilde{\epsilon}}_{gen}$, closely represents the true generalization error, ε_{gen}, for a given model choice.
Coefficient estimates of model (6) with w_{i1} = ${\Vert \overline{\widehat{\Delta}}-\Delta \Vert}_{i}$, i = 1, ..., 128, as the response variable.

            Estimate   Std. Error   t-value   Pr(>|t|)
intercept   0.01992    0.04606      0.433     0.6661
c1c2        0.04337    0.03761      1.153     0.2511
ga          0.15683    0.03761      4.170     5.72e-05
cor         0.07211    0.03761      1.918     0.0575
15          0.21324    0.03761      5.670     9.75e-08
all         0.32754    0.03761      8.710     1.78e-14
Coefficient estimates of model (6) with w_{i2} = ${\overline{\widehat{\lambda}}}_{i}$, i = 1, ..., 128, as the response variable.

            Estimate   Std. Error   t-value   Pr(>|t|)
intercept   0.02864    0.04732      0.605     0.546181
c1c2        0.04804    0.03863      1.244     0.216065
ga          0.07329    0.03863      1.897     0.060193
cor         0.56058    0.03863      14.510    < 2e-16
15          0.14307    0.03863      3.703     0.000321
all         0.11504    0.03863      2.977     0.003506
Coefficient estimates of model (6) with w_{i3} = ${\overline{|{\widehat{\epsilon}}_{gen}-{\tilde{\epsilon}}_{gen}|}}_{i}$, i = 1, ..., 128, as the response variable.

            Estimate   Std. Error   t-value   Pr(>|t|)
intercept   0.034642   0.004841     7.157     6.73e-11
c1c2        0.024003   0.003952     6.073     1.47e-08
ga          0.001149   0.003952     0.291     0.771710
cor         0.006198   0.003952     1.568     0.119423
15          0.013469   0.003952     3.408     0.000888
all         0.012089   0.003952     3.059     0.002732
Selwood dataset
Applying the C^{1}C^{2} to the Selwood dataset yielded $\widehat{\Delta}$ that on average contained 11.4 ± 3.5 nonzero ${\widehat{\delta}}_{i}$. The 11 most frequently picked variables were the partial atomic charges for atoms ATCH1, ATCH3, and ATCH6; the electrophilic superdelocalizability for atom ESDL3; the van der Waals volume VDWVOL; the surface area SURF_A; the principal moments of inertia MOFI_Y and MOFI_Z; the principal ellipsoid axis PEAX_X; the partition coefficient LOGP; and the sum of F substituent constants SUM_F (see [19] for more details about the variables). The estimate of λ given by $\overline{\widehat{\lambda}}$ was 2.50 ± 0.09 and the estimate of the generalization error given by ${\overline{\widehat{\epsilon}}}_{gen}$ was 0.42 ± 0.038, where ${\overline{\widehat{\epsilon}}}_{gen}$ is an average over the R·K ${\widehat{\epsilon}}_{gen}$ produced in the C^{1}C^{2}.
Applying repeated K-fold cross-validation for model choice and assessment to the Selwood dataset gave on average 14.1 ± 4.8 selected variables. The 14 most frequently picked variables included the same 11 variables picked by the C^{1}C^{2} (see above) plus DIPMOM (the dipole moment), ATCH7 (the partial charge of atom 7), and DIPV_Y (the dipole moment vector in the Y-direction). The estimate of λ given by $\overline{\widehat{\lambda}}$ was 3.01 ± 0.22 and the estimate of the generalization error given by $\overline{\widehat{\epsilon}}$ was 0.35 ± 0.041, where $\overline{\widehat{\epsilon}}$ is an average over the R·K $\widehat{\epsilon}$ produced in the repeated K-fold cross-validation.
Implementation
Computer programs to implement the C^{1}C^{2} were written in Java (Sun Microsystems [26]) as a part of the library P, which will serve as the data analysis plugin for Bioclipse [27]. P is available under the GSPL license from the website [28]; it is open source and free for academics. It has a modular architecture that enables plugging in new features, including modeling methods, model selection criteria, and search procedures. P relies on a modified version of the JGap library (available from the website [29]) for the genetic algorithm computations (the modifications are available under the LGPL license from the website [30]). The R package pvclust [31, 32] was used for the cluster analysis (see Discussion).
Discussion
Simulated datasets
The model (6) fitted to w = $\Vert \overline{\widehat{\Delta}}-\Delta \Vert$ (see Table 1) showed a relatively clear significant difference (at the 90% level) in average Δ estimates depending on whether the data came from a multivariate normal distribution with Σ_{20} = I_{20} or Σ_{20} = S_{20}. Furthermore, we observed significant positive impacts on average Δ estimates from more observations and from knowledge about the number of nonzero δ_{i}. All these findings were expected; highly correlated variables should provide worse estimates of Δ, whereas more observations and trustworthy prior knowledge should provide better estimates. A significant improvement in average Δ estimates was observed when using the brute-force search compared to the GA; the GA on average selected slightly more variables than needed, and more than the brute-force method did. No clear significant difference could be seen between using the C^{1}C^{2} and using repeated K-fold cross-validation.
It can be shown that the optimal choice of λ (in terms of minimized expected generalization error) tends to zero as the number of observations tends to infinity, and decreases with a decreasing number of variables (see for instance [33]) and with decreasing correlations between the independent variables. The model (6) fitted with w = $\overline{\widehat{\lambda}}$ as a response (see Table 2) showed that the average estimated λ was significantly smaller for the data that came from a multivariate normal distribution with Σ_{20} = I_{20} compared to Σ_{20} = S_{20}, when more observations were used, and when prior knowledge about the number of nonzero δ_{i} in Δ was assumed. Although the true value of λ is not known, these results are consistent with theory and provide evidence that both the C^{1}C^{2} and repeated K-fold cross-validation gave reasonable estimates of λ in the demonstration. However, the average λ estimates are not equal to zero for all orthogonal datasets, presumably due to the stochastic nature of the GA and to errors in y_{n}. No significant differences were observed between using the GA or the brute-force search method, or between the C^{1}C^{2} and the repeated K-fold cross-validation.
Selwood dataset
Models containing LOGP and one variable each from the green and blue clusters have high predictive power and comply with the QUICK rules for credible predictive models proposed previously [21]. Furthermore, these models have been found credible in the works of others [20, 25, 34]. The C^{1}C^{2} chose 11 variables belonging to the green, red, and blue clusters, whereas K-fold cross-validation chose an additional three variables: ATCH7 in the green cluster, and DIPMOM and DIPV_Y belonging to the third distinct (yellow) cluster. Variables from the yellow cluster do not improve the internal predictive ability when testing models containing LOGP and one variable from the respective green, blue, and yellow clusters on the whole Selwood dataset (data not shown); this result is supported by the work of Nicolotti and Carotti (see Table 1 in [20]). More variables in the Selwood dataset were thus on average selected with repeated K-fold cross-validation than with the C^{1}C^{2} (the difference was significant at the 80% level, tested by a one-sided Welch's t-test), including two that do not seem to improve the predictive ability of the models. The generalization error estimates obtained with the K-fold cross-validation were lower than those obtained with the C^{1}C^{2} (significant at the 70% level). Although these differences are not highly significant, it is tempting to conclude that the models selected by repeated K-fold cross-validation in this particular case are more prone to overfitting and that this is reflected in the lower generalization error estimates.
The C^{1}C^{2}
We have here introduced the C^{1}C^{2} framework for simultaneous model choice and assessment. The main idea is a complete separation of the choice of a model and its assessment in terms of the data used for each task. The C^{1}C^{2} was applied to the problem of choosing a model ${\mathcal{M}}_{l}^{ridge}\in {\mathcal{M}}^{ridge}$. Previously, others have described methods that, within the linear model, tackle the problem of regression coefficient shrinkage and variable selection simultaneously, for example the lasso [35]. However, the C^{1}C^{2} framework is general and is easily applied to other settings. For instance, different choices of C^{2} are favorable in different situations: Akaike's information criterion (AIC) [36] is known to be consistent within the linear model if the true model is very complex, whereas BIC is favorable within linear models of finite dimension [37], and cross-validation is preferable in situations where the degrees of freedom of a model are difficult to define, and so forth. The search method can also be tailored to the problem at hand; for instance, brute-force methods are advantageous for small problems, whereas GAs are faster and thus applicable to larger problems. Moreover, if required, $\mathcal{M}$ can be chosen to contain nonlinear models, L can be chosen to be exponential in order to increase the penalty on outliers, and instead of using the search method to produce an estimate of Δ (as we did in the demonstration) we can let G contain a dedicated variable selection method. The cost of this generality is uncharacterized convergence rates (in finite time) of the parameter estimates, which is coupled with the need to employ a general search method (such as a GA) rather than solving standard convex problems. Running the C^{1}C^{2} R times enables averaging of estimates and calculation of confidence intervals, but raises the problem of choosing which of the R models to use for interpretation and future predictions.
A potential remedy to these problems is, instead of choosing a single model, to employ all chosen models in a stacking-like scheme (see [38] for details on stacking). Testing this idea, and further testing of the C^{1}C^{2} for other choices of $\mathcal{M}$, G, L, C^{1}, C^{2}, and S, will be pursued in future research.
Conclusion
Summary of the demonstrations of the C^{1}C^{2}:

Both the C^{1}C^{2} and repeated K-fold cross-validation performed well at finding the true Δ (even when the independent variables are highly correlated and when n < p).

The C^{1}C^{2} and repeated K-fold cross-validation produced reasonable estimates of λ.

Prior information about the number of important independent variables improves model choice but can reduce the accuracy of generalization error estimates.

Correlated independent variables and using the genetic algorithm worsened the model choice significantly, but not the generalization error estimates.

The C^{1}C^{2} compares favourably with repeated K-fold cross-validation for assessing the generalization error.
Methods
Bayesian Information Criterion (BIC)
In the latter expression, ${\widehat{\theta}}_{m}$ is the maximum likelihood estimate, df_{m} is the number of free parameters in model ${\mathcal{M}}_{m}$ (note that this in general is not equal to the number of parameters in the model), and n is the number of observations [17]. Thus, BIC is an approximation to the Bayes solution, but is valid outside the Bayesian context. This is because the leading terms in the approximation do not depend on the prior densities of the model parameters, θ_{m}. BIC is, as opposed to nonparametric approaches such as cross-validation, model-based and therefore relies on the assumptions made in the modeling. BIC is derived under the assumption that the data come from a distribution in the exponential family (see [41] for more about the assumptions behind BIC and a comparison with Akaike's Information Criterion).
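For a Gaussian linear model, the −2 log-likelihood term in BIC reduces, up to an additive constant, to n·log(RSS/n), so the criterion can be computed directly from the residual sum of squares and the degrees of freedom. A minimal sketch, assuming this standard Gaussian reduction (the function name is illustrative):

```python
import math

def bic_gaussian(rss, n, df):
    """BIC for a Gaussian linear model, up to an additive constant:
    the -2 log-likelihood reduces to n * log(RSS / n), to which the
    df * log(n) complexity penalty is added."""
    return n * math.log(rss / n) + df * math.log(n)
```

For ridge models, `df` would be the effective degrees of freedom df(λ) defined in the Ridge regression section, not the raw parameter count.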
Ridge regression
The ordinary least squares (OLS) estimator of the regression coefficients β in the standard linear model is efficient (i.e. has the minimum possible variance) within the class of linear and unbiased estimators. However, when the independent variables are correlated, the variance of the OLS estimator is generally high. In these situations, ridge regression [42] can yield improved parameter estimates by minimizing a penalized residual sum of squares, given by: RSS(λ) = (y − Xβ)^{T}(y − Xβ) + λβ^{T}β. Finding the minimum of this expression gives the ridge solution: ${\widehat{\beta}}^{ridge}$ = (X^{T}X + λI)^{−1}X^{T}y, where I is the p × p identity matrix. The solution thus adds a positive constant to the diagonal of X^{T}X before inversion; this makes the problem nonsingular, even if X^{T}X is not of full rank. While this introduces bias into the coefficient estimates, the variance is often greatly reduced.
Note that ${\widehat{\beta}}^{ridge}$ is a linear function of y, so it is straightforward to define the effective degrees of freedom of the ridge regression fit, df(λ), as:
df(λ) = tr[X(X^{ T }X + λI)^{−1}X^{ T }] [16]. The degrees of freedom of the fit are needed for carrying out model selection according to, for instance, BIC. Linearity in y also enables easy implementation (no quadratic programming is required, as is necessary with, for instance, the lasso).
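The trace formula for the effective degrees of freedom translates directly into code; again a minimal illustrative sketch rather than the paper's implementation:

```python
import numpy as np

def ridge_df(X, lam):
    """Effective degrees of freedom: tr[X (X^T X + lam I)^{-1} X^T]."""
    p = X.shape[1]
    # hat matrix of the ridge fit; its trace is df(lambda)
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    return np.trace(H)
```

df(0) equals the rank of X (p for full column rank) and df(λ) decreases monotonically toward 0 as λ grows, so λ smoothly interpolates between the OLS fit and a maximally shrunken one; this is the df plugged into BIC during model selection.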
Genetic algorithm (GA)
A GA (see [43] for more details) is a stochastic search technique for finding exact or approximate solutions to optimization and search problems. A typical genetic algorithm is defined by a genetic representation of a given solution (normally termed a chromosome in the GA context), i.e. a vector ${w}_{t}^{i}$ specifying the numerical representation of the i th chromosome at generation t, and an objective function (or fitness function) f(${w}_{t}^{i}$) → ℝ that evaluates the fitness of a chromosome. The GA is initiated by setting up a random population that contains a number of trial chromosomes. New solutions are generated by mutation or recombination of existing solutions and are selected for the next generation with a probability given by: $p({w}_{t}^{i})=f({w}_{t}^{i})/{\sum}_{j}f({w}_{t}^{j})$. The process is continued through a number of generations until an optimal or acceptable solution has been found. Genetic algorithms of this type can be shown to converge with probability one to the globally optimal solution as t → ∞.
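The fitness-proportional selection rule above can be sketched as a minimal bit-string GA. Everything here (function names, mutation rate, absence of crossover) is illustrative, and a non-negative fitness function is assumed so the selection probabilities are well defined:

```python
import random

def mutate(w, rate):
    # flip each bit independently with the given probability
    return [b ^ 1 if random.random() < rate else b for b in w]

def evolve(pop, fitness, n_gen=100, mut_rate=0.05):
    """Minimal GA: fitness-proportional (roulette-wheel) selection
    followed by bit-flip mutation, iterated for n_gen generations."""
    for _ in range(n_gen):
        fits = [fitness(w) for w in pop]
        total = sum(fits)
        # p(w_i) = f(w_i) / sum_j f(w_j), the selection rule above
        weights = [f / total for f in fits]
        pop = [mutate(random.choices(pop, weights=weights)[0], mut_rate)
               for _ in pop]
    return max(pop, key=fitness)
```

In the C^{1}C^{2} setting a chromosome would encode a candidate variable subset (and penalty choice); recombination is omitted here for brevity.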
Brute force search
A brute force search systematically tests an exhaustive list of all possible candidates for the solution to a given search or optimization problem and checks whether each candidate satisfies the problem's statement.
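Applied to variable selection, a brute-force search amounts to scoring every subset of the p candidate variables. The sketch below uses a hypothetical `score` callback and is feasible only for small p, since there are 2^p − 1 non-empty subsets to test:

```python
from itertools import combinations

def brute_force_select(p, score):
    """Exhaustively score every non-empty subset of variables
    {0, ..., p-1} and return the best subset and its score."""
    best, best_score = None, float("-inf")
    for k in range(1, p + 1):
        for subset in combinations(range(p), k):
            s = score(subset)
            if s > best_score:
                best, best_score = subset, s
    return best, best_score
```

In the C^{1}C^{2} context, `score` would fit a penalized model on the chosen variables and return, for example, the negated BIC.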
Availability and requirements

Project name: P

Project homepage: http://www.genettasoft.com/p/P.zip

Operating systems: Platform independent (interpreted language)

Programming language: Java

Requirements: Java 5 or higher. A modified version of the JGAP package http://jgap.sourceforge.net/ for genetic algorithms. The modifications are distributed under the LGPL license and are available at http://www.genettasoft.com/p/JGAPm.zip. log4j, available from http://logging.apache.org/log4j/.

Licence: GSPL (see http://www.genettasoft.com/gspl/gspl1_1.pdf)

Restrictions to use for commercial purposes: licence needed
Declarations
Acknowledgements
Supported by the Swedish VR (04X05975). We extend our gratitude to three anonymous reviewers who helped to improve this paper.
Authors’ Affiliations
References
 Kontijevskis A, Prusis P, Petrovska R, Yahorava S, Mutulis F, Mutule I, Komorowski J, Wikberg JES: A look inside HIV resistance through retroviral protease interaction maps. PLoS Computational Biology 2007, 3(3).
 Wikberg JES, Lapinsh M, Prusis P: Proteochemometrics: A tool for modelling the molecular interaction space. In Chemogenomics in Drug Discovery – A Medicinal Chemistry Perspective. Edited by: Kubinyi H, Müller G. Weinheim: Wiley-VCH; 2004:289–309.
 Hansch C: A Quantitative Approach to Biochemical Structure-Activity Relationships. Accounts of Chemical Research 1969, 2:232–239. 10.1021/ar50020a002
 Hvidsten TR, Wilczynski B, Kryshtafovych A, Tiuryn J, Komorowski J, Fidelis K: Discovering regulatory binding-site modules using rule-based learning. Genome Res 2005, 15(6):856–866. 10.1101/gr.3760605
 van't Veer LJ, Dai H, van de Vijver MJ, He YD, Hart AA, Mao M, Peterse HL, van der Kooy K, Marton MJ, Witteveen AT, Schreiber GJ, Kerkhoven RM, Roberts C, Linsley PS, Bernards R, Friend SH: Gene expression profiling predicts clinical outcome of breast cancer. Nature 2002, 415(6871):530–536. 10.1038/415530a
 Johnson SR: The Trouble with QSAR (or How I Learned To Stop Worrying and Embrace Fallacy). J Chem Inf Model 2008, 25–26. 10.1021/ci700332k
 Michiels S, Koscielny S, Hill C: Prediction of cancer outcome with microarrays: a multiple random validation strategy. The Lancet 2005, 365(9458):488–492. 10.1016/S01406736(05)178660
 Ntzani EE, Ioannidis JPA: Predictive ability of DNA microarrays for cancer outcomes and correlates: an empirical assessment. The Lancet 2003, 362(9394):1439–1444. 10.1016/S01406736(03)146867
 Freyhult E, Peteris P, Lapinsh M, Wikberg JES, Moulton V, Gustafsson MG: Unbiased descriptor and parameter selection confirms the potential of proteochemometric modelling. BMC Bioinformatics 2005, 6(50):1–14.
 Golbraikh A, Tropsha A: Beware of q2! J Mol Graph Model 2002, 20(4):269–276. 10.1016/S10933263(01)001231
 Stone M: Cross-Validatory Choice and Assessment of Statistical Predictions. Journal of the Royal Statistical Society Series B (Methodological) 1974, 36:111–147.
 Cartmell J, Enoch S, Krstajic D, Leahy DE: Automated QSPR through Competitive Workflow. J Comput Aided Mol Des 2005, 19(11):821–833. 10.1007/s1082200590298
 Cartmell J, Krstajic D, Leahy DE: Competitive Workflow: novel software architecture for automating drug design. Curr Opin Drug Discov Devel 2007, 10(3):347–352.
 Obrezanova O, Gola JM, Champness EJ, Segall MD: Automatic QSAR modeling of ADME properties: blood-brain barrier penetration and aqueous solubility. J Comput Aided Mol Des 2008, 22(6–7):431–440. 10.1007/s1082200891938
 Efron B, Tibshirani R: An Introduction to the Bootstrap. New York: Chapman & Hall/CRC; 1993.
 Hastie T, Tibshirani R, Friedman J: The Elements of Statistical Learning. Springer Series in Statistics. New York: Springer-Verlag; 2001:533.
 Schwarz G: Estimating the Dimension of a Model. The Annals of Statistics 1978, 6:461–464. 10.1214/aos/1176344136
 The Selwood dataset [http://www.ndsu.edu/qsar_soc/resource/datasets/selwood.htm]
 Selwood DL, Livingstone DJ, Comley JCW, O'Dowd AB, Hudson AT, Jackson P, Jandu KS, Rose VS, Stables JN: Structure-activity relationships of antifilarial antimycin analogs: a multivariate pattern recognition study. Journal of Medicinal Chemistry 1990, 33(1):136–142. 10.1021/jm00163a023
 Nicolotti O, Carotti A: QSAR and QSPR studies of a highly structured physicochemical domain. J Chem Inf Model 2006, 46(1):264–276. 10.1021/ci050293l
 Todeschini R, Consonni V, Mauri A, Pavan M: Detecting "bad" regression models: multicriteria fitness functions in regression analysis. Analytica Chimica Acta 2004, 515(1):99–208.
 Burman P: A Comparative Study of Ordinary Cross-Validation, v-Fold Cross-Validation and the Repeated Learning-Testing Methods. Biometrika 1989, 76:503–514.
 Efron B: The Estimation of Prediction Error: Covariance Penalties and Cross-Validation. Journal of the American Statistical Association 2004, 99:619–632. 10.1198/016214504000000692
 Amaldi E, Kann V: On the Approximability of Minimizing Nonzero Variables Or Unsatisfied Relations in Linear Systems. Theoretical Computer Science 1997, 209:237–260.
 Kubinyi H: Variable Selection in QSAR Studies. II. A Highly Efficient Combination of Systematic Search and Evolution. QSAR & Combinatorial Science 1994, 13(4):393–401. 10.1002/qsar.19940130403
 Java – The Source for Java Developers [http://java.sun.com/]
 Spjuth O, Helmus T, Willighagen EL, Kuhn S, Eklund M, Wagener J, Murray-Rust P, Steinbeck C, Wikberg JE: Bioclipse: an open source workbench for chemo- and bioinformatics. BMC Bioinformatics 2007, 8:59. 10.1186/14712105859
 P [http://www.genettasoft.com/p/P.zip]
 JGAP – Java Genetic Algorithms Package [http://jgap.sourceforge.net/]
 JGAPm [http://www.genettasoft.com/p/JGAPm.zip]
 Shimodaira H: Approximately unbiased tests of regions using multistep-multiscale bootstrap resampling. Annals of Statistics 2004, 32:2616–2641. 10.1214/009053604000000823
 pvclust [http://www.is.titech.ac.jp/~shimo/prog/pvclust/]
 Skurichina M: Stabilizing Weak Classifiers – Regularization and Combining Techniques in Discriminant Analysis. PhD thesis. Vilnius State University; 2001.
 Cho SJ, Hermsmeier MA: Genetic Algorithm Guided Selection: Variable Selection and Subset Selection. J Chem Inf Comput Sci 2002, 42(4):927–936. 10.1021/ci010247v
 Tibshirani R: Regression Shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society Series B (Methodological) 1996, 58:267–288.
 Akaike H: A new look at the statistical model identification. IEEE Transactions on Automatic Control 1974, 19(6):716–723. 10.1109/TAC.1974.1100705
 Shao J: An asymptotic theory for linear model selection. Statistica Sinica 1997, 7:221–264.
 Wolpert D: Stacked Generalization. Neural Networks 1992, 5:241–259. 10.1016/S08936080(05)800231
 Kass RE, Wasserman L: A Reference Bayesian Test for Nested Hypotheses and its Relationship to the Schwarz Criterion. Journal of the American Statistical Association 1995, 90:928–934. 10.2307/2291327
 Wasserman L: Bayesian model selection and model averaging. In Mathematical Psychology Symposium. Bloomington, Indiana; 1999.
 Kuha J: AIC and BIC – Comparisons of Assumptions and Performance. Sociological Methods & Research 2004, 33(2):188–229. 10.1177/0049124103262065
 Hoerl AE, Kennard RW: Ridge Regression: Biased Estimation for Nonorthogonal Problems. Technometrics 1970, 12:55–67. 10.2307/1267351
 Goldberg DE: Genetic Algorithms in Search, Optimization and Machine Learning. Boston: Addison-Wesley Longman Publishing Co., Inc.; 1989:372.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.