 Methodology article
 Open Access
The C^{1}C^{2}: A framework for simultaneous model selection and assessment
BMC Bioinformatics volume 9, Article number: 360 (2008)
Abstract
Background
There has been recent concern regarding the inability of predictive modeling approaches to generalize to new data. Some of the problems can be attributed to improper methods for model selection and assessment. Here, we have addressed this issue by introducing a novel and general framework, the C^{1}C^{2}, for simultaneous model selection and assessment. The framework relies on a partitioning of the data in order to separate model choice from model assessment in terms of the data used. Since the number of conceivable models is in general vast, it was also of interest to investigate the employment of two automatic search methods, a genetic algorithm and a brute-force method, for model choice. As a demonstration, the C^{1}C^{2} was applied to simulated and real-world datasets. A penalized linear model was assumed to reasonably approximate the true relation between the dependent and independent variables, thus reducing the model choice problem to a matter of variable selection and choice of penalizing parameter. We also studied the impact of assuming prior knowledge about the number of relevant variables on model choice and generalization error estimates. The results obtained with the C^{1}C^{2} were compared to those obtained by employing repeated K-fold cross-validation for choosing and assessing a model.
Results
The C^{1}C^{2} framework performed well at finding the true model in terms of choosing the correct variable subset and producing reasonable choices for the penalizing parameter, even in situations when the independent variables were highly correlated and when the number of observations was less than the number of variables. The C^{1}C^{2} framework was also found to give accurate estimates of the generalization error. Prior information about the number of important independent variables improved the variable subset choice but reduced the accuracy of generalization error estimates. Using the genetic algorithm worsened the model choice but not the generalization error estimates, compared to using the brute-force method. The results obtained with repeated K-fold cross-validation were similar to those produced by the C^{1}C^{2} in terms of model choice; however, a lower accuracy of the generalization error estimates was observed.
Conclusion
The C^{1}C^{2} framework was demonstrated to work well for finding the true model within a penalized linear model class and to accurately assess its generalization error, even for datasets with many highly correlated independent variables, a low observation-to-variable ratio, and deviations from the model assumptions. A complete separation of the model choice and the model assessment in terms of the data used for each task improves the estimates of the generalization error.
Background
A common task in computational biology/bioinformatics and computational chemistry/chemometrics (hereafter abbreviated BBCC) is to model a dependent variable from a set of independent variables; this gives insight into the workings of the process being modeled and enables prediction of future observations. Typical examples include analyzing potential drug activity through proteochemometrics and quantitative structure-activity relationship (QSAR) modeling [1–3], discovering gene regulatory binding-site modules [4], and predicting clinical outcomes of cancer from gene expression data [5]. However, recent articles have indicated that predictive modeling approaches have not fully met expectations for solving real problems. This issue has for instance been discussed in the fields of QSAR [6] and microarray gene expression data modeling [7, 8]. While some of the problems may be attributed to incorrect use and interpretation of the models, others can be ascribed to improper model selection and assessment. Our aim here is to address the latter issue by introducing the C^{1}C^{2}, a general framework for model choice and assessment.
Let D = {X_n, y_n} be a dataset, where X_n = (x'_1, ..., x'_n)' is an n × p_X matrix whose i-th row, x_i, is the value of a p_X-vector of independent variables associated with y_i, the i-th element of the n × 1 vector y_n = (y_1, ..., y_n)'. A statistical model, ${\mathcal{M}}_{l}$, can be used to characterize the relation between X_n and y_n. In general, given the dataset D, ${\mathcal{M}}_{l}$ must be chosen from a set of models, $\mathcal{M}=\left\{{\mathcal{M}}_{1},\mathrm{...},{\mathcal{M}}_{m},\mathrm{...},{\mathcal{M}}_{M}\right\}$, according to some criterion (typically the minimization of a loss function). The most common way to select ${\mathcal{M}}_{l}$ from $\mathcal{M}$ in BBCC is K-fold cross-validation; that is, the dataset D is split into K mutually exclusive subsets, D_1, ..., D_k, ..., D_K, of approximately equal size, and ${\mathcal{M}}_{l}^{0}$(D) is picked to minimize the function

$$C^{0}(\mathcal{M}_{m}) = \frac{1}{K}\sum_{k=1}^{K}\frac{1}{n_{D_{k}}}\sum_{i \in D_{k}} L\big(y_{i}, \hat{y}_{i}(x_{i}, \mathcal{M}_{m}(D_{\setminus k}))\big), \qquad (1)$$
where L is a loss function; D_{∖k} denotes the dataset D with the k-th subset excluded during the model fitting; ${n}_{{D}_{k}}$ is the number of observations in subset D_k; and m = 1, ..., M, where M is the number of models in $\mathcal{M}$. This model selection method may seem straightforward and intuitive; however, it neglects the fact that all the data at hand is used to make the model choice. Thus, we no longer have an independent test set with which to assess the chosen model. The result is that C^{0}(${\mathcal{M}}_{l}$) typically underestimates the generalization error (see for instance [9]), defined as the expected prediction error over an independent test sample. This problem has been highlighted in relatively recent works [9, 10], but was noted as early as 1974 [11]. To obtain a more accurate generalization error estimate, the model selection process must be separated from the model assessment in terms of the data used. Ideally, if data were abundant and easily produced, we would set aside a large test dataset and use it to assess – but not to choose! – the model ${\mathcal{M}}_{l}$; subsequent model refinements could be assessed with new, unseen data. In practice this is often impossible, since BBCC data is typically scarce and expensive to produce; the luxury of large independent test sets can thus rarely be afforded. To tackle this problem, Freyhult et al. [9] suggested using a K-fold cross-validatory assessment of an H-fold cross-validatory model choice, ${\mathcal{M}}_{l}^{\dagger}$, as a way of simultaneously choosing ${\mathcal{M}}_{l}$ and assessing its performance, thereby separating the model selection from its assessment. The model is assessed by the function

$$C^{1}(\mathcal{M}_{l}^{\dagger}) = \frac{1}{K}\sum_{k=1}^{K}\frac{1}{n_{D_{k}}}\sum_{i \in D_{k}} L\big(y_{i}, \hat{y}_{i}(x_{i}, \mathcal{M}_{l}^{\dagger}(D_{\setminus k}))\big), \qquad (2)$$
where ${\mathcal{M}}_{l}^{\dagger}$(D_{∖k}) is the cross-validatory choice of ${\mathcal{M}}_{l}$ based on D_{∖k}; that is, the model ${\mathcal{M}}_{l}$ that minimizes the function

$$C^{2}(\mathcal{M}_{m}) = \frac{1}{H}\sum_{h=1}^{H}\frac{1}{n_{D_{h}}}\sum_{i \in D_{h}} L\big(y_{i}, \hat{y}_{i}(x_{i}, \mathcal{M}_{m}(D_{\setminus h,k}))\big), \qquad (3)$$
where D_{∖h,k} denotes the dataset D with the k-th and h-th subsets omitted. In the present work we build on and generalize this idea into the C^{1}C^{2} framework. In general, the number of models in $\mathcal{M}$ is huge, so it is infeasible to go through even a small subset of them manually. Hence, for a framework such as the C^{1}C^{2} to be useful in practice, automated methods for searching the model space $\mathcal{M}$ are necessary; in this sense the C^{1}C^{2} is similar to the automatic modeling approaches taken in, for instance, [12–14]. Here, the specific use of the C^{1}C^{2} is demonstrated by applying it together with two search methods to simulated and real-world datasets. The results are compared to those obtained by employing the function (1) for model selection and assessment. In the interest of clarity, we have restricted our attention to the study of model choice and assessment within a linear model class, ${\mathcal{M}}^{ridge}$ (defined below), for a quadratic loss function. We discuss the results of the demonstrations and the pros and cons of the generality of the C^{1}C^{2}, and set out some directions for further research.
Results
Algorithm
Let C^{1}, C^{2} ∈ C = {C_1, ..., C_J}, where C is a set of model assessment criteria and C^{1}, C^{2} represent two specific criteria (i.e. C^{1} = C_i, C^{2} = C_j, i, j = 1, ..., J). Further, let S ∈ S, where S is a set of search methods; let L ∈ L, where L is a set of loss functions; let G denote a sequence of data processing steps (e.g. mean-centering, transformations, whitening, etc.) and let G' contain the results of G applied to D_{∖k} (the roles of G and G' are exemplified in the discussion following the pseudocode below); let R be a positive integer and K an integer between 1 and n, where n is the number of observations. The C^{1}C^{2} procedure is outlined with the following pseudocode:
Initiate $\mathcal{M}$, G, L, C^{1}, C^{2}, R, and K.
for (r in 1,..., R) {
a. Partition data, D = {D_k}_{k = 1,..., K}.
for (k in 1,..., K) {
b. Apply G to D_{∖k}. Save results in G'.
c. Search $\mathcal{M}$ using the data D_{∖k} and C^{2} as objective function. Assume ${\mathcal{M}}_{l}$ is found to maximize (or minimize) C^{2}. Save ${\mathcal{M}}_{l}$.
d. Apply G to D_k using G'.
e. Assess ${\mathcal{M}}_{l}$ using C^{1} and D_{ k }. Save assessment result.
}
}
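The pseudocode above can be sketched in runnable form. The following is a minimal Python illustration, not the paper's Java implementation; the helper names (`fit_candidates`, `c2_score`, `c1_loss`) are hypothetical stand-ins for the chosen $\mathcal{M}$, C^{2}, and C^{1}, and the data processing steps G/G' (b, d) are omitted for brevity:

```python
import random

def c1c2(D, fit_candidates, c2_score, c1_loss, R=12, K=5, seed=0):
    """Sketch of the C1C2 double loop; helper names are illustrative."""
    rng = random.Random(seed)
    assessments = []
    for r in range(R):
        # (a) partition D into K roughly equal, mutually exclusive folds
        idx = list(range(len(D)))
        rng.shuffle(idx)
        folds = [set(idx[k::K]) for k in range(K)]
        for k in range(K):
            train = [D[i] for i in range(len(D)) if i not in folds[k]]  # D_{\k}
            test = [D[i] for i in folds[k]]                             # D_k
            # (c) choose the model using ONLY D_{\k}, with C2 as objective
            chosen = min(fit_candidates(train), key=lambda m: c2_score(m, train))
            # (e) assess the chosen model on the held-out fold D_k with C1
            assessments.append(c1_loss(chosen, test))
    # average over the R*K assessments (variance reduced by averaging over R)
    return sum(assessments) / len(assessments)

# toy usage: y = 2x exactly; candidate models predict y = c*x for c in {1, 2, 3}
D = [(x, 2 * x) for x in range(20)]
candidates = lambda train: [lambda x, c=c: c * x for c in (1, 2, 3)]
sse = lambda m, data: sum((y - m(x)) ** 2 for x, y in data)
mse = lambda m, data: sse(m, data) / len(data)
err = c1c2(D, candidates, sse, mse, R=2, K=4)  # the c = 2 model is always chosen
```

In the toy usage the model choice is trivial, so the assessed error is exactly zero; with real data the returned value is the C^{1} estimate averaged over the R·K fold assessments.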
The data partitioning in (a) separates data for the model choice from data for the model assessment. Note that the partitioning is dependent on the choice of C^{1} and does not necessarily need to be done in a cross-validation fashion. For instance, the choice C^{1} = ".632 estimator" [15, 16] partitions the data by independently sampling n rows from D with replacement and lets the observations not included among the sampled observations constitute the test set. The output from the C^{1}C^{2} is also dependent on the choice of C^{1}; for example, the choice C^{1} = C^{2} = Bayesian Information Criterion (BIC, see [17] and Methods) would not give a direct estimate of the generalization error, but rather an assessment of model overfitting. To clarify the roles of G and G', we give the following example: let G contain only a processing step that scales to unit variance. In (b), G is applied to D_{∖k} and the standard deviation of each column of D_{∖k} is saved in G'. In (d), G' is applied to D_k; that is, the columns in D_k are scaled using the standard deviations calculated in (b). This treatment of G ensures that D_k indeed constitutes an independent test set. The 'for loop' over r is introduced to enable calculation of confidence intervals for estimates and, by averaging the estimates over R repetitions, it permits reduction of the variance in parameter and error (or overfitting) estimates by a factor of 1/R.
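This treatment of G and G' can be illustrated concretely (a sketch, not the paper's code): the scaling parameters are estimated on D_{∖k} only and then reused, unchanged, on D_k, so that no information from the test fold leaks into the processing.

```python
import numpy as np

def fit_G(train_X):
    """Step (b): estimate G' (per-column mean and sd) on D_{\\k} only."""
    return train_X.mean(axis=0), train_X.std(axis=0, ddof=1)

def apply_G(X, G_prime):
    """Steps (b)/(d): center and scale any data with the saved parameters G'."""
    mean, sd = G_prime
    return (X - mean) / sd

rng = np.random.default_rng(0)
train = rng.normal(5.0, 2.0, size=(40, 3))  # stands in for D_{\k}
test = rng.normal(5.0, 2.0, size=(10, 3))   # stands in for D_k
G_prime = fit_G(train)
train_s = apply_G(train, G_prime)  # exactly zero-mean, unit-sd columns
test_s = apply_G(test, G_prime)    # only approximately so: no test information used
```

The training fold comes out exactly standardized, while the test fold is only approximately so, which is precisely what keeps D_k an independent test set.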
Figure 1 gives a graphical view of the C^{1}C^{2} framework. We emphasize that the generality of the framework allows C^{1}, C^{2}, and S to be chosen to fit the problem at hand. Adequate choices of C^{1}, C^{2}, and S make the model selection and assessment more accurate and faster, as we discuss below.
Datasets used in the testing
Both the simulated and the real data used for evaluating a new method or algorithm should reflect typical dataset properties found in real-world application domains. Examples of such properties in BBCC are multicollinearity, a large number of independent variables relative to the number of observations, and binary and categorical independent variables.
Simulated data
We simulated datasets as follows:
Let Δ = (δ_i)_{i = 1,..., p}, where

$${\delta}_{i}=\begin{cases}1 & \text{if } x_{i} \text{ is in the model}\\ 0 & \text{otherwise,}\end{cases}$$

represent a subset of X_n, and let β_p(Δ) = (β_i(δ_i))_{i = 1,..., p}, where

$${\beta}_{i}({\delta}_{i})=\begin{cases}{\beta}_{i}\ne 0 & \text{if } {\delta}_{i}=1\\ 0 & \text{otherwise,}\end{cases}$$

be a vector of regression coefficients. The data matrix X_n was sampled from a twenty-dimensional multivariate normal distribution; thus x_i ~ N_20(0_20, Σ_20), i = 1,..., n, where 0_20 is a twenty-dimensional vector of zeroes and Σ_20 is either the identity matrix I_20 or a covariance matrix, S_20, estimated from a real in-house QSAR dataset that originated from HIV protease inhibitors. The HIV QSAR dataset contains highly correlated independent variables, resulting in an S_20 with many large absolute values in the off-diagonal elements. Three δ_i in Δ were chosen to be nonzero and equal to one; the positions were chosen at random to be 11, 14, and 18 (but remained fixed throughout the experiment for evaluation purposes). The corresponding regression coefficients, β_11(δ_11), β_14(δ_14), and β_18(δ_18), were, for simplicity, all set equal to 1. The variables 11, 14, and 18 were slightly correlated, with an estimated covariance matrix

$${\Sigma}_{(11,14,18)}=\begin{pmatrix}4.07 & 15.28 & 0.01\\ 15.28 & 3423.20 & 0.33\\ 0.01 & 0.33 & 0.01\end{pmatrix}.$$
Datasets were generated assuming that y_n followed a linear model according to y_n = X_n β_p(Δ) + ε_n, where ε_i ~ N(0, 1.5). Four datasets were simulated in order to evaluate the C^{1}C^{2}'s performance in settings where n < p_X or n > p_X, and where the observations were or were not sampled from an orthogonal multivariate normal distribution, according to the following schema:

1. n = 15, Σ_20 = I_20
2. n = 200, Σ_20 = I_20
3. n = 15, Σ_20 = S_20
4. n = 200, Σ_20 = S_20
The simulated datasets are available in CSV format from Additional files 1, 2, 3, 4.
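The simulation scheme above can be sketched as follows, here for dataset 2 (n = 200, Σ_20 = I_20). The seed is arbitrary, and we read the noise parameter in N(0, 1.5) as a variance, which is an assumption, since the notation could equally denote a standard deviation:

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 200, 20
# x_i ~ N_20(0_20, Sigma_20), with Sigma_20 = I_20 for datasets 1 and 2
X = rng.multivariate_normal(np.zeros(p), np.eye(p), size=n)
# three nonzero delta_i at positions 11, 14, 18 (1-based), with beta_i = 1
beta = np.zeros(p)
beta[[10, 13, 17]] = 1.0
# assumption: N(0, 1.5) read as variance 1.5, i.e. sd = sqrt(1.5)
eps = rng.normal(0.0, np.sqrt(1.5), size=n)
y = X @ beta + eps  # y_n = X_n beta_p(Delta) + eps_n
```

For datasets 3 and 4, `np.eye(p)` would be replaced by the estimated covariance matrix S_20, which is not reproduced in the text.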
The Selwood dataset
This is a real dataset, made available from a website [18] and originally published in 1990 [19]. It is a widely studied dataset (see [20, 21] and references therein). It contains one dependent variable, 53 independent variables, and 31 observations. The 53 independent variables correspond to numerical descriptions of molecules (antifilarial antimycin analogues) designed to capture their physicochemical properties. The dependent variable is the in vitro antifilarial activity of the molecules. This dataset exhibits extremely strong correlations between the independent variables and contains real-valued, binary, and categorical independent variables. It is known from previous studies that this dataset contains nonlinearities, but that decent models can be found using linear methods.
Testing
To demonstrate the use of the C^{1}C^{2}, it was applied to the simulated and real-world datasets described above (hereafter referred to as "the datasets"). Below we describe the choices for R, K, $\mathcal{M}$, G, L, C^{1}, C^{2}, and S and the motivation for each selection.
Choice of R and K
The larger the choice of R, the higher the accuracy of the estimate of the generalization error; the choice of R is thus constrained by time and depends on the size of the dataset and the computational complexity of the choices of $\mathcal{M}$, G, L, C^{1}, C^{2}, and S. The choice of K is a trade-off between bias and variance: the larger the K, the more variance and the less bias in the estimates of the generalization error [22]. R was here set to 12 and K to 5.
Choice of $\mathcal{M}$
We make the assumption that a normal linear model forms a reasonable approximation of the data. This model is given by y_n = X_n β_p + ε_n, where the subscripts denote the number of rows in a matrix, β_p are regression coefficients, and ε_n ~ N(0, σ² I). Further, because n < p and the columns in X_n are highly correlated in some of the datasets, we decide to use the ridge estimator, ${\widehat{\beta}}_{p}^{ridge}$ (see Methods), of the regression coefficients β_p. Let ${\mathcal{M}}^{ridge}$ be the linear class of models given by y_n = X_n β_p + ε_n, where β_p is estimated by ${\widehat{\beta}}_{p}^{ridge}$. We thus choose $\mathcal{M}={\mathcal{M}}^{ridge}$. The problem of model choice within ${\mathcal{M}}^{ridge}$ reduces to the problem of variable selection, i.e. choosing which subset of the p columns in X_n to include in the model, and the problem of choosing the ridge penalizing parameter λ (see Methods). Hence, letting Δ = (δ_i)_{i = 1,..., p} (see simulated data above) represent a subset of X_n, we want to choose Δ and λ using the C^{1}C^{2} framework. A choice of Δ and λ for given values of r and k will be termed "an estimate" of Δ and λ, respectively, and be denoted $\widehat{\Delta}$ and $\widehat{\lambda}$. Averages of estimates over the K folds and the R repeats in the C^{1}C^{2} are denoted $\overline{\widehat{\Delta}}$ and $\overline{\widehat{\lambda}}$, respectively.
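For reference, the ridge estimator has the familiar closed form $\widehat{\beta}^{ridge} = (X'X + \lambda I)^{-1}X'y$; the following is a minimal sketch (the exact definition used in the paper is given in its Methods section):

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge estimate: beta_hat = (X'X + lam*I)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# with lam > 0 the estimate is well defined even when n < p and X'X is singular
rng = np.random.default_rng(1)
X = rng.normal(size=(15, 20))  # n = 15 < p = 20, as in simulated datasets 1 and 3
y = X[:, 10] + X[:, 13] + X[:, 17] + rng.normal(size=15)
beta_small = ridge_fit(X, y, lam=1.0)
beta_large = ridge_fit(X, y, lam=1e6)  # a heavier penalty shrinks the coefficients
```

The example illustrates why ridge was chosen here: with n < p the ordinary least-squares estimate does not exist, while the penalized solve remains well conditioned.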
Choice of G
As the columns in the Selwood dataset are measured in different units on different scales, we choose G to contain mean-centering and scaling-to-unit-variance processing steps.
Choice of L
We use the standard quadratic loss function given by

$$L\big(y_{i}, \hat{y}_{i}\big) = \big(y_{i} - \hat{y}_{i}\big)^{2}. \qquad (4)$$
Choice of C^{1} and C^{2}
Others [12, 13] have suggested choosing C^{1} = C^{2} = cross-validation. Here, we choose C^{1} = cross-validation and C^{2} = BIC. Hence, in this demonstration we assess a model choice ${\mathcal{M}}_{l}^{ridge}\in {\mathcal{M}}^{ridge}$ according to

$$C^{1}(\mathcal{M}_{l}^{ridge,BIC}) = \frac{1}{K}\sum_{k=1}^{K}\frac{1}{n_{D_{k}}}\sum_{i \in D_{k}} \big(y_{i} - \hat{y}_{i}(x_{i}, \mathcal{M}_{l}^{ridge,BIC}(D_{\setminus k}))\big)^{2},$$
where ${\mathcal{M}}_{l}^{ridge,BIC}$(D_{∖k}) is the ${\mathcal{M}}_{l}^{ridge}$ chosen according to BIC based on D_{∖k}; that is, the ${\mathcal{M}}_{l}^{ridge}$ that optimizes the function

$$BIC(\mathcal{M}_{m}^{ridge}) = -2\ln \hat{L}(\mathcal{M}_{m}^{ridge}) + df \cdot \ln n, \qquad (5)$$
m = 1,..., M, where M is the number of models in ${\mathcal{M}}^{ridge}$, and df in (5) is the number of free parameters in the model ${\mathcal{M}}_{m}^{ridge}$ (note that this is not equal to the number of parameters in the model ${\mathcal{M}}_{m}^{ridge}$; see for instance [16]). The choice of C^{1} is motivated by our wish to get a direct estimate, ${\widehat{\epsilon}}_{gen}$, of the generalization error, ε_gen, of our model choice. Provided that the assumptions behind BIC are fulfilled, the choice C^{2} = BIC has several advantages over C^{2} = cross-validation, including a reduction of bias in parameter estimates [22], a reduction of variance by the Rao-Blackwell type relation derived in [23], and a drastic reduction of the computational cost of the procedure.
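A sketch of how such a BIC score can be computed for a ridge model. Here df is taken as the effective degrees of freedom tr(X(X'X + λI)^{-1}X'), the standard choice for ridge regression, and the Gaussian form n·ln(RSS/n) + df·ln(n) is used; the exact expressions behind the paper's eq. (5) may differ:

```python
import numpy as np

def ridge_df(X, lam):
    """Effective degrees of freedom of ridge: sum_i d_i^2 / (d_i^2 + lam),
    where d_i are the singular values of X; equals rank(X) at lam = 0."""
    d = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(d**2 / (d**2 + lam)))

def ridge_bic(X, y, lam):
    """Gaussian-likelihood BIC for a ridge fit (illustrative form)."""
    n, p = X.shape
    beta = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
    rss = float(np.sum((y - X @ beta) ** 2))
    return n * np.log(rss / n) + ridge_df(X, lam) * np.log(n)

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
y = rng.normal(size=30)
bic = ridge_bic(X, y, lam=1.0)
```

Note how the penalizing parameter λ reduces df below the nominal parameter count, which is exactly the point made above about free versus nominal parameters.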
Choice of S
A genetic algorithm (GA) was chosen as a search method because it is easy to adapt to different situations and is in general effective for non-deterministic polynomial-time hard (NP-hard) combinatorial problems, such as the problem of estimating Δ [24]. A trial solution in the GA is here a varying-length chromosome that contains a real-valued number representing λ and a vector of integers representing the indices of the δ_i in Δ that are nonzero. The fitness function is our choice of C^{2}. For the simulated datasets, we also chose to run the C^{1}C^{2} with a brute-force search method: for each λ ∈ {0, 0.01, 0.02, ..., 10}, an exhaustive search in the Δ space (i.e. an all-subset regression) was performed. This enabled comparisons between the GA method and a search method guaranteed to find the optimal model (given a specific objective function and the resolution and limits of λ).
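The brute-force alternative can be sketched as an exhaustive scan over a λ grid and all variable subsets. The `bic_like_score` below is a hypothetical stand-in for the C^{2} = BIC objective, and a coarser grid plus a subset-size cap keep the toy example fast; with p = 20 and λ ∈ {0, 0.01, ..., 10}, the actual search in the paper is far larger:

```python
import numpy as np
from itertools import combinations

def brute_force(X, y, score, max_size, lambdas):
    """Exhaustive search over (variable subset, lambda) pairs; returns the
    best score together with the minimizing subset (Delta) and lambda."""
    p = X.shape[1]
    best = (np.inf, None, None)
    for lam in lambdas:
        for size in range(1, max_size + 1):
            for subset in combinations(range(p), size):
                s = score(X[:, subset], y, lam)
                if s < best[0]:
                    best = (s, subset, lam)
    return best

def bic_like_score(Xs, y, lam):
    """Illustrative ridge-based, BIC-like objective for the demo below."""
    n, q = Xs.shape
    beta = np.linalg.solve(Xs.T @ Xs + lam * np.eye(q), Xs.T @ y)
    rss = float(np.sum((y - Xs @ beta) ** 2))
    return n * np.log(rss / n + 1e-12) + q * np.log(n)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 6))
y = X[:, 1] + X[:, 3]  # the true subset is {1, 3} (noiseless, for clarity)
score, subset, lam = brute_force(X, y, bic_like_score, max_size=3,
                                 lambdas=np.arange(0.0, 10.01, 0.5))
```

Because the scan is exhaustive, the returned (subset, λ) pair is guaranteed optimal under the given objective and grid, which is what makes this search a useful yardstick for the GA.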
Some remarks regarding the demonstration
To enable comparisons with the estimates of Δ, λ, and ε_gen obtained with repeated K-fold cross-validation, the demonstration described above was repeated with the function (1) used as the criterion for both choosing and assessing the model. Note that, since the C^{1}C^{2} includes the 'for loop' over r, (1) was repeated R = 12 times, each time with a new, independent data partitioning. This was done to facilitate an impartial comparison between the two methods.
The demonstration of the C^{1}C^{2} framework can be compared with the work of, for instance, Nicolotti and Carotti [20], where a genetic algorithm was employed to estimate Δ. In contrast to that approach, the C^{1}C^{2} framework completely separates model choice and assessment, whereby more accurate generalization error estimates are in general achieved. Further, the use of specific ad hoc objective functions is avoided by choosing C^{2} to be a formally derived model selection criterion, and simultaneous estimation of model parameters other than Δ (for example, the ridge parameter λ in the demonstration) can be afforded. Typically, in works that have employed a search method for estimating Δ, a given number of nonzero δ_i in Δ is assumed (see for instance [20, 25]). It was therefore of interest to investigate the effect of this assumption on producing good estimates of Δ and ε_gen. This can be tested for the simulated datasets in the demonstration, where the number of nonzero δ_i is known. The C^{1}C^{2} was therefore applied to the simulated datasets both with and without an assumption about the number of nonzero δ_i. For simplicity (however somewhat unrealistically), we assumed the correct number of nonzero δ_i.
Results of the testing
Simulated datasets
The four simulated datasets, in combination with the use of either the C^{1}C^{2} or repeated K-fold cross-validation for model choice and assessment, the GA or the brute-force search method, and either with or without the assumption of prior knowledge of the number of nonzero δ_i, constitute a two-level, five-factor, full factorial experimental design. The C^{1}C^{2} and the repeated K-fold cross-validation were applied four times to each factor combination, thus providing four replicates of the whole demonstration for the simulated data. The design can be analyzed within the normal linear model

$$w_{iv} = \gamma_{0} + \gamma_{1} z_{1i} + \gamma_{2} z_{2i} + \gamma_{3} z_{3i} + \gamma_{4} z_{4i} + \gamma_{5} z_{5i} + \eta_{i}, \qquad i = 1,...,128, \qquad (6)$$
where z_{ji}, j = 1, 2, 3, 4, 5, are factors corresponding to C^{1}C^{2} or repeated K-fold cross-validation model choice and assessment, brute-force or GA search, Σ_20 = I_20 or Σ_20 = S_20 in the multivariate normal distribution from which the data was sampled, 200 or 15 observations, and assuming three nonzero δ_i or no such assumption, respectively. i runs from 1 to 128 in (6), as there are 32 factor combinations in four replicates. w_{iv}, v = 1, 2, 3, are response variables defined as follows: the Euclidean norm $w_{i1} = {\Vert \overline{\widehat{\Delta}} - \Delta \Vert}_{i}$ was used to measure how well Δ on average was estimated; $w_{i2} = {\overline{\widehat{\lambda}}}_{i}$ was used as a response variable in the λ case (as the correct choice of λ is not known); and $w_{i3} = {\overline{\left|{\widehat{\epsilon}}_{gen} - {\tilde{\epsilon}}_{gen}\right|}}_{i}$ was used to measure how well the generalization error ε_gen on average was estimated. ${\widehat{\epsilon}}_{gen}$ denotes the estimate of ε_gen for given values of r and k; ${\tilde{\epsilon}}_{gen}=\frac{1}{N}{\sum}_{j=1}^{N}{({y}_{j,ext} - {\widehat{y}}_{j,ext}({x}_{j,ext},{\mathcal{M}}_{l}))}^{2}$ denotes the generalization error estimate obtained by using the corresponding choice of model, ${\mathcal{M}}_{l}$, to predict the response values in a large (N = 500,000) external test set, generated in the same way as the dataset used for choosing the model and estimating ${\widehat{\epsilon}}_{gen}$; the bar denotes the average over the R·K individual estimates. The generalization error can be decomposed into three parts: an irreducible error (corresponding to the error added when simulating the data), the squared bias, and the variance. The latter two depend on the model choice, and consequently the generalization error depends on the model choice.
We here assume that the large-sample estimate of the generalization error, ${\tilde{\epsilon}}_{gen}$, closely represents the true generalization error, ε_gen, for a given model choice.
The results for choosing a model ${\mathcal{M}}_{l}^{ridge}\in {\mathcal{M}}^{ridge}$ for the simulated datasets are available in Additional file 5, where $\Vert \overline{\widehat{\Delta}} - \Delta \Vert$, $\overline{\widehat{\lambda}}$, and $\overline{\left|{\widehat{\epsilon}}_{gen} - {\tilde{\epsilon}}_{gen}\right|}$ are tabulated for each factor combination and replicate. The parameter estimates for fitting the model (6) using $\Vert \overline{\widehat{\Delta}} - \Delta \Vert$, $\overline{\widehat{\lambda}}$, and $\overline{\left|{\widehat{\epsilon}}_{gen} - {\tilde{\epsilon}}_{gen}\right|}$ as response variables are shown in Tables 1, 2, and 3, respectively. All fitted models were highly significant (F_{5,122} = 26.1, p-value < 2.2 × 10^{−16} with $\Vert \overline{\widehat{\Delta}} - \Delta \Vert$ as response; F_{5,122} = 47.7, p-value < 2.2 × 10^{−16} with $\overline{\widehat{\lambda}}$ as response; and F_{5,122} = 12.1, p-value = 1.6 × 10^{−9} with $\overline{\left|{\widehat{\epsilon}}_{gen} - {\tilde{\epsilon}}_{gen}\right|}$ as response); residual plots showed no large deviations from the assumptions of normality of the error distribution (an asymptotic normal distribution of the response variables is warranted by the central limit theorem), homoscedasticity, and independent errors (data not shown). A few outliers were, however, observed, probably resulting from "unfortunate" data partitions.
Selwood dataset
Applying the C^{1}C^{2} to the Selwood dataset yielded $\widehat{\Delta}$ that on average contained 11.4 ± 3.5 nonzero ${\widehat{\delta}}_{i}$. The 11 most frequently picked variables were the partial atomic charges ATCH1, ATCH3, and ATCH6; the electrophilic superdelocalizability for atom 3, ESDL3; the van der Waals volume VDWVOL; the surface area SURF_A; the principal moments of inertia MOFI_Y and MOFI_Z; the principal ellipsoid axis PEAX_X; the partition coefficient LOGP; and the sum of F substituent constants SUM_F (see [19] for more details about the variables). The estimate of λ, $\overline{\widehat{\lambda}}$, was 2.50 ± 0.09 and the estimate of the generalization error, ${\overline{\widehat{\epsilon}}}_{gen}$, was 0.42 ± 0.038, where ${\overline{\widehat{\epsilon}}}_{gen}$ is an average over the R·K ${\widehat{\epsilon}}_{gen}$ produced in the C^{1}C^{2}.
Applying repeated K-fold cross-validation for model choice and assessment to the Selwood dataset gave on average 14.1 ± 4.8 selected variables. The 14 most frequently picked variables included the same 11 variables picked by the C^{1}C^{2} (see above) plus DIPMOM (the dipole moment), ATCH7 (the partial charge of atom 7), and DIPV_Y (the dipole moment vector in the Y-direction). The estimate of λ, $\overline{\widehat{\lambda}}$, was 3.01 ± 0.22 and the estimate of the generalization error, $\overline{\widehat{\epsilon}}$, was 0.35 ± 0.041, where $\overline{\widehat{\epsilon}}$ is an average over the R·K $\widehat{\epsilon}$ produced in the repeated K-fold cross-validation.
Implementation
Computer programs implementing the C^{1}C^{2} were written in Java (Sun Microsystems [26]) as a part of the library P, which will serve as the data analysis plugin for Bioclipse [27]. P is available under the GSPL license from the website [28]; it is open source and free for academics. It has a modular architecture that enables plugging in new features, including modeling methods, model selection criteria, and search procedures. P relies on a modified version of the JGap library (available from the website [29]) for the genetic algorithm computations (the modifications are available under the LGPL license from the website [30]). The R package pvclust [31, 32] was used for the cluster analysis (see Discussion).
Discussion
Simulated datasets
The model (6) fitted to w = $\Vert \overline{\widehat{\Delta}} - \Delta \Vert$ (see Table 1) showed a relatively clear significant difference (at the 90% level) in average Δ estimates depending on whether the data came from a multivariate normal distribution with Σ_20 = I_20 or Σ_20 = S_20. Furthermore, we observed significant positive impacts on average Δ estimates with more observations and with knowledge about the number of nonzero δ_i. All these findings were expected: highly correlated variables should yield worse estimates of Δ, whereas more observations and trustworthy prior knowledge should yield better estimates. A significant improvement in average Δ estimates was observed when using the brute-force search compared to the GA. The GA on average selected slightly more variables than needed and than the brute-force method did. No clear significant difference could be seen between using the C^{1}C^{2} and repeated K-fold cross-validation.
It can be shown that the optimal choice of λ (in terms of minimized expected generalization error) tends to zero as the number of observations tends to infinity, and decreases with a decreasing number of variables (see for instance [33]) and with decreasing correlations between the independent variables. The model (6) fitted with w = $\overline{\widehat{\lambda}}$ as response (see Table 2) showed that the average estimated λ was significantly smaller for the data that came from a multivariate normal distribution with Σ_20 = I_20 compared to Σ_20 = S_20, when more observations were used, and when prior knowledge about the number of nonzero δ_i in Δ was assumed. Although the true value of λ is not known, these results are consistent with theory and provide evidence that both the C^{1}C^{2} and repeated K-fold cross-validation gave reasonable estimates of λ in the demonstration. However, the average λ estimates are not equal to zero for all orthogonal datasets, presumably due to the stochastic nature of the GA and to errors in y_n. No significant differences were observed between using the GA or the brute-force search method, or between the C^{1}C^{2} and the repeated K-fold cross-validation.
Fitting model (6) with w = $\overline{\left|{\widehat{\epsilon}}_{gen} - {\tilde{\epsilon}}_{gen}\right|}$ as response (see Table 3) showed that the average error estimates were significantly worsened by the assumption of a given number of nonzero δ_i, and that no significant difference was observed between using the GA or the brute-force method, or between correlated and uncorrelated independent variables. These findings might seem confusing, given that the assumption of a given number of nonzero δ_i, the use of the brute-force search method, and uncorrelated independent variables all improved model selection. They can be explained by the fact that, in general, without an assumption of a given number of nonzero δ_i, when using the GA for searching the model space, and when the independent variables were correlated, more nonzero δ_i are on average selected (see Table 1); thus the chances of also selecting the correct ones improve. This implies that it is worse to estimate at least one δ_i = 1 with ${\widehat{\delta}}_{i}$ = 0 than to estimate all δ_i = 1 with ${\widehat{\delta}}_{i}$ = 1 and at least one δ_i = 0 with ${\widehat{\delta}}_{i}$ = 1. This makes sense, because the former models are incorrect, whereas the latter contain the true model but are inefficient due to their unnecessarily large size. The average error estimates were significantly improved with a large number of observations and when the C^{1}C^{2}, rather than repeated K-fold cross-validation, was employed to produce the estimates (see Fig. 2 and Table 3). The latter result seems to contradict the finding that no clear difference was observed between the average Δ estimates produced with the C^{1}C^{2} and those obtained with repeated K-fold cross-validation (see above).
It can, however, be explained by studying the R·K individual Δ estimates, for which a clear (99% level) positive effect could be observed when using the C^{1}C^{2} compared to the repeated K-fold cross-validation. The individual Δ estimates were thus worse when repeated K-fold cross-validation was used, resulting in worse generalization error estimates, even though the average Δ estimates from the respective methods were almost the same. This observation is supported by the wider confidence intervals of the average Δ estimates produced with repeated K-fold cross-validation (see Additional file 5). The finding that the C^{1}C^{2} produces more accurate generalization error estimates than repeated K-fold cross-validation is consistent with the results presented in, for instance, [9] and provides evidence that a complete separation of the data used for model choice from the data used for model assessment is necessary to obtain better estimates of the generalization error.
Selwood dataset
The result of estimating Δ was, as expected, less clear when applying the C^{1}C^{2} and repeated K-fold cross-validation to real-world data; the Selwood dataset is particularly difficult to model due to the extremely high correlations between variables (many variable pairs have correlation coefficients > 0.95), the low observation-to-variable ratio, and deviations from the linearity and homoscedasticity assumptions (see [20]). On average, 11.4 of the 53 variables were selected by the C^{1}C^{2} and 14.1 by K-fold cross-validation. Interestingly, the 11 variables most frequently picked by the C^{1}C^{2} form a proper subset of the 14 variables most frequently selected by K-fold cross-validation. Hierarchically clustering the 14 variables most frequently chosen by K-fold cross-validation (which thus include the 11 variables selected most often by the C^{1}C^{2}), using the absolute correlation as a distance measure, revealed three distinct clusters and one subcluster (see Fig. 3). Good models (in terms of estimated generalization error) for the Selwood dataset can be achieved by selecting LOGP, one variable from the blue subcluster (PEAX_X, MOFI_Y, MOFI_Z, VDWVOL, and SURF_A), and one variable from the green cluster (ESDL3, ATCH1, ATCH3, ATCH6, ATCH7, and SUM_F). LOGP appears to be sufficiently different from the rest of the variables in the red cluster to improve model performance. The variables within the blue and green clusters, respectively, are highly correlated, and it is sufficient to have one variable from each cluster in a model.
Models containing LOGP and one variable each from the green and blue clusters have high predictive power and comply with the QUICK rules for credible predictive models proposed previously [21]. Furthermore, these models have been found credible in the works of others [20, 25, 34]. The C^{1}C^{2} chose 11 variables belonging to the green, red, and blue clusters, whereas K-fold cross-validation chose an additional three variables: ATCH7 in the green cluster, and DIPMOM and DIPV_Y belonging to the third distinct (yellow) cluster. Variables from the yellow cluster do not improve the internal predictive ability when testing models containing LOGP and one variable each from the green, blue, and yellow clusters on the whole Selwood dataset (data not shown); this result is supported by the work of Nicolotti and Carotti (see Table 1 in [20]). More variables in the Selwood dataset were thus on average selected with repeated K-fold cross-validation than with the C^{1}C^{2} (the difference was significant on the 80% level, tested by a one-sided Welch's t-test), including two that do not seem to improve the predictive ability of the models. The generalization error estimates obtained with K-fold cross-validation were lower than those obtained with the C^{1}C^{2} (significant on the 70% level). Although these differences are not highly significant, it is tempting to conclude that the models selected by repeated K-fold cross-validation in this particular case are more prone to overfitting and that this is reflected in their lower generalization error estimates.
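The clustering step used above can be reproduced in outline. The following Python sketch (our illustration with synthetic stand-in data, not the code or dataset used in the study) performs average-linkage hierarchical clustering of variables with 1 − |correlation| as the distance measure:

```python
import numpy as np

def cluster_by_abs_correlation(X, names, n_clusters):
    """Average-linkage hierarchical clustering of the columns of X,
    using 1 - |correlation| as the distance between variables."""
    corr = np.corrcoef(X, rowvar=False)
    dist = 1.0 - np.abs(corr)
    clusters = [[i] for i in range(len(names))]
    while len(clusters) > n_clusters:
        # merge the pair of clusters with the smallest average distance
        best, pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = dist[np.ix_(clusters[a], clusters[b])].mean()
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        clusters[a] += clusters[b]
        del clusters[b]
    return {names[i]: k for k, c in enumerate(clusters) for i in c}
```

Because the distance uses the absolute correlation, strongly anti-correlated variables are grouped together, matching the idea of treating near-collinear variables as interchangeable within a model.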
The C^{1}C^{2}
We have here introduced the C^{1}C^{2} framework for simultaneous model choice and assessment. The main idea is a complete separation of the choice of a model from its assessment in terms of the data used for each task. The C^{1}C^{2} was applied to the problem of choosing a model, ${\mathcal{M}}_{l}^{ridge}\in {\mathcal{M}}^{ridge}$. Previously, others have described methods that, within the linear model, tackle the problems of regression coefficient shrinkage and variable selection simultaneously, for example the lasso [35]. However, the C^{1}C^{2} framework is general and is easily applied to other settings. For instance, different choices of C^{2} are favorable in different situations: Akaike's information criterion (AIC) [36] is known to be asymptotically optimal within the linear model if the true model is very complex, whereas BIC is favorable within linear models of finite dimension [37], and cross-validation is preferable in situations where the degrees of freedom of a model are difficult to define, and so forth. The search method can also be tailored to the problem at hand; for instance, brute-force methods are advantageous for small problems, whereas GAs are faster and thus applicable to larger problems. Moreover, if required, $\mathcal{M}$ can be chosen to contain nonlinear models, L can be chosen to be exponential in order to increase the penalty on outliers, and instead of using the search method to produce an estimate of Δ (as we did in the demonstration) we can let G contain a dedicated variable selection method. The cost of this generality is uncharacterized convergence rates (in finite time) of the parameter estimates, coupled with the need to employ a general search method (like a GA) rather than solving standard convex problems. Running the C^{1}C^{2} R times enables averaging of estimates and calculation of confidence intervals, but creates the problem of choosing which of the R models to use for interpretation and future predictions.
A potential remedy to these problems is, instead of choosing a single model, to employ all chosen models in a stacking-like scheme (see [38] for details on stacking). Testing this idea, and further testing of the C^{1}C^{2} for other choices of $\mathcal{M}$, G, L, C^{1}, C^{2}, and S, will be pursued in future research.
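The central idea, a complete separation of the data used for model choice from the data used for model assessment, can be sketched as a nested resampling scheme. The sketch below is a simplified Python illustration (the published implementation is in Java) with hypothetical function names; the model-choice criterion C^{1} is abstracted into a user-supplied score function, and the search over $\mathcal{M}$ is a plain minimization over an explicit candidate list:

```python
import numpy as np

def c1c2_sketch(X, y, candidate_models, fit, score, R=10, test_frac=0.25, rng=None):
    """Hypothetical sketch of a C1C2-style run: for each of R repetitions,
    hold out an assessment set that is never seen during model choice,
    choose a model on the remaining data via an inner criterion (score),
    then estimate the generalization error on the held-out set."""
    rng = np.random.default_rng(rng)
    n = len(y)
    errors = []
    for _ in range(R):
        perm = rng.permutation(n)
        n_test = int(test_frac * n)
        test, train = perm[:n_test], perm[n_test:]
        # model choice uses ONLY the training part of the data
        best = min(candidate_models, key=lambda m: score(m, X[train], y[train]))
        # assessment uses only data unused during model choice
        model = fit(best, X[train], y[train])
        pred = model(X[test])
        errors.append(float(np.mean((y[test] - pred) ** 2)))
    return np.mean(errors), errors
```

In practice, the inner score would itself be a criterion such as BIC or an inner cross-validation; the point of the sketch is only that the held-out assessment data never influence the model choice.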
Conclusion
We have presented some evidence suggesting that the C^{1}C^{2} works well in terms of choosing the correct model and producing good estimates of the generalization error. It was demonstrated to perform well within a penalized linear model, even for "difficult" datasets with highly correlated independent variables, a low observation-to-variable ratio, and deviations from model assumptions (see Table 4 for a summary of the findings in the demonstrations). However, more research is needed to fully assess the method's performance for more general, for instance nonlinear, models and to provide theoretical insight into frameworks such as the C^{1}C^{2}. The C^{1}C^{2} is general, and reasonable choices of $\mathcal{M}$, G, L, C^{1}, C^{2}, and S help in achieving estimates that are as unbiased as possible, with as low a variance and at as low a computational cost as possible. A framework that completely separates model choice from model assessment in terms of the data used, like the C^{1}C^{2}, should always be employed for model selection and assessment in order to avoid positive bias in the generalization error estimates and, ultimately, to avoid false conclusions and using dubious models to direct further research.
Methods
Bayesian Information Criterion (BIC)
Suppose we have a set of candidate models, $\mathcal{M}$, and corresponding model parameters, θ_{ m }, and we wish to choose the best model among $\mathcal{M}$. Assuming we have a prior distribution, P(θ_{ m }|${\mathcal{M}}_{m}$), for the parameters of each model, ${\mathcal{M}}_{m}\in \mathcal{M}$, the posterior probability of a given model is:

$P({\mathcal{M}}_{m}|D)\propto P({\mathcal{M}}_{m})P(D|{\mathcal{M}}_{m})=P({\mathcal{M}}_{m}){\displaystyle \int P(D|{\theta}_{m},{\mathcal{M}}_{m})P({\theta}_{m}|{\mathcal{M}}_{m})d{\theta}_{m}}$
To choose a model in a Bayesian setting, the posterior odds, given by:

$\frac{P({\mathcal{M}}_{l}|D)}{P({\mathcal{M}}_{m}|D)}=\frac{P({\mathcal{M}}_{l})}{P({\mathcal{M}}_{m})}\cdot \frac{P(D|{\mathcal{M}}_{l})}{P(D|{\mathcal{M}}_{m})}$ (7)

are formed for all models ${\mathcal{M}}_{l}\in \mathcal{M}$, and ${\mathcal{M}}_{l}$ is picked to maximize equation (7). If all models in $\mathcal{M}$ are given equal prior probabilities, the problem of choosing the model ${\mathcal{M}}_{l}$ is reduced to calculating the integrals P(D|${\mathcal{M}}_{m}$). It can be shown [39, 40] that the Bayesian Information Criterion (BIC) approximates the logarithm of this integral with an O(1) error term, that is:

$\mathrm{log}P(D|{\mathcal{M}}_{m})=-\frac{1}{2}BI{C}_{m}+O(1)$

where

$BI{C}_{m}=-2\mathrm{log}P(D|{\widehat{\theta}}_{m},{\mathcal{M}}_{m})+d{f}_{m}\mathrm{log}n$
In the latter expression, ${\widehat{\theta}}_{m}$ is the maximum likelihood estimate, df_{ m } is the number of free parameters in model ${\mathcal{M}}_{m}$ (note that this in general is not equal to the number of parameters in the model), and n is the number of observations [17]. Thus, BIC is an approximation to the Bayes solution, but it is also valid outside the Bayesian context, because the leading terms in the approximation do not depend on the prior densities of the model parameters, θ_{ m }. BIC is, as opposed to non-parametric approaches such as cross-validation, model-based and therefore relies on the assumptions made in the modeling. BIC is derived under the assumption that the data come from a distribution in the exponential family (see [41] for more about the assumptions behind BIC and a comparison with Akaike's Information Criterion).
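For a Gaussian linear model, the criterion above reduces, up to an additive constant that is the same for all candidate models, to n log(RSS/n) + df log n. A minimal Python sketch (our illustration; the function name is ours):

```python
import numpy as np

def bic_linear(y, y_hat, df):
    """BIC for a Gaussian linear model, up to an additive constant shared
    by all candidate models: n*log(RSS/n) + df*log(n)."""
    n = len(y)
    rss = float(np.sum((y - y_hat) ** 2))
    return n * np.log(rss / n) + df * np.log(n)
```

Comparing candidates then amounts to computing this quantity for each fitted model and keeping the smallest value.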
Ridge regression
The ordinary least squares (OLS) estimator of the regression coefficients β in the standard linear model is efficient (i.e. has the minimum possible variance) within the class of linear and unbiased estimators. However, when the independent variables are correlated, the variance of the OLS estimator is generally high. In these situations, ridge regression [42] can yield improved parameter estimates by minimizing a penalized residual sum of squares, given by: RSS(λ) = (y − Xβ)^{T}(y − Xβ) + λβ^{T}β. Minimizing this expression gives the ridge solution: ${\widehat{\beta}}^{ridge}$ = (X^{T}X + λI)^{−1}X^{T}y, where I is the p × p identity matrix. The solution thus adds a positive constant to the diagonal of X^{T}X before inversion; this makes the problem non-singular, even if X^{T}X is not of full rank. While this introduces bias into the coefficient estimates, the variance is often greatly reduced.
Note that ${\widehat{\beta}}^{ridge}$ is a linear function of y; thus it is straightforward to define the effective degrees of freedom of the ridge regression fit, df(λ), as: df(λ) = tr[X(X^{T}X + λI)^{−1}X^{T}] [16]. The degrees of freedom of the fit are needed for carrying out model selection according to, for instance, BIC. Linearity in y also enables easy implementation (no quadratic programming is required, as is necessary, for instance, with the lasso).
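The closed-form ridge solution and the effective degrees of freedom translate directly into code. The following Python sketch (our illustration, not the distributed implementation) computes both:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge solution: (X^T X + lam*I)^{-1} X^T y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def ridge_df(X, lam):
    """Effective degrees of freedom of the ridge fit:
    tr[X (X^T X + lam*I)^{-1} X^T]."""
    p = X.shape[1]
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    return float(np.trace(H))
```

For full-rank X and λ = 0, df(λ) equals the number of variables p, and it shrinks monotonically towards zero as λ grows.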
Genetic algorithm (GA)
A GA (see [43] for more details) is a stochastic search technique for finding exact or approximate solutions to optimization and search problems. A typical genetic algorithm is defined by a genetic representation of a given solution (normally termed a chromosome in the GA context), that is, a vector ${w}_{t}^{i}$ specifying the numerical representation of the i th chromosome at generation t, and an objective function (or fitness function), f(${w}_{t}^{i}$) → ℝ, evaluating the fitness of a chromosome. The GA is initiated by setting up a random population that contains a number of trial chromosomes. New solutions are generated by mutation or recombination of existing solutions and are selected for the next generation with a probability given by: $p({w}_{t}^{i})=f({w}_{t}^{i})/{\displaystyle {\sum}_{j=1}^{a}f({w}_{t}^{j})}$, where a is the population size. The process is continued through a number of generations until an optimal or acceptable solution has been found. Genetic algorithms of this type can be shown to converge with probability one to the global optimal solution as t → ∞.
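A GA of the type described, with fitness-proportional selection, one-point crossover, and bit-flip mutation, can be sketched in a few lines of Python (a toy illustration, not the modified JGAP package used in the demonstration; the fitness function is assumed to return positive values):

```python
import random

def genetic_search(fitness, n_bits, pop_size=30, generations=50,
                   p_mut=0.05, seed=0):
    """Minimal GA over bit-string chromosomes. fitness must be positive."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(c) for c in pop]
        total = sum(scores)

        def pick():
            # fitness-proportional (roulette-wheel) selection:
            # p(w_i) = f(w_i) / sum_j f(w_j)
            r = rng.uniform(0, total)
            acc = 0.0
            for c, s in zip(pop, scores):
                acc += s
                if acc >= r:
                    return c
            return pop[-1]

        nxt = []
        while len(nxt) < pop_size:
            a, b = pick(), pick()
            cut = rng.randrange(1, n_bits)          # one-point crossover
            child = [1 - g if rng.random() < p_mut else g
                     for g in a[:cut] + b[cut:]]    # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

In the variable selection setting, each bit of a chromosome would indicate whether the corresponding variable is included in the model.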
Brute force search
A brute force search systematically tests an exhaustive list of all possible candidates for the solution to a given search or optimization problem and checks whether each candidate satisfies the problem's statement.
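Applied to variable selection, a brute-force search scores all 2^{p} − 1 non-empty subsets, which is feasible only for small p. A Python sketch (our illustration; the score function, e.g. the BIC of the fitted submodel, is user-supplied):

```python
from itertools import combinations

def brute_force_subsets(variables, score, max_size=None):
    """Exhaustively score every non-empty variable subset (optionally up to
    max_size variables) and return the lowest-scoring one."""
    best, best_score = None, float("inf")
    max_size = max_size or len(variables)
    for k in range(1, max_size + 1):
        for subset in combinations(variables, k):
            s = score(subset)
            if s < best_score:
                best, best_score = subset, s
    return best, best_score
```

The number of candidates grows as 2^{p}, which is why a GA or another heuristic search becomes necessary for larger problems.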
Availability and requirements

Project name: P

Project homepage: http://www.genettasoft.com/p/P.zip

Operating systems: Platform independent (interpreted language)

Programming language: Java

Requirements: Java 5 or higher. A modified version of the JGAP package http://jgap.sourceforge.net/ for genetic algorithms. The modifications are distributed under the LGPL license and are available at http://www.genettasoft.com/p/JGAPm.zip. log4j, available from http://logging.apache.org/log4j/.

Licence: GSPL (see http://www.genettasoft.com/gspl/gspl1_1.pdf)

Restrictions to use for commercial purposes: licence needed
References
 1.
Kontijevskis A, Prusis P, Petrovska R, Yahorava S, Mutulis F, Mutule I, Komorowski J, Wikberg JES: A look inside HIV resistance through retroviral protease interaction maps. PLoS Computational Biology 2007, 3(3):
 2.
Wikberg JES, Lapinsh M, Prusis P: Proteochemometrics: A tool for modelling the molecular interaction space. In Chemogenomics in Drug Discovery  A Medicinal Chemistry Perspective. Edited by: Kubinyi H, Müller G. Weinheim , WileyVCH; 2004:289–309.
 3.
Hansch C: A Quantitative Approach to Biochemical Structure-Activity Relationships. Accounts of Chemical Research 1969, 2: 232–239. 10.1021/ar50020a002
 4.
Hvidsten TR, Wilczynski B, Kryshtafovych A, Tiuryn J, Komorowski J, Fidelis K: Discovering regulatory bindingsite modules using rulebased learning. Genome Res 2005/06/03 edition. 2005, 15(6):856–866. 10.1101/gr.3760605
 5.
van't Veer LJ, Dai H, van de Vijver MJ, He YD, Hart AA, Mao M, Peterse HL, van der Kooy K, Marton MJ, Witteveen AT, Schreiber GJ, Kerkhoven RM, Roberts C, Linsley PS, Bernards R, Friend SH: Gene expression profiling predicts clinical outcome of breast cancer. Nature 2002, 415(6871):530–536. 10.1038/415530a
 6.
Johnson SR: The Trouble with QSAR (or How I Learned To Stop Worrying and Embrace Fallacy). J Chem Inf Model 2008, 25–26. 10.1021/ci700332k
 7.
Michiels S, Koscielny S, Hill C: Prediction of cancer outcome with microarrays: a multiple random validation strategy. The Lancet 2005, 365(9458):488–492. 10.1016/S0140-6736(05)17866-0
 8.
Ntzani EE, Ioannidis JPA: Predictive ability of DNA microarrays for cancer outcomes and correlates: an empirical assessment. The Lancet 2003, 362(9394):1439–1444. 10.1016/S0140-6736(03)14686-7
 9.
Freyhult E, Prusis P, Lapinsh M, Wikberg JES, Moulton V, Gustafsson MG: Unbiased descriptor and parameter selection confirms the potential of proteochemometric modelling. BMC Bioinformatics 2005, 6(50):1–14.
 10.
Golbraikh A, Tropsha A: Beware of q2! J Mol Graph Model 2002, 20(4):269–276. 10.1016/S1093-3263(01)00123-1
 11.
Stone M: Cross-Validatory Choice and Assessment of Statistical Predictions. In Journal of the Royal Statistical Society Series B (Methodological). Volume 36. Royal Statistical Society; 1974:111–147.
 12.
Cartmell J, Enoch S, Krstajic D, Leahy DE: Automated QSPR through Competitive Workflow. J Comput Aided Mol Des 2005, 19(11):821–833. 10.1007/s10822-005-9029-8
 13.
Cartmell J, Krstajic D, Leahy DE: Competitive Workflow: novel software architecture for automating drug design. Curr Opin Drug Discov Devel 2007, 10(3):347–352.
 14.
Obrezanova O, Gola JM, Champness EJ, Segall MD: Automatic QSAR modeling of ADME properties: blood-brain barrier penetration and aqueous solubility. J Comput Aided Mol Des 2008, 22(6–7):431–440. 10.1007/s10822-008-9193-8
 15.
Efron B, Tibshirani R: An Introduction to the Bootstrap. New York , Chapman & Hall/CRC. ; 1993.
 16.
Hastie T, Tibshirani R, Friedman J: The Elements of Statistical Learning. In Springer series in statistics. New York , SpringerVerlag; 2001:533.
 17.
Schwarz G: Estimating the Dimension of a Model. In The Annals of Statistics. Volume 6. Institute of Mathematical Statistics; 1978:461–464. 10.1214/aos/1176344136
 18.
The Selwood dataset[http://www.ndsu.edu/qsar_soc/resource/datasets/selwood.htm]
 19.
Selwood DL, Livingstone DJ, Comley JCW, O'Dowd AB, Hudson AT, Jackson P, Jandu KS, Rose VS, Stables JN: Structureactivity relationships of antifilarial antimycin analogs: a multivariate pattern recognition study. Journal of Medicinal Chemistry 1990, 33(1):136–142. 10.1021/jm00163a023
 20.
Nicolotti O, Carotti A: QSAR and QSPR studies of a highly structured physicochemical domain. J Chem Inf Model 2006/01/24 edition. 2006, 46(1):264–276. 10.1021/ci050293l
 21.
Todeschini R, Consonni V, Mauri A, Pavan M: Detecting “bad” regression models: multicriteria fitness functions in regression analysis. Analytica Chimica Acta 2004, 515(1):199–208.
 22.
Burman P: A Comparative Study of Ordinary Cross-Validation, v-Fold Cross-Validation and the Repeated Learning-Testing Methods. In Biometrika. Volume 76. Biometrika Trust; 1989:503–514.
 23.
Efron B: The Estimation of Prediction Error: Covariance Penalties and Cross-Validation. In Journal of the American Statistical Association. Volume 99. American Statistical Association; 2004:619–632. 10.1198/016214504000000692
 24.
Amaldi E, Kann V: On the Approximability of Minimizing Nonzero Variables Or Unsatisfied Relations in Linear Systems. Theoretical Computer Science 1997, 209: 237–260.
 25.
Kubinyi H: Variable Selection in QSAR Studies. II. A Highly Efficient Combination of Systematic Search and Evolution. QSAR & Combinatorial Science 1994, 13(4):393–401. 10.1002/qsar.19940130403
 26.
Java  The Source for Java Developers[http://java.sun.com/]
 27.
Spjuth O, Helmus T, Willighagen EL, Kuhn S, Eklund M, Wagener J, Murray-Rust P, Steinbeck C, Wikberg JE: Bioclipse: an open source workbench for chemo- and bioinformatics. BMC Bioinformatics 2007, 8: 59. 10.1186/1471-2105-8-59
 28.
 29.
JGAP  Java Genetic Algorithms Package[http://jgap.sourceforge.net/]
 30.
 31.
Shimodaira H: Approximately unbiased tests of regions using multistepmultiscale bootstrap resampling. Annals of Statistics 2004, 32: 2616–2641. 10.1214/009053604000000823
 32.
 33.
Skurichina M: Stabilizing Weak Classifiers – Regularization and Combining Techniques in Discriminant Analysis. PhD thesis. Vilnius State University; 2001.
 34.
Cho SJ, Hermsmeier MA: Genetic Algorithm Guided Selection: Variable Selection and Subset Selection. J Chem Inf Comput Sci 2002, 42(4):927–936. 10.1021/ci010247v
 35.
Tibshirani R: Regression Shrinkage and Selection via the Lasso. In Journal of the Royal Statistical Society Series B (Methodological). Volume 58. Royal Statistical Society; 1996:267–288.
 36.
Akaike H: A new look at the statistical model identification. IEEE Transactions on Automatic Control 1974, 19(6):716–723. 10.1109/TAC.1974.1100705
 37.
Shao J: An asymptotic theory for linear model selection. Statistica Sinica 1997, 7: 221–264.
 38.
Wolpert D: Stacked Generalization. Neural Networks 1992, 5: 241–259. 10.1016/S0893-6080(05)80023-1
 39.
Kass RE, Wasserman L: A Reference Bayesian Test for Nested Hypotheses and its Relationship to the Schwarz Criterion. In Journal of the American Statistical Association. Volume 90. American Statistical Association; 1995:928–934. 10.2307/2291327
 40.
Wasserman L: Bayesian model selection and model averaging. In Mathematical Psychology Symposium. Bloomington, Indiana ; 1999.
 41.
Kuha J: AIC and BIC  Comparisons of Assumptions and Performance. Sociological Methods & Research 2004, 33(2):188–229. 10.1177/0049124103262065
 42.
Hoerl AE, Kennard RW: Ridge Regression: Biased Estimation for Nonorthogonal Problems. In Technometrics. Volume 12. American Statistical Association; 1970:55–67. 10.2307/1267351
 43.
Goldberg DE: Genetic Algorithms in Search, Optimization and Machine Learning. Boston , AddisonWesley Longman Publishing Co., Inc. ; 1989:372.
Acknowledgements
Supported by the Swedish VR (04X05975). We extend our gratitude to three anonymous reviewers who helped to improve this paper.
Author information
Additional information
Authors' contributions
ME devised and implemented the proposed C^{1}C^{2} framework. OS was involved in program design and aided with the implementation. JESW supervised the project. All authors read and approved the final manuscript.
Electronic supplementary material
Authors’ original submitted files for images
Below are the links to the authors’ original submitted files for images.
Rights and permissions
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Eklund, M., Spjuth, O. & Wikberg, J.E. The C^{1}C^{2}: A framework for simultaneous model selection and assessment. BMC Bioinformatics 9, 360 (2008) doi:10.1186/1471-2105-9-360
Received:
Accepted:
Published:
Keywords
 Genetic Algorithm
 Bayesian Information Criterion
 Model Choice
 Simulated Dataset
 Ridge Regression