A maximum likelihood framework for protein design
 Claudia L Kleinman^{1},
 Nicolas Rodrigue^{1},
 Cécile Bonnard^{2},
 Hervé Philippe^{1} and
 Nicolas Lartillot^{2}
https://doi.org/10.1186/1471-2105-7-326
© Kleinman et al; licensee BioMed Central Ltd. 2006
Received: 01 February 2006
Accepted: 29 June 2006
Published: 29 June 2006
Abstract
Background
The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility.
Results
We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered.
Conclusion
Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces shaping protein sequences, and driving their evolution.
Background
Predicting the sequences compatible with a given structure defines what is traditionally called the inverse folding problem, or more often, protein design [1–3]. As suggested by the terminology, this question is usually considered in an engineering perspective: the aim is then to determine a sequence, or a set of sequences, that stably fold into a pre-specified conformation. In a thermodynamic perspective, this requirement translates into eliciting sequences that have lowest free energy under the target fold, compared to all possible alternative conformations. In principle, such a criterion would imply a search through the joint structure-sequence space, which is not feasible but for small on-lattice model proteins [4].
As an alternative to the engineering approach, a more evolutionary stance can be taken towards the inverse folding problem, in which case the aim would rather be to predict the sequences of natural proteins having the conformation of interest. Seen from this new point of view, the design problem raises new questions: natural proteins are the result of a complex evolutionary process, involving an intricate interplay between mutation and selection, and this probably entails many constraints directly related to the native conformation, but nevertheless not equivalent to the mere requirement of structural stability. For instance, the requirement of fast and cooperative folding has an impact on the dispersion of contact energies [5]. For this and many other potential reasons, among all sequences predicted by classical engineering-oriented protein design, probably only a subset will look like natural proteins.
The evolutionary approach to protein design is particularly relevant to phylogenetic studies, where one of the current motivations is to develop the so-called structurally constrained models of protein evolution, i.e. models explicitly dependent on the protein's conformation, either for simulation purposes [6–9], or in the context of phylogenetic inference [10, 11]. In this framework, each substitution undergone by a protein during evolution has to be tested for its compatibility with the structure, in the context of the sequence that the protein displays at all other sites when the substitution occurs. Such repeated evaluation of the structure-sequence compatibility along a phylogenetic tree requires relevant and computationally very efficient scoring functions.
It is interesting to compare the different methods proposed thus far for performing protein design in light of this engineering/evolutionary distinction. A first direction of research has consisted in using all-atom semi-empirical force fields to evaluate the conformational free energy (reviewed in [12]). These empirical methods have been applied to many theoretical and experimental cases, reaching a high level of accuracy. On the other hand, they are computationally heavy, mainly because of the side-chain positioning problem, and thus cannot be easily applied to structurally constrained phylogenetic models [10, 11]. Concerns may also be expressed about their oversensitivity to the native conformation, in particular in the core of the target structures and when the flexibility of the backbone is not accounted for [13, 14]. But more importantly, approaches based on physical force fields are, by definition, exclusively focussed on the conformational stability, and thereby completely overlook other potential factors shaping the sequences of biological proteins. As such, they are well suited for engineering synthetic proteins [15], or for testing to what extent natural sequences are shaped by selection for protein stability [16], but may not be sufficient for more general evolutionary purposes.
An alternative to the semi-empirical strategy consists in relying on knowledge-based, or statistical, potentials. These scoring functions mimic physical Boltzmann distributions, but merely encode statistical patterns present in the databases. Some of these potentials were obtained under the quasi-chemical approximation, whereby frequencies of patterns, such as contacts between each pair of amino acids, are transformed into energies using the Boltzmann law [17–20]. Alternatively, contact energies can be obtained by maximizing the potential's predictive accuracy in a threading test [21–24]. In the present context, an advantage of these knowledge-based potentials, compared to semi-empirical force fields, is that they should in principle capture all kinds of patterns that true biological sequences have, in relation to their conformation, and not only those directly related to thermodynamic stability. Furthermore, statistical potentials need not be defined at the atomic level, but can be based on a coarse-grained description of the protein's configuration, essentially by omitting the degrees of freedom associated with side chains. This allows faster computations, by avoiding the problem of searching through the rugged landscape of side-chain conformations. In addition, coarse-grained potentials could turn out to be an advantage, in that they will not recover the native sequence too faithfully. Most protein design procedures based on statistical potentials proposed until now have relied on coarse-grained, pairwise contact pseudo-energies [4, 25–32].
Yet, irrespective of the level of description adopted, currently available statistical potentials may not be ideal for protein design, since they have generally been optimized in the context of the folding problem, i.e. for maximizing the rate of correct structure prediction, given the sequence. In contrast, we would like to optimize the reciprocal prediction, namely, the sequences given the conformation. Several approaches have been proposed in this direction, consisting in maximizing the Z-score between the energy of the native sequence on the target conformation and its energy on a set of decoy sequences [33], or, alternatively, in applying a mean-square criterion on the values taken by the scoring function on each structure-sequence pair of the database [28]. However, these methods have thus far only been tested in cubic lattice protein models. In addition, they lack a firm theoretical basis. In particular, it would be interesting to guarantee optimal predictive power, and to have a robust methodology available to assess and compare the performance of alternative forms of statistical potentials.
Standard statistical theory provides such theoretical guarantees [34]. In the present case, the inverse folding problem can be formulated directly in terms of the probability of observing a sequence s given a conformation c, i.e. p(s | c, θ). This probability explicitly depends on the pre-specified model through a series of parameters, represented here by θ. These may be, for instance, the coefficients of a pairwise potential, parameters describing compositional effects, secondary structure environment, solvent accessibility, etc. Taking the product over a database of P independent sequence-conformation pairs, S = (s^{p})_{p = 1..P} and C = (c^{p})_{p = 1..P}, yields a joint probability
$p(S \mid C, \theta) = \prod_{p} p(s^{p} \mid c^{p}, \theta) \qquad (1)$
which, as a function of θ, can be seen as a likelihood. The parameter θ is then learnt by maximizing the likelihood with respect to θ. Once this is done, sequences can be assessed, or sampled, under the optimal parameter value $\widehat{\theta}$, by direct numerical evaluation of their probability, or by Monte Carlo sampling methods.
Reformulated in this way, the method maximizes the predictive power of the potential, now in the structure-seeks-sequence direction. By construction, it yields the optimal parameter values that can be obtained for a given form of the potential. In addition, the fit of the model can be directly evaluated, based on the value of the likelihood obtained on a test data set, distinct from the learning set (cross-validation), giving a means of rigorous model selection. Finally, the statistical framework proposed here allows one to explicitly combine together, in a model dependent manner, all kinds of factors that we surmise may induce correlations between the structure and the sequence of proteins.
We have implemented this maximum likelihood (ML) procedure in a Markov chain Monte Carlo framework, and applied it to a simple case, using a contact potential, supplemented with a solvent accessibility term. Using cross-validation, we show that the resulting potentials yield a better fit than currently available potentials of the same form, and that combining solvent-accessibility considerations with contact energies is better than either alone. Furthermore, we find that solvent accessibility requires a more complex description than what is currently used. Ultimately, the overall method proposed in this work can be extended to a large spectrum of alternative models and statistical potentials.
Results
The probabilistic model
Let us consider a sequence s = (s_{i})_{i = 1..N}, of length N, and of conformation c. In its most general form, the method introduced here can work with any model M specifying the conditional probability of s given c, in terms of an unnormalized non-negative function q(s, c):
$p(s \mid c, M) = \frac{q(s, c)}{\sum_{s'} q(s', c)}. \qquad (2)$
To illustrate the method, we will apply it to a simple case, using a pairwise contact potential. The argument is as follows. First, by Bayes' theorem:
$p(s \mid c, M) = \frac{p(c \mid s, M)\, p(s \mid M)}{\sum_{s'} p(c \mid s', M)\, p(s' \mid M)}. \qquad (3)$
If, in addition, we assume a uniform prior on s, we can simply relate equations 3 and 2 by posing q(s, c) = p(c | s, M). Next, given a statistical potential E(s, c), the conformational probability p(c | s) can be expressed as a Boltzmann distribution:
$p(c \mid s, M) = \frac{e^{-E(s,c)/kT}}{Z_{s}} \qquad (4)$

$= e^{-(E(s,c) - F(s))/kT}, \qquad (5)$
where
$Z_{s} = \sum_{c'} e^{-E(s,c')/kT} \qquad (6)$
is a normalization constant, and
F(s) = -kT ln Z_{s}. (7)
T and k are the absolute temperature and the Boltzmann constant, respectively. Without loss of generality, it is possible to rescale the potential so that kT = 1, which we will do in the following.
Then, by defining the inverse potential:
G(s, c) = E(s, c) - F(s), (8)
the conditional probability of sequence s reads as
$p(s \mid c, \theta, M) = \frac{e^{-G(s,c)}}{Y}, \qquad (9)$
where
$Y = \sum_{s'} e^{-G(s', c)} \qquad (10)$
is the normalization factor. Note that, contrary to the Z_{s} factor of equation 4, which was a sum over all conformations, the present factor Y is a sum over sequence space (all possible sequences of length N).
Statistical potentials
In the present work, we used a statistical potential made of two terms:
$E(s, c) = \sum_{1 \le i < j \le N} \Delta_{ij}\, \epsilon_{s_i s_j} + \sum_{1 \le i \le N} \alpha_{s_i}^{v_i}. \qquad (11)$
The first term is a contact free energy: Δ_{ij} = 1 if positions i and j are closer in space than a certain cutoff distance, and 0 otherwise, and ε_{ab} defines the contact energy between amino acids a and b. The second term encodes a solvent-accessibility free energy: v_i denotes the solvent-accessibility class of position i, and ${\alpha}_{a}^{d}$ represents the free energy of amino acid a in solvent-accessibility class d, with a = 1..20 and d = 1..D, where D is the total number of solvent-accessibility classes considered.
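As an illustration, the two-term potential of equation 11 can be evaluated directly from a binary contact map and per-site accessibility classes. The Python sketch below is our own illustration, not the authors' implementation; the 8 Å cutoff and the array shapes are placeholder choices.

```python
import numpy as np

def contact_map(coords, cutoff=8.0):
    """Delta_ij = 1 if positions i and j are closer than `cutoff`
    (an illustrative value), 0 otherwise; diagonal set to 0."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    delta = (d < cutoff).astype(int)
    np.fill_diagonal(delta, 0)
    return delta

def energy(seq, delta, acc, eps, alpha):
    """E(s, c) of equation 11: pairwise contact term plus per-site
    solvent-accessibility term.
    seq:   (N,) amino-acid indices in 0..19
    delta: (N, N) symmetric binary contact map
    acc:   (N,) accessibility class of each position, in 0..D-1
    eps:   (20, 20) symmetric contact energies epsilon_ab
    alpha: (D, 20) accessibility energies alpha_a^d"""
    i, j = np.triu_indices(len(seq), k=1)
    contact_term = np.sum(delta[i, j] * eps[seq[i], seq[j]])
    access_term = np.sum(alpha[acc, seq])
    return contact_term + access_term
```

The sum over i < j avoids double-counting contacts, matching the range of the first sum in equation 11.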
Deriving the inverse potential requires the calculation of F(s), which is in principle entirely specified by the potential E, through the sum over all conformations of equation 6. In practice, however, this computation is intractable. As an alternative, we can give F a simple phenomenological form, inspired by the random energy model [25, 28, 35]:
$F\left(s\right)={\displaystyle \sum _{1\le i\le N}{\mu}_{{s}_{i}},}\phantom{\rule{0.1em}{0ex}}\left(12\right)$
where the (μ_{a})_{a = 1..20} are unknown parameters, analogous to "chemical potentials" for the 20 amino acids.
Altogether, our parameter vector is made of three components, θ = (α, ε, μ), and the inverse potential reads as:

$G(s, c) = \sum_{1 \le i < j \le N} \Delta_{ij}\, \epsilon_{s_i s_j} + \sum_{1 \le i \le N} \alpha_{s_i}^{v_i} - \sum_{1 \le i \le N} \mu_{s_i}. \qquad (13)$
Note that the probability defined by equation 9 is invariant under the following transformation:

$\mu_{a} \to \mu_{a} + J_1, \qquad (14)$

$\epsilon_{ab} \to \epsilon_{ab} + J_2, \qquad (15)$

$\alpha_{a}^{d} \to \alpha_{a}^{d} + J_3, \qquad (16)$

where J_1, J_2 and J_3 are arbitrary real constants. Therefore, to ensure identifiability of our probabilistic model, we enforce the following constraints:
$\sum_{a} \mu_{a} = 0, \qquad (17)$

$\sum_{ab} \epsilon_{ab} = 0, \qquad (18)$

$\sum_{a} \alpha_{a}^{d} = 0, \quad d = 1..D. \qquad (19)$
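In an implementation, these constraints can be enforced after every update simply by centring each parameter block; by the invariance noted above, this leaves all sequence probabilities unchanged. A minimal sketch (ours, not from the paper):

```python
import numpy as np

def center_parameters(mu, eps, alpha):
    """Enforce the identifiability constraints (17)-(19) by subtracting
    the mean of each parameter block; the probability of every sequence
    is unchanged, but the parametrization becomes unique.
    mu: (20,), eps: (20, 20), alpha: (D, 20), one row per class d."""
    mu = mu - mu.mean()                                # sum_a mu_a = 0
    eps = eps - eps.mean()                             # sum_ab eps_ab = 0
    alpha = alpha - alpha.mean(axis=1, keepdims=True)  # sum_a alpha_a^d = 0, each d
    return mu, eps, alpha
```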
A series of alternative inverse potentials can be obtained by suppressing the first or the second component of equation 13. In the present work, we tested the following combinations:

- μ,
- α + μ,
- ε + μ,
- ε + α + μ.
We also explored various numbers of accessibility classes, with D ranging from 2 to 20. Alternatively, the ε component can be fixed to values of a contact potential obtained by other authors (MJ) [17]. In this case, we must add a multiplicative scaling factor λ in front of the contact component to account for the fact that these potentials are normalized differently:
$G(s, c) = \lambda \sum_{1 \le i < j \le N} \Delta_{ij}\, \epsilon^{\mathrm{MJ}}_{s_i s_j} - \sum_{1 \le i \le N} \mu_{s_i}. \qquad (20)$

The scaling factor λ is optimized by ML, along with μ.
Optimizing the potentials by gradient descent
If we now consider a database made of P protein sequences S = (s^{p})_{p = 1..P}, of respective lengths N_{p}, and their corresponding three-dimensional structures C = (c^{p})_{p = 1..P}, the probability of observing the whole database, which we define as the likelihood L(θ), is the product of the probabilities of observing each protein independently:
$L(\theta) = p(S \mid C, \theta) \qquad (21)$

$= \prod_{p} p(s^{p} \mid c^{p}, \theta) \qquad (22)$

$= \frac{e^{-G(S,C)}}{Y}, \qquad (23)$
where
$G\left(S,C\right)={\displaystyle \sum _{p}G\left({s}^{p},{c}^{p}\right)}\phantom{\rule{0.1em}{0ex}}\left(24\right)$
is the inverse potential summed over the database, and
$Y = \sum_{S'} e^{-G(S', C)} \qquad (25)$
is the corresponding normalization constant. Since it is more convenient to work on minus the logarithm of the probability, we define the score ω:
ω(θ) = -ln L(θ) (26)
= G(S, C) + ln Y. (27)
We wish to maximize the likelihood or, equivalently, minimize ω with respect to θ. We do this by gradient descent, based on a numerical evaluation of the derivative of ω (see methods). The overall method is akin to an Expectation-Maximization (EM) algorithm [36]. In fact, it can be seen as a differential version of Dempster's method, and we therefore call it differential EM.
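Schematically, each iteration of this differential EM alternates a Monte Carlo estimation of the expected sufficient statistics with a deterministic gradient step. The sketch below is our own illustration, not the paper's code; it assumes a potential linear in θ, so that G(S, C) = θ · T(S, C) for some vector of sufficient statistics T (sign conventions folded into T), in which case the gradient of ω is the difference between observed and expected statistics, as derived next.

```python
import numpy as np

def gradient_step(theta, t_observed, t_samples, lr=0.01):
    """One descent step on omega(theta) = -ln L(theta), assuming
    G(S, C) = theta . T(S, C). The gradient is then
    d(omega)/d(theta) = T(database) - <T>, where the expectation over
    p(S' | C, theta) is replaced by an average over sequence sets drawn
    by Gibbs sampling (passed in here as `t_samples`).
    theta, t_observed: (K,) arrays; t_samples: (M, K) array."""
    grad = t_observed - np.mean(t_samples, axis=0)
    return theta - lr * grad
```

At a fixed point the observed and expected statistics coincide, which is exactly the characterization of the ML estimate given below for the contact term.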
The derivative of ω reads as:
$\frac{\partial \omega}{\partial \theta}=\frac{\partial G\left(S,C\right)}{\partial \theta}+\frac{\partial \mathrm{ln}\phantom{\rule{0.25em}{0ex}}Y}{\partial \theta}.\phantom{\rule{0.1em}{0ex}}\left(28\right)$
Applying the partition function formalism to equation 25, we can express the second term as an expectation over p(S' | C, θ):
$\frac{\partial \ln Y}{\partial \theta} = \frac{1}{Y} \frac{\partial Y}{\partial \theta} \qquad (29)$

$= -\frac{1}{Y} \sum_{S'} \frac{\partial G(S', C)}{\partial \theta}\, e^{-G(S', C)} \qquad (30)$

$= -\sum_{S'} \frac{\partial G(S', C)}{\partial \theta}\, p(S' \mid C, \theta) \qquad (31)$

$= -\left\langle \frac{\partial G}{\partial \theta} \right\rangle, \qquad (32)$
which leads us to the following expression for the derivative of ω:
$\frac{\partial \omega}{\partial \theta} = \frac{\partial G(S, C)}{\partial \theta} - \left\langle \frac{\partial G}{\partial \theta} \right\rangle. \qquad (33)$
Applying formula 33 on the inverse potential 13 yields the following expressions for the derivatives:
$\frac{\partial \omega}{\partial \epsilon_{ab}} = \left[ n_{ab} - \langle n_{ab} \rangle \right], \qquad (34)$
where n_{ab} is the number of contacts between amino acids a and b observed in the database, and 〈n_{ab}〉 is its expectation over the probability distribution p(S' | C, θ). Formula 34 thus leads to an intuitive characterization of the maximum likelihood estimate $\widehat{\epsilon}$: it is the value of ε such that the average number of each type of contact predicted by the potential matches the number observed in the database. Following a similar derivation:
$\frac{\partial \omega}{\partial \mu_{a}} = -\left[ m_{a} - \langle m_{a} \rangle \right], \qquad (35)$
where m_{a} is the total number of amino acids of type a, and
$\frac{\partial \omega}{\partial \alpha_{a}^{d}} = \left[ l_{a}^{d} - \langle l_{a}^{d} \rangle \right], \qquad (36)$
where $l_{a}^{d}$ is the total number of amino acids of type a belonging to solvent-accessibility class d.
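The observed counts entering the gradients of formulas 34–36 are straightforward to accumulate from a database; a sketch of ours, on a single sequence-structure pair:

```python
import numpy as np

def sufficient_statistics(seq, delta, acc, n_aa=20, n_classes=14):
    """Observed counts appearing in formulas 34-36:
    n[a, b] (a <= b): contacts between amino-acid types a and b,
    m[a]:             occurrences of amino-acid type a,
    l[d, a]:          amino acids of type a in accessibility class d."""
    n = np.zeros((n_aa, n_aa))
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if delta[i, j]:
                a, b = sorted((seq[i], seq[j]))
                n[a, b] += 1
    m = np.bincount(seq, minlength=n_aa).astype(float)
    l = np.zeros((n_classes, n_aa))
    for d, a in zip(acc, seq):
        l[d, a] += 1
    return n, m, l
```

Summing these counts over all proteins of the learning set gives the observed side of the observed-minus-expected gradients; the expected side is estimated by averaging the same counts over Gibbs-sampled sequences.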
The evolution of the potential for some residue pairs is shown in figures 1b and 1c. The effect of residue polarity on the final values is easily seen: known favorable interactions, such as glutamate-lysine or the hydrophobic isoleucine-valine pair, have a lower contact energy, while known unfavorable interactions, such as glutamate-glutamate, have higher energies, indicating that the potentials obtained are biologically reasonable.
Model comparison
The same optimization procedure was applied to the potential consisting only of the solvent accessibility term (α + μ), with an increasing number of accessibility classes, and to the combined (ε + α + μ) potential. The resulting log-likelihood scores cannot be directly compared, since the models do not have the same dimensionality. We therefore applied a two-fold cross-validation (CV) procedure, consisting in learning the potential on DS2 and testing it on DS1, and vice versa.
Specificity of the designed sequences
Table 1. Specificity of designed sequences.

| Potential | Average Z-score ratio | SD of Z-score ratio | Ranking (median) | Target fold in top 1% (A) | Target fold in top 10% | Average seq. identity | Correlation between (A) and mean entropy/site |
|---|---|---|---|---|---|---|---|
| μ | 0.12 | 0.18 | 2249 | 0.5% | 4.8% | 5.76% | 0.26 |
| μ + α_{14ac} | 0.10 | 0.18 | 2090 | 0.4% | 6.3% | 6.65% | 0.04 |
| μ + ε | 0.13 | 0.16 | 816.8 | 10.7% | 33.5% | 6.69% | 0.23 |
| μ + α_{14ac} + ε | 0.45 | 0.23 | 32.7 | 53.6% | 77.5% | 7.82% | 0.64 |
The solvent accessibility potential alone (α_{14ac} + μ, figure 6b) is not sufficient to provide specificity to the designed sequences, and behaves almost as poorly as the flat potential (μ, figure 6a). A mild improvement is seen when using the contact potential (ε + μ, figure 6c): for about 10% of the designed sequences the target fold is found among the best-scoring folds (Table 1), and the distribution of this ranking is skewed towards lower values. However, it is only with the combined potential (ε + α_{14ac} + μ, figure 6d) that a significant improvement is observed: for more than half of the designed sequences the target fold is found among the best 1% of scoring folds, even though the average sequence identity with the native sequence is less than 10% in all cases (Table 1).
We also tested a subset of 120 randomly chosen designed sequences with another fold recognition program, LOOPP [38], which combines several structure prediction methods based on threading, secondary structure, sequence profiles and exposed-surface-area prediction. The results obtained with this program were similar to those of THREADER: for 51.2% of the sequences designed with the combined (ε + α_{14ac} + μ) potential, the target fold was found as the first hit, and for 67.2% it was found among the first 10 hits.
In contrast, many of the current fold recognition programs based on sequence profile methods produced no significant hits (data not shown), which is not surprising, given that our sampling algorithm produces highly divergent sequences, with no similarity to any natural protein.
Discussion
The central idea of the present work is to reformulate the problem of devising statistical potentials for protein design as a statistical inference problem. This reformulation, based on the maximum likelihood (ML) principle, led us naturally to a gradient descent method, with the only additional aspect being that the gradient to follow is itself estimated by Monte Carlo averaging.
The main advantage of this ML framework is that it guarantees an optimal predictive power of the resulting potential. In addition, it is very general, and can in principle be applied to any form of statistical potential. In particular, it is not restricted to coarse grained descriptions of proteins, and it could also be applied at the atomic level.
Interestingly, our gradient descent method turns out to be similar in spirit to an iterative scheme proposed by Thomas and Dill [39], although in that case the purpose was to optimize a potential in the context of the folding problem. Specifically, Thomas and Dill tune the potential so as to match the observed and expected number of contacts of each type, except that their expectation is taken on a set of alternative conformations, for a fixed sequence, whereas we take the expectation on a set of alternative sequences, on the conformation of interest. Note that Thomas and Dill derived their method from intuitive arguments, and not as a mathematical consequence of the ML principle.
These two alternative optimization schemes, obtained by normalizing either over the sequence or over the structure space, are quite distinct, at least conceptually. How the resulting potentials would differ in practice is more difficult to evaluate. Among other things, it will depend on how well the approximation of ln Z_{s} based on the random energy model works. If it does not work well, the contact term of our inverse potential will likely combine two things: the information corresponding to the conformational energy of the sequence itself, which is also encoded in classical potentials optimized for threading, plus some information coming from the decoy term ln Z_{s}. A way to settle this question would be to optimize a contact potential using both normalization schemes on the same learning set, and then compare the resulting values as well as their predictive powers.
Model assessment and comparison
The methodological framework proposed here offers reliable criteria for comparing the empirical fit of alternative models on real data. In this respect, it should be noted that the lack of a reliable objective criterion for evaluating different statistical potentials has often been invoked for justifying the use of on-lattice idealized models [23]. However, on-lattice approaches are only moderately interesting, as they completely ignore the problem of the robustness of the learning method to model violation. Coarse-grained statistical potentials are by definition oversimplified models of proteins, and therefore, model violation is an intrinsic feature of the protein design problem. In this respect, the statistical language is interesting, since it is still valid, even for fitting and assessing models that are known to be imperfect.
On the other hand, the intuitive idea underlying cross-validation, i.e. measuring the rate of prediction of the native sequence, is quite simple, and has been invoked and used several times previously [16, 29, 32, 35, 40]. What we propose here is a better formalization of this idea. Note that in contrast to previous methods, we do not measure the marginal native prediction rate at each site, but the joint probability of the native sequence. This can be important, as it accounts for possible correlations in the predictive distribution. For instance, two given positions may not display any particular pattern, when considered marginally, but may jointly follow charge or steric compensatory patterns. These phenomena will not be taken into account in the overall fit of the potential when measuring the marginal prediction rate, as is usually done. Technically speaking, the joint probability of the native sequence on the corresponding structure is extremely small, and cannot be evaluated just by counting the frequency at which the native sequence appears in the sample obtained by Gibbs sampling. For this, more elaborate numerical methods, such as thermodynamic integration, are required.
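To make the thermodynamic-integration idea concrete, the toy sketch below (our own illustration) estimates ln Y along the path G_β = βG, using d ln Y(β)/dβ = -⟨G⟩_β. The expectations are computed exactly on an enumerable toy sequence space so the estimate can be checked against brute force; in the actual method they would be Gibbs-sampling averages over protein sequences.

```python
import itertools
import numpy as np

def exact_log_Y(G, seqs):
    """Brute-force ln Y = ln sum_s exp(-G(s)), for validation only."""
    return np.log(sum(np.exp(-G(s)) for s in seqs))

def ti_log_Y(G, seqs, n_grid=201):
    """Thermodynamic integration along G_beta = beta * G:
    ln Y(1) = ln Y(0) - integral_0^1 <G>_beta d(beta),
    where Y(0) = (number of sequences) is the uniform reference."""
    g = np.array([G(s) for s in seqs])
    betas = np.linspace(0.0, 1.0, n_grid)
    means = []
    for beta in betas:
        w = np.exp(-beta * g)
        means.append(np.dot(w / w.sum(), g))   # <G>_beta
    means = np.array(means)
    integral = np.sum((means[1:] + means[:-1]) / 2 * np.diff(betas))
    return np.log(len(seqs)) - integral

# toy "inverse potential" on binary sequences of length 4 (a stand-in
# for the 20-letter sequence space, which cannot be enumerated)
seqs = list(itertools.product([0, 1], repeat=4))
G = lambda s: float(sum(s))
```

With ln Y in hand, the joint log-probability of the native sequence is simply -G(native) - ln Y, however small that probability is.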
In the present case, the comparison between alternative models has allowed us to measure the relative contribution of each term of the potential and to refine the protein representation. The contact component turns out to be the most informative (figure 4), although it should be complemented with other energetic forms. Here, we have tested the addition of a solvent accessibility component, which significantly improves the fit of the model. Contact information and solvent exposure are correlated, which is reflected in the fact that the fit improvements of the two terms are not additive.
Our model comparison method also gives us a direct way of choosing the optimal number of solvent accessibility classes (figure 3). Here, we found a number of 14 to 16 classes, which is higher than what one may have expected and than what is usually used. Note that this number depends on the way the classes are defined; here, the classes are based on quantiles, but as an alternative, we also tried a linear definition (evenly splitting the whole range of accessibility surfaces into D bins), which gave us an even higher optimal number of classes (20 classes, data not shown). In general, the present methodology could be used to investigate different definitions of accessibility classes, to refine the pairwise contact definition, or any other elements of the structure representation included in the potential.
The fact that our potential has a significantly better predictive power than that of Miyazawa and Jernigan (MJ, figure 4) is trivially expected, by construction of the ML potential. What is more surprising is that the MJ matrix is less fit than a simple solventaccessibility profile. A possible explanation would be that Miyazawa and Jernigan's potential is based on the quasichemical approximation, which is now known to be somewhat drastic [19, 41, 42], as it neglects correlations between observed pairing frequencies, due to chain connectivity and multiple contacts. Alternatively, it could mean that potentials optimized for folding are really not suited for protein design purposes. Testing other pairwise contact potentials, in particular those that do not rely on the quasichemical approximation [22, 24, 43–45], would be a way to address this issue.
Sequence sampling
The method that we propose in this work is probabilistic in essence. As such, it offers a very natural framework for investigating the patterns induced by the models on distributions of sequences.
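For instance, sequences can be drawn from p(s | c, θ) by site-wise Gibbs sampling: each position in turn is resampled from its conditional distribution given the rest of the sequence, which only involves the terms of G touching that position. The sketch below is our own illustration for the combined (ε + α + μ) potential of equation 13; it assumes Δ has a zero diagonal.

```python
import numpy as np

def gibbs_sweep(seq, delta, acc, eps, alpha, mu, rng):
    """One Gibbs sweep over p(s | c) proportional to exp(-G(s, c)).
    For position i, only the terms of G involving i matter:
    sum_j Delta_ij eps[a, s_j] + alpha[acc_i, a] - mu[a], for each
    candidate amino acid a. seq is modified in place and returned."""
    n_aa = len(mu)
    for i in range(len(seq)):
        neighbors = np.flatnonzero(delta[i])          # contacts of position i
        local = eps[:, seq[neighbors]].sum(axis=1) + alpha[acc[i]] - mu
        w = np.exp(-(local - local.min()))            # stabilized Boltzmann weights
        seq[i] = rng.choice(n_aa, p=w / w.sum())
    return seq
```

Repeated sweeps yield designed sequences, and the per-site conditional distributions directly give conformation-dependent site-specific profiles.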
Specificity of the designed sequences
A sequence s designed for a target conformation c should not only be compatible with c, but also incompatible with competing folds. A rigorous solution to this problem involves a simultaneous search over the sequence and conformation space. It is possible, however, to achieve specificity without explicitly seeking to penalize competing states (negative design), if we rely on the approximation based on the random energy model, where the normalization constant of equation 4 can be considered as a function of the sequence composition only [25, 46]. In our case, the normalization of the likelihood will also play an important role: since the total probability over all possible sequences has to be 1, maximizing the probability for a given sequence s_{1} on its native conformation c_{1} will lower the probability that another natural sequence s_{2}, with native conformation c_{2}, also gets a high probability on c_{1}. When many sequences are learnt in parallel, this phenomenon should ultimately favor specificity of s_{2} on c_{2}, compared to all other conformations of the data set.
On the other hand, the extent to which specificity is achieved will depend on the actual form of the potential used, as well as on the database used for learning. To address this question, we produced a large number of sequences with four different potentials, and checked their ability to recognize the target fold, as measured by the Z-score ratio or by the ranking of the target structure in a fold recognition experiment. Indeed, an improvement in specificity is observed when using better potentials, suggesting that the method effectively captures specific dependencies between the conformation and the sequence of the proteins in the learning set, even for the simple forms of potentials tested here. For the combined (ε + α_{14ac} + μ) potential, the average Z-score ratio of the designed sequences is similar to what has been reported for other protein design algorithms [46]. Conversely, this also suggests that a more sophisticated potential may further improve the specificity of the sequences designed using our algorithm.
Conformation-dependent site-specific profiles
To compare natural protein sequences with those predicted by the optimized potentials, marginal, leave-one-out and empirical profiles (see Methods) were generated for the 60 proteins used in the design specificity experiment described above; the profiles obtained for the best and the worst scoring structures are provided as supplementary materials [see Additional file 7]. Overall, leave-one-out profiles (figure 5a) and marginal profiles (figure 5b) do not display significant differences in discriminative power between sites: the mean Shannon entropy per site is 0.743 ± 0.366 for marginal profiles, and 0.696 ± 0.428 for leave-one-out profiles. It is worth noting that the mean entropy per site for each protein, and the corresponding standard deviation, i.e. the average amount of information at each site and the variation between sites, are both correlated with the performance of the particular protein in the fold recognition experiment, and that this correlation holds only for the combined (ε + α_{14ac} + μ) potential (Table 1).
A detailed analysis of the leave-one-out profiles for a particular case, an alpha-aminotransferase, may be useful to understand which type of information is effectively captured by the potential, and which is not captured at all, thereby suggesting possible ways of improving the current form of potential.
First, regions of the protein that show little secondary structure (such as in positions 32–40, 55–65 and 82–88) contain less information (mean entropy per site = 0.756) than regions with local structure (mean entropy per site = 0.856). This is not surprising, since these regions typically have fewer contacts between residues, and thus the amount of information included in the protein representation is lower.
Concerning regions with defined secondary structure, residue polarity is the information most easily captured. Charged residues are also distinctively inferred, as are glycines, to a lesser extent (e.g. glycines 64, 81 and 95, the latter predicted at position 94 or 95). In contrast, prolines are rarely correctly predicted, which is expected, since the properties most distinctive of prolines (such as phi-psi dihedral angles or local secondary structure) are not included in this particular form of potential.
Interestingly, some residues that have a crucial importance for the protein structure or function fail to be predicted, simply because the properties conferring their importance are not included in the protein description. This is the case of the amino acids that are in close interaction with a ligand (positions 34, 59, 96, 97).
Finally, the leave-one-out profiles display an interesting behavior with respect to positions where the amino acid present in the reference sequence is not at all conserved in other members of the family. In some cases, they simply do not predict anything (e.g. glycines 24 and 60, or leucine 9, isoleucine 21, and alanine 23), which suggests that their limited importance in structure stability or function is recognized by the inverse potential. In other cases, the natural profile is even reproduced in the leave-one-out profile, instead of the amino acid of the reference sequence; such is the case for phenylalanine 100.
Conclusion
As illustrated by the sequence logos and the fold recognition experiments performed above, the predictive power of the models proposed here is encouraging, but nevertheless still weak. It is not yet clear to what extent this is due to the specific choice made concerning the form of the statistical potential, to the approximation of ln Z_s as a function of the sole composition of the sequence, or to yet other reasons. Most probably, we are facing a combination of several factors. The methods proposed here can now be used to address these difficult questions empirically.
In one direction, other approximations of ln Z_s, less drastic than the random energy model, but still accessible in practice, can be investigated. For instance, following Deutsch and Kurosky (1996), the conditional probability of a sequence could be defined as:
p(s | c) ∝ e^{-[E(s, c) - 〈E(s)〉]} p(s), (37)
where the expectation 〈·〉 is taken over a predefined set of decoy conformations. More sophisticated Monte Carlo methods, jointly sampling the sequence and conformation spaces, can also be imagined, in order to get more precise evaluations of ln Z_{ s }, while staying in the same global maximum likelihood formalism.
On the other hand, all the many statistical potentials that have been proposed over the last fifteen years may in principle be investigated in the same way as we have done here. In particular, distance-dependent potentials [47] and main-chain dihedral angle potentials [48], which imply a richer representation of the protein structure, may result in models of greater predictive power. Other ways of implicitly considering side-chain conformation may also be easily incorporated into the model.
In a completely different perspective, it is possible to devise probabilistic models that are not defined, even formally, in terms of a conformational free energy alone. For instance, additional terms, concerning secondary structure aspects, interactions between successive positions along the sequence, or terms related to the folding constraints, can all be combined in an additive manner in the inverse potential. In fact, the model need not even be formulated in terms of a Boltzmann distribution, as long as the parameters are fitted by ML, and the predictive power of the resulting models is evaluated in a systematic way. Altogether, this amounts to setting up a robust statistical framework helping us to understand how, and to what extent, the sequences of natural proteins are determined by protein structure.
Methods
Structure representation
We used Miyazawa and Jernigan's definition of contacts [17]: each residue is represented by the center of its side chain atom positions; the position of the C^α atom is used for glycine. Residues whose centers are closer than 6.5 Å are defined to be in contact. The accessible surface of a residue is defined as the atomic accessible area when a probe of the radius of a water molecule is rolled around the van der Waals surface of the protein [49]. We used the program Naccess [50] to make this calculation. When treating PDB files with multiple chains, solvent accessibility was calculated taking into account all molecules in the structure. The accessibility classes (percentage relative to the accessibility in an Ala-X-Ala fully extended tripeptide) were defined so as to generate D equal-sized subsets of sites. The complete definition of accessibility classes is available as supporting material [see Additional file 1].
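As a concrete illustration, the contact definition above can be sketched in a few lines of Python (a minimal NumPy sketch; in practice the side-chain centers would be computed from PDB coordinates, and any chain-neighbor exclusions used in the original potential are omitted here):

```python
import numpy as np

def contact_map(centers, cutoff=6.5):
    """Boolean contact matrix between residue side-chain centers.

    `centers` is an (N, 3) array of side-chain geometric centers
    (C-alpha for glycine); residues i != j are defined to be in contact
    when their centers are closer than `cutoff` Angstroms.
    """
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    contacts = d < cutoff
    np.fill_diagonal(contacts, False)  # a residue is not its own contact
    return contacts

# toy example: three collinear residues spaced 5 Angstroms apart
centers = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
cm = contact_map(centers)
# pairs (0,1) and (1,2) are in contact (5 < 6.5); pair (0,2) is not (10 > 6.5)
```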
Monte Carlo implementation
In order to calculate the derivative of ω in the gradient descent procedure, expectations with respect to p(S' | C, θ) in equation 33 are evaluated numerically. A sample drawn from p(S | C, θ) is obtained by a Gibbs sampling algorithm similar to that of Robinson et al. [10]. The elementary cycle of our Gibbs sampler is as follows: for each p = 1..P, and for each i = 1..N_p, each of the 20 amino acids is proposed at site i of protein p, by successively setting s_i^p = a, for all a = 1..20; in each case, the energy change ΔG_a induced by this point substitution is evaluated; then, s_i^p is set to amino acid a with probability p_a ∝ e^{-ΔG_a}. After Q cycles of burn-in, a series of h = 1..K_EM cycles are performed, and after each cycle, the current sequence, S_h, is recorded. Once the sample is obtained, the expectation (32) is evaluated as
$$\left\langle \frac{\partial G}{\partial \theta} \right\rangle \simeq \frac{1}{K_{EM}} \sum_{h=1}^{K_{EM}} \frac{\partial G(S_h, C)}{\partial \theta}, \qquad (38)$$
and the derivative of ω with respect to θ follows immediately.
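The elementary site update described above can be sketched as follows (a hypothetical Python sketch: `delta_G` stands in for the energy change under the paper's contact/accessibility potential, which is not reproduced here):

```python
import numpy as np

def gibbs_site_update(seq, i, delta_G, rng):
    """Resample the amino acid at site i of the integer sequence `seq`.

    `delta_G(seq, i, a)` must return the energy change induced by setting
    site i to amino acid a (coded 0..19); the site is then redrawn with
    probability p_a proportional to exp(-delta_G_a). `delta_G` is a
    placeholder for the actual potential.
    """
    dG = np.array([delta_G(seq, i, a) for a in range(20)])
    w = np.exp(-(dG - dG.min()))  # shift by the minimum for numerical stability
    seq[i] = rng.choice(20, p=w / w.sum())
    return seq

# toy check with a potential that strongly favors amino acid 3 at every site
seq = gibbs_site_update(np.zeros(5, dtype=int), 2,
                        lambda s, i, a: 0.0 if a == 3 else 50.0,
                        np.random.default_rng(1))
# site 2 is now amino acid 3 (the alternatives have weight ~exp(-50))
```

A full Gibbs cycle loops this update over every site of every protein in the database.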
The overall gradient descent procedure runs as follows: we start from a random potential θ_0 and a random set of sequences, and perform the following iterative scheme:

1. perform Q Gibbs cycles for the burn-in, and K_EM additional cycles for the sampling itself; keep the final sequences as the starting point of the next iteration;

2. update θ by gradient descent, based on the estimate of the gradient obtained over the sample:

$$\theta_{n+1} = \theta_n - \delta\theta \cdot \frac{\partial \omega}{\partial \theta}, \qquad (39)$$

where · denotes component-wise multiplication and δθ is a step-vector; in practice, the coefficients of δθ are tuned empirically, allowing three degrees of freedom, for the α, the ε, and the μ components of the potential respectively;

3. iterate.
As a stopping rule, we monitor the evolution of ω(θ) itself, which we evaluate every 100 steps by a numerical procedure (see below), and stop when ω(θ) has stabilized. In practice, we used Q = 100 and K_EM = 100. At first sight, it would seem that a larger number of points K_EM would be needed to get a precise expectation, but in the present case one can rely on the self-averaging of the derivatives across the 100,000 sites of the database.
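The overall scheme amounts to stochastic gradient descent on ω, which can be sketched as follows (a toy Python sketch: `grad_estimate` stands in for the Monte Carlo gradient estimate of equation 38, and the quadratic sanity check below is purely illustrative):

```python
import numpy as np

def fit_potential(theta0, grad_estimate, step, n_iter=200):
    """Gradient descent on omega(theta), in the spirit of equation 39.

    `grad_estimate(theta)` returns an estimate of the gradient of omega;
    in the paper this is the Monte Carlo average over Gibbs-sampled
    sequences, here it is a placeholder. `step` plays the role of the
    step-vector delta-theta.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        theta = theta - step * grad_estimate(theta)
    return theta

# sanity check on a quadratic objective omega = ||theta||^2 / 2,
# whose gradient is theta itself: the iterates contract toward 0
theta = fit_potential([1.0, -2.0], grad_estimate=lambda t: t, step=0.1)
```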
Likelihood evaluation
The difficult part in estimating the likelihood (or equivalently ω(θ)), for a given value of θ, is to obtain an evaluation of ln Y. We do this by thermodynamic integration, or path sampling [51, 52], using the quasi-static method which we developed previously [53].
First, for 0 ≤ β ≤ 1, we define the interpolated potential

$$G_\beta(s, c) = (1 - \beta)\, G_0(s) + \beta\, G(s, c), \qquad (40)$$

where $G_0(s) = -\sum_{i=1}^{N} \mu_{s_i}$ depends only on the composition of the sequence, so that G_1 = G.
The associated probability distribution is:

$$p_\beta(s \mid c, \theta) = \frac{e^{-G_\beta(s, c)}}{Y_\beta}, \qquad (41)$$

$$Y_\beta = \sum_{s'} e^{-G_\beta(s', c)}. \qquad (42)$$
What we are looking for is ln Y_1. As for ln Y_0, it factors out, and can be computed directly:

$$\ln Y_0 = N \ln \left( \sum_{a=1}^{20} e^{\mu_a} \right). \qquad (43)$$
We can thus equivalently evaluate the difference ln Y_{1} – ln Y_{0}. To do this, we rely on the following identity:
$$\ln Y_1 - \ln Y_0 = \int_0^1 \frac{\partial \ln Y_\beta}{\partial \beta}\, d\beta \qquad (44)$$

$$\phantom{\ln Y_1 - \ln Y_0} = -\int_0^1 \left\langle \frac{\partial G_\beta}{\partial \beta} \right\rangle_\beta d\beta, \qquad (45)$$
where 〈·〉_β is the expectation over p_β(s' | c, θ).
In practice, the method consists in first equilibrating the Gibbs sampler at β = 0, and then performing a series of K_Th + 1 cycles, where at each step the value of β is increased by a small amount δβ = 1/K_Th. The successive values of ∂G_β/∂β obtained during this quasi-static sampling scheme are recorded, and minus their average is our estimate of ln Y_1 - ln Y_0:

$$\ln Y_1 - \ln Y_0 \simeq -\frac{1}{K_{Th} + 1} \sum_{k=0}^{K_{Th}} \left. \frac{\partial G_\beta}{\partial \beta} \right|_{\beta = k / K_{Th}}. \qquad (46)$$
Note that these developments are for one protein, but the generalization over the database is straightforward.
In the conditions of the present work, K_Th = 1,000 is sufficient to obtain an estimate of ln Y_1 - ln Y_0 with an error of less than one natural unit of logarithm.
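For a toy state space where the normalization constants can also be computed exactly, the quasi-static scheme can be sketched and checked as follows (a Python sketch assuming a linear path G_β = (1 - β)G_0 + βG_1; unlike the sampler above, each β uses exact sampling over the small state space):

```python
import numpy as np

def thermo_integration(G0, G1, K=20000, seed=0):
    """Quasi-static estimate of ln Y_1 - ln Y_0 over a discrete state space.

    At step k, beta = k/K; a state is drawn from p_beta proportional to
    exp(-G_beta), with G_beta = (1 - beta) * G0 + beta * G1, and
    dG_beta/dbeta = G1 - G0 is recorded. The estimate is minus the
    average of the recorded values (equations 44-45).
    """
    rng = np.random.default_rng(seed)
    dGdb = G1 - G0
    acc = 0.0
    for k in range(K + 1):
        beta = k / K
        G = (1 - beta) * G0 + beta * G1
        p = np.exp(-(G - G.min()))
        acc += dGdb[rng.choice(len(G0), p=p / p.sum())]
    return -acc / (K + 1)

# four states with known normalizations, so the estimate can be checked
G0 = np.zeros(4)
G1 = np.array([0.0, 1.0, 2.0, 3.0])
est = thermo_integration(G0, G1)
exact = np.log(np.exp(-G1).sum()) - np.log(np.exp(-G0).sum())
# est agrees with exact to within Monte Carlo error
```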
Model comparison
We measured the fit of each model using cross-validation (CV): the potential optimized on a first data set, the learning set (θ_L), is applied to a second data set (the test set), and the log-likelihood is directly taken as a measure of fit. More precisely, for each model M,
CV_M = ln p(S_T | C_T, θ_L, M), (47)
where S_T and C_T are the sequences and structures of the test set. The difference from the CV score obtained for the flat potential (μ) is reported: ΔCV = CV_μ - CV_M.
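For the μ-only (flat) potential, the likelihood factorizes over sites, so the CV score of equation 47 reduces to a simple sum; a minimal Python sketch (the 20-component `theta_mu` and the test-set amino-acid counts below are illustrative inputs, not values from the paper):

```python
import numpy as np

def cv_score_mu(test_counts, theta_mu):
    """Cross-validation score (equation 47) for the site-independent
    mu-only model: ln p(S_T | C_T, theta) is the sum, over test-set sites,
    of the log probability of the observed amino acid, so only the 20
    test-set amino-acid counts are needed."""
    logp = theta_mu - np.log(np.exp(theta_mu).sum())  # normalized log probabilities
    return float((test_counts * logp).sum())

# with a flat theta_mu, every amino acid has probability 1/20,
# so 10 test-set sites score 10 * ln(1/20)
counts = np.zeros(20)
counts[0], counts[5] = 4, 6
cv = cv_score_mu(counts, np.zeros(20))
```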
Sequence sampling: site-specific profiles
Once an optimal value of θ is obtained, sequences compatible with a given conformation can be sampled from p(s | c, $\widehat{\theta}$) by Gibbs sampling, and then further investigated. For instance, the frequency q_i(a) of each of the 20 amino acids a at each position i can be computed, yielding a vector of site-specific marginal profiles, graphically displayed as sequence logos [54]. Alternatively, leave-one-out profiles can be obtained by computing the probability of each of the 20 amino acids at each site of the test sequence, given the potential and the native sequence at all other positions:
p(s_i = a | s_j, j ≠ i, θ). (48)
We measured the amount of information displayed by the profiles using the site-specific Shannon entropy:

$$h_i = -\sum_a q_i(a) \ln q_i(a). \qquad (49)$$
We compared both marginal and leave-one-out profiles to the empirical profiles, i.e. profiles displayed by natural sequences. We generated these empirical profiles from multiple sequence alignments obtained from the ConSurf-HSSP database [55].
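Equation 49 applied to a sample of sequences can be sketched as follows (a minimal Python sketch; sequences are coded as integers 0..19, and q_i(a) is estimated by the sample frequency):

```python
import numpy as np

def site_entropies(sample):
    """Site-specific Shannon entropies (equation 49) of a sequence sample.

    `sample` is a (K, N) integer array of K sampled sequences over N sites,
    amino acids coded 0..19.
    """
    K, N = sample.shape
    h = np.empty(N)
    for i in range(N):
        q = np.bincount(sample[:, i], minlength=20) / K
        nz = q > 0  # 0 * ln 0 is taken to be 0
        h[i] = -(q[nz] * np.log(q[nz])).sum()
    return h

# a perfectly conserved site has entropy 0; a 50/50 site has entropy ln 2
h = site_entropies(np.array([[0, 0], [0, 1]]))
```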
Sequence sampling: design specificity
As a test for specificity, designed sequences were submitted to a fold recognition experiment, using the fold recognition program THREADER [37]. In THREADER, the compatibility of a sequence s with a given structure c is measured by the Z-score:
$$Z = \frac{\langle E(s, C) \rangle - E(s, c)}{\sigma}, \qquad (50)$$

where 〈E(s, C)〉 is the average of the THREADER statistical potential over all conformations of the decoy set, and σ is the corresponding standard deviation.
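Equation 50 is straightforward to compute once the decoy energies are available (a minimal Python sketch; the energies below are illustrative numbers, whereas in the paper they come from threading the sequence through the THREADER decoy set):

```python
import numpy as np

def z_score(E_native, E_decoys):
    """Z-score of equation 50: the gap between the mean decoy energy and
    the energy on the candidate structure, in units of the decoy standard
    deviation. Large positive Z means the candidate structure scores much
    better than a typical decoy."""
    E_decoys = np.asarray(E_decoys, dtype=float)
    return (E_decoys.mean() - E_native) / E_decoys.std()

# a native energy well below the decoy distribution gives a large Z
z = z_score(-2.0, [0.0, 1.0, -1.0])
```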
We randomly chose 70 structures of sizes ranging from 100 to 300 residues from the default THREADER dataset [see Additional file 8]. Structures whose native sequences produced a Z-score < 3 were discarded from the analysis. For each structure c, we sampled 20 sequences from p(s | c, $\widehat{\theta}$) by Gibbs sampling. These designed sequences were then submitted to THREADER [37], and their specificity for the target structure c was measured by the ranking of c among all other structures, sorted by increasing Z-score.
A subset of 120 among the 1,200 sequences generated with the combined (ε + α_{14ac} + μ) potential (3–5 sequences for 23 distinct conformations, chosen at random; [see Additional file 8]) were also submitted to another fold recognition program, LOOPP [38], and the presence of the native conformation c as the first hit or in the first 10 hits was recorded.
Learning databases
We used proteins culled from the entire PDB according to structure quality (resolution better than 2.0 Å) and with less than 25% of mutual sequence identity [56]. Two subsets of approximately equal size were obtained by partitioning the proteins randomly: DS1, 449 proteins, 100,077 sites, and DS2, 465 proteins, 99,894 sites. The final list of proteins is available as supporting material [see Additional file 2] [see Additional file 3].
Declarations
Acknowledgements
The authors are grateful to Thomas Simonson, Pierre Tufféry, Laurent Chiche, Jérôme Gracy and Gertraud Burger for their critical comments on the manuscript and useful discussions. This work was financially supported in part by the "60ème commission franco-québécoise de coopération scientifique". CLK was supported by NSERC, CIHR and the Université de Montréal; NR was supported by a bioinformatics grant from Genome Québec; HP by the Canada Research Chair Program and the Université de Montréal; CB and NL were funded by the French Centre National de la Recherche Scientifique, through the ACI-IMPBIO ModelPhylo funding program.
References
1. Drexler KE: Molecular engineering: an approach to the development of general capabilities for molecular manipulation. Proc Natl Acad Sci USA 1981, 78: 5275-5278.
2. Pabo C: Molecular technology: designing proteins and peptides. Nature 1983, 301: 200.
3. Ponder JW, Richards FM: Tertiary templates for proteins. Use of packing criteria in the enumeration of allowed sequences for different structural classes. J Mol Biol 1987, 193: 775-791.
4. Seno F, Vendruscolo M, Maritan A, Banavar JR: Optimal protein design procedures. Phys Rev Lett 1996, 77: 1901-1904.
5. Abkevich VI, Gutin AM, Shakhnovich EI: Improved design of stable and fast-folding model proteins. Fold Des 1996, 1: 221-230.
6. Hellinga HW, Richards FM: Optimal sequence selection in proteins of known structure by simulated evolution. Proc Natl Acad Sci USA 1994, 91: 5803-5807.
7. Parisi G, Echave J: Structural constraints and emergence of sequence patterns in protein evolution. Mol Biol Evol 2001, 18: 750-756.
8. Bastolla U, Porto M, Roman HE, Vendruscolo M: Lack of self-averaging in neutral evolution of proteins. Phys Rev Lett 2002, 89.
9. Bastolla U, Porto M, Roman HE, Vendruscolo M: Connectivity of neutral networks, overdispersion, and structural conservation in protein evolution. J Mol Evol 2003, 56: 243-254.
10. Robinson DM, Jones DT, Kishino H, Goldman N, Thorne JL: Protein evolution with dependence among codons due to tertiary structure. Mol Biol Evol 2003, 20: 1692-1704.
11. Rodrigue N, Lartillot N, Bryant D, Philippe H: Site interdependence attributed to tertiary structure in amino acid sequence evolution. Gene 2005, 347: 207-217.
12. Park S, Yang X, Saven JG: Advances in computational protein design. Curr Opin Struct Biol 2004, 14: 487-494.
13. Wernisch L, Hery S, Wodak SJ: Automatic protein design with all atom force-fields by exact and heuristic optimization. J Mol Biol 2000, 301: 713-736.
14. Larson SM, England JL, Desjarlais JR, Pande VS: Thoroughly sampling sequence space: large-scale protein design of structural ensembles. Protein Sci 2002, 11: 2804-2813.
15. Dahiyat BI, Sarisky CA, Mayo SL: De novo protein design: towards fully automated sequence selection. J Mol Biol 1997, 273: 789-796.
16. Jaramillo A, Wernisch L, Héry S, Wodak SJ: Folding free energy function selects native-like protein sequences in the core but not on the surface. Proc Natl Acad Sci USA 2002, 99: 13554-13559.
17. Miyazawa S, Jernigan RL: Estimation of effective interresidue contact energies from protein crystal structures: quasi-chemical approximation. Macromolecules 1985, 18: 534-552.
18. Sippl MJ: Boltzmann's principle, knowledge-based mean fields and protein folding. An approach to the computational determination of protein structures. J Comput Aided Mol Des 1993, 7: 473-501.
19. Godzik A, Kolinski A, Skolnick J: Are proteins ideal mixtures of amino acids? Analysis of energy parameter sets. Protein Sci 1995, 4: 2107-2117.
20. Solis AD, Rackovsky S: Improvement of statistical potentials and threading score functions using information maximization. Proteins 2006, 62: 892-908.
21. Hendlich M, Lackner P, Weitckus S, Floeckner H, Froschauer R, Gottsbacher K, Casari G, Sippl MJ: Identification of native protein folds amongst a large number of incorrect models. The calculation of low energy conformations from potentials of mean force. J Mol Biol 1990, 216: 167-180.
22. Maiorov V, Crippen G: Contact potential that recognizes the correct folding of globular proteins. J Mol Biol 1992, 227: 876-888.
23. Mirny LA, Shakhnovich EI: How to derive a protein folding potential? A new approach to an old problem. J Mol Biol 1996, 264: 1164-1179.
24. Bastolla U, Farwer J, Knapp EW, Vendruscolo M: How to guarantee optimal stability for most representative structures in the protein data bank. Proteins 2001, 44: 79-96.
25. Shakhnovich EI, Gutin AM: Engineering of stable and fast-folding sequences of model proteins. Proc Natl Acad Sci USA 1993, 90: 7195-7199.
26. Kurosky T, Deutsch JM: Design of copolymeric material. J Phys A Math Gen 1995, 28: L387-L393.
27. Deutsch JM, Kurosky T: New algorithm for protein design. Phys Rev Lett 1996, 76: 323-326.
28. Seno F, Micheletti C, Maritan A, Banavar JR: Variational approach to protein design and extraction of interaction potentials. Phys Rev Lett 1998, 81: 2172-2175.
29. Micheletti C, Seno F, Maritan A, Banavar J: Design of proteins with hydrophobic and polar amino acids. Proteins 1998, 32: 80-87.
30. Banavar J, Cieplak M, Maritan A, Nadig G, Seno F, Vishveshwara S: Structure-based design of model proteins. Proteins 1998, 31: 10-20.
31. Rossi A, Maritan A, Micheletti C: A novel iterative strategy for protein design. J Chem Phys 2000, 112: 2050-2055.
32. Rossi A, Micheletti C, Seno F, Maritan A: A self-consistent knowledge-based approach to protein design. Biophys J 2001, 80: 480-490.
33. Chiu TL, Goldstein RA: Optimizing potentials for the inverse protein folding problem. Protein Eng 1998, 11: 749-752.
34. Wald A: Note on the consistency of the maximum likelihood estimate. Ann Math Stat 1949, 20: 595-601.
35. Sun S, Brem R, Chan R, Dill K: Designing amino acid sequences to fold with good hydrophobic cores. Protein Eng 1995, 8: 1205-1213.
36. Dempster A, Laird N, Rubin D: Maximum likelihood from incomplete data via the EM algorithm. J R Stat Soc B 1977, 39: 1-38.
37. Jones DT, Taylor WR, Thornton JM: A new approach to protein fold recognition. Nature 1992, 358: 86-89.
38. Meller J, Elber R: Linear optimization and a double statistical filter for protein threading protocols. Proteins 2001, 45: 241-261.
39. Thomas PD, Dill KA: An iterative method for extracting energy-like quantities from protein structures. Proc Natl Acad Sci USA 1996, 93: 11628-11633.
40. Kono H, Saven JG: Statistical theory for protein combinatorial libraries. Packing interactions, backbone flexibility, and the sequence variability of a main-chain structure. J Mol Biol 2001, 306: 607-628.
41. Thomas PD, Dill KA: Statistical potentials extracted from protein structures: how accurate are they? J Mol Biol 1996, 257: 457-469.
42. Skolnick J, Jaroszewski L, Kolinski A, Godzik A: Derivation and testing of pair potentials for protein folding. When is the quasichemical approximation correct? Protein Sci 1997, 6: 676-688.
43. Tiana G, Colombo M, Provasi D, Broglia RA: Deriving amino acid contact potentials from their frequencies of occurrence in proteins: a lattice model study. J Phys Condens Matter 2004, 16: 2551-2564.
44. Tobi D, Elber R: Distance-dependent, pair potential for protein folding: results from linear optimization. Proteins 2000, 41: 40-46.
45. Vendruscolo M, Najmanovich R, Domany E: Can a pairwise contact potential stabilize native protein folds against decoys obtained by threading? Proteins 2000, 38: 134-148.
46. Koehl P, Levitt M: De novo protein design. I. In search of stability and specificity. J Mol Biol 1999, 293: 1161-1181.
47. Sippl MJ: Calculation of conformational ensembles from potentials of mean force. An approach to the knowledge-based prediction of local structures in globular proteins. J Mol Biol 1990, 213: 859-883.
48. Betancourt MR, Skolnick J: Local propensities and statistical potentials of backbone dihedral angles in proteins. J Mol Biol 2004, 342: 635-649.
49. Lee B, Richards FM: The interpretation of protein structures: estimation of static accessibility. J Mol Biol 1971, 55: 379-400.
50. Hubbard SJ, Thornton JM: Naccess. Department of Biochemistry and Molecular Biology, University College London 1993.
51. Ogata Y: A Monte Carlo method for high dimensional integration. Numerische Mathematik 1989, 55: 137-157.
52. Gelman A: Simulating normalizing constants: from importance sampling to bridge sampling to path sampling. Stat Sci 1998, 13: 163-185.
53. Lartillot N, Philippe H: Computing Bayes factors using thermodynamic integration. Syst Biol 2006, in press.
54. Schneider TD, Stephens RM: Sequence logos: a new way to display consensus sequences. Nucleic Acids Res 1990, 18: 6097-6100.
55. Glaser F, Rosenberg Y, Kessel A, Pupko T, Ben-Tal N: The ConSurf-HSSP database: the mapping of evolutionary conservation among homologs onto PDB structures. Proteins 2005, 58: 610-617.
56. Wang G, Dunbrack RL Jr: PISCES: a protein sequence culling server. Bioinformatics 2003, 19: 1589-1591.
57. Laskowski RA, Chistyakov VV, Thornton JM: PDBsum more: new summaries and analyses of the known 3D structures of proteins and nucleic acids. Nucleic Acids Res 2005, 33: D266-D268.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.