| Method | TM-score | MaxSub | GDT | Combined |
|---|---|---|---|---|
| ModFOLD | 0.69 | 0.67 | 0.70 | 0.69 |
| ProQLG | 0.56 | 0.56 | 0.57 | 0.56 |
| PROQ* | 0.55 | 0.57 | 0.55 | 0.56 |
| MODCHECK | 0.48 | 0.47 | 0.46 | 0.47 |
| ProQMX | 0.19 | 0.33 | 0.33 | 0.28 |
| 3D-Jury† | 0.01 | -0.04 | -0.04 | -0.02 |
| ModSSEA | -0.07 | -0.02 | 0.00 | -0.03 |
| Random | -3.41 | -3.62 | -3.78 | -3.58 |
- Similar to Table 9, but here the original server ranking is also considered and added to the score as an extra weighting, (6 − r)/40, where r is the original server ranking (between 1 and 5). The results achieved from a random re-ranking of models from each server (random assignment of scores between 0 and 1) are also shown for comparison.
- \* The official predicted MQAP scores for these methods were downloaded from the CASP7 website; all other MQAP methods were run in house during the CASP7 experiment.
- † MQAP methods that rely on the comparison of multiple models or additional information from multiple servers; all other methods are capable of producing a single score for a single model.
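The rank-based weighting described in the table note can be sketched as follows. This is a minimal illustration of the stated formula only; the function name and the way the bonus is combined with the MQAP score are assumptions, not taken from the source.

```python
def rank_weight(r: int) -> float:
    """Extra weighting added to a model's score, based on its original
    server ranking r (1 = top-ranked model, 5 = lowest-ranked)."""
    if not 1 <= r <= 5:
        raise ValueError("server ranking must be between 1 and 5")
    return (6 - r) / 40

# The top-ranked model (r = 1) receives the largest bonus, 5/40 = 0.125;
# the lowest-ranked model (r = 5) receives the smallest, 1/40 = 0.025.
print([rank_weight(r) for r in range(1, 6)])
```

So a server's original ordering nudges the combined score without dominating it: the spread between the best and worst rank bonus is only 0.1.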