| Method | TM-score | MaxSub | GDT | Combined |
|---|---|---|---|---|
| 3D-Jury† | 0.955 | 0.924 | 0.925 | 0.943 |
| LEE*† | 0.943 | 0.903 | 0.909 | 0.926 |
| ModFOLD | 0.843 | 0.807 | 0.807 | 0.825 |
| PROQ* | 0.828 | 0.764 | 0.759 | 0.789 |
| Pcons*† | 0.803 | 0.773 | 0.765 | 0.786 |
| ProQ-MX | 0.779 | 0.755 | 0.751 | 0.768 |
| ModSSEA | 0.744 | 0.736 | 0.742 | 0.747 |
| MODCHECK | 0.729 | 0.659 | 0.658 | 0.686 |
| ProQ-LG | 0.688 | 0.651 | 0.640 | 0.665 |
- All models are pooled together and ρ is measured between the predicted and observed model quality scores. The combined observed model quality score was calculated for each individual model as the mean of the three scores, i.e. (TM-score + MaxSub + GDT)/3.
- *The MQAP scores for these methods were downloaded from the CASP7 website; all other MQAP methods were run in-house during the CASP7 experiment.
- †MQAP methods that rely on the comparison of multiple models or include additional information from multiple servers; all other methods can produce a single score from a single model.
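
As a rough illustration of how the values in the table were obtained, the sketch below (Python with pandas and SciPy; the column names and toy values are assumptions, not CASP7 data) pools per-model scores, derives the combined observed quality as the mean of TM-score, MaxSub and GDT, and measures Spearman's ρ between the predicted and each observed score.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-model data: one row per pooled model, with an MQAP's
# predicted quality and the three observed scores against the native structure.
models = pd.DataFrame({
    "predicted": [0.71, 0.45, 0.88, 0.32, 0.59],
    "tm_score":  [0.68, 0.41, 0.90, 0.35, 0.55],
    "maxsub":    [0.62, 0.38, 0.85, 0.30, 0.50],
    "gdt":       [0.64, 0.40, 0.87, 0.31, 0.52],
})

# Combined observed quality score: (TM-score + MaxSub + GDT) / 3 per model.
models["combined"] = models[["tm_score", "maxsub", "gdt"]].mean(axis=1)

# Spearman rank correlation (rho) between predicted and each observed measure,
# computed over all pooled models.
for measure in ["tm_score", "maxsub", "gdt", "combined"]:
    rho, _ = spearmanr(models["predicted"], models[measure])
    print(f"{measure}: rho = {rho:.3f}")
```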