
Table 5 Benchmark of global model quality

From: Improved model quality assessment using ProQ2

| Method | R | R target | ∑GDT1 | ∑Z GDT1 |
| --- | --- | --- | --- | --- |
| Single-model methods | | | | |
| ProQ2 | 0.80/0.80 | 0.72/0.69 | 75.2/47.0 | 100.4/68.6 |
| QMEAN | 0.75/0.77 | 0.71/0.66 | 73.6/44.7 | 81.1/52.1 |
| MetaMQAP | –/0.76 | –/0.59 | –/43.1 | –/40.3 |
| ConQuass | –/0.73 | –/0.66 | –/40.4 | –/20.4 |
| Distill_NNPIF | –/0.71 | –/0.64 | –/43.9 | –/43.5 |
| MULTICOM-CMFR | 0.71/– | 0.68/– | 74.0/– | 83.7/– |
| ProQ | 0.67/0.68 | 0.65/0.54 | 71.5/42.3 | 59.3/40.0 |
| Consensus methods | | | | |
| QMEANclust | 0.89/0.96 | 0.94/0.91 | 75.8/48.6 | 104.1/81.41 |
| MULTICOM-CLUSTER | 0.96 | 0.91 | 48.7 | 82.3 |
| Mufold | 0.96 | 0.91 | 48.7 | 82.5 |
| ProQ2+Pcons | 0.89/0.95 | 0.94/0.89 | 76.9/48.7 | 118.5/81.6 |
| Pcons | 0.89/0.95 | 0.95/0.91 | 75.9/48.3 | 101.6/76.8 |
| PconsM | 0.95 | 0.90 | 47.9 | 70.2 |
| United3D | 0.95 | 0.92 | 48.8 | 81.2 |
| MUFOLD-QA | 0.95 | 0.92 | 48.3 | 79.5 |
| ModFOLDclust2 | 0.95 | 0.90 | 48.4 | 80.6 |
| MetaMQAPclust | 0.95 | 0.91 | 48.4 | 78.2 |
| IntFOLD-QA | 0.95 | 0.90 | 48.4 | 79.9 |
| MULTICOM-REFINE | 0.94 | 0.88 | 46.2 | 66.6 |
| MULTICOM | 0.94 | 0.88 | 48.7 | 84.7 |
| MQAPmulti | 0.94 | 0.91 | 48.2 | 75.4 |
| ModFOLDclustQ | 0.94 | 0.87 | 48.6 | 82.3 |
| MQAPsingle | 0.92 | 0.81 | 45.3 | 45.2 |
| MULTICOM-CONSTRUCT | 0.90 | 0.82 | 46.6 | 63.3 |
| gws | 0.90 | 0.81 | 45.3 | 44.2 |
| Splicer | 0.89 | 0.85 | 47.6 | 75.4 |
| LEE | 0.89 | 0.80 | 45.1 | 42.9 |
| Splicer_QA | 0.88 | 0.84 | 47.8 | 77.4 |
| Modcheck-J2 | 0.87 | 0.77 | 41.7 | 26.2 |
| MUFOLD-WQA | 0.86 | 0.91 | 49.0 | 83.9 |
| SMEG-CCP | 0.83 | 0.76 | 47.9 | 74.9 |
| QMEANdist | 0.80 | 0.84 | 47.8 | 77.1 |
| QMEANfamily | 0.75 | 0.68 | 44.8 | 50.3 |
| GRIER-CONSENSUS | 0.68 | 0.86 | 48.3 | 82.0 |
| Baltymus | 0.58 | 0.53 | 41.8 | 32.8 |
| Best Possible | 1.00/1.00 | 1.00/1.00 | 82.3/52.2 | 182.2/127.9 |
  1. Benchmark of global model quality on the CASP8 and CASP9 data sets. R is the overall correlation between predicted and true model quality across all models, R target is the average per-target correlation, ∑GDT1 is the sum of the GDT scores of the first-ranked model for each target, and ∑Z GDT1 is the summed Z-score of the first-ranked model for each target. Where two values are given, the first corresponds to results on CASP8 and the second to CASP9; cells with only one value are CASP9 only.
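To make the four benchmark metrics in the table concrete, the sketch below computes them from per-model quality predictions. The data, target names, and dictionary layout are hypothetical illustrations (not CASP data), and the Z-score handling is simplified relative to official CASP practice:

```python
# Sketch (with made-up data) of the four global quality metrics:
# R, R_target, sum(GDT1), and sum(Z_GDT1).
import statistics

# Hypothetical data: per target, a list of (predicted_score, true_GDT) pairs.
targets = {
    "T0500": [(0.70, 60.0), (0.55, 48.0), (0.40, 35.0)],
    "T0501": [(0.80, 70.0), (0.62, 66.0), (0.30, 20.0)],
}

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# R: one correlation over all models, pooled across targets.
all_pred = [p for models in targets.values() for p, _ in models]
all_gdt = [g for models in targets.values() for _, g in models]
R = pearson(all_pred, all_gdt)

# R_target: correlation computed separately per target, then averaged.
R_target = statistics.fmean(
    pearson([p for p, _ in m], [g for _, g in m]) for m in targets.values()
)

# sum(GDT1): sum over targets of the true GDT of the model that the
# method ranks first (highest predicted score).
sum_gdt1 = sum(max(m, key=lambda x: x[0])[1] for m in targets.values())

# sum(Z_GDT1): same first-ranked models, but each GDT expressed as a
# Z-score relative to all models for that target.
def zscore(value, pool):
    return (value - statistics.fmean(pool)) / statistics.stdev(pool)

sum_z_gdt1 = sum(
    zscore(max(m, key=lambda x: x[0])[1], [g for _, g in m])
    for m in targets.values()
)
```

The distinction between R and R target matters because consensus methods can score well on pooled correlation simply by separating easy targets from hard ones, while the per-target average reflects ranking quality within each target.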