Table 4 Comparison of 14 model quality assessment methods and their consensus ranking

From: A large-scale conformation sampling and evaluation server for protein tertiary structure prediction and its assessment in CASP11

| QA method | Best of top 1 | Avg loss | Best in the pool | Avg loss |
|---|---|---|---|---|
| Consensus ranking | 34 | 0.04 | 17 | 0.07 |
| SELECTpro | 32 | 0.05 | 17 | 0.08 |
| ModFOLDclust2 | 30 | 0.07 | 18 | 0.10 |
| APOLLO | 30 | 0.07 | 16 | 0.09 |
| Pcons | 29 | 0.07 | 16 | 0.10 |
| ProQ2 | 27 | 0.05 | 15 | 0.07 |
| QApro | 18 | 0.07 | 8 | 0.09 |
| ModelCheck2 | 16 | 0.16 | 10 | 0.18 |
| MULTICOM-NOVEL_QA | 11 | 0.11 | 4 | 0.14 |
| DFIRE2 | 9 | 0.11 | 6 | 0.14 |
| Dope | 9 | 0.11 | 6 | 0.14 |
| ModelEvaluator | 9 | 0.13 | 6 | 0.16 |
| OPUS_PSP | 9 | 0.11 | 6 | 0.14 |
| RF_CB_SRS_OD | 9 | 0.11 | 6 | 0.14 |
| RWplus | 9 | 0.11 | 6 | 0.14 |

  1. “Best of top 1” is the number of targets for which the top-1 model selected by an individual QA method was the best among the top-1 models identified by all the QA methods. “Best in the pool” is the number of targets for which the top-1 model selected by an individual method was the best model in the entire MULTICOM model pool. “Avg loss” is the average difference in GDT-TS score between the best model and the top-1 model ranked by each QA method
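The loss metric in the footnote can be sketched as follows; this is a minimal illustration of the definition (loss = GDT-TS of the best model minus GDT-TS of the model the QA method ranked first, averaged over targets), with hypothetical target/model names and scores, not data from the paper.

```python
def avg_gdt_loss(pool_scores, picked):
    """Average GDT-TS loss of a QA method's top-1 picks.

    pool_scores: {target: {model: gdt_ts}} -- scores of all models per target
    picked:      {target: model}           -- model the QA method ranked first
    """
    losses = []
    for target, scores in pool_scores.items():
        best = max(scores.values())        # best model available for this target
        chosen = scores[picked[target]]    # QA method's top-1 model
        losses.append(best - chosen)       # 0 when the method picked the best
    return sum(losses) / len(losses)

# Toy example with two targets (illustrative values only)
pool = {
    "T0759": {"m1": 0.62, "m2": 0.58, "m3": 0.65},
    "T0760": {"m1": 0.41, "m2": 0.44},
}
picks = {"T0759": "m3", "T0760": "m1"}     # picks the best model on T0759 only
print(round(avg_gdt_loss(pool, picks), 3))  # → 0.015
```

A method with a smaller average loss selects models closer in quality to the best available, even when it does not rank the single best model first.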