Table 4 CASP9 comparison on labelled data.

From: Learning by aggregating experts and filtering novices: a solution to crowdsourcing problems in bioinformatics

Predictor Name      Institution            ACC    AUC
-----------------------------------------------------
AEFN                -                      0.801  0.887
GMM-MAPML           -                      0.785  0.874
MAP-ML              -                      0.764  0.859
MV                  -                      0.735  0.776
PRDOS2              Tokyo Tech             0.754  0.855
MULTICOM-REFINE     U of Missouri          0.750  0.822
BIOMINE_DR_PDB      U of Alberta           0.741  0.821
GSMETADISORDERMD    IIMCB in Warsaw        0.738  0.816
MASON               George Mason U         0.736  0.743
ZHOU-SPINE-D        Indiana University     0.731  0.832
DISTILL-PUNCH1      UCD Dublin             0.726  0.800
OND-CRF             Umea University        0.706  0.759
UNITED3D            Kitasato University    0.704  0.780
CBRC_POODLE         CBRC                   0.694  0.830
MCGUFFIN            University of Reading  0.688  0.817
ISUNSTRUCT          IPR RAS                0.676  0.739
DISOPRED3C          UCL                    0.670  0.853
ULG-GIGA            University of Liege    0.588  0.726
MEDOR               Aix-Marseille U        0.579  0.679

Note: Comparison of AEFN against alternative multi-annotator methods (GMM-MAPML, MAP-ML and MV) and against individual CASP9 protein disorder predictors. A dash in the Institution column marks a multi-annotator aggregation method rather than an individual CASP9 predictor.
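
The two columns of the table can be reproduced from per-residue predictions scored against the labelled reference data. The sketch below is illustrative only: it assumes that ACC is plain accuracy at a 0.5 decision threshold, that AUC is the area under the ROC curve over per-residue disorder scores, and that MV denotes simple majority voting over the individual predictors; the toy data, the threshold, and the number of predictors are hypothetical and are not taken from the paper.

```python
# Minimal sketch: computing ACC and AUC for one predictor and for a
# majority-vote (MV) aggregate, under the assumptions stated above.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)

# Toy labelled data: ground-truth disorder labels for N residues (1 = disordered).
N = 1000
y_true = rng.integers(0, 2, size=N)

# Toy per-residue disorder scores in [0, 1] from K hypothetical predictors.
K = 5
scores = np.clip(y_true + rng.normal(0.0, 0.8, size=(K, N)), 0.0, 1.0)

# ACC and AUC for a single predictor; ACC thresholds the scores at 0.5.
acc = accuracy_score(y_true, (scores[0] >= 0.5).astype(int))
auc = roc_auc_score(y_true, scores[0])
print(f"predictor 0: ACC={acc:.3f}  AUC={auc:.3f}")

# Majority vote baseline: each predictor casts a 0/1 vote per residue;
# the vote fraction serves as the MV score for the AUC.
votes = (scores >= 0.5).astype(int)
mv_frac = votes.mean(axis=0)
mv_label = (mv_frac >= 0.5).astype(int)
print(f"MV:          ACC={accuracy_score(y_true, mv_label):.3f}  "
      f"AUC={roc_auc_score(y_true, mv_frac):.3f}")
```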