Table 3 Comparison of classifier performance as measured by AUC

From: Improving peptide-MHC class I binding prediction for unbalanced datasets

| allele | Trees, unit λ2 | Trees, best λ2 | ARB* | SMM* | ANN* | External Tool* |
|--------|----------------|----------------|------|------|------|----------------|
| A0101  | .903 | .951 | .964 | .980 | .982 | .955 |
| A0201  | .796 | .842 | .934 | .952 | .957 | .922 |
| A0202  | .770 | .770 | .875 | .899 | .900 | .793 |
| A0203  | .746 | .781 | .884 | .916 | .921 | .788 |
| A0206  | .732 | .747 | .872 | .914 | .927 | .735 |
| A0301  | .763 | .763 | .908 | .940 | .937 | .851 |
| A1101  | .841 | .859 | .918 | .948 | .951 | .869 |
| A2301  | .742 | .782 | NA   | NA   | NA   | NA   |
| A2402  | .685 | .748 | .718 | .780 | .825 | .770 |
| A2403  | .673 | .846 | NA   | NA   | NA   | NA   |
| A2601  | .606 | .811 | .907 | .931 | .956 | .736 |
| A2902  | .783 | .847 | NA   | NA   | NA   | NA   |
| A3001  | .741 | .861 | NA   | NA   | NA   | NA   |
| A3002  | .777 | .810 | NA   | NA   | NA   | NA   |
| A3101  | .825 | .833 | .909 | .930 | .928 | .829 |
| A3301  | .636 | .827 | .892 | .925 | .915 | .807 |
| A6801  | .756 | .761 | .840 | .885 | .883 | .772 |
| A6802  | .699 | .714 | .865 | .898 | .899 | .643 |
| A6901  | .614 | .813 | NA   | NA   | NA   | NA   |
| B0702  | .887 | .911 | .952 | .964 | .965 | .942 |
| B0801  | .547 | .835 | .936 | .943 | .955 | .766 |
| B1501  | .759 | .823 | .900 | .952 | .941 | .816 |
| B1801  | .745 | .833 | .573 | .853 | .838 | .779 |
| B2705  | .753 | .892 | .915 | .940 | .938 | .926 |
| B3501  | .712 | .771 | .851 | .889 | .875 | .792 |
| B4001  | .587 | .897 | NA   | NA   | NA   | NA   |
| B4002  | .718 | .778 | .541 | .842 | .754 | .775 |
| B4402  | .588 | .762 | .533 | .740 | .778 | .783 |
| B4403  | .647 | .804 | .461 | .770 | .763 | .698 |
| B4501  | .679 | .824 | NA   | NA   | NA   | NA   |
| B5101  | .664 | .792 | .822 | .868 | .866 | .820 |
| B5301  | .795 | .819 | .871 | .882 | .899 | .861 |
| B5401  | .654 | .796 | .847 | .921 | .903 | .799 |
| B5701  | .756 | .936 | .428 | .871 | .826 | .767 |
| B5801  | .815 | .864 | .899 | .964 | .961 | .899 |

1. The second and third columns correspond to the decision trees described in the present work. Note the improved performance of the trees when training costs are used. *Values taken from Table 2 of Peters et al., 2006.