
Table 1 Results of dropping data points that fall within the margin. For any data point x_i, let f_{y_i} = max_m {f_m(x_i)}, and let f_{m*} = max_{m ≠ y_i} {f_m(x_i)}; the margin is then defined as (f_{y_i} − f_{m*}), and x_i is dropped whenever (f_{y_i} − f_{m*}) ≤ the confidence parameter. With the margin-drop approach there is even less tuning required, and throughput improves (approximately 75% for all species).

From: Support Vector Machine Implementations for Classification & Clustering

For each kernel and data set, the first three columns (P, TP, SN+SP) are computed on all data, the next three on the data remaining after the margin drop, and Drop gives the percentage of points dropped.

| Kernel | Data set | P (all) | TP (all) | SN+SP (all) | P (after drop) | TP (after drop) | SN+SP (after drop) | Drop (%) |
|---|---|---|---|---|---|---|---|---|
| Gaussian | 8GC | 1268 | 1087 | 1.76 | 1087 | 1087 | 2 | 9.42 |
| Gaussian | 9AT | 1178 | 934 | 1.57 | 934 | 934 | 2 | 22.17 |
| Gaussian | 9CG | 1166 | 904 | 1.53 | 904 | 904 | 2 | 24.67 |
| Gaussian | 9GC | 1172 | 897 | 1.51 | 897 | 897 | 2 | 25.25 |
| Gaussian | 9TA | 1216 | 1027 | 1.70 | 1027 | 1027 | 2 | 14.42 |
| AbsDiff | 8GC | 1407 | 1134 | 1.75 | 1134 | 1134 | 2 | 5.5 |
| AbsDiff | 9AT | 1151 | 928 | 1.58 | 928 | 928 | 2 | 22.67 |
| AbsDiff | 9CG | 1177 | 906 | 1.53 | 906 | 906 | 2 | 24.5 |
| AbsDiff | 9GC | 1050 | 870 | 1.55 | 870 | 870 | 2 | 27.5 |
| AbsDiff | 9TA | 1215 | 1040 | 1.72 | 1040 | 1040 | 2 | 13.33 |
| Entropic | 8GC | 1165 | 1038 | 1.75 | 1038 | 1038 | 2 | 13.5 |
| Entropic | 9AT | 1480 | 995 | 1.50 | 991 | 991 | 2 | 17.42 |
| Entropic | 9CG | 1348 | 922 | 1.45 | 920 | 920 | 2 | 23.33 |
| Entropic | 9GC | 960 | 804 | 1.50 | 803 | 803 | 2 | 33.08 |
| Entropic | 9TA | 1047 | 970 | 1.73 | 970 | 970 | 2 | 19.17 |
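The dropping rule in the caption reduces to a per-point comparison between the best and second-best class scores. Below is a minimal sketch of that rule, assuming a hypothetical NumPy score matrix `scores` holding the per-class outputs f_m(x_i) and a placeholder `confidence` threshold; neither name comes from the original work, and the sketch is illustrative rather than the authors' implementation.

```python
# Minimal sketch of the margin-drop rule from the Table 1 caption.
# Assumes `scores` has shape (n_samples, n_classes), one SVM output per class.
import numpy as np

def margin_drop(scores: np.ndarray, confidence: float) -> np.ndarray:
    """Return a boolean mask selecting the points that are KEPT.

    For each point x_i, f_{y_i} is the largest per-class score and
    f_{m*} is the largest score over the remaining classes; the point
    is dropped when (f_{y_i} - f_{m*}) <= confidence.
    """
    # Sort each row ascending so the last two columns are runner-up and top.
    ordered = np.sort(scores, axis=1)
    top = ordered[:, -1]        # f_{y_i}: best class score
    runner_up = ordered[:, -2]  # f_{m*}: best score among m != y_i
    margin = top - runner_up
    return margin > confidence  # keep only points outside the margin

# Example with made-up scores for three points and three classes.
scores = np.array([[0.9, 0.1, 0.0],
                   [0.4, 0.35, 0.25],
                   [0.6, 0.2, 0.2]])
keep = margin_drop(scores, confidence=0.1)
print(keep)  # -> [ True False  True]
```

Dropping only the points whose margin falls below the confidence parameter is what pushes SN+SP to 2 in the "after drop" columns above: the ambiguous points near the decision boundary are removed, at the cost of the Drop percentage of the data.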