Table 2 Performance comparison with other KO annotation tools

From: KEGG orthology prediction of bacterial proteins using natural language processing

| Method | Match | Unmatch | Missed | Added | Precision | Recall | F1 |
|---|---|---|---|---|---|---|---|
| BlastKOALA | 6172 | 64 | 552 | **100** | 0.974 | 0.909 | 0.941 |
| GhostKOALA | 6423 | **26** | 339 | 117 | **0.978** | 0.946 | **0.962** |
| KofamKOALA | 5955 | 88 | 745 | 953 | 0.851 | 0.877 | 0.864 |
| Ours | **6426** | 183 | **179** | 171 | 0.948 | **0.947** | 0.947 |
| Ours w/o classifier | 5943 | 62 | 783 | 407 | 0.876 | 0.927 | 0.900 |
| Ours with threshold | 6399 | 169 | 220 | 151 | 0.952 | 0.943 | 0.948 |

  1. Best performance is marked in bold. For each tool we counted matches (the predicted KO is identical to the KO defined in KEGG GENES), unmatches (the predicted KO differs from the KO defined in KEGG GENES), missed (a KO is defined in KEGG GENES but no prediction was made), and added (no KO is defined in KEGG GENES, but the prediction assigned a K number), along with precision, recall, and F1 score.
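The metric definitions implied by the footnote can be sketched as follows. The exact formulas are not stated on this page, but treating precision as match / (match + unmatch + added) and recall as match / (match + unmatch + missed) reproduces the table's values (checked against the BlastKOALA row), so this is a plausible reading rather than the paper's confirmed definition:

```python
def precision(match: int, unmatch: int, added: int) -> float:
    # Fraction of all assigned K numbers that agree with KEGG GENES.
    return match / (match + unmatch + added)

def recall(match: int, unmatch: int, missed: int) -> float:
    # Fraction of KEGG-defined KOs that were correctly recovered.
    return match / (match + unmatch + missed)

def f1(p: float, r: float) -> float:
    # Harmonic mean of precision and recall.
    return 2 * p * r / (p + r)

# BlastKOALA row from the table: match=6172, unmatch=64, missed=552, added=100
p = precision(6172, 64, 100)  # ≈ 0.974
r = recall(6172, 64, 552)     # ≈ 0.909
f = f1(p, r)                  # ≈ 0.941
```

The same formulas also recover the GhostKOALA row (precision ≈ 0.978, F1 ≈ 0.962), which supports this interpretation of the counts.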