Table 2 Effect of pruning and post-processing method on the performance of GrAPFI-GO and two other automatic annotation tools

From: Improving automatic GO annotation with semantic similarity

| Method | Post-processing cut-off | Precision | Recall | F1-score |
|---|---|---|---|---|
| GrAPFI | No-post-processing | 0.165 | 0.108 | 0.107 |
| | SS-max | **0.573** | 0.115 | 0.175 |
| | SS-5 | 0.445 | 0.380 | 0.376 |
| | SS-5-MS-max/2 | 0.440 | **0.391** | **0.379** |
| PANNZER | No-post-processing | 0.547 | **0.942** | **0.668** |
| | SS-max | **0.637** | 0.225 | 0.301 |
| | SS-5 | 0.634 | 0.515 | 0.536 |
| | SS-5-MS-max/2 | 0.603 | 0.689 | 0.609 |
| DeepGOPlus | No-post-processing | 0.053 | **0.653** | 0.095 |
| | SS-max | **0.249** | 0.120 | 0.138 |
| | SS-5 | 0.186 | 0.182 | 0.160 |
| | SS-5-MS-max/2 | 0.167 | 0.233 | **0.1725** |

  1. The bold numbers indicate which post-processing cut-off achieved the maximum score for a given performance metric and annotation tool
  2. Average precision, recall, and F1-score are computed for each method in four situations. No-post-processing: without post-processing and pruning; SS-max: pruned using the highest SS as cut-off; SS-5: pruned using the 5th highest SS as cut-off; SS-5-MS-max/2: pruned using the 5th highest SS and (maximum MS)/2 as cut-offs; an illustrative sketch of these cut-offs is given below the table notes
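The following is a minimal sketch of the pruning cut-offs described in note 2, assuming each predicted GO term carries a semantic-similarity value (SS) and a prediction score (MS) supplied by the annotation tool. The function name, data layout, and mode labels are illustrative assumptions, not the authors' implementation.

```python
from typing import Dict, List, Tuple


def prune_annotations(
    predictions: Dict[str, Tuple[float, float]],
    mode: str = "SS-5",
) -> List[str]:
    """Prune predicted GO terms using the SS/MS cut-offs named in Table 2.

    `predictions` maps each GO term to an (SS, MS) pair. All names and
    the argument layout are hypothetical, used only to illustrate the
    four post-processing settings.
    """
    if not predictions:
        return []

    # SS values in descending order, used to locate the 1st and 5th highest.
    ss_values = sorted((ss for ss, _ in predictions.values()), reverse=True)

    if mode == "SS-max":
        # Keep only terms whose SS equals the highest SS observed.
        cutoff_ss = ss_values[0]
        return [t for t, (ss, _) in predictions.items() if ss >= cutoff_ss]

    if mode == "SS-5":
        # Keep terms whose SS is at least the 5th highest SS
        # (or the lowest SS when fewer than five terms are predicted).
        cutoff_ss = ss_values[min(4, len(ss_values) - 1)]
        return [t for t, (ss, _) in predictions.items() if ss >= cutoff_ss]

    if mode == "SS-5-MS-max/2":
        # Combine the SS-5 cut-off with half of the maximum MS.
        cutoff_ss = ss_values[min(4, len(ss_values) - 1)]
        cutoff_ms = max(ms for _, ms in predictions.values()) / 2
        return [
            t
            for t, (ss, ms) in predictions.items()
            if ss >= cutoff_ss and ms >= cutoff_ms
        ]

    # "No-post-processing": return every predicted term unchanged.
    return list(predictions)


if __name__ == "__main__":
    # Toy predictions with made-up (SS, MS) pairs.
    preds = {
        "GO:0008150": (0.91, 0.80),
        "GO:0003674": (0.75, 0.40),
        "GO:0005575": (0.62, 0.10),
    }
    print(prune_annotations(preds, "SS-max"))
    print(prune_annotations(preds, "SS-5-MS-max/2"))
```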