
Table 3 GeneRIF data set discourse prediction

From: GeneRIF indexing: sentence selection based on machine learning

Discourse     Positives   TP    FP   Precision   Recall   F-measure
Background    258         180   78   0.7692      0.6977   0.7317
Conclusions   165         108   57   0.6316      0.6545   0.6429
Methods       179         105   74   0.5412      0.5866   0.5630
Purpose        36          24   12   0.3750      0.6667   0.4800
Results       260         163   97   0.6417      0.6269   0.6342

  1. Results of a classifier trained on the discourse labels annotated in our data set. For each label we show the number of instances in the data set (Positives), the number of True Positives (TP), the number of False Positives (FP), and the precision, recall, and F-measure values.
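For reference, the precision, recall, and F-measure values reported above follow the standard per-label definitions; the sketch below assumes the usual contingency-table notation, with FN denoting false negatives (a quantity not listed in the table).

```latex
% Standard per-label evaluation measures (assumed notation; FN = false negatives, not shown in the table)
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall}    = \frac{TP}{TP + FN}, \qquad
F = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
```

As a quick consistency check, the Background row's F-measure follows from its listed precision and recall: 2(0.7692)(0.6977)/(0.7692 + 0.6977) ≈ 0.7317.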