Table 5 Structured abstracts discourse label prediction based on a CRF model

From: GeneRIF indexing: sentence selection based on machine learning

| Discourse   | Positives | TP    | FP   | Precision | Recall | F-measure |
|-------------|-----------|-------|------|-----------|--------|-----------|
| Background  | 6161      | 4154  | 2259 | 0.6477    | 0.6742 | 0.6607    |
| Conclusions | 10126     | 8455  | 1683 | 0.8340    | 0.8350 | 0.8345    |
| Methods     | 15617     | 13473 | 2357 | 0.8511    | 0.8627 | 0.8569    |
| Objective   | 4657      | 2810  | 1634 | 0.6323    | 0.6034 | 0.6175    |
| Results     | 22228     | 18724 | 3240 | 0.8525    | 0.8424 | 0.8474    |

  1. Results of a CRF-trained model on structured abstracts. For each label we show the number of instances in the data set (Positives), the number of true positives (TP), the number of false positives (FP), and the precision, recall and F-measure values.
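The per-label columns are consistent with the standard definitions precision = TP / (TP + FP), recall = TP / Positives, and F-measure = 2 * precision * recall / (precision + recall). The short sketch below (illustrative only; the `metrics` helper is not from the paper) reproduces the Background row from its raw counts.

```python
def metrics(positives: int, tp: int, fp: int) -> tuple[float, float, float]:
    """Compute precision, recall and F-measure from raw counts.

    positives: number of instances with this label in the data set
    tp: true positives, fp: false positives.
    """
    precision = tp / (tp + fp)
    recall = tp / positives
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

# Example: the Background row (6161 positives, 4154 TP, 2259 FP)
p, r, f = metrics(6161, 4154, 2259)
print(f"{p:.4f} {r:.4f} {f:.4f}")  # -> 0.6477 0.6742 0.6607
```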