Table 7 Hardware, memory, and time used for training for all evaluated algorithms

From: Concept recognition as a machine translation problem

| Algorithm | Hardware | Training memory (GB) | Training time (h) |
|---|---|---|---|
| CRF | CPUs | 2–13 | 1–4 |
| BiLSTM* | GPUs/CPUs** | 17 | 29 |
| BiLSTM-CRF | CPUs | 7 | 15 |
| Char-Embeddings | CPUs | 30 | 84 |
| BiLSTM-ELMo* | GPUs | 42 | 700–1000 |
| BioBERT | GPUs/CPUs** | 5 | 20 |
| UZH@CRAFT-ST BioBERT* [4] | GPUs | 120*** | 200 |
| OpenNMT* | CPUs | 620 | 515 |
| ConceptMapper [20] | CPUs | N/A | N/A |

Each training time gives the total hours if training on all ontology annotation sets is run consecutively; training can be parallelized by ontology.

ConceptMapper runs on CPUs but requires no training, as it is a dictionary-based lookup tool; its memory and time are therefore listed as N/A.

*Training parallelized per ontology due to time constraints

**Runs significantly faster on GPUs

***Total free RAM available