

Table 5 The effectiveness of domain-specific contextual word representation according to the mean F1 scores of 30 different random seeds

From: Relation extraction between bacteria and biotopes from biomedical texts with attention mechanisms and domain-specific contextual representations

Pre-trained word model    Mean F1    SD      Min      Max
PubMed word2vec           53.42      2.51    46.67    56.70
general-purpose ELMo      54.30      3.61    42.76    56.51
random-PubMed ELMo        53.81      3.65    38.89    57.01
specific-PubMed ELMo      55.91      1.49    51.24    57.48
  1. All of the highest scores are highlighted in bold except for the SD. The first-row results derive from the best results of the previous experiments (i.e., the last row in Table 4). Note: "PubMed word2vec" denotes the context-free word model, "general-purpose ELMo" denotes the general-purpose contextual word model, "random-PubMed ELMo" denotes the domain-general contextual word model trained on 118 million tokens from randomly selected PubMed abstracts, and "specific-PubMed ELMo" denotes the domain-specific contextual word model trained on 118 million tokens from bacteria-relevant PubMed abstracts
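The summary statistics reported in the table (mean, SD, min, and max F1 over 30 random seeds) can be computed directly from per-seed scores. A minimal sketch using Python's standard library; the score values below are illustrative placeholders, not the paper's actual per-seed results:

```python
import statistics

# Hypothetical per-seed F1 scores (illustrative only; the paper reports
# summary statistics over 30 random seeds, not the raw per-seed values).
f1_scores = [55.2, 56.1, 54.8, 57.0, 55.5]

mean_f1 = statistics.mean(f1_scores)   # mean F1 across seeds
sd_f1 = statistics.stdev(f1_scores)    # sample standard deviation
min_f1 = min(f1_scores)                # worst seed
max_f1 = max(f1_scores)                # best seed

print(f"mean={mean_f1:.2f} sd={sd_f1:.2f} min={min_f1:.2f} max={max_f1:.2f}")
```

Reporting the spread across seeds, not just a single run, is what makes the comparison between word models meaningful: specific-PubMed ELMo's lower SD indicates more stable performance in addition to a higher mean.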