
Table 5 Performance of BERT models on PPI, DDI, and ChemProt.

From: Investigation of improving the pre-training and fine-tuning of BERT model for biomedical relation extraction

Model                    |      PPI       |      DDI       |    ChemProt
                         |  P     R     F |  P     R     F |  P     R     F
BioBERT                  | 79.0  83.3  81.0 | 79.9  78.1  79.0 | 74.3  76.3  75.3
BioBERT_SLL_LSTM         | 80.2  84.0  82.0 | 80.5  78.5  79.5 | 77.6  74.4  76.0
BioBERT_SLL_biLSTM       | 80.2  82.7  81.4 | 80.8  78.5  79.6 | 77.9  73.9  75.9
BioBERT_SLL_Att          | 80.7  84.4  82.5 | 81.6  79.4  80.5 | 77.5  75.1  76.3
PubMedBERT               | 80.1  84.3  82.1 | 82.6  81.9  82.3 | 78.8  75.9  77.3
PubMedBERT_SLL_LSTM      | 79.8  85.6  82.6 | 82.6  82.8  82.7 | 78.9  77.0  77.9
PubMedBERT_SLL_biLSTM    | 80.5  82.6  81.7 | 82.6  81.4  82.0 | 78.5  76.5  77.5
PubMedBERT_SLL_Att       | 81.3  85.0  83.1 | 84.3  82.7  83.5 | 78.3  77.6  77.9
  1. Bold values indicate the best results
  2. P: Precision; R: Recall; F: F1 score. BioBERT/PubMedBERT_SLL_LSTM: variant that summarizes the outputs of the last layer with an LSTM; BioBERT/PubMedBERT_SLL_biLSTM: variant that summarizes the outputs of the last layer with a biLSTM; BioBERT/PubMedBERT_SLL_Att: variant that summarizes the outputs of the last layer with an attention mechanism
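As a quick sanity check on the table, the F column is the F1 score, i.e. the harmonic mean of precision and recall. A minimal sketch (the function name `f1` and the choice of example row are ours; a few cells may differ from this recomputation by ±0.1 because the published P and R are themselves rounded):

```python
def f1(p, r):
    # Harmonic mean of precision and recall,
    # on the same 0-100 percentage scale as the table.
    return 2 * p * r / (p + r)

# Example: PubMedBERT_SLL_Att on DDI, P = 84.3, R = 82.7
score = f1(84.3, 82.7)
print(round(score, 1))  # 83.5, matching the table's F column
```

For instance, BioBERT on PPI recomputes to 81.1 rather than the reported 81.0, which is consistent with rounding of the underlying precision and recall values.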