Table 5 Performance changes with different input representations on the overall-2013 dataset

From: An attention-based effective neural model for drug-drug interactions extraction

| Input representation | P (%) | R (%) | F (%) |
|---|---|---|---|
| (1): word without attention | 54.7 | 42.8 | 48.0 |
| (2): word + att | 76.5 | 67.5 | 71.7 |
| (3): word + att + pos | 70.9 | 74.7 | 72.7 |
| (4): word + att + position | 79.1 | 73.9 | 76.4 |
| (5): word + att + pos + position | 78.4 | 76.2 | 77.3 |
  1. Every model in this table uses all preprocessing techniques of our approach. "Word without attention" denotes the model that uses only word embeddings, without the attention mechanism; "word + att" denotes the model that uses the attention mechanism together with word embeddings. "pos" and "position" refer to the part-of-speech and position embeddings, respectively.
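
To make the five input representations concrete, the following is a minimal sketch (not the authors' code) of how such inputs could be assembled: word embeddings optionally concatenated with part-of-speech and position embeddings, followed by a simple attention weighting over the sequence. The class name `AttentiveInput`, the PyTorch framework, all dimension sizes, and the two-distance position scheme (one relative distance per drug mention) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentiveInput(nn.Module):
    """Sketch of the input representations compared in Table 5."""

    def __init__(self, vocab=10000, n_pos_tags=50, max_dist=200,
                 d_word=200, d_pos=20, d_dist=20,
                 use_pos=True, use_position=True):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, d_word)
        # Optional part-of-speech embedding ("pos" in Table 5).
        self.pos_emb = nn.Embedding(n_pos_tags, d_pos) if use_pos else None
        # Optional position embedding ("position" in Table 5):
        # relative distance of each token to the two drug mentions.
        self.dist_emb = (nn.Embedding(2 * max_dist + 1, d_dist)
                         if use_position else None)
        d = (d_word + (d_pos if use_pos else 0)
             + (2 * d_dist if use_position else 0))
        # Additive attention: one scalar score per token.
        self.att = nn.Linear(d, 1)

    def forward(self, words, pos_tags=None, dist1=None, dist2=None):
        parts = [self.word_emb(words)]            # (batch, seq, d_word)
        if self.pos_emb is not None:
            parts.append(self.pos_emb(pos_tags))
        if self.dist_emb is not None:
            parts.append(self.dist_emb(dist1))    # distance to drug 1
            parts.append(self.dist_emb(dist2))    # distance to drug 2
        x = torch.cat(parts, dim=-1)              # (batch, seq, d)
        # Softmax over the sequence yields per-token attention weights.
        weights = torch.softmax(self.att(x).squeeze(-1), dim=-1)
        return x * weights.unsqueeze(-1)          # attention-weighted inputs
```

Row (1) corresponds to dropping the attention weighting and both optional embeddings, row (2) to attention over word embeddings alone, and rows (3)-(5) to toggling `use_pos` and `use_position` as in the table.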