Table 3 Test accuracies (in percent) on 20 words from our universal models and the two baselines

From: Biomedical word sense disambiguation with bidirectional long short-term memory and attention-based neural networks

| Words | Univ NN | Univ Attention | Baseline 1 | Baseline 2 |
|---|---|---|---|---|
| AA | 100.00 | 100.00 | 96.00 | 98.99 |
| Astragalus | 100.00 | 100.00 | 100.00 | 97.47 |
| CDR | 85.19 | 82.10 | 97.00 | 100.00 |
| Cilia | 80.77 | 79.80 | 82.00 | 94.87 |
| CNS | 97.06 | 86.10 | 98.00 | 98.48 |
| CP | 98.15 | 100.00 | 97.00 | 98.32 |
| dC | 94.44 | 82.40 | 98.00 | 98.48 |
| EMS | 91.43 | 98.80 | 98.00 | 100.00 |
| ERUPTION | 97.14 | 91.10 | 100.00 | 100.00 |
| FAS | 100.00 | 88.10 | 100.00 | 99.49 |
| Ganglion | 86.11 | 73.30 | 90.00 | 93.43 |
| HCl | 91.67 | 94.10 | 100.00 | 100.00 |
| INDO | 100.00 | 100.00 | 87.00 | 99.18 |
| lymphogranulomatosis | 88.89 | 75.80 | 83.00 | 83.33 |
| MCC | 83.33 | 92.80 | 97.00 | 100.00 |
| PAC | 81.82 | 81.10 | 94.00 | 100.00 |
| Phosphorus | 83.33 | 75.70 | 78.00 | 83.84 |
| Phosphorylase | 70.00 | 66.40 | 52.00 | 87.35 |
| TMP | 100.00 | 78.30 | 81.00 | 98.00 |
| TNT | 94.44 | 78.20 | 98.00 | 99.49 |
| Average | 91.19 | 86.21 | 91.30 | 96.54 |

  1. Univ NN denotes the universal BiLSTM neural network model, and Univ Attention denotes the universal attention model. All our universal models (deep network and attention) are trained without layer \(\mathcal {C}\).
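The Average row is the unweighted mean of the 20 per-word accuracies in each column. A minimal sketch verifying this arithmetic (the accuracy lists are copied from the table above; variable names are illustrative, not from the paper):

```python
# Per-word test accuracies (in percent) for the 20 ambiguous words,
# in the row order of Table 3.
univ_nn = [100.00, 100.00, 85.19, 80.77, 97.06, 98.15, 94.44, 91.43, 97.14,
           100.00, 86.11, 91.67, 100.00, 88.89, 83.33, 81.82, 83.33, 70.00,
           100.00, 94.44]
univ_attention = [100.00, 100.00, 82.10, 79.80, 86.10, 100.00, 82.40, 98.80,
                  91.10, 88.10, 73.30, 94.10, 100.00, 75.80, 92.80, 81.10,
                  75.70, 66.40, 78.30, 78.20]
baseline_1 = [96.00, 100.00, 97.00, 82.00, 98.00, 97.00, 98.00, 98.00, 100.00,
              100.00, 90.00, 100.00, 87.00, 83.00, 97.00, 94.00, 78.00, 52.00,
              81.00, 98.00]
baseline_2 = [98.99, 97.47, 100.00, 94.87, 98.48, 98.32, 98.48, 100.00, 100.00,
              99.49, 93.43, 100.00, 99.18, 83.33, 100.00, 100.00, 83.84, 87.35,
              98.00, 99.49]

reported_averages = {"Univ NN": 91.19, "Univ Attention": 86.21,
                     "Baseline 1": 91.30, "Baseline 2": 96.54}
columns = {"Univ NN": univ_nn, "Univ Attention": univ_attention,
           "Baseline 1": baseline_1, "Baseline 2": baseline_2}

for name, accs in columns.items():
    mean = sum(accs) / len(accs)
    # Each column mean should match the reported Average within
    # two-decimal rounding.
    assert abs(mean - reported_averages[name]) < 0.01, (name, mean)
    print(f"{name}: {mean:.2f}")
```

Running this confirms each reported column average agrees with the mean of its 20 entries to two decimal places.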