Fig. 2 | BMC Bioinformatics

From: Incorporating representation learning and multihead attention to improve biomedical cross-sentence n-ary relation extraction

Overview of our model. The Bi-LSTM first encodes each word by concatenating its word and position embeddings, after which the multihead attention directly draws the global dependencies from the Bi-LSTM output. Then, the sentence embedding is concatenated with relation information, which comes from the KG. e_drug, e_gene and e_mutation are the drug, gene and mutation entities, respectively. v_drug-gene and v_drug-mutation denote the different relation vectors. Finally, the sentence representation with entity relation information is fed to a softmax classifier.
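To make the data flow in the figure concrete, below is a minimal PyTorch sketch of the described pipeline: word+position embeddings, a Bi-LSTM, multihead self-attention, pooling to a sentence embedding, concatenation with the two KG relation vectors, and a softmax classifier. All layer sizes, the mean-pooling step, and the names (CrossSentenceNaryRE, v_drug_gene, v_drug_mutation, etc.) are assumptions for illustration, not the authors' implementation.

    import torch
    import torch.nn as nn

    class CrossSentenceNaryRE(nn.Module):
        """Sketch of the Fig. 2 pipeline; all dimensions are assumed."""

        def __init__(self, vocab_size, n_positions, word_dim=100, pos_dim=20,
                     hidden=128, heads=4, rel_dim=50, n_classes=5):
            super().__init__()
            self.word_emb = nn.Embedding(vocab_size, word_dim)
            self.pos_emb = nn.Embedding(n_positions, pos_dim)
            # Bi-LSTM encodes the concatenated word + position embeddings
            self.bilstm = nn.LSTM(word_dim + pos_dim, hidden,
                                  bidirectional=True, batch_first=True)
            # multihead attention draws global dependencies of the Bi-LSTM output
            self.attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
            # classifier sees the sentence embedding plus the two KG relation
            # vectors (v_drug-gene and v_drug-mutation in the figure)
            self.classifier = nn.Linear(2 * hidden + 2 * rel_dim, n_classes)

        def forward(self, words, positions, v_drug_gene, v_drug_mutation):
            x = torch.cat([self.word_emb(words), self.pos_emb(positions)], dim=-1)
            h, _ = self.bilstm(x)          # (B, T, 2*hidden)
            a, _ = self.attn(h, h, h)      # self-attention over the sequence
            sent = a.mean(dim=1)           # pooled sentence embedding (assumed mean)
            rep = torch.cat([sent, v_drug_gene, v_drug_mutation], dim=-1)
            return torch.softmax(self.classifier(rep), dim=-1)

How the relation vectors are produced from the KG, and how the attention output is pooled into a sentence embedding, are not specified in the caption, so those steps here are placeholders.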
