Fig. 2 From: Incorporating representation learning and multihead attention to improve biomedical cross-sentence n-ary relation extraction

Overview of our model. The Bi-LSTM first encodes each word by concatenating word and position embeddings; the multihead attention then directly draws global dependencies from the Bi-LSTM output. Next, the sentence embedding is concatenated with relation information, which comes from the KG. e_drug, e_gene, and e_mutation are the drug, gene, and mutation entities, respectively; v_drug-gene and v_drug-mutation denote the different relation vectors. Finally, the sentence representation with entity relation information is fed to a softmax classifier.
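The pipeline in the caption can be sketched end to end in NumPy. This is a minimal illustration only, with several assumptions not stated in the caption: random stand-in weights, a random matrix in place of the actual Bi-LSTM output, mean pooling to obtain the sentence embedding, and hypothetical dimensions (`d_word`, `d_pos`, `d_rel`, `n_classes`); the paper's real model would learn these weights and may pool differently.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multihead_attention(H, num_heads=4):
    """Scaled dot-product self-attention over Bi-LSTM outputs H
    (seq_len, d_model); projection weights are random stand-ins."""
    seq_len, d_model = H.shape
    d_k = d_model // num_heads
    heads = []
    for _ in range(num_heads):
        Wq, Wk, Wv = (rng.standard_normal((d_model, d_k)) * 0.1
                      for _ in range(3))
        Q, K, V = H @ Wq, H @ Wk, H @ Wv
        # Attention scores capture global token-to-token dependencies.
        attn = softmax(Q @ K.T / np.sqrt(d_k))
        heads.append(attn @ V)
    return np.concatenate(heads, axis=-1)  # back to (seq_len, d_model)

# Hypothetical dimensions, for illustration only.
seq_len, d_word, d_pos, d_rel, n_classes = 10, 48, 16, 8, 3
d_model = d_word + d_pos                   # word + position embeddings

# Stand-in for the Bi-LSTM output over the concatenated embeddings.
H = rng.standard_normal((seq_len, d_model))

A = multihead_attention(H)
sentence = A.mean(axis=0)                  # pooled sentence embedding (assumed)

# Relation vectors from the KG: v_drug-gene and v_drug-mutation.
v_drug_gene = rng.standard_normal(d_rel)
v_drug_mutation = rng.standard_normal(d_rel)
rep = np.concatenate([sentence, v_drug_gene, v_drug_mutation])

# Final softmax classifier over n_classes relation labels.
W_out = rng.standard_normal((rep.size, n_classes)) * 0.1
probs = softmax(rep @ W_out)
```

After the attention step, `rep` has length `d_model + 2 * d_rel`, matching the caption's concatenation of the sentence embedding with the two KG relation vectors before the softmax classifier.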