Fig. 4 | BMC Bioinformatics

From: AttentionDDI: Siamese attention-based deep learning method for drug–drug interaction predictions

AttentionDDI model architecture. (1) The drug pair feature vectors \((u_a,u_b)\) derived from each similarity matrix are used as model input, separately for each drug. (2) A Transformer-based Siamese encoder generates new feature representations for each drug: first, Self-Attention applies learned weights to the drug feature vectors; then, a feed-forward network non-linearly transforms the weighted feature vectors; finally, a Feature Attention pooling step aggregates the transformed feature vectors into a single feature vector representation for each drug (\(z_a\) or \(z_b\), respectively). (3) A separate classifier model concatenates the encoded feature vectors \(z_a,z_b\) with their distance (Euclidean or cosine). Lastly, an affine mapping of the concatenated drug pair vector followed by a Softmax function yields a drug-interaction probability distribution for each drug pair
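The three stages in the caption (Siamese self-attention encoding, feature-attention pooling, and a distance-augmented classifier) can be sketched numerically. This is a minimal illustrative sketch only: the dimensions, single-head attention, ReLU activation, and parameter names below are assumptions for clarity, not the authors' exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def encode(U, Wq, Wk, Wv, W1, b1, W2, b2, w_pool):
    """Shared (Siamese) encoder: Self-Attention -> feed-forward -> Feature Attention pooling."""
    # (2a) Self-Attention: learned weighting of the m per-similarity-matrix feature vectors.
    Q, K, V = U @ Wq, U @ Wk, U @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[1]))   # (m, m) attention weights
    H = A @ V                                    # weighted feature vectors
    # (2b) Feed-forward network: non-linear transform of the weighted vectors.
    F = np.maximum(0.0, H @ W1 + b1) @ W2 + b2   # (m, d)
    # (2c) Feature Attention pooling: aggregate m vectors into one per drug.
    alpha = softmax(F @ w_pool)                  # (m,) pooling weights
    return alpha @ F                             # (d,) drug representation z

m, d = 4, 8   # m similarity matrices, d-dimensional features (illustrative sizes)
params = [rng.standard_normal(s) * 0.1 for s in
          [(d, d), (d, d), (d, d), (d, 2 * d), (2 * d,), (2 * d, d), (d,), (d,)]]

# (1) One feature vector per similarity matrix, for each drug of the pair.
U_a = rng.standard_normal((m, d))
U_b = rng.standard_normal((m, d))
z_a = encode(U_a, *params)   # same (shared) encoder weights for both drugs
z_b = encode(U_b, *params)

# (3) Classifier: concatenate z_a, z_b and their Euclidean distance,
#     then affine mapping followed by Softmax.
dist = np.linalg.norm(z_a - z_b)
x = np.concatenate([z_a, z_b, [dist]])
W_cls = rng.standard_normal((2 * d + 1, 2)) * 0.1
p = softmax(x @ W_cls)       # drug-interaction probability distribution
```

Because the encoder weights are shared between the two drugs, swapping `U_a` and `U_b` yields the same pair representation up to the order of concatenation, which is the point of the Siamese design.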
