
Fig. 1

From: MATHLA: a robust framework for HLA-peptide binding prediction integrating bidirectional LSTM and multiple head attention mechanism


The network structure of MATHLA. a Embedding layer: the peptide and the HLA pseudo-sequence are encoded with the BLOSUM62 similarity matrix. b Sequence learning layer: the encoded information from the embedding layer is fed into the sequence learning layer to extract contextual sequence features. c Attention block: each head assigns weights to individual positions of the original input according to the corresponding subspace of the sequence representation. d Fusion layer: a 2-dimensional convolutional neural network with a 1×1×head filter fuses the vectors output from (c). e Output layer: a linear layer followed by a sigmoid function outputs a normalized affinity score between 0 and 1
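As a concrete reading of panels (a)-(e), the following is a minimal PyTorch sketch of the pipeline the caption describes. The class name MATHLASketch and all layer sizes (embed_dim=21 matching the BLOSUM62 alphabet, hidden_dim=64, n_heads=4) are illustrative assumptions, not the authors' published hyperparameters or code.

```python
import torch
import torch.nn as nn

class MATHLASketch(nn.Module):
    """Hypothetical sketch following Fig. 1; sizes are assumptions."""

    def __init__(self, embed_dim=21, hidden_dim=64, n_heads=4):
        super().__init__()
        d = 2 * hidden_dim  # BiLSTM concatenates both directions
        # (b) Sequence learning layer: bidirectional LSTM over the
        # BLOSUM62-encoded peptide + HLA pseudo-sequence (panel a).
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # (c) Attention block: one scoring vector per head, so each head
        # assigns its own weights over the sequence positions.
        self.head_scorers = nn.ModuleList(
            [nn.Linear(d, 1) for _ in range(n_heads)])
        # (d) Fusion layer: 1x1 2-D convolution across the head dimension.
        self.fuse = nn.Conv2d(n_heads, 1, kernel_size=1)
        # (e) Output layer: linear map to a scalar affinity score.
        self.out = nn.Linear(d, 1)

    def forward(self, x):
        # x: (batch, seq_len, embed_dim), rows of BLOSUM62 per residue.
        h, _ = self.bilstm(x)                     # (batch, seq, d)
        head_vecs = []
        for scorer in self.head_scorers:
            w = torch.softmax(scorer(h), dim=1)   # weights over positions
            head_vecs.append((w * h).sum(dim=1))  # (batch, d) per head
        stacked = torch.stack(head_vecs, dim=1)   # (batch, heads, d)
        fused = self.fuse(stacked.unsqueeze(2))   # (batch, 1, 1, d)
        fused = fused.squeeze(2).squeeze(1)       # (batch, d)
        return torch.sigmoid(self.out(fused))     # affinity in (0, 1)

model = MATHLASketch()
# e.g. a 9-mer peptide concatenated with a 34-residue HLA pseudo-sequence
x = torch.randn(2, 43, 21)
print(model(x).shape)  # torch.Size([2, 1])
```

Treating the per-head context vectors as channels lets the 1×1 convolution learn a weighted combination of heads, which is one way to realize the fusion step shown in panel (d).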
