Table 3 The average BA and GM values under different CNN + Transformer structures

From: A deep learning approach for orphan gene identification in moso bamboo (Phyllostachys edulis) based on the CNN + Transformer model

| Method | BA | GM | Train time (min) | Test time (s) |
|---|---|---|---|---|
| E + FC_256 | 0.677 | 0.612 | 25 | 88 |
| E + CNN_6 + FC_256 | 0.748 | 0.644 | 29 | 106 |
| E + CNN_6 + CNN_3 + FC_256 | 0.773 | 0.784 | 28 | 99 |
| E + Transformer + FC_256 | 0.844 | 0.832 | 57 | 390 |
| E + CNN_6 + Transformer + FC_256 | 0.866 | 0.849 | 61 | 377 |
| E + CNN_6 + CNN_3 + Transformer + FC_256 | **0.877** | **0.881** | 44 | 342 |
| E + CNN_6 + CNN_3 + CNN_3 + Transformer + FC_256 | 0.853 | 0.837 | 48 | 366 |
| E + CNN_6 + CNN_3 + Transformer + FC_256 + FC_256 | 0.871 | 0.865 | 55 | 404 |

  1. Bold values indicate the highest value for each evaluation indicator
  2. E: word embedding encoding. CNN_6: multiscale convolution layer with convolution kernel sizes {2, 3, 4, 5, 6, 7}, one per scale. CNN_3: multiscale convolution layer with convolution kernel sizes {3, 6, 9}, one per scale. Transformer: three-layer transformer encoder. FC_256: fully connected layer with 256 neurons
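The best-performing structure above (E + CNN_6 + CNN_3 + Transformer + FC_256) can be sketched in PyTorch as follows. This is a minimal illustration, not the authors' implementation: the embedding dimension, number of attention heads, the summation used to fuse the multiscale branches, and the mean-pooling before the classifier are all assumptions; only the kernel-size sets {2, 3, 4, 5, 6, 7} and {3, 6, 9}, the three transformer layers, and the 256-neuron fully connected layer come from the table's footnote.

```python
import torch
import torch.nn as nn


class CNNTransformerSketch(nn.Module):
    """Hypothetical sketch of E + CNN_6 + CNN_3 + Transformer + FC_256.

    Hyperparameters (vocab_size, embed_dim, nhead) are placeholders,
    not values from the paper.
    """

    def __init__(self, vocab_size=25, embed_dim=64, n_classes=2):
        super().__init__()
        # E: word embedding encoding of the input token sequence
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # CNN_6: multiscale convolutions with kernel sizes {2, 3, 4, 5, 6, 7}
        self.cnn6 = nn.ModuleList(
            nn.Conv1d(embed_dim, embed_dim, k, padding="same")
            for k in (2, 3, 4, 5, 6, 7)
        )
        # CNN_3: multiscale convolutions with kernel sizes {3, 6, 9}
        self.cnn3 = nn.ModuleList(
            nn.Conv1d(embed_dim, embed_dim, k, padding="same")
            for k in (3, 6, 9)
        )
        # Transformer: three encoder layers
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=4, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(layer, num_layers=3)
        # FC_256: fully connected layer with 256 neurons, then the classifier head
        self.fc = nn.Sequential(
            nn.Linear(embed_dim, 256), nn.ReLU(), nn.Linear(256, n_classes)
        )

    def forward(self, x):
        # x: (batch, seq_len) integer token ids
        h = self.embed(x).transpose(1, 2)             # (batch, embed_dim, seq_len)
        # Fuse the multiscale branches by summation (an assumption;
        # concatenation is an equally plausible choice)
        h = torch.relu(sum(conv(h) for conv in self.cnn6))
        h = torch.relu(sum(conv(h) for conv in self.cnn3))
        h = self.transformer(h.transpose(1, 2))       # (batch, seq_len, embed_dim)
        return self.fc(h.mean(dim=1))                 # pool over sequence, classify


model = CNNTransformerSketch()
logits = model(torch.randint(0, 25, (2, 50)))  # a batch of 2 sequences of length 50
print(logits.shape)  # (2, n_classes)
```

The `padding="same"` convolutions keep the sequence length fixed across scales so the branch outputs can be combined elementwise before the transformer stage.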