Table 1 Hyper-parameters optimization

From: iEnhancer-DCLA: using the original sequence to identify enhancers and their strength based on a deep learning framework

| Hyper-parameter | Range | Recommendation |
| --- | --- | --- |
| Convolutional layer number | [1, 2, 3, 4] | 2 |
| Convolutional neuron number | [16, 32, 64, 128, 256] | 128, 256 |
| Convolutional kernel size | [3, 6, 8, 16, 20, 30] | 8 |
| Max-pooling layer size | [2, 4, 6, 8] | 2 |
| Dropout rate | [0.1, 0.2, 0.3, 0.5] | 0.2 |
| Number of neurons in Bi-LSTM | [16, 32, 50, 64] | 64 |
| Optimizer | [SGD, Adam] | Adam |
| Learning rate | [2e-6, 5e-6, 8e-6, 2e-5] | 5e-6, 2e-6 |
| Batch size | [16, 32, 64, 128] | 32 |
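The search space and recommended settings above can be captured as a plain configuration dictionary. This is a minimal sketch for reproducing the tuning setup; the key names are illustrative and are not identifiers from the paper's code.

```python
# Hyper-parameter search space from Table 1 of iEnhancer-DCLA.
# Key names are illustrative, not taken from the authors' implementation.
SEARCH_SPACE = {
    "conv_layers": [1, 2, 3, 4],
    "conv_filters": [16, 32, 64, 128, 256],
    "conv_kernel_size": [3, 6, 8, 16, 20, 30],
    "max_pool_size": [2, 4, 6, 8],
    "dropout_rate": [0.1, 0.2, 0.3, 0.5],
    "bilstm_units": [16, 32, 50, 64],
    "optimizer": ["SGD", "Adam"],
    "learning_rate": [2e-6, 5e-6, 8e-6, 2e-5],
    "batch_size": [16, 32, 64, 128],
}

# Recommended values from the table; tuples hold entries where the
# table reports more than one recommended setting.
RECOMMENDED = {
    "conv_layers": 2,
    "conv_filters": (128, 256),
    "conv_kernel_size": 8,
    "max_pool_size": 2,
    "dropout_rate": 0.2,
    "bilstm_units": 64,
    "optimizer": "Adam",
    "learning_rate": (5e-6, 2e-6),
    "batch_size": 32,
}


def check_recommendations(space, rec):
    """Verify that every recommended value lies inside its searched range."""
    for key, value in rec.items():
        candidates = value if isinstance(value, tuple) else (value,)
        for v in candidates:
            if v not in space[key]:
                raise ValueError(f"{key}: {v} not in searched range {space[key]}")
    return True
```

A sanity check like `check_recommendations(SEARCH_SPACE, RECOMMENDED)` guards against transcription errors when wiring these values into a tuning script.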