Table 4 FFNN hyperparameter space explored with hg19 and hg38 data through Bayesian optimization. A. Architecture and learning hyperparameters of the fixed-FFNN; B. Architecture and hyperparameter space of the Bayesian-FFNN models trained on the hg19 dataset; C. Architecture and hyperparameter space of the Bayesian-FFNN models trained on the hg38 dataset

From: Boosting tissue-specific prediction of active cis-regulatory regions through deep learning and Bayesian optimization techniques

A: fixed-FFNN

| Layers | Units | Activation |
| --- | --- | --- |
| Dense | 16 | ReLU |
| Dense | 4 | ReLU |
| Dense | 2 | ReLU |
| Output | 1 | Sigmoid |
| Learning parameters | | |
| Learning rate | 0.5 | |
| Learning rate decay | 0.1 | |
| l2 regularizer | 0.0 | |
| Batch size | 32 | |
| Optimizer | SGD | |
| Max no. of epochs | 64 | |
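For concreteness, the fixed architecture in part A could be written out as below. This is a minimal sketch assuming TensorFlow/Keras and an inverse-time reading of the learning-rate decay; the framework, the exact decay rule, and the input width `n_features` are assumptions, not details taken from the table.

```python
# Minimal sketch of part A's fixed-FFNN; TensorFlow/Keras and the
# inverse-time decay rule are assumptions, n_features is hypothetical.
import tensorflow as tf

n_features = 128  # hypothetical input dimensionality

# One plausible reading of "learning rate decay = 0.1": inverse-time
# decay per step, mirroring the legacy Keras SGD decay semantics.
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
    initial_learning_rate=0.5, decay_steps=1, decay_rate=0.1)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu",
                          input_shape=(n_features,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(2, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # binary output
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=lr_schedule),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
# Training with the tabulated settings (l2 regularizer = 0.0, so no
# kernel_regularizer is attached):
# model.fit(X_train, y_train, batch_size=32, epochs=64)
```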

 

B: Bayesian-FFNN (hg19 dataset)

| Layers | Hyperparameter space | Activation |
| --- | --- | --- |
| No. of dense layers | {0, 1, 2, 3} | |
| No. of units, layer 1 | {256, 128, 64, 32, 16, 8, 4, 2} | ReLU |
| No. of units, layer 2 | {128, 64, 32, 16, 8, 4, 2} | ReLU |
| No. of units, layer 3 | {64, 32, 16, 8, 4, 2} | ReLU |
| Output | 1 | Sigmoid |
| Learning parameters | | |
| Learning rate | [0.1, 0.5] | |
| Learning rate decay | [0.01, 0.2] | |
| l2 regularizer | [0, 0.1] | |
| Batch size | [32, 256] | |
| Optimizer | SGD | |
| Max no. of epochs | [32, 1000] | |
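The part B search space can be transcribed directly into an optimizer's space definition. A hedged sketch using scikit-optimize's space primitives follows; the library choice, the variable names, and the objective wiring are illustrative assumptions, not the authors' code.

```python
# Hedged sketch of the part B (hg19) search space; scikit-optimize
# and all names are assumptions for illustration.
from skopt.space import Categorical, Integer, Real

hg19_space = [
    Categorical([0, 1, 2, 3], name="n_dense_layers"),
    Categorical([256, 128, 64, 32, 16, 8, 4, 2], name="units_layer_1"),
    Categorical([128, 64, 32, 16, 8, 4, 2], name="units_layer_2"),
    Categorical([64, 32, 16, 8, 4, 2], name="units_layer_3"),
    Real(0.1, 0.5, name="learning_rate"),         # continuous [0.1, 0.5]
    Real(0.01, 0.2, name="learning_rate_decay"),  # continuous [0.01, 0.2]
    Real(0.0, 0.1, name="l2_regularizer"),        # continuous [0, 0.1]
    Integer(32, 256, name="batch_size"),          # [32, 256], integer-valued
    Integer(32, 1000, name="max_epochs"),         # [32, 1000], integer-valued
]

# A Gaussian-process-driven search would then call a user-supplied
# objective, e.g. the validation loss of the trained FFNN:
# result = skopt.gp_minimize(objective, hg19_space, n_calls=50)
```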

 

C: Bayesian-FFNN (hg38 dataset)

| Layers | Hyperparameter space | Activation |
| --- | --- | --- |
| Groups | n = 4 | |
| No. of hidden layers composing the group | {0, …, 3} | |
| No. of units of the dense layer | {0, …, 256} | ReLU |
| Dropout | [0, 0.5] | |
| Output | 1 | Sigmoid |
| Learning parameters | | |
| Learning rate | [0.1, 0.5] | |
| Learning rate decay | [0.01, 0.2] | |
| l2 regularizer | [0, 0.1] | |
| Batch size | [32, 256] | |
| Optimizer | SGD | |
| Max no. of epochs | [32, 1000] | |

 
  1. In parts B and C, the search space explored for each optimized hyperparameter is shown: square brackets denote continuous hyperparameter spaces, while curly brackets denote discrete ones. “Dense” refers to fully connected layers
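To make the group-based parameterization of part C concrete, the sketch below builds one candidate network from a sampled configuration. The helper `build_hg38_ffnn`, the Keras framing, the per-layer placement of dropout within a group, and the sampled values are all hypothetical.

```python
# Hedged sketch of instantiating one part C (hg38) candidate; the
# helper name, framework, and sampled values are hypothetical.
import tensorflow as tf

def build_hg38_ffnn(n_features, groups):
    """groups: one dict per group (n = 4), each with
    'n_layers' in {0, ..., 3}, 'units' in {0, ..., 256},
    'dropout' in [0, 0.5]. Dropout after every dense layer in
    the group is an assumption."""
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.InputLayer(input_shape=(n_features,)))
    for g in groups:
        for _ in range(g["n_layers"]):  # n_layers = 0 skips the group
            model.add(tf.keras.layers.Dense(g["units"], activation="relu"))
            model.add(tf.keras.layers.Dropout(g["dropout"]))
    model.add(tf.keras.layers.Dense(1, activation="sigmoid"))
    return model

# Hypothetical configuration sampled by the Bayesian optimizer;
# units = 0 is only meaningful together with n_layers = 0.
sampled = [
    {"n_layers": 2, "units": 128, "dropout": 0.2},
    {"n_layers": 1, "units": 64,  "dropout": 0.1},
    {"n_layers": 0, "units": 0,   "dropout": 0.0},  # empty group
    {"n_layers": 1, "units": 16,  "dropout": 0.3},
]
model = build_hg38_ffnn(128, sampled)
```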