
Table 1 Implementation of the new BN structure learning algorithm

From: Quantitative utilization of prior biological knowledge in the Bayesian network modeling of gene expression data

   Input:
   n: number of nodes in the network.
   D: discretized expression data matrix.
   BurnIn: number of steps to take before drawing sample networks for evaluation. Default value: 50 times the size of the sampling reservoir.
   n_iteration: number of iterations. Default value: 80 times the size of the sampling reservoir.
   Δ_samples: interval at which sample networks are collected from the chain after burn-in. Default value: 1000.
   maxFanIn: maximum number of parents allowed for a node.
   Output:
   A set of DAGs collected once the maximum iteration step is reached.
   An average DAG in the form of a matrix.
   1. Create a sampling edge reservoir based on p_link.
   2. Set all elements of the adjacency matrix for the initial DAG to 0.
   3. for loop_index = 1: n_iteration do
(1) randomly select an element edge(i,j) from the edge sampling reservoir, corresponding to the gene pair (i,j).
(2) if edge(i,j) exists in the current DAG, delete the edge; else if edge(j,i) exists in the current DAG, reverse edge(j,i) to edge(i,j); else add edge(i,j). We call these operations "delete", "reverse", and "add", respectively.
(3) check whether the newly proposed DAG remains acyclic and satisfies the maxFanIn rule for nodes i and j. If not, keep the current DAG, discard the proposed DAG, and go to (1).
(4) calculate the log marginal likelihood (LL)* of the expression data D for node j and its parents under the current DAG (LL_old) and under the proposed DAG (LL_new), and define bf1 = exp(LL_new - LL_old).
(5) if the operation is "delete" or "add", set bf2 = 1; if the operation is "reverse", calculate bf2 for node i in the same way as for node j in (4).
(6) calculate the prior probability* of the current DAG (prior_old) and of the proposed DAG (prior_new); calculate the Metropolis-Hastings ratio (RHM) of the two DAGs; generate a random number u between 0 and 1; if bf1*bf2*prior_new/prior_old < u*RHM, keep the current DAG, discard the proposed DAG, and go to (1).
(7) when loop_index > BurnIn and (loop_index - BurnIn) is exactly divisible by Δ_samples, record the proposed DAG and its posterior probability.
   4. End of loop: calculate the average DAG in the form of a matrix, whose elements are the edges of all recorded DAGs averaged with weights given by their posterior probabilities.
  1. *Details of the definition of the marginal likelihood, and of how to calculate LL and the prior probability of a DAG, can be found in [10, 31].
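Steps 1–4 above can be sketched in Python as follows. This is an illustrative reading, not the authors' implementation: the node-wise log marginal likelihood `log_score` and the DAG prior `log_prior` are left as caller-supplied functions (the paper defines them via [10, 31]), the edge reservoir is realized as weighted sampling over ordered pairs with weights p_link, and the proposal-ratio term RHM is simply set to 1 (a symmetric-proposal assumption).

```python
import math
import random

def is_acyclic(adj):
    """Kahn-style check: repeatedly strip nodes with no remaining parents;
    the graph is a DAG iff every node can be stripped."""
    n = len(adj)
    remaining = set(range(n))
    changed = True
    while changed and remaining:
        changed = False
        for v in list(remaining):
            if not any(adj[u][v] for u in remaining):
                remaining.discard(v)
                changed = True
    return not remaining

def structure_mcmc(n, p_link, log_score, log_prior, max_fan_in=3,
                   burn_in=None, n_iteration=None, delta_samples=1000, seed=0):
    rng = random.Random(seed)
    # Step 1: edge "reservoir", realized here as weighted sampling of ordered
    # pairs (i, j), i != j, with weights proportional to p_link[i][j].
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    weights = [p_link[i][j] for (i, j) in pairs]
    if burn_in is None:
        burn_in = 50 * len(pairs)      # default: 50 x reservoir size
    if n_iteration is None:
        n_iteration = 80 * len(pairs)  # default: 80 x reservoir size
    adj = [[0] * n for _ in range(n)]  # Step 2: start from the empty DAG

    def parents(g, v):
        return tuple(u for u in range(n) if g[u][v])

    samples, log_posts = [], []
    for loop_index in range(1, n_iteration + 1):
        i, j = rng.choices(pairs, weights=weights)[0]           # (1)
        proposed = [row[:] for row in adj]
        if adj[i][j]:                                           # (2)
            op = "delete"; proposed[i][j] = 0
        elif adj[j][i]:
            op = "reverse"; proposed[j][i] = 0; proposed[i][j] = 1
        else:
            op = "add"; proposed[i][j] = 1
        # (3) reject if the proposal creates a cycle or violates maxFanIn
        fan_ok = all(sum(proposed[u][v] for u in range(n)) <= max_fan_in
                     for v in (i, j))
        if not (fan_ok and is_acyclic(proposed)):
            continue
        # (4) local Bayes factor for node j, whose parent set changed
        bf1 = math.exp(log_score(j, parents(proposed, j)) -
                       log_score(j, parents(adj, j)))
        bf2 = 1.0                                               # (5)
        if op == "reverse":
            bf2 = math.exp(log_score(i, parents(proposed, i)) -
                           log_score(i, parents(adj, i)))
        # (6) prior ratio and Metropolis-Hastings test; RHM = 1 assumed
        prior_ratio = math.exp(log_prior(proposed) - log_prior(adj))
        rhm = 1.0
        if bf1 * bf2 * prior_ratio < rng.random() * rhm:
            continue                                            # keep current DAG
        adj = proposed
        # (7) record the accepted DAG and its (log) posterior after burn-in
        if loop_index > burn_in and (loop_index - burn_in) % delta_samples == 0:
            samples.append([row[:] for row in adj])
            log_posts.append(sum(log_score(v, parents(adj, v)) for v in range(n))
                             + log_prior(adj))
    # Step 4: posterior-weighted average of the recorded adjacency matrices.
    m = max(log_posts)
    w = [math.exp(lp - m) for lp in log_posts]
    z = sum(w)
    avg = [[sum(wk * s[a][b] for wk, s in zip(w, samples)) / z
            for b in range(n)] for a in range(n)]
    return avg, samples
```

Each entry of the returned `avg` matrix can then be read as a posterior-weighted frequency of the corresponding edge across the sampled DAGs, in keeping with step 4 of the table.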