Pseudo code of network simulations by CaiNet
Effective rates given by the promoter structure
Activation by a single transcription factor
In the simplest case of gene activation, we assume that the promoter is activated as soon as a transcription factor (TF) binds. The on-rate is then given by the arrival rate λeff of the TF at the promoter. Once a TF has arrived at the promoter, the gene is ‘on’ for the time 1/µeff. This time may vary depending on the TF and the promoter of the gene. Unless otherwise stated, we assume that the binding time of the TF, 1/µ, corresponds to this on-time.
$$\begin{aligned} \lambda_{eff} & = n_{TF} \lambda_{0} \\ \mu_{eff} & = \mu \\ \end{aligned}$$
(11)
where nTF is the number of transcription factors and λ0 is the arrival rate of a single transcription factor.
Promoter with AND-logic
The AND-logic refers to a promoter that is only activated if TF1 and TF2 up to TFN are bound (Fig. 2d). Once a single TF leaves, the activation criterion is immediately violated and the promoter is off. Therefore, the off-rate µeff of the promoter is the sum over all off-rates µTFi of the TFs.
$$\mu_{eff} = \sum\limits_{i = 1}^{N} {\mu_{TFi} }$$
(12)
Combinatorially, we can also write the probability that the promoter is on as the product of the probabilities that all TFs are bound
$$p_{on} = \prod\limits_{i} {p_{on,TFi} } = \prod\limits_{i} {\lambda_{TFi} /(\lambda_{TFi} + \mu_{TFi} )}$$
(13)
where λTFi = λTFi,0 nTFi is the arrival rate of the i-th TF at the promoter. From pon and µeff, we can calculate the on-rate of the promoter
$$\lambda_{eff} = p_{on} /(1 - p_{on} )\mu_{eff} = K\mu_{eff}$$
(14)
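As a concrete illustration, the effective rates of Eqs. (12)–(14) can be computed in a few lines of Python (a sketch with illustrative rate values, not CaiNet source code):

```python
# Sketch (not CaiNet source code) of the AND-logic effective rates,
# Eqs. (12)-(14), with illustrative rate values.

def and_promoter_rates(lams, mus):
    """lams[i], mus[i]: arrival and unbinding rate of TF i at the promoter."""
    mu_eff = sum(mus)                          # Eq. (12): first unbinding event
    p_on = 1.0
    for lam, mu in zip(lams, mus):
        p_on *= lam / (lam + mu)               # Eq. (13): all TFs bound
    lam_eff = p_on / (1.0 - p_on) * mu_eff     # Eq. (14)
    return lam_eff, mu_eff

lam_eff, mu_eff = and_promoter_rates([2.0, 3.0], [1.0, 0.5])
# the reduced two-state promoter reproduces the combinatorial p_on
p_on_two_state = lam_eff / (lam_eff + mu_eff)
```

By construction, the reduced two-state promoter with rates (λeff, µeff) has the same on-probability as the full combinatorial model.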
Promoter with OR-logic
When a promoter is active if TF1 or TF2 up to TFN, or any combination of them, is bound, we refer to this promoter as OR-logic (Fig. 2d). We start the calculation of effective rates with the on-rates of the individual TFs, λTFi. Since the arrival of any TF is enough to activate the promoter, the on-rate is
$$\lambda_{eff} = \sum\limits_{i} {\lambda_{TFi} }$$
(15)
The probability to be on can be calculated combinatorially with pon,TFi implicitly defined in (13)
$$p_{on} = 1 - (1 - p_{on,TF1} )(1 - p_{on,TF2} )...(1 - p_{on,TFN} )$$
(16)
With (14) we find the effective off-rate of the promoter
$$\mu_{eff} = \lambda_{eff} (1 - p_{on} )/p_{on}$$
(17)
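The analogous computation for the OR-logic promoter, Eqs. (15)–(17), can be sketched as follows (illustrative rates, not CaiNet source code):

```python
# Sketch (not CaiNet source code) of the OR-logic effective rates,
# Eqs. (15)-(17), with illustrative rate values.

def or_promoter_rates(lams, mus):
    lam_eff = sum(lams)                        # Eq. (15): first arrival event
    p_off = 1.0
    for lam, mu in zip(lams, mus):
        p_off *= mu / (lam + mu)               # product of (1 - p_on,TFi)
    p_on = 1.0 - p_off                         # Eq. (16)
    mu_eff = lam_eff * (1.0 - p_on) / p_on     # Eq. (17)
    return lam_eff, mu_eff

lam_eff, mu_eff = or_promoter_rates([2.0, 3.0], [1.0, 0.5])
p_on_two_state = lam_eff / (lam_eff + mu_eff)
```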
Competitive repression
We now extend all promoters developed above with an additional feature: blocking of the promoter by a repressor. We assume that the promoter cannot be activated while a repressor is bound. Once a gene is activated, however, the repressor does not abort expression. Therefore, the repressor directly affects the effective on-rate of an arbitrary promoter, while the off-rate remains unchanged
$$\begin{aligned} \lambda_{eff,repressor} & = \lambda_{eff} \cdot p_{off,repressor} \\ \mu_{eff,repressor} & = \mu_{eff} \\ \end{aligned}$$
(18)
To prove this, we write down the differential equations describing the changes in the blocked and free promoter populations. We denote blocked promoters by b, free but inactive promoters by f and active promoters by a
$$\begin{aligned} \dot{b} & = - \mu_{R} b + \lambda_{R} f \\ \dot{f} & = \mu_{R} b + \mu_{eff} a - \lambda_{eff} f - \lambda_{R} f \\ \end{aligned}$$
(19)
Assuming that the blocked population equilibrates quickly (\(\dot{b} \approx 0\)), we find
$$b = \frac{{\lambda_{R} }}{{\mu_{R} }}f = K_{R} f$$
(20)
If we now calculate the sum of blocked and unblocked promoters, we obtain
$$p = b + f = (1 + K_{R} )f$$
(21)
Adding up the equations in (19) and inserting (21), we obtain
$$\dot{p} = \mu_{eff} a - \tfrac{{\lambda_{eff} }}{{1 + K_{R} }}p$$
(22)
From this result we conclude for the new effective rate modified by the repressor
$$\lambda_{eff,repressor} = \lambda_{eff} \left( {1 + K_{R} } \right)^{ - 1}$$
(23)
This is equivalent to Eq. (18) and the proof is complete.
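The reduction can also be checked numerically: integrating the three-state master equation for blocked, free and active promoters to steady state reproduces the on-probability of the reduced two-state model with λeff/(1 + KR). A minimal Python sketch with illustrative rates:

```python
# Numerical cross-check of Eqs. (18)/(23) with illustrative rates: integrate
# the three-state master equation (blocked b, free f, active a) to steady
# state and compare the active fraction with the reduced two-state model.

lam_eff, mu_eff = 2.0, 1.0     # promoter on/off rates without repressor
lam_R, mu_R = 3.0, 0.5         # repressor binding/unbinding rates
K_R = lam_R / mu_R

b, f, a = 0.0, 1.0, 0.0        # state probabilities, start in the free state
dt = 1e-3
for _ in range(40_000):        # integrate to t = 40 (well past relaxation)
    db = lam_R * f - mu_R * b
    da = lam_eff * f - mu_eff * a
    df = -(db + da)            # total probability is conserved
    b, f, a = b + db * dt, f + df * dt, a + da * dt

# reduced model: lam_eff is divided by (1 + K_R), mu_eff is unchanged
lam_red = lam_eff / (1.0 + K_R)
p_on_reduced = lam_red / (lam_red + mu_eff)
```

At steady state, the active fraction a of the full three-state model agrees with p_on_reduced of the two-state model.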
Gene expression including delays
In a simplified approach, we considered one rate-limiting step with rate constant ν for synthesis of the gene product. This assumption neglects the fact that certain processes in gene product synthesis may introduce a delay between initiation of product synthesis and availability of the synthesized product, e.g. elongation of RNA, termination of transcription, initiation of translation, elongation of the protein and termination of translation. In addition, the mRNA needs time to be transported to ribosomes and might be subject to post-transcriptional modifications. To account for such processes in gene product synthesis, we included additional rate-limiting steps corresponding to Poisson processes with rates β1…βN.
To determine the resulting function n(t), we solve the general system of ODEs for an arbitrary number of delay processes. Using the switching function a(t) of a single gene, we arrive at the system of differential equations
$$\begin{aligned} & \dot{c}_{1} = a(t)\nu - \beta_{1} c_{1} \\ & \dot{c}_{2} = \beta_{1} c_{1} - \beta_{2} c_{2} \\ & ... \\ & \dot{c}_{N} = \beta_{N - 1} c_{N - 1} - \beta_{N} c_{N} \\ \end{aligned}$$
(24)
where cN represents the output expression level of the gene. Our first step to solve the system for cN(t) is to calculate the Laplace transform of all equations in (24). The result for the first and the i-th equation is
$$\begin{aligned} \tilde{c}_{1} & = \frac{1}{{p + \beta_{1} }}\nu \tilde{a} \\ \tilde{c}_{i} & = \frac{{\beta_{i - 1} }}{{p + \beta_{i} }}\tilde{c}_{i - 1} \\ \end{aligned}$$
(25)
where p is the transform variable and ã is the Laplace transform of a. With these results we can easily find an equation for the Laplace transform of the final product cN
$$\tilde{c}_{N} = \frac{{\prod\nolimits_{i = 1..N - 1} {\beta_{i} } }}{{\prod\nolimits_{i = 1..N} {(p + \beta_{i} )} }}\nu \tilde{a}$$
(26)
Before we calculate the inverse Laplace transform to obtain cN(t), we simplify the denominators in (26) by partial fraction decomposition with the coefficients αi
$$\begin{aligned} & \frac{{\prod\nolimits_{i = 1..N - 1} {\beta_{i} } }}{{\prod\nolimits_{i = 1..N} {(p + \beta_{i} )} }} = \sum\limits_{i} {\frac{{\alpha_{i} }}{{p + \beta_{i} }}} \\ & {\text{where}}\quad \alpha_{i} = \frac{{\prod\nolimits_{n = 1..N - 1} {\beta_{n} } }}{{\prod\nolimits_{n \ne i} {( - \beta_{i} + \beta_{n} )} }} \\ \end{aligned}$$
(27)
For a small number of delays, this simplification is most easily verified by plugging in the αi and rewriting the right-hand side of (27). For a larger number of delays, the result follows from the standard technique of partial fraction decomposition; note that (27) requires the rates βi to be pairwise distinct. We plug this result in (26) and obtain
$$\tilde{c}_{N} = \nu \tilde{a}\sum\limits_{i} {\frac{{\alpha_{i} }}{{p + \beta_{i} }}}$$
(28)
We next use the multiplication theorem for the Laplace transform and find
$$c_{N} (t) = \sum\limits_{i} {\alpha_{i} \nu \int\limits_{0}^{t} {\exp ( - \beta_{i} (t - t^{\prime}))} \cdot a(t^{\prime}){\text{d}} t^{\prime}}$$
(29)
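For a step input a(t) = 1 (gene switched on at t = 0), the convolution in (29) can be evaluated in closed form, which offers a simple consistency check against direct integration of the chain (24). A Python sketch with illustrative, pairwise distinct rates βi:

```python
# Sketch: evaluate Eq. (29) for a step input a(t) = 1 (gene switched on at
# t = 0) and compare against direct Euler integration of the chain (24).
# The rates beta_i are illustrative and must be pairwise distinct.
import math

nu = 1.0
betas = [1.0, 2.0, 3.0]
N = len(betas)

# partial-fraction coefficients alpha_i, Eq. (27)
num = math.prod(betas[:-1])
alphas = [num / math.prod(b - bi for b in betas if b != bi) for bi in betas]

def c_N_analytic(t):
    # for a(t') = 1, the convolution in Eq. (29) gives (1 - exp(-beta t))/beta
    return nu * sum(a_i * (1.0 - math.exp(-b_i * t)) / b_i
                    for a_i, b_i in zip(alphas, betas))

# reference: Euler integration of the ODE chain, Eq. (24)
c = [0.0] * N
dt = 1e-4
for _ in range(20_000):          # integrate to t = 2
    inflow = nu                  # a(t) * nu with a(t) = 1
    for i in range(N):
        dc = (inflow - betas[i] * c[i]) * dt
        inflow = betas[i] * c[i] # input to the next delay step (old value)
        c[i] += dc
```

At long times, cN approaches ν/βN, the steady state of the last delay step.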
Training of recurrent regulatory networks
We assume that steady state expression levels for a subset of species of a GRN are given. Our goal is to infer parameters characterizing the steady state behaviour of the network from this information. Importantly, since we work with steady state information, our approach is limited to the inference of equilibrium constants rather than kinetic rate constants. To infer these constants, we apply a gradient method particularly designed for recurrent network topologies [56]. Once these equilibrium parameters are inferred, the temporal evolution of the network can still be simulated.
We start by writing down the minimization problem for the objective function W(ϕ) of the difference between the current values X and the target expression levels T of the network. The objective function shall be minimized by varying the parameters of the GRN, ϕ = {λ, µ, ν, δ}.
$$\mathop {\min }\limits_{{{\varvec{\upvarphi }} > 0}} {\mkern 1mu} W({\varvec{\upvarphi }})\quad {\text{where}}\;\;\;W = \frac{1}{2}\sum\limits_{i} {\left( {x_{i} - T_{i} } \right)^{2} }$$
(30)
where we use the squared l2-distance between the target expression levels and the corresponding simulated values as loss function; the sum runs over the elements i of the training data set.
To obtain the gradient for a parameter ϕj ∈ ϕ we calculate the derivative of the objective function with respect to ϕj and obtain
$$\frac{\partial }{{\partial \varphi_{j} }}W = \sum\limits_{i} {\left( {x_{i} - T_{i} } \right)} \frac{{\partial x_{i} }}{{\partial \varphi_{j} }}$$
(31)
Using the delta rule introduced by Pineda et al. [56], we find a step size for the parameter ϕj
$$\Delta \varphi_{j} = - \eta \cdot \sum\limits_{i} {\left( {x_{i} - T_{i} } \right)} \frac{{\partial x_{i} }}{{\partial \varphi_{j} }}$$
(32)
where we use the empirical parameter η to scale the step size. In the above equation, the derivatives of xi with respect to ϕj are unknown. To derive a formula for Δϕj, we therefore work formally with a generic system of ODEs corresponding to a GRN in the following.
The time evolution of all expression levels X = {x1, …, xN} is given by the set of differential equations \({\dot{\mathbf{x}}} = {\mathbf{F}}({\mathbf{x}})\). To obtain the missing derivatives we take the total derivative with respect to ϕj of the steady state condition 0 = F(X). We obtain
$$\frac{d}{{d\varphi_{j} }}{\mathbf{F}}(\varphi_{j} ,x_{i} ) = \frac{{\partial {\mathbf{F}}}}{{\partial \varphi_{j} }} + \sum\limits_{k} {\frac{{\partial {\mathbf{F}}}}{{\partial x_{k} }}\frac{{\partial x_{k} }}{{\partial \varphi_{j} }}} = 0$$
(33)
In this equation we recognize the partial derivative of xk with respect to ϕj. Importantly, the system of equations is linear with respect to this derivative. We introduce the abbreviation
$$L_{ik} = \frac{{\partial F_{i} }}{{\partial x_{k} }}$$
(34)
With this abbreviation, we can rewrite (33) as a linear system of equations for the derivatives of xk with respect to ϕj.
$$- \frac{{\partial {\mathbf{F}}}}{{\partial \varphi_{j} }} = {\mathbf{L}}\frac{{\partial {\mathbf{x}}}}{{\partial \varphi_{j} }}$$
(35)
By introducing a new variable z, we can make solving this linear system independent of the parameter ϕj, such that we only need to solve it once to obtain the gradients for all parameters in ϕ. The variable z is defined by the equation
$${\mathbf{z}}^{{\text{T}}} = \left( {{\mathbf{x}} - {\mathbf{T}}} \right)^{{\text{T}}} {\mathbf{L}}^{{{\mathbf{ - 1}}}} \Leftrightarrow {\mathbf{L}}^{{\text{T}}} {\mathbf{z}} = \left( {{\mathbf{x}} - {\mathbf{T}}} \right)$$
(36)
We plug this equation in (32) and identify z by introducing the identity matrix. We obtain
$$\Delta \varphi_{j} = - \eta \cdot \left( {{\mathbf{x}} - {\mathbf{T}}} \right)^{{\text{T}}} {\mathbf{L}}^{ - 1} {\mathbf{L}}\frac{{\partial {\mathbf{x}}}}{{\partial \varphi_{j} }} = \eta \cdot {\mathbf{z}}^{{\text{T}}} \frac{{\partial {\mathbf{F}}}}{{\partial \varphi_{j} }}$$
(37)
To calculate this gradient using a network set up in CaiNet, we need to calculate the derivatives of the differential equations \(\partial {\mathbf{F}}/\partial \varphi_{j}\) and the derivatives \(\partial x_{k} /\partial \varphi_{j}\). Since the parameter \(\varphi_{j}\) and the species \(x_{i}\) only occur in a single network element, each network element can return these derivatives independently of other elements in the network.
For a promoter with OR-logic, the differential equation is
$$\frac{{dn_{j} }}{dt} = \left[ {1 - \prod\limits_{{i \in A_{j} }} {p_{off,i} } } \right]\nu_{j} - \delta_{j} n_{j}$$
(38)
where \(A_{j}\) is the set of all transcription factors that activate the element \(j\). We now take the derivative of the steady state equation with respect to \(\lambda_{k}\) and obtain
$$0 = \left( {\prod\limits_{i} {p_{off,i} } } \right) \cdot \nu_{j} \cdot \sum\limits_{i} {\frac{{\lambda_{i} }}{{\lambda_{i} n_{i} + \mu_{i} }}\frac{{\partial n_{i} }}{{\partial \lambda_{k} }}} - \delta_{j} \frac{{\partial n_{j} }}{{\partial \lambda_{k} }} + \nu_{j} \frac{{\mu_{i} }}{{\left( {\lambda_{i} n_{i} + \mu_{i} } \right)^{2} }}n_{i} \Delta_{ik}$$
(39)
where \(\Delta_{ik}\) is the Kronecker delta. From this equation we can identify the variables that each network element returns to calculate a gradient descent step:
$$\begin{aligned} \frac{{\partial F_{j} }}{{\partial \lambda_{k} }} & = \nu_{j} \frac{{\mu_{i} }}{{\left( {\lambda_{i} n_{i} + \mu_{i} } \right)^{2} }}n_{i} \Delta_{ik} \\ L_{ji} & = \left\{ {\begin{array}{*{20}c} {\left( {\prod\limits_{{i \in A_{j} }} {p_{off,i} } } \right) \cdot \nu_{j} \cdot \frac{{\lambda_{i} }}{{\lambda_{i} n_{i} + \mu_{i} }} - \delta_{j} \Delta_{ij} } & {i \in A_{j} } \\ { - \delta_{j} \Delta_{ij} } & {i \notin A_{j} } \\ \end{array} } \right. \\ \end{aligned}$$
(40)
For a complete gradient descent scheme, we first obtain the steady state expression levels by simulating the network with CaiNet until all elements have reached a steady state. Next, we use Eq. (37) to calculate the change of all parameters in ϕ based on the derivatives above and the difference between ground-truth expression levels and simulated expression levels. Using the new parameters, we again simulate the steady state expression levels. We proceed in this manner until a termination criterion is met. This criterion is \(W < b\), i.e. the sum of squared differences between ground-truth expression levels and the expression levels of the trained network is smaller than an upper bound \(b\).
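The scheme can be illustrated on the smallest possible example, a single constitutive gene dx/dt = F(x) = ν − δx trained to a target level T. Here L = ∂F/∂x = −δ is a scalar, so Eq. (36) reduces to z = (T − x)/δ, and ∂F/∂ν = 1. A Python sketch with illustrative values:

```python
# Minimal sketch of the steady-state gradient scheme, Eqs. (30)-(37), for a
# single constitutive gene dx/dt = F(x) = nu - delta*x trained to a target T.
# Here L = dF/dx = -delta is scalar, so L^T z = (x - T) gives
# z = (T - x)/delta, and dF/dnu = 1. All values are illustrative.

nu, delta = 0.5, 1.0    # initial synthesis rate, fixed degradation rate
T = 3.0                 # target steady-state expression level
eta = 0.5               # empirical step-size parameter
b_stop = 1e-12          # upper bound b of the termination criterion W < b

for _ in range(200):
    x = nu / delta                    # steady state of the element
    W = 0.5 * (x - T) ** 2            # objective function, Eq. (30)
    if W < b_stop:                    # termination criterion
        break
    z = (T - x) / delta               # solves L^T z = (x - T) with L = -delta
    nu += eta * z * 1.0               # Eq. (37) with dF/dnu = 1
```

After convergence, the trained synthesis rate satisfies ν/δ ≈ T.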
Chemical reactions
In the following we derive analytical solutions for the elementary reactions ‘homo-dimerization’, ‘hetero-dimerization’ and ‘transformation of a species by an enzyme’. We then use these analytical solutions to couple multiple elementary reactions to obtain more complex biochemical reaction networks.
Hetero-dimerization
We start with the hetero-dimerization of the two species \(n_{1}\) and \(n_{2}\) that forms the species \(f\). For a closed system of these three species the change in \(f\) over time is
$$\dot{f} = \lambda n_{1} n_{2} - \left( {\delta_{1} + \delta_{2} + \mu } \right)f$$
(41)
where \(\lambda\) is the association rate of the two dimerizing species and \(\mu\) is the dissociation rate of the dimer. The rates \(\delta_{1}\) and \(\delta_{2}\) correspond to the degradation of the monomers \(n_{1}\) and \(n_{2}\) respectively. The law of mass conservation yields equations for the species \(n_{1} ,n_{2}\)
$$\begin{aligned} N_{1} & = f + n_{1} \\ N_{2} & = f + n_{2} \\ \end{aligned}$$
(42)
Plugging in the law of conservation we obtain
$$\dot{f} = \lambda (N_{1} - f)(N_{2} - f) - \left( {\delta_{1} + \delta_{2} + \mu } \right)f$$
(43)
To account for coupling to other elementary systems, we introduce the flow \(j_{in}\) that represents flux of the species \(f\) from other elements into the element at hand. The new differential equation for \(f\) is
$$\dot{f} = \lambda (N_{1} - f)(N_{2} - f) - \left( {\delta_{1} + \delta_{2} + \mu } \right)f + j_{in}$$
(44)
Next, we solve this equation for \(f(t)\) for an initial value of \(f_{0}\). We start by calculating the fixed points (defined by \(\dot{f} = 0\)) of \(f\). The corresponding quadratic equation yields the fixed points \(f_{1} ,f_{2}\)
$$f_{1,2} = \frac{{N_{1} + N_{2} + K^{ - 1} \pm \sqrt {\left( {N_{1} + N_{2} + K^{ - 1} } \right)^{2} - 4\left( {N_{1} N_{2} + j_{in} \lambda^{ - 1} } \right)} }}{2}$$
(45)
where \(K = \lambda \cdot \left( {\mu + \delta_{1} + \delta_{2} } \right)^{ - 1}\). Using these fixed points, we can rewrite Eq. (44) as
$$\dot{f} = \lambda \left( {f - f_{1} } \right)\left( {f - f_{2} } \right)$$
(46)
Integration of this ODE yields
$$\begin{aligned} f\left( t \right) & = f_{1} + \left( {f_{2} - f_{1} } \right)\frac{u}{1 + u} \\ u & = \frac{{f_{0} - f_{1} }}{{f_{2} - f_{0} }}\exp ( - kt) \\ k & = \lambda \left( {f_{2} - f_{1} } \right) \\ \end{aligned}$$
(47)
Since the reaction consumes the species \(n_{1}\) and \(n_{2}\), we need to calculate the flux out of the respective monomer elements into the species \(f\). This flux is given by
$$\begin{aligned} j_{in,n1} & = \int\limits_{0}^{\Delta t} {\left[ {\lambda (N_{1} - f)(N_{2} - f) - \left( {\delta_{2} + \mu } \right)f + j_{in} } \right]} \, dt \\ j_{in,n2} & = \int\limits_{0}^{\Delta t} {\left[ {\lambda (N_{1} - f)(N_{2} - f) - \left( {\delta_{1} + \mu } \right)f + j_{in} } \right]} \, dt \\ \end{aligned}$$
(48)
During each synchronization time-step, these fluxes are reported to the network element representing the corresponding species.
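The closed-form relaxation (47) can be checked against direct numerical integration of (44). The following Python sketch uses illustrative rates and sets the coupling flux to zero:

```python
# Sketch: closed-form relaxation of the hetero-dimer, Eqs. (45)-(47), checked
# against direct Euler integration of Eq. (44). Rates are illustrative and
# the coupling flux is set to j_in = 0 for simplicity.
import math

lam, mu = 0.1, 1.0           # association / dissociation rate
d1, d2 = 0.05, 0.05          # monomer degradation rates
N1, N2, j_in = 10.0, 8.0, 0.0
f0 = 0.0                     # initial dimer number

S = N1 + N2 + (mu + d1 + d2) / lam               # N1 + N2 + K^{-1}
disc = math.sqrt(S * S - 4.0 * (N1 * N2 + j_in / lam))
f1 = 0.5 * (S + disc)                            # '+' fixed point
f2 = 0.5 * (S - disc)                            # '-' fixed point (long-time limit)
k = lam * (f2 - f1)

def f_closed(t):                                 # Eq. (47)
    u = (f0 - f1) / (f2 - f0) * math.exp(-k * t)
    return f1 + (f2 - f1) * u / (1.0 + u)

# reference: Euler integration of Eq. (44)
f, dt = f0, 1e-4
for _ in range(10_000):                          # integrate to t = 1
    f += (lam * (N1 - f) * (N2 - f) - (d1 + d2 + mu) * f + j_in) * dt
```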
Homo-dimerization
The differential equations for a homo-dimerization are
$$\dot{f} = \lambda (N - 2f)^{2} - \left( {2\delta_{1} + \mu } \right)f + j_{in}$$
(49)
where \(\lambda\) is the association rate of the two dimerizing monomers and \(\mu\) is the dissociation rate of the dimer. The rate \(\delta_{1}\) corresponds to the degradation of the monomer. The shape of this equation is similar to the heterodimer case. Thus, by substituting
$$\begin{aligned} \lambda & \to 4\lambda \\ N_{1} & = N_{2} = N/2 \\ j_{in} & \to 2j_{in} \\ \end{aligned}$$
(50)
we can reuse our solution (47) to calculate the number of dimers and (48) to calculate the flux out of the monomer into the homodimer. During each synchronization time-step, this flux is reported to the monomer element.
Enzyme kinetics
If one of the two dimerizing species is an enzyme, the substrate can be subject to a reaction catalysed by the enzyme, which produces the species \(m\).
$$\dot{f} = \lambda (N_{1} - f)(N_{2} - f) - \left( {\delta_{1} + \delta_{2} + \nu + \mu } \right)f$$
(51)
$$\dot{m} = \nu f - \delta_{3} m + j_{in}$$
(52)
where \(\lambda\) is the association rate of substrate to enzyme and \(\mu\) is the dissociation rate of the substrate-enzyme complex. The rates \(\delta_{1}\), \(\delta_{2}\) and \(\delta_{3}\) correspond to the degradation of the substrate \(n_{1}\), degradation of the enzyme \(n_{2}\) and degradation of the product respectively. The rate \(\nu\) is the transformation rate of substrate to product. The shape of this equation is similar to the heterodimer case. Thus, by setting
$$\delta_{1} \to \delta_{1} + \nu$$
(53)
we can reuse our solution (47) to calculate the number of enzyme–substrate–complexes and (48) to calculate the flux out of substrate and enzyme into the enzyme–substrate–complex. To obtain the amount of product, we integrate (52) and obtain
$$m(\Delta t) = m_{0} \exp \left( { - \delta_{3} \Delta t} \right) + \frac{{\nu \overline{f} + j_{in} }}{{\delta_{3} }}\left[ {1 - \exp \left( { - \delta_{3} \Delta t} \right)} \right]$$
(54)
where
$$\overline{f} = \frac{1}{\Delta t}\int\limits_{0}^{\Delta t} {f(t)\,dt} = f_{1} + \frac{1}{\lambda \Delta t}\log \left( {\frac{{1 + u(t = 0)}}{{1 + u(t = 0)\exp ( - k\Delta t)}}} \right)$$
(55)
We arrived at this result by reusing our solution (47). During each synchronization time-step, m(Δt) is reported to all other elements in the network.
Simulation of the negative feedback cascade
Gillespie simulation
All genes in the cascade have three promoter states, since the promoter structure is not modelled with an effective two-state model as in the case of the CaiNet simulation. In the second state, the promoter is empty. Binding of a transcription repressor takes the promoter to the first state, which can only be left upon unbinding of the repressor. The third state is entered upon binding of a transcription activator
$${\text{Repressed}} \rightleftharpoons {\text{Empty}} \rightleftharpoons {\text{Active}} \to {\text{TF}} \to \emptyset$$
(56)
This state model leads to the following system of stochastic reactions. We consider the n-th link of the chain. \(P_{n}\) is then transcriptionally repressed by the gene product of the (n − 1)-th link. The n-th link of the chain produces the transcription repressor \(R_{n}\)
$$\begin{aligned} & {\text{P}}_{n} + {\text{A}}\mathop{\longrightarrow}\limits^{{\lambda_{A} }}{\text{P}}_{n} - {\text{A}} \\ & {\text{P}}_{n} - {\text{A}}\mathop{\longrightarrow}\limits^{{\mu_{A} }}{\text{P}}_{n} + {\text{A}} \\ & {\text{P}}_{n} + R_{n - 1} \mathop{\longrightarrow}\limits^{{\lambda_{R} }}{\text{P}}_{n} - R_{n - 1} \\ & {\text{P}}_{n} - R_{n - 1} \mathop{\longrightarrow}\limits^{{\mu_{R} }}{\text{P}}_{n} + R_{n - 1} \\ & {\text{P}}_{n} - {\text{A}}\mathop{\longrightarrow}\limits^{\nu }{\text{P}}_{n} - {\text{A}} + {\text{R}}_{n} \\ & {\text{R}}_{n} \mathop{\longrightarrow}\limits^{\delta }\emptyset \\ \end{aligned}$$
(57)
We implemented this set of reactions for \(n = 1,2,3,4\).
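A minimal direct Gillespie simulation of one chain link can be sketched as follows. The upstream repressor copy number is held fixed, and all rates are illustrative; this is not the implementation used for the paper:

```python
# Sketch of a direct Gillespie simulation of one chain link, reaction scheme
# (57), with the upstream repressor copy number R_{n-1} held fixed. Rates and
# copy numbers are illustrative; this is not the implementation of the paper.
import random

random.seed(1)
lam_A, mu_A = 1.0, 0.1     # activator binding / unbinding
lam_R, mu_R = 0.5, 0.2     # repressor binding / unbinding
nu, delta = 10.0, 1.0      # production and degradation of R_n
R_prev = 2                 # fixed copy number of the upstream repressor

state, R = 'empty', 0      # promoter state and product copy number
t, t_end, acc = 0.0, 200.0, 0.0
while t < t_end:
    # propensities of all reactions possible in the current state
    if state == 'empty':
        props = {'bindA': lam_A, 'bindR': lam_R * R_prev}
    elif state == 'active':
        props = {'unbindA': mu_A, 'produce': nu}
    else:                  # 'repressed'
        props = {'unbindR': mu_R}
    props['degrade'] = delta * R
    a0 = sum(props.values())
    tau = random.expovariate(a0)          # time to the next reaction
    acc += R * min(tau, t_end - t)        # accumulate for the time-average of R
    t += tau
    if t >= t_end:
        break
    r = random.uniform(0.0, a0)           # choose a reaction ~ its propensity
    for name, a in props.items():
        r -= a
        if r <= 0.0:
            break
    if name == 'bindA':
        state = 'active'
    elif name == 'bindR':
        state = 'repressed'
    elif name in ('unbindA', 'unbindR'):
        state = 'empty'
    elif name == 'produce':
        R += 1
    else:                                 # 'degrade' (guard against R < 0)
        R = max(0, R - 1)

mean_R = acc / t_end
```

The time-averaged product level mean_R fluctuates around the deterministic expectation set by the stationary promoter occupancies.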
System of ODEs
For the system of ODEs we do not explicitly simulate the association and dissociation of transcription factors to the promoters. Rather, we give the on-probability of the gene. The state model (56) corresponds to a competitive repression model of a promoter and we can give the on-probability of the n-th link of the chain using (18)
$$p_{on,n} = \frac{{\lambda_{a} p_{off,n} }}{{\lambda_{a} p_{off,n} + \mu_{a} }}\quad {\text{where}}\;\;\;p_{off,n} = \frac{{\mu_{R} }}{{\lambda_{R} R_{n - 1} + \mu_{R} }}$$
(58)
With this on-probability we can give the ODE for the gene product
$$\frac{{dR_{n} }}{dt} = p_{on,n} \nu_{n} - \delta_{n} R_{n}$$
(59)
We implemented this set of reactions for \(n = 1,2,3,4\). For solving the system of differential equations we used the ode45 solver of Matlab 2019a.
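The deterministic system (58)–(59) can be sketched in Python with a fixed-step Euler scheme (the paper uses Matlab's ode45; rates here are illustrative, and the first link is assumed to be unrepressed):

```python
# Deterministic sketch of the four-link repression cascade, Eqs. (58)-(59),
# integrated with a fixed-step Euler scheme (the paper uses Matlab's ode45).
# Rates are illustrative and the first link is assumed to be unrepressed.

lam_a, mu_a = 1.0, 0.1         # activator on/off rates (all links)
lam_R, mu_R = 0.5, 0.2         # repressor on/off rates (all links)
nu, delta = 10.0, 1.0          # production / degradation (all links)

R = [0.0] * 4                  # gene products R_1..R_4
dt = 1e-3
for _ in range(50_000):        # integrate to t = 50
    dR = []
    for n in range(4):
        R_prev = R[n - 1] if n > 0 else 0.0            # link 1: no repressor
        p_off = mu_R / (lam_R * R_prev + mu_R)         # repressor-free probability
        p_on = lam_a * p_off / (lam_a * p_off + mu_a)  # Eq. (58)
        dR.append(p_on * nu - delta * R[n])            # Eq. (59)
    for n in range(4):
        R[n] += dR[n] * dt
```

At steady state, the expression levels alternate along the chain, since each link partially represses the next one.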
Noise-induced bi-stability and oscillations
The positive feedback loop
We simulated a gene that is activated by its own gene product A to transcribe RNA. The RNA is translated to yield the transcription factor, which can be degraded spontaneously or by an enzyme E.
$$\begin{aligned} & {\text{P}} + {\text{A}}\mathop{\longrightarrow}\limits^{{\lambda_{A} }}{\text{P}} - {\text{A}} \\ & {\text{P}} - {\text{A}}\mathop{\longrightarrow}\limits^{{\mu_{A} }}{\text{P}} + {\text{A}} \\ & {\text{P}} - {\text{A}}\mathop{\longrightarrow}\limits^{\nu }{\text{P}} - {\text{A}} + {\text{RNA}}_{A} \\ & {\text{RNA}}_{A} \mathop{\longrightarrow}\limits^{\beta }A \\ & E + A\mathop{\longrightarrow}\limits^{{\lambda_{E} }}EA \\ & EA\mathop{\longrightarrow}\limits^{{\mu_{E} }}E + A \\ & EA\mathop{\longrightarrow}\limits^{{\nu_{E} }}E \\ & A\mathop{\longrightarrow}\limits^{\delta }\emptyset \\ \end{aligned}$$
(60)
The simulation used the rate constants given in Additional file 3: Table S3.
The negative feedback loop
We simulated a gene that is repressed by its own gene product R. If R is absent, RNA is transcribed and further translated to yield the transcription repressor. The repressor can be degraded spontaneously or by an enzyme E.
$$\begin{aligned} & {\text{P}} + {\text{R}}\mathop{\longrightarrow}\limits^{{\lambda_{R} }}{\text{P}} - {\text{R}} \\ & {\text{P}} - {\text{R}}\mathop{\longrightarrow}\limits^{{\mu_{R} }}{\text{P}} + {\text{R}} \\ & {\text{P}}\mathop{\longrightarrow}\limits^{\nu }{\text{P}} + {\text{RNA}}_{R} \\ & {\text{RNA}}_{R} \mathop{\longrightarrow}\limits^{\beta }R \\ & E + R\mathop{\longrightarrow}\limits^{{\lambda_{E} }}ER \\ & ER\mathop{\longrightarrow}\limits^{{\mu_{E} }}E + R \\ & ER\mathop{\longrightarrow}\limits^{{\nu_{E} }}E \\ & R\mathop{\longrightarrow}\limits^{\delta }\emptyset \\ \end{aligned}$$
(61)
The simulation used the rate constants given in Additional file 3: Table S3.