 Research article
 Open Access
Improvement of the memory function of a mutual repression network in a stochastic environment by negative autoregulation
BMC Bioinformatics volume 20, Article number: 734 (2019)
Abstract
Background
Cellular memory is a ubiquitous function of biological systems. By generating a sustained response to a transient inductive stimulus, often due to bistability, memory is central to the robust control of many important biological processes. However, our understanding of the origins of cellular memory remains incomplete. Stochastic fluctuations that are inherent to most biological systems have been shown to hamper memory function. Yet, how stochasticity changes the behavior of genetic circuits is generally not clear from a deterministic analysis of the network alone. Here, we apply deterministic rate equations, stochastic simulations, and theoretical analyses of Fokker-Planck equations to investigate how intrinsic noise affects the memory function in a mutual repression network.
Results
We find that the addition of negative autoregulation improves the persistence of memory in a small gene regulatory network by reducing stochastic fluctuations. Our theoretical analyses reveal that this improved memory function stems from an increased stability of the steady states of the system. Moreover, we show how the tuning of critical network parameters can further enhance memory.
Conclusions
Our work illuminates the power of stochastic and theoretical approaches to understanding biological circuits, and the importance of considering stochasticity when designing synthetic circuits with memory function.
Background
Memory is ubiquitous in biological systems [1,2,3,4]. Characterized by a continued response to a transient stimulus [5], cellular memory has been found to aid in the robust control of diverse biological functions such as synaptic plasticity [6], differentiation [7, 8], cell cycle transition [9], and gene regulation [10]. Memory in cellular circuitry is tightly linked to bistability, i.e. the presence of two stable steady states [11, 12]. Specifically, memory is achieved when an input signal evokes a switch to an alternative steady state where the system remains over time, even after the input signal has disappeared [11, 12].
Our general understanding of the design principles of biological networks has improved dramatically in recent years [13]. Feedback loops as elementary components provide common control mechanisms of cellular networks [14]. For instance, negative feedback can facilitate adaptation and oscillation [15], while positive feedback loops, which play pivotal roles in cellular signaling [16], often promote signal amplification, bistable switches [17] and memory [11]. Indeed, a common circuit architecture that is known to give rise to memory is based on interlinked positive feedback loops [18, 19]. Several experimental and theoretical analyses of such network architectures that can give rise to memory have been reported [7, 12, 20,21,22,23,24,25]. These studies have revealed many general properties underlying successful memory, for instance that ultrasensitivity may be sufficient to generate two stable states [26], and that the transition periods between the bistable states constitute an important characteristic of a cellular memory module [19]. In all cases, bistability and memory can convert a transient input signal into a continued cellular response [27, 28]. This property is directly visible from the characteristic hysteresis behavior observed for bistable memory systems. Experimental examples confirm these findings, such as the presence of hysteresis and bistability in a synthetic genetic circuit in Escherichia coli [18]. Based on these general principles that link network architecture to systems behavior, the blossoming field of synthetic biology has contributed many useful designs of genetic circuits based on a quantitative understanding of biological networks [29,30,31], with myriad implications for biotechnology, biocomputing and gene therapy [13, 26, 32]. However, the rational design of a robust memory function remains difficult because memory can be implemented via many alternative mechanisms that remain incompletely understood [21, 29].
Importantly, the vast majority of genes in a cell are normally expressed only at very low levels. Due to the inherent randomness of transcription and translation, this gives rise to substantial stochastic fluctuations in the cellular mRNA and protein copy numbers of lowly expressed genes [33,34,35,36,37,38,39,40]. Accordingly, the analysis and design of biological circuits cannot be based on deterministic properties of the network topology alone. In response, much progress has been made in better understanding the role of stochasticity in network function [41,42,43,44,45]. For instance, careful work has delineated the sources of intrinsic noise in small transcriptional networks [46] and how noise may propagate through network architectures [39, 41], as well as how feedback loops regulate bistability [42, 47] and intrinsic noise [48]. Similarly, extensive work has established general principles underlying the design of genetic circuits that exhibit bistability and memory [11, 42,43,44,45].
Recent examples of opposing behavior further highlight the importance of considering the contribution of stochasticity to cellular circuitry. On the one hand, it has been shown that noise can induce multimodality and even stochastic memory in a system that, according to a deterministic description, lacks bistability [49, 50]. On the other hand, stochastic fluctuations in gene expression levels often reduce or disrupt the memory function of biological networks [21, 51, 52]. Equally importantly, seminal technical and conceptual advances have made strong progress in the efficient and accurate approximation of multivariate nonlinear stochastic systems [53, 54].
Here, we investigate how a negative autoregulation network architecture can improve the sustained memory function of a mutual repression network in a stochastic environment. Our work extends the previously published regulated mutual repression network (MRN) [12] to the regulated mutual repression network with negative autoregulation (MRNNA) to investigate the effect of a negative autoregulation loop on memory function. The network architecture is characterized by a mutual repression cycle that can give rise to bistability and memory, adopted from two well-characterized mutual repression networks: the system consisting of LacI and TetR in E. coli that displays a bistable gene expression memory module [24], and the mutual repression of the two repressors cI and Cro that yields a bistable memory module in the bacteriophage λ [21].
To explore the memory behavior of the MRNNA network model and compare it to the MRN model, deterministic rate equations, stochastic simulations and theoretical analyses of Fokker-Planck equations were employed to identify principles of the robustness of the memory function. We demonstrate how negative autoregulation can reduce intrinsic noise and thus improve the memory function by increasing the stability of the steady states. Negative autoregulation can, for instance, be achieved by the ability of proteins to bind and sequester their own mRNA. Systematic analyses of successful memory as a function of central model parameters that describe the mutual repression cycle highlight principles and limits of the memory function in these mutual repression networks. Moreover, our work emphasizes the importance of considering stochasticity when designing synthetic circuits with memory function.
Results
To investigate the effect of negative autoregulation and stochasticity on network memory function, we constructed a model network following a previously established graphical notation [55, 56] (Fig. 1). Specifically, we modelled a mutual repression network with negative autoregulation (MRNNA) that extends our previously published mutual repression network (MRN) [12]. The network consists of proteins y(1), y(2) and y(3). An input signal S induces the synthesis of y(1), which activates and represses the syntheses of y(2) and y(3), respectively. The syntheses of y(2) and y(3) are mutually repressed with cooperativity. Moreover, negative autoregulation controls the synthesis reactions of y(2) and y(3) (Fig. 1). The addition of negative autoregulation makes it possible to investigate how negative feedback loops may affect memory. While modulation of several parameter values should tune the presence of memory, the present analysis focuses on a direct back-to-back comparison of the two network architectures, which differ only in the addition of negative autoregulation circuits. To compare the memory regions of the MRNNA and MRN networks across our deterministic, stochastic, and theoretical analyses, the values of the corresponding kinetic parameters as well as the steady-state levels of y(2) and y(3) were conserved as much as possible in discrete parameter space [12] (Table 1; Additional file 1: Texts S1, S2) (see Methods).
MRNNA model exhibits robust memory
Similar to our previously reported results on the memory function of the MRN [12], the MRNNA network exhibits robust memory functionality that depends on strong cooperativity in the mutual repression cycle, as expressed by high Hill coefficients n. As expected, the deterministic solution to the time evolution of the MRNNA model showed persistent memory after the end of the period during which the signal S was applied, for both Hill coefficients n = 7 and n = 8 (Fig. 2a). Strong hysteresis was observed for both y(2) and y(3) levels as a function of increasing and decreasing S (Fig. 2b, c; Additional file 1: Figure S1). In the deterministic case, bistability is sufficient to generate memory. Stochastic simulations of the MRNNA model as described by the birth and death processes of Eqs. 1–3 revealed strong and persistent memory for both Hill coefficients n = 7 (Fig. 2d) and n = 8 (Fig. 2e). Importantly, the MRN alone could not sustain memory for either y(2) or y(3) at n = 7 in a stochastic context [12], but the MRNNA yielded strong and sustained memory due to the addition of negative feedback loops (Fig. 2d; Additional file 1: Figures S2, S3).
To further understand how the addition of negative autoregulation increases the robustness of the memory function, we sought to quantify and compare the intrinsic noise in the protein levels of y(2) and y(3) in the MRN and MRNNA models. It is well established that noise or a stochastic perturbation can flip gene expression levels from one steady state to the other [40, 46]. To obtain a reliable and comparable estimate, we computed the coefficients of variation (CVs) of y(2) and y(3) during the active signal S period; the interval from simulation step 270 to 500 was chosen, omitting the first 20 simulation steps of the signal period to allow the system to respond to the stimulus. Our analysis of noise in the protein levels y(2) and y(3) as a function of the dissociation constants K(2) = K(4) confirmed a systematic reduction of stochasticity in y(2) and y(3) levels upon the addition of negative autoregulation (Fig. 2f). While maintaining the same steady-state levels, noise was dramatically reduced for y(3) (Fig. 2f). Of note, the lower steady state of y(3) is more susceptible to noise. Similarly, intrinsic fluctuations in y(2) during the signal period were reduced in the MRNNA compared to the MRN model in our stochastic simulations, irrespective of the dissociation constants tested (Fig. 2f). Even more important to the stability of the steady states are the intrinsic fluctuations after the end of the signal period. The levels of intrinsic noise are then overall slightly higher, likely because the signal no longer stabilizes gene expression levels. However, the corresponding analysis reveals that, as expected, negative autoregulation also reduces variability in the system after the stop of the signal (Fig. 2g; Additional file 1: Figure S4). Taken together, our results suggest that negative autoregulation plays a vital role in increasing the persistence of stochastic memory by reducing intrinsic noise.
Negative autoregulation enhances the memory region
Our analyses show that the MRNNA model can exhibit memory, which however critically depends on persistent mutual repression with strong cooperativity. We next set out to systematically map its memory region as a function of the Hill coefficient and the dissociation constants K(2) = K(4), in comparison to the previously reported MRN [12] (Fig. 3). Successful memory was defined as sustained levels of y(2) and y(3), at or near their levels during the applied signal S, from the end of the input signal at simulation step 500 until the end of the simulations at simulation step 1000. Moreover, for direct comparison, the kinetic parameter k(7) was adjusted to conserve the steady-state levels between the two models [12] (Additional file 1: Text S1). Indeed, the high steady-state levels of both models were set to the same expression values, while the low steady-state levels were tuned to be as similar as possible. Memory was assessed in both the deterministic and the stochastic context (see Methods). Accordingly, two memory regions were defined: deterministic memory and stochastic memory (Fig. 3).
In both the MRN and MRNNA models, deterministic memory could be observed for Hill coefficients of n = 4 or greater (Fig. 3). In contrast, stochastic memory not only required higher Hill coefficients, but also posed different limits on the MRN and MRNNA models. Without negative autoregulation to suppress noise, stochastic memory in the MRN model required a Hill coefficient of n = 8 or greater (Fig. 3). In the presence of negative autoregulation that reduces noise in protein levels, stochastic memory in the MRNNA model could be observed from n = 7 (Fig. 3). Thus, negative autoregulation relaxes the need for strong cooperativity to generate stochastic memory. Furthermore, the stochastic memory region for the MRNNA model was found to be larger than for the MRN model (Fig. 3, red area), indicating that the negative autoregulation of the MRNNA enhances memory function by reducing intrinsic noise (Fig. 2f, g). While the deterministic memory regions in both networks are essentially the same (Fig. 3, green area), it was clearly more difficult to maintain a memory effect under noise in the MRN than in the MRNNA (Fig. 3, red area). The large discrepancy between the deterministic and stochastic memory areas indicates the general challenge of maintaining memory function in the context of stochasticity.
Stochastic potential of the MRNNA
A prerequisite for memory is normally the presence of bistability in the system. Thus, a limited bistable regime could explain why such high levels of cooperativity in the mutual repression between y(2) and y(3) were required to yield memory. To map out the bistable regimes of the MRNNA in comparison to the MRN [12], theoretical analyses based on the chemical master equations were employed to support the numerical simulation results.
First, the corresponding Fokker-Planck equations (Eqs. 9–13) describing the time evolution of the probability densities for y(2) and y(3) levels were derived from the three rate equations (Eqs. 1–3) (see Methods). We made use of the quasi-steady-state assumption to derive a one-variable equation for each system [12] (Additional file 1: Text S2). While there are inherent limitations to it [51], the quasi-steady-state assumption is widely used [46]. Since we focus on the steady state at t → ∞, applying the quasi-steady-state assumption to y(2) or y(3) is mathematically reasonable. Using the Fokker-Planck equations, we next estimated the two-dimensional stochastic bistable regions, characterized by the presence of a double-well potential, for the MRN and MRNNA models as a function of the Hill coefficient and the dissociation constants [12] (Fig. 4; Additional file 1: Figure S5; Texts S1–3).
In these theoretical analyses, both the MRN and the MRNNA models permitted bistability starting from a Hill coefficient of n = 3 for both y(2) and y(3) (Fig. 4). However, the bistable regions in the MRNNA were larger than those in the MRN model (Fig. 4). To display bistability in a stochastic context, the MRN model required dissociation constants K(2) = K(4) within a more constrained parameter space (Fig. 4). These analyses established that the MRNNA more readily exhibits stochastic bistability than the MRN. Conversely, a limited bistability regime in the MRN in a stochastic context (Fig. 4) contributes to explaining why this network had a smaller stochastic memory region (Fig. 3).
Mean first-passage time of the MRNNA model
Having established that the presence of memory in the MRNNA model depends on both bistability and robustness of the steady states to stochasticity, we next sought to further explore the origins and limits of memory functionality in the context of fluctuating protein levels. To this end, the stability of a steady state of a stochastic system can be estimated by the mean first-passage time (MFPT). The MFPT quantifies the average number of simulation steps it takes for a system to leave a favorable steady state due to random events. As such, the MFPT provides a useful description of the timescale on which a phase transition is likely to happen [57,58,59]. Thus, because the presence of a stochastic bistable region only indicates the bimodality of gene expression as a prerequisite for memory but not the presence of memory itself, an MFPT analysis can be helpful for identifying the precise conditions under which memory can be attained in a stochastic context.
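In a purely simulation-based setting, the MFPT can also be estimated empirically by averaging, over repeated stochastic runs, the number of steps until a trajectory first crosses the barrier between the two wells. A minimal sketch, assuming a generic one-step update function; the step function, starting state, and barrier position are placeholders, not quantities from the model:

```python
import random

def mean_first_passage_time(step_fn, y_start, threshold, n_runs=100,
                            max_steps=100000):
    """Empirical MFPT: mean number of simulation steps before a trajectory
    started at y_start first crosses `threshold` (the barrier separating
    the two wells). `step_fn(y, rng)` advances the state by one stochastic
    step. Illustrative only; the paper derives MFPTs from the Fokker-Planck
    description instead."""
    started_below = y_start < threshold
    times = []
    for run in range(n_runs):
        rng = random.Random(run)  # reproducible, independent runs
        y, step = y_start, 0
        while step < max_steps:
            y = step_fn(y, rng)
            step += 1
            if (y < threshold) != started_below:  # crossed the barrier
                break
        times.append(step)
    return sum(times) / len(times)
```

Longer MFPTs for both wells then translate directly into longer residence times at the steady states, i.e. more persistent memory.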
Here, MFPT analyses were performed between the two stable steady states under the quasi-steady-state assumption [12] (Additional file 1: Figure S5) (see Methods). Specifically, we calculated the MFPTs of y(2) and y(3) for the MRNNA model in comparison to the MRN [12] (Fig. 5); the lower and upper steady states of y(2) and y(3) are denoted as \( y{(2)}_l^{st} \), \( y{(2)}_u^{st} \), \( y{(3)}_l^{st} \) and \( y{(3)}_u^{st} \), respectively. The MFPTs of leaving the lower (\( T_L \)) and upper (\( T_U \)) steady states were calculated as a function of the Hill coefficient and the dissociation constants in the mutual repression cycle for y(2) and y(3).
Due to the asymmetry in both the MRN and MRNNA models (Table 2), the MFPTs for y(2) and y(3) showed opposing behavior, as expected (Fig. 5). While the MFPT for y(2) to leave the lower steady state, \( {T}_L\left(y{(2)}_l^{st}\right) \), is consistently longer than the corresponding time to leave the upper steady state, \( {T}_U\left(y{(2)}_u^{st}\right) \), the system is more likely to leave the lower steady state of y(3), as indicated by \( {T}_L\left(y{(3)}_l^{st}\right) \) being much smaller than \( {T}_U\left(y{(3)}_u^{st}\right) \) (Fig. 5).
An increase in the Hill coefficient as a measure of cooperativity exacerbated this trend observed for the MFPTs of y(2) and y(3) (Fig. 5a). The prolonged MFPTs resulting from increasing Hill coefficients explain why the robustness of sustained stochastic memory improved with increasing cooperativity. For the parameter space explored, the lower steady state of y(2) was more persistent in both models for all Hill coefficients, as evidenced by \( {T}_L\left(y{(2)}_l^{st}\right) \) > \( {T}_U\left(y{(2)}_u^{st}\right) \) (Fig. 5a). Moreover, all MFPTs for the MRNNA, \( {T}_L\left(y{(2)}_l^{st}\right) \), \( {T}_U\left(y{(2)}_u^{st}\right) \), \( {T}_L\left(y{(3)}_l^{st}\right) \), and \( {T}_U\left(y{(3)}_u^{st}\right) \), were observed to be longer than their equivalents for the MRN model, for any Hill coefficient (Fig. 5a). These results confirm an increased persistence of the steady states upon introduction of negative autoregulation in the MRNNA compared to the MRN, which explains the improved memory function of the MRNNA system. Specifically, the residence times at the upper and lower steady states of y(2) and y(3) were extended, achieving persistent memory after the stop of the signal.
Next, we investigated the effect of the dissociation constants in the mutual repression cycle on the MFPTs of y(2) and y(3) (Fig. 5b). Consistently, the MFPTs for all steady states, \( {T}_L\left(y{(2)}_l^{st}\right) \), \( {T}_L\left(y{(3)}_l^{st}\right) \), \( {T}_U\left(y{(2)}_u^{st}\right) \) and \( {T}_U\left(y{(3)}_u^{st}\right) \), were found to be longer in the MRNNA than in the MRN for all dissociation constants tested (Fig. 5b). For both models, all MFPTs, \( {T}_L\left(y{(2)}_l^{st}\right) \), \( {T}_U\left(y{(2)}_u^{st}\right) \), \( {T}_L\left(y{(3)}_l^{st}\right) \) and \( {T}_U\left(y{(3)}_u^{st}\right) \), gradually decreased with an increase in the dissociation constant, suggesting that strong binding in the mutual repression cycle is necessary for persistent memory. We also studied the MFPTs for the MRNNA as a function of the negative autoregulation constants k(9) = k(10) (Fig. 5c). The MFPTs for the MRNNA at all four steady states were found to increase with increasing strength of the negative autoregulation. In summary, these MFPT analyses illuminated in detail how memory in the mutual repression networks improves with increasing stability of the steady states as a function of cooperativity or binding strength.
Probability densities of the steadystate levels in the MRNNA
The previous analyses revealed a strong dependency of sustained memory on the robustness of the steady states. To formally establish the probability densities associated with populating the upper and lower steady states of y(2) and y(3), we calculated the probability densities from the Fokker-Planck equations (Eq. 15) as a function of the Hill coefficient and the dissociation constants (Fig. 6). The probability density of the upper steady state of y(2) increased while that of the corresponding lower steady state decreased with an increase in the Hill coefficient for both the MRN and the MRNNA models (Fig. 6a). The addition of negative autoregulation in the MRNNA system visibly made the upper steady-state level of y(2) more dominant (Fig. 6a).
The probability density of y(3) clearly illustrated the evolution of bistability and the subsequently increased probability densities of the lower steady states with increasing Hill coefficient (Fig. 6b). At a low Hill coefficient (n = 3), the MRN lacked a pronounced probability density of a lower steady state (Fig. 6b, solid blue line), while in the MRNNA the probability density did not separate well the peaks at the lower and upper levels of y(3) (Fig. 6b, dashed blue line). This weak bistability was consistent with our previous analysis of the bistability regions (Fig. 4). In turn, higher Hill coefficients of n = 8 and n = 12 promoted first the emergence and then the separation of two clearly populated steady states (Fig. 6b). While the upper steady state of y(3) was dominant at lower Hill coefficients, the system switched to a dominant lower steady state of y(3) at n = 12 (Fig. 6b). Moreover, the addition of negative autoregulation strongly promoted the population of the lower steady state at all Hill coefficients analyzed (Fig. 6b, dashed lines). This analysis thus confirmed that, by buffering intrinsic noise, the negative feedback loops particularly stabilized the low expression states of y(3), i.e. those that are most susceptible to fluctuations.
Similarly, the probability densities of y(2) and y(3) were found to depend quantitatively and qualitatively on the dissociation constants K(2) and K(4) in the mutual repression cycle (Fig. 6c, d). At low K(2) = K(4) = 15, the upper steady state of y(2) was dominant in both the MRN and MRNNA models, with a slightly higher probability density for the MRNNA model (Fig. 6c). In general, with decreasing values of K(2) and K(4), the upper steady states became more pronounced in both models, and even more so in the MRNNA (Fig. 6c). In turn, the probability density of the lower steady state of y(3) became dominant with decreasing dissociation constants in both models (Fig. 6d). Moreover, lower values of K(2) and K(4) as well as negative autoregulation favored more pronounced bistability (Fig. 6d). The MRNNA model increased the probability densities of the upper steady state of y(2) and the lower steady state of y(3) more than the MRN model, which supports the persistent memory of these states after the stop of the signal (Fig. 6d). Taken together, these results provide a theoretical basis for our observation that the MRNNA model displays an improved memory function in a stochastic context.
Consistency between stochastic simulation and the Fokker-Planck equations
To verify the quasi-steady-state assumption for our case, we validated that the solutions to the Fokker-Planck equations were consistent with the probability densities obtained by stochastic simulation of the full system (Figs. 7 and 8). The Fokker-Planck equations provided almost the same probability density as the Gillespie stochastic simulation for initial molecule numbers of y(1) = y(2) = y(3) = 1 (Figs. 7 and 8). The main limitation of the quasi-steady-state assumption is an overestimation of the separation of the steady states of y(2). However, in the MRNNA, y(3) is more susceptible to fluctuations and is the driver of switching between steady states. Consequently, the results obtained under the quasi-steady-state assumption are meaningful for investigating the memory function in these mutual repression networks.
Discussion
By applying numerical integration of the rate equations, stochastic simulations and theoretical analysis of the Fokker-Planck equations, we investigated whether negative autoregulation can improve the memory function in a mutual repression network. Previous work had established the regulated mutual activation network (MAN) and mutual repression network (MRN) as good model systems to study fundamental properties of cellular memory [12]. Here, we have systematically decoupled the contributions of additional negative autoregulation (MRNNA) to the persistence of the memory functionality. In general and as expected, stochasticity decreased the memory function in all models. The addition of negative autoregulation in the MRNNA, however, extended robust and persistent memory in both the deterministic and stochastic approaches to lower levels of cooperativity in the mutual repression cycle. Our results thus suggest that, in addition to the added negative autoregulation, the robustness of the stochastic memory of the MRNNA could be further improved by increasing the binding strength of the repressor proteins.
In the present work, select sets of kinetic parameters that can give rise to memory were used instead of exhaustive searches for alternative parameter combinations. Despite accurately simulating the time evolution of protein concentrations in stochastic systems, the Gillespie algorithm is of limited use for characterizing sustained memory in more complex networks because its computational cost can quickly become prohibitive. To generalize some of our findings, phase diagrams of the memory region were derived as a function of the two most critical, previously identified parameters. However, and as evident from the small stochastic memory regions, finely tuned parameter combinations were necessary to achieve robust memory. This observation highlights both a biological challenge of implementing a robust memory circuit and a theoretical challenge in rationalizing its mechanistic basis through exhaustive parameter perturbation.
Theoretical approaches that can complement these shortcomings and make the exploration of larger parameter ranges accessible often rely on additional assumptions. Notably, the quasi-steady-state assumption of analyzing steady states separately is widely used, performs well in general, and is reasonably justified in the presented work. Seminal advances have provided approximation methods that start to render even multivariate, nonlinear chemical master equations and related Fokker-Planck equations amenable to theoretical analyses. For instance, the recently introduced linear-mapping approximation converts a nonlinear system to a linear problem via a mean-field approximation [54]. This and related techniques will vastly improve accuracy and computational efficiency for future analyses of stochasticity in more complex networks.
Biology has certainly evolved more complex architectures, but in most cases the trade-offs between network complexity and function remain poorly understood. For instance, while the MAN network was found to yield robust memory based on mutual activation [12], comparable results for the mutual repression network could only be obtained after the addition of negative autoregulation, i.e. by a more complex network architecture. More complex network architectures are often more expensive to maintain, but may yield advantages in robustness and dynamic control. To this end, several related cellular networks of increased complexity have been discovered and await further characterization. For instance, a Notch-Delta mutual repression network serves to communicate between neighboring cells [60], where an increase in Notch activity within a cell decreases that in its neighboring cell. The Notch-Delta mutual repression generates inhomogeneous or opposite protein synthesis in otherwise homogeneous cell populations, which however depends on spatial changes in gene expression.
Finally, recent papers have outlined many fundamental principles of how to achieve bistability in small networks [61,62,63]. The present work here extends these findings by a comparative network analysis that delineates the effect of additional negative autoregulation on cellular memory.
Conclusions
We have shown that the addition of negative autoregulation can reduce intrinsic noise and generate persistent stochastic memory in a mutual repression network. Our mathematical comparison and theoretical analyses contribute to an improved understanding of how genetic circuits can encode biological function and may aid the rational engineering of memory networks for applications in synthetic biology, medicine, and biotechnology.
Methods
MRNNA model
We constructed a simple model of a gene regulatory network that consisted of two genes encoding a transcription factor, which we visualized according to a previously established graphical notation [55, 56] (Fig. 1). Our mutual repression network with negative autoregulation (MRNNA) consisted of proteins y(1), y(2) and y(3). The input signal S induced the synthesis of y(1), which in turn activated the synthesis of y(2) and repressed the production of y(3). Furthermore, the synthesis reactions of y(2) and y(3) were mutually repressed with cooperativity. Negative autoregulation governed the synthesis reactions of y(2) and y(3). This system can be bistable and exhibit memory. In formulating our model, we made deliberate use of Michaelis-Menten approximations under the assumption that substrate concentrations were in excess and association equilibria were quickly attained. The model is described by the following Eqs. 1–3:
All parameters are described in Table 1.
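Since Eqs. 1–3 are not reproduced in this excerpt, the following is only an illustrative sketch of how a three-species mutual repression system with negative autoregulation of the general shape described above can be written and integrated deterministically. All rate constants, Hill exponents, and dissociation constants in the code are invented placeholders, not the published parameter values of Table 1:

```python
# Illustrative sketch only: a generic mutual-repression system with negative
# autoregulation, loosely following the wiring described in the text.
# k_syn, k_deg, k_auto, n, and K below are placeholders, NOT Eqs. 1-3.

def mrnna_rhs(y, S, n=7, K=30.0, k_syn=10.0, k_deg=0.1, k_auto=0.01):
    """Hypothetical time derivatives of y1, y2, y3."""
    y1, y2, y3 = y
    dy1 = S - k_deg * y1  # signal S drives y1 synthesis
    # y1 activates y2; y3 represses y2 with Hill cooperativity n;
    # the 1/(1 + k_auto*y2) factor models negative autoregulation.
    dy2 = (k_syn * y1 / (1.0 + y1)) * K**n / (K**n + y3**n) \
          / (1.0 + k_auto * y2) - k_deg * y2
    # y1 represses y3; y2 represses y3; negative autoregulation on y3.
    dy3 = (k_syn / (1.0 + y1)) * K**n / (K**n + y2**n) \
          / (1.0 + k_auto * y3) - k_deg * y3
    return (dy1, dy2, dy3)

def integrate(y0, signal, dt=0.01, steps=100000):
    """Simple forward-Euler integration of the deterministic model;
    `signal(t)` returns the input S at time t."""
    y = list(y0)
    for i in range(steps):
        dy = mrnna_rhs(y, signal(i * dt))
        y = [max(0.0, yi + dt * dyi) for yi, dyi in zip(y, dy)]
    return y
```

Sweeping `signal` up and down in such a sketch is also how the hysteresis curves of Fig. 2b, c would be traced in practice.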
Systems modeling and determination of successful memory
The time evolution of the gene expression levels of y(1), y(2) and y(3) was simulated both deterministically and stochastically. For a deterministic systems description, the ordinary differential equations (Eqs. 1–3) were solved in MATLAB (Mathworks) with standard solvers. Stochastic trajectories were simulated with the Gillespie algorithm [64].
In all model simulations, the deterministic and stochastic time courses of y(1), y(2) and y(3) were simulated for 1000 simulation time steps. The input signal S was applied from simulation step 250 to 500. Deterministic memory was defined as sustained protein levels in the numerical integration of the rate equations during the subsequent period from simulation step 500 to 1000 after the stop of the signal S. Similarly, stochastic memory was assessed as sustained protein levels in the stochastic simulations over the same period. While arbitrary, a threshold of 1000 simulation time steps yielded a robust and readily accessible criterion to assess memory. Due to the probabilistic nature of the stochastic simulations, memory for a given set of parameters was scored as robust stochastic memory if at least 18 out of 20 stochastic simulations, i.e. 90%, yielded persistent memory [12].
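The Gillespie update itself can be illustrated on a minimal single-species birth-death process; the full model instead uses the propensities implied by Eqs. 1–3, and the rates below are placeholders:

```python
import math
import random

def gillespie_birth_death(k_birth, k_death, y0, t_end, seed=0):
    """Exact stochastic simulation (Gillespie SSA) of a single birth-death
    species: 0 -> Y at rate k_birth, Y -> 0 at rate k_death * y.
    Minimal two-reaction sketch of the algorithm only."""
    rng = random.Random(seed)
    t, y = 0.0, y0
    trajectory = [(t, y)]
    while t < t_end:
        a_birth, a_death = k_birth, k_death * y
        a_total = a_birth + a_death
        if a_total == 0.0:
            break
        # exponentially distributed waiting time to the next reaction
        t += -math.log(1.0 - rng.random()) / a_total
        # choose which reaction fires, proportional to its propensity
        if rng.random() * a_total < a_birth:
            y += 1
        else:
            y -= 1
        trajectory.append((t, y))
    return trajectory
```

At steady state the copy number fluctuates around k_birth/k_death; assessing memory as described above amounts to checking that such a trajectory stays near one steady state for the full post-signal window.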
Theoretical model comparisons
All parameters were set so as to render the MRN and MRN-NA models as comparable as possible (Tables 1, 2) [12, 65]. With the exception of the additional negative autoregulation loops, this meant using the same parameters throughout. The parameters of the negative autoregulation were tuned to conserve the steady-state levels of y(2) and y(3) between the MRN and MRN-NA models (Tables 1, 2). Specifically, the high steady-state levels of both models were set to the same values, while the low steady-state levels were made as similar as possible; it is not possible to tune the low steady-state levels to exactly the same values. Given the asymmetry of the models (activation of y(2) and suppression of y(3) by S), the deterministic steady-state levels of y(2) and y(3) always show opposing behavior: when the steady-state level of y(2) is high, that of y(3) is low, and vice versa. Our parameter choices conserved both the low and high steady states of y(2) and y(3) between the MRN and MRN-NA models [12] (Additional file 1: Texts S1, S2).
To systematically identify parameter choices that give rise to successful memory, phase diagrams of the memory region as a function of the two most critical parameters, the Hill coefficient and the dissociation constants of the mutual repression cycle, were computed by successive simulations at different parameter combinations.
Intrinsic noise in gene expression was quantified by computing the coefficient of variation (CV) of the levels of y(2) and y(3) during the input signal S from the stochastic simulations. To eliminate transient effects, we considered the period from simulation step 270 to 500, i.e. omitting the first 20 simulation steps of the signal period (Fig. 2f). In the same manner, we estimated the CVs of the levels of y(2) and y(3) after the signal was switched off, for the period from simulation step 500 to 750 (Fig. 2g).
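The CV computation itself is a one-liner over the sampled window; the function name is an illustrative assumption:

```python
from statistics import mean, pstdev

def coefficient_of_variation(levels):
    """CV = population standard deviation / mean of the sampled protein
    levels; `levels` would be the y(2) or y(3) values recorded over the
    analysis window (e.g. simulation steps 270-500)."""
    m = mean(levels)
    return pstdev(levels) / m if m != 0 else float("inf")
```

Dividing by the mean makes fluctuations comparable between the high and low expression states.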
Stochastic potential and probability density function
The reaction rate equations (Eqs. 1–3) were converted into f_{MRN-NA2}(y) and f_{MRN-NA3}(y) (Additional file 1: Text S2). Using the quasi-steady-state assumption, we solved for the probability density at steady state (t → ∞). As t → ∞, y(2) and y(3) approach their steady states, i.e., \( \frac{dy(2)}{dt}\to 0 \) and \( \frac{dy(3)}{dt}\to 0 \). We therefore set \( \frac{dy(3)}{dt}=0 \) to solve the ODE for \( \frac{dy(2)}{dt} \) at t → ∞, and, in the same manner, set \( \frac{dy(2)}{dt}=0 \) to solve the ODE for \( \frac{dy(3)}{dt} \) at t → ∞.
Here, we illustrate how the one-variable equation f_{MRN-NA2}(y) in a stochastic environment is given by:
where
This equation can be described by a birth-and-death stochastic process [12, 21, 49, 66]:
where y_{ss} is given by Eq. (5).
The corresponding chemical master equation was given by:
where P(y, t) was the probability density of the protein concentration y. Next, the chemical master equation was transformed into the Fokker–Planck equation [12, 21, 50, 66,67,68]:
where
and y_{ss} is given by Eq. (5). The noise function is given by:
where y_{ss} is given by Eq. (5). In the same manner, the Fokker–Planck equations of the four one-variable equations, including f_{MRN-NA2}(y), were solved under the following conditions (Additional file 1: Text S2):
and the noise functions were given by:
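As a point of reference, for a generic birth-and-death process with birth rate f(y) and first-order degradation at rate k_d (a simplified stand-in for the model-specific rates derived in Additional file 1), the chemical master equation and its Fokker–Planck approximation take the standard forms:

```latex
% Generic birth-and-death chemical master equation (illustrative form):
\frac{\partial P(y,t)}{\partial t}
  = f(y-1)\,P(y-1,t) + k_d\,(y+1)\,P(y+1,t) - \bigl[f(y) + k_d\,y\bigr]P(y,t)
% Truncating the Kramers--Moyal expansion at second order yields the
% Fokker--Planck equation with drift A(y) and noise function D(y):
\frac{\partial P(y,t)}{\partial t}
  = -\frac{\partial}{\partial y}\bigl[A(y)\,P(y,t)\bigr]
    + \frac{1}{2}\frac{\partial^2}{\partial y^2}\bigl[D(y)\,P(y,t)\bigr],
\qquad A(y) = f(y) - k_d\,y, \qquad D(y) = f(y) + k_d\,y
```

The drift A(y) reproduces the deterministic rate equation, while D(y), the sum of the propensities, quantifies the intrinsic noise.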
Finally, we consider the stochastic potential analysis. The limit of P(y, t) as t → ∞ yields P_{st}(y), the stationary probability density function of y [12, 21, 59, 66], which is given by:
where N_{c} is the normalization constant. Eq. (14) can be recast in the form:
where
is called the stochastic potential of f(y) [57, 58, 66].
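In the standard convention of [57, 66], the stationary density and stochastic potential read as follows; the exact prefactors depend on the scaling used in Additional file 1, so this is a reference form rather than a reproduction of the article's Eqs. 14–16:

```latex
% Stationary solution of the one-variable Fokker--Planck equation:
P_{\mathrm{st}}(y) = \frac{N_c}{D(y)}
  \exp\!\left( 2\int^{y} \frac{A(y')}{D(y')}\,dy' \right)
% Equivalent potential form, with stochastic potential U(y):
U(y) = -2\int^{y} \frac{A(y')}{D(y')}\,dy' + \ln D(y),
\qquad P_{\mathrm{st}}(y) = N_c\, e^{-U(y)}
```

Minima of U(y) thus correspond to maxima of P_{st}(y), i.e. to the stable expression states.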
Mean firstpassage time analysis
In gene expression, the stability of a steady state must be estimated in the presence of noise. The stability of a steady state of a stochastic system can be estimated by the mean first-passage time (MFPT), which describes the expected time within which the system leaves a stable steady state due to random fluctuations. Noise can drive the system out of the potential minimum associated with an equilibrium point; the time of exit depends on the specific realization of the random process and is known as the first-passage time. The MFPT is the average of the first-passage times over many realizations. In the context of anticipating regime shifts, the MFPT provides a useful tool to characterize the timescale on which a transition between states is likely to occur.
Let us consider \( {y}_l^{\mathrm{st}} \) and \( {y}_u^{\mathrm{st}} \) (\( {y}_l^{\mathrm{st}}<{y}_u^{\mathrm{st}} \)) as two steady states corresponding to a low and a high protein concentration, respectively, separated by the unstable steady state \( {y}_b^{\mathrm{un}} \) (i.e., the unstable equilibrium point) that defines the potential barrier. The basin of attraction of the state \( {y}_u^{\mathrm{st}} \) extends from \( {y}_b^{\mathrm{un}} \) to +∞, as \( {y}_u^{\mathrm{st}} \) lies to the right of \( {y}_b^{\mathrm{un}} \). Let T(y) be the MFPT to the state \( {y}_b^{\mathrm{un}} \) starting at \( y>{y}_b^{\mathrm{un}} \). T(y) satisfies the following ordinary differential equation [21, 59, 67, 68]:
with boundary conditions:
By solving Eqs. 17–18, the MFPTs \( {T}_U\left({y}_u^{\mathrm{st}}\right) \) and \( {T}_L\left({y}_l^{\mathrm{st}}\right) \) to the state \( {y}_b^{\mathrm{un}} \) are calculated for the basin of attraction of \( {y}_u^{\mathrm{st}} \), which extends from \( {y}_b^{\mathrm{un}} \) to +∞, and for the basin of attraction of \( {y}_l^{\mathrm{st}} \), which extends from 0 to \( {y}_b^{\mathrm{un}} \), respectively, as follows:
where
with y_{0} = 0 for the \( {y}_u^{\mathrm{st}}\to {y}_l^{\mathrm{st}} \) transition and \( {y}_0={y}_b^{\mathrm{un}} \) for the \( {y}_l^{\mathrm{st}}\to {y}_u^{\mathrm{st}} \) transition. A high MFPT of a steady-state protein level indicates that the level is sustained for a long time, whereas a low MFPT indicates that the protein can readily leave the steady state and quickly transition to another level.
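For reference, with drift A(y) and noise function D(y), Eq. 17 and its solution take the standard forms below; the orientation of the integration limits follows the convention of [57, 67, 68] for a reflecting boundary at y_0 and an absorbing boundary at the barrier, with the article's choices of y_0 applying as stated above:

```latex
% Backward equation for the MFPT, with boundary conditions:
A(y)\,T'(y) + \tfrac{1}{2}\,D(y)\,T''(y) = -1,
\qquad T\bigl(y_b^{\mathrm{un}}\bigr) = 0 \ \text{(absorbing)},
\quad T'(y_0) = 0 \ \text{(reflecting)}
% Closed-form solution by double quadrature:
T(y) = 2\int_{y}^{\,y_b^{\mathrm{un}}} \frac{dz}{\psi(z)}
       \int_{y_0}^{z} \frac{\psi(w)}{D(w)}\,dw,
\qquad \psi(z) = \exp\!\left( 2\int_{y_0}^{z} \frac{A(y')}{D(y')}\,dy' \right)
```

In practice these integrals are evaluated numerically for each basin of attraction, yielding the T_U and T_L values compared between the MRN and MRN-NA models.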
Availability of data and materials
The computer code used to perform all analyses and reproduce all project data is available at https://github.com/ABMSUH/MRNandMRNNAmodels
Abbreviations
CV: Coefficient of variation
MAN: Mutual activation network
MFPT: Mean first-passage time
MRN: Mutual repression network
mRNA: Messenger ribonucleic acid
MRN-NA: Mutual repression network with negative autoregulation
ODE: Ordinary differential equation
References
1. Kandel ER, Dudai Y, Mayford MR. The molecular and systems biology of memory. Cell. 2014;157(1):163–86.
2. Casadesus J, D'Ari R. Memory in bacteria and phage. BioEssays. 2002;24(6):512–8.
3. Harvey ZH, Chen Y, Jarosz DF. Protein-based inheritance: epigenetics beyond the chromosome. Mol Cell. 2018;69(2):195–202.
4. Bergeron-Sandoval LP, Safaee N, Michnick SW. Mechanisms and consequences of macromolecular phase separation. Cell. 2016;165(5):1067–79.
5. Burrill DR, Silver PA. Making cellular memories. Cell. 2010;140(1):13–8.
6. Shen K, Teruel MN, Connor JH, Shenolikar S, Meyer T. Molecular memory by reversible translocation of calcium/calmodulin-dependent protein kinase II. Nat Neurosci. 2000;3(9):881–6.
7. Xiong W, Ferrell JE Jr. A positive-feedback-based bistable 'memory module' that governs a cell fate decision. Nature. 2003;426(6965):460–5.
8. Wang L, Walker BL, Iannaccone S, Bhatt D, Kennedy PJ, Tse WT. Bistable switches control memory and plasticity in cellular differentiation. Proc Natl Acad Sci U S A. 2009;106(16):6638–43.
9. Doncic A, Atay O, Valk E, Grande A, Bush A, Vasen G, et al. Compartmentalization of a bistable switch enables memory to cross a feedback-driven transition. Cell. 2015;160(6):1182–95.
10. Herrera-Delgado E, Perez-Carrasco R, Briscoe J, Sollich P. Memory functions reveal structural properties of gene regulatory networks. PLoS Comput Biol. 2018;14(2):e1006003.
11. Ajo-Franklin CM, Drubin DA, Eskin JA, Gee EP, Landgraf D, Phillips I, et al. Rational design of memory in eukaryotic cells. Genes Dev. 2007;21(18):2271–6.
12. Ul Hasan ABMS, Kurata H. Mathematical comparison of memory functions between mutual activation and repression networks in a stochastic environment. J Theor Biol. 2017;427:28–40.
13. Alon U. Network motifs: theory and experimental approaches. Nat Rev Genet. 2007;8(6):450–61.
14. Brandman O, Ferrell JE Jr, Li R, Meyer T. Interlinked fast and slow positive feedback loops drive reliable cell decisions. Science. 2005;310(5747):496–8.
15. Elowitz MB, Leibler S. A synthetic oscillatory network of transcriptional regulators. Nature. 2000;403(6767):335–8.
16. Freeman M. Feedback control of intercellular signalling in development. Nature. 2000;408(6810):313–9.
17. Hasty J, Pradines J, Dolnik M, Collins JJ. Noise-based switches and amplifiers for gene expression. Proc Natl Acad Sci U S A. 2000;97(5):2075–80.
18. Chang DE, Leung S, Atkinson MR, Reifler A, Forger D, Ninfa AJ. Building biological memory by linking positive feedback loops. Proc Natl Acad Sci U S A. 2010;107(1):175–80.
19. Ferrell JE Jr. Self-perpetuating states in signal transduction: positive feedback, double-negative feedback and bistability. Curr Opin Cell Biol. 2002;14(2):140–8.
20. Shopera T, Henson WR, Ng A, Lee YJ, Ng K, Moon TS. Robust, tunable genetic memory from protein sequestration combined with positive feedback. Nucleic Acids Res. 2015;43(18):9086–94.
21. Cheng Z, Liu F, Zhang XP, Wang W. Robustness analysis of cellular memory in an autoactivating positive feedback system. FEBS Lett. 2008;582(27):3776–82.
22. Kim TH, Jung SH, Cho KH. Interlinked mutual inhibitory positive feedbacks induce robust cellular memory effects. FEBS Lett. 2007;581(25):4899–904.
23. Acar M, Becskei A, van Oudenaarden A. Enhancement of cellular memory by reducing stochastic transitions. Nature. 2005;435(7039):228–32.
24. Gardner TS, Cantor CR, Collins JJ. Construction of a genetic toggle switch in Escherichia coli. Nature. 2000;403(6767):339–42.
25. Frigola D, Casanellas L, Sancho JM, Ibanes M. Asymmetric stochastic switching driven by intrinsic molecular noise. PLoS One. 2012;7(2):e31407.
26. Cherry JL, Adler FR. How to make a biological switch. J Theor Biol. 2000;203(2):117–33.
27. Ferrell JE, Xiong W. Bistability in cell signaling: how to make continuous processes discontinuous, and reversible processes irreversible. Chaos. 2001;11(1):227–36.
28. Pedraza JM, Paulsson J. Effects of molecular memory and bursting on fluctuations in gene expression. Science. 2008;319(5861):339–43.
29. Tabor JJ, Salis HM, Simpson ZB, Chevalier AA, Levskaya A, Marcotte EM, et al. A synthetic genetic edge detection program. Cell. 2009;137(7):1272–81.
30. Auslander S, Auslander D, Muller M, Wieland M, Fussenegger M. Programmable single-cell mammalian biocomputers. Nature. 2012;487(7405):123–7.
31. Daniel R, Rubens JR, Sarpeshkar R, Lu TK. Synthetic analog computation in living cells. Nature. 2013;497(7451):619–23.
32. Becskei A, Seraphin B, Serrano L. Positive feedback in eukaryotic gene networks: cell differentiation by graded to binary response conversion. EMBO J. 2001;20(10):2528–35.
33. Raj A, van Oudenaarden A. Nature, nurture, or chance: stochastic gene expression and its consequences. Cell. 2008;135(2):216–26.
34. McAdams HH, Arkin A. Stochastic mechanisms in gene expression. Proc Natl Acad Sci U S A. 1997;94(3):814–9.
35. Ozbudak EM, Thattai M, Kurtser I, Grossman AD, van Oudenaarden A. Regulation of noise in the expression of a single gene. Nat Genet. 2002;31(1):69–73.
36. Sneppen K, Micheelsen MA, Dodd IB. Ultrasensitive gene regulation by positive feedback loops in nucleosome modification. Mol Syst Biol. 2008;4:182.
37. Elowitz MB, Levine AJ, Siggia ED, Swain PS. Stochastic gene expression in a single cell. Science. 2002;297(5584):1183–6.
38. Blake WJ, Kærn M, Cantor CR, Collins JJ. Noise in eukaryotic gene expression. Nature. 2003;422(6932):633–7.
39. Pedraza JM, van Oudenaarden A. Noise propagation in gene networks. Science. 2005;307(5717):1965–9.
40. Eldar A, Elowitz MB. Functional roles for noise in genetic circuits. Nature. 2010;467(7312):167–73.
41. Zhang H, Chen Y, Chen Y. Noise propagation in gene regulation networks involving interlinked positive and negative feedback loops. PLoS One. 2012;7(12):e51840.
42. Tan C, Marguet P, You L. Emergent bistability by a growth-modulating positive feedback circuit. Nat Chem Biol. 2009;5(11):842–8.
43. Basu S, Gerchman Y, Collins CH, Arnold FH, Weiss R. A synthetic multicellular system for programmed pattern formation. Nature. 2005;434(7037):1130–4.
44. Moon TS, Lou C, Tamsir A, Stanton BC, Voigt CA. Genetic programs constructed from layered logic gates in single cells. Nature. 2012;491(7423):249–53.
45. Kurata H, Maeda K, Onaka T, Takata T. BioFNet: biological functional network database for analysis and synthesis of biological systems. Brief Bioinform. 2014;15(5):699–709.
46. Thattai M, van Oudenaarden A. Intrinsic noise in gene regulatory networks. Proc Natl Acad Sci U S A. 2001;98(15):8614–9.
47. Madar D, Dekel E, Bren A, Alon U. Negative autoregulation increases the input dynamic range of the arabinose system of Escherichia coli. BMC Syst Biol. 2011;5:111.
48. Dublanche Y, Michalodimitrakis K, Kummerer N, Foglierini M, Serrano L. Noise in transcription negative feedback loops: simulation and experimental analysis. Mol Syst Biol. 2006;2:41.
49. Thomas P, Popovic N, Grima R. Phenotypic switching in gene regulatory networks. Proc Natl Acad Sci U S A. 2014;111(19):6994–9.
50. Schnoerr D, Sanguinetti G, Grima R. Approximation and inference methods for stochastic biochemical kinetics – a tutorial review. J Phys A: Math Theor. 2018;51(16):1–73.
51. Hornos JE, Schultz D, Innocentini GC, Wang J, Walczak AM, Onuchic JN, et al. Self-regulating gene: an exact solution. Phys Rev E. 2005;72(5 Pt 1):051907.
52. Wall ME, editor. Quantitative biology: from molecular to cellular systems. Boca Raton: Chapman & Hall/CRC Mathematical and Computational Biology, CRC Press; 2012.
53. Thomas P, Straube AV, Grima R. Communication: limitations of the stochastic quasi-steady-state approximation in open biochemical reaction networks. J Chem Phys. 2011;135(18):181103.
54. Cao Z, Grima R. Linear mapping approximation of gene regulatory networks with stochastic dynamics. Nat Commun. 2018;9(1):3305.
55. Kurata H, Matoba N, Shimizu N. CADLIVE for constructing a large-scale biochemical network based on a simulation-directed notation and its application to yeast cell cycle. Nucleic Acids Res. 2003;31(14):4071–84.
56. Kurata H, Inoue K, Maeda K, Masaki K, Shimokawa Y, Zhao Q. Extended CADLIVE: a novel graphical notation for design of biochemical network maps and computational pathway analysis. Nucleic Acids Res. 2007;35(20):e134.
57. Gardiner CW. Stochastic methods: a handbook for the natural and social sciences. Springer series in synergetics. 4th ed. Berlin: Springer-Verlag; 2009.
58. Risken H, Frank T. The Fokker–Planck equation: methods of solution and applications. 2nd ed. Berlin: Springer; 1996.
59. Sharma Y, Dutta PS, Gupta AK. Anticipating regime shifts in gene expression: the case of an autoactivating positive feedback loop. Phys Rev E. 2016;93(3):032404.
60. Matsuda M, Koga M, Woltjen K, Nishida E, Ebisuya M. Synthetic lateral inhibition governs cell-type bifurcation with robust ratios. Nat Commun. 2015;6:6195.
61. Lipshtat A, Loinger A, Balaban NQ, Biham O. Genetic toggle switch without cooperative binding. Phys Rev Lett. 2006;96(18):188101.
62. Loinger A, Lipshtat A, Balaban NQ, Biham O. Stochastic simulations of genetic switch systems. Phys Rev E. 2007;75(2 Pt 1):021904.
63. Warren PB, ten Wolde PR. Chemical models of genetic toggle switches. J Phys Chem B. 2005;109(14):6812–23.
64. Gillespie DT. Exact stochastic simulation of coupled chemical reactions. J Phys Chem. 1977;81(25):2340–61.
65. Alves R, Savageau MA. Extending the method of mathematically controlled comparison to include numerical comparisons. Bioinformatics. 2000;16(9):786–98.
66. Scott M, Hwa T, Ingalls B. Deterministic characterization of stochastic genetic circuits. Proc Natl Acad Sci U S A. 2007;104(18):7402–7.
67. Gillespie DT. The chemical Langevin equation. J Chem Phys. 2000;113(1):297–306.
68. Drury KLS. Shot noise perturbations and mean first passage times between stable states. Theor Popul Biol. 2007;72(1):153–66.
Acknowledgments
We are grateful to Dr. Kazuhiro Maeda for helpful discussions.
Funding
SP holds the Canada Research Chair in Computational Systems Biology and acknowledges funding from the Merck Sharp & Dohme program of the Faculty of Medicine, Université de Montréal. The funding bodies played no role in the design of the study; the collection, analysis, and interpretation of data; or the writing of the manuscript.
Author information
Affiliations
Contributions
Conceptualized the research: ABMSUH, HK; Developed all methodology and performed all analysis: ABMSUH; Interpreted the results: ABMSUH, HK, SP; Wrote the original draft: ABMSUH; Reviewed and edited the manuscript: ABMSUH, HK, SP. All authors have read and approved the final manuscript.
Corresponding authors
Ethics declarations
Ethics approval and consent to participate
This research did not discriminate on the basis of gender, sexual orientation, visible minority status, disability, or religious beliefs. No animals or human subjects were used or harmed in this study.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Additional file 1: Text S1.
Parameter settings of the MRN-NA. Text S2. One-variable equation and noise function of the MRN-NA. Text S3. Stochastic potential profile analysis. We estimated the stochastic potential profile of the one-variable rate equation derived for the MRN and MRN-NA models as described in Appendix F of reference [12]. Figure S1. Bifurcation analysis of the MRN and MRN-NA models. Figure S2. Stochastic simulations for 2000 simulation steps. Figure S3. Stochastic simulation of the MRN for 20,000 simulation time steps. Figure S4. Stochastic fluctuations after the signal period. Figure S5. Stochastic potential profiles of the MRN and MRN-NA models.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
About this article
Cite this article
Hasan, A.B.M.S.U., Kurata, H. & Pechmann, S. Improvement of the memory function of a mutual repression network in a stochastic environment by negative autoregulation. BMC Bioinformatics 20, 734 (2019). https://doi.org/10.1186/s12859-019-3315-2
Received:
Accepted:
Published:
Keywords
 Memory
 Mutual repression
 Negative autoregulation
 FokkerPlanck
 Stochasticity
 Hill coefficient
 Bistability
 Probability density