Comparative study of three commonly used continuous deterministic methods for modeling gene regulation networks
 Martin T Swain^{2},
 Johannes J Mandel^{1} and
 Werner Dubitzky^{2}
DOI: 10.1186/1471-2105-11-459
© Swain et al; licensee BioMed Central Ltd. 2010
Received: 21 April 2010
Accepted: 14 September 2010
Published: 14 September 2010
Abstract
Background
A gene-regulatory network (GRN) refers to DNA segments that interact through their RNA and protein products and thereby govern the rates at which genes are transcribed. Creating accurate dynamic models of GRNs is gaining importance in biomedical research and development. To improve our understanding of continuous deterministic modeling methods employed to construct dynamic GRN models, we have carried out a comprehensive comparative study of three commonly used systems of ordinary differential equations: the S-system (SS), artificial neural networks (ANNs), and the general rate law of transcription (GRLOT) method. These were thoroughly evaluated in terms of their ability to replicate the reference models' regulatory structure and dynamic gene expression behavior under varying conditions.
Results
While the ANN and GRLOT methods appeared to produce robust models even when the model parameters deviated considerably from those of the reference models, SS-based models exhibited a notable loss of performance even when the parameters of the reverse-engineered models corresponded closely to those of the reference models: this is due to the high number of power terms in the SS method, and the manner in which they are combined. In cross-method reverse-engineering experiments the different characteristics, biases and idiosyncrasies of the methods were revealed. Based on limited training data, with only one experimental condition, all methods produced dynamic models that were able to reproduce the training data accurately. However, an accurate reproduction of regulatory network features was only possible with training data originating from multiple experiments under varying conditions.
Conclusions
The studied GRN modeling methods produced dynamic GRN models exhibiting marked differences in their ability to replicate the reference models' structure and behavior. Our results suggest that care should be taken when a method is chosen for a particular application. In particular, reliance on only a single method might unduly bias the results.
1 Background
Regulation of gene expression (or gene regulation) refers to processes that cells use to create functional gene products (RNA, proteins) from the information stored in genes (DNA). These processes range from DNA-RNA transcription to the post-translational modification of proteins. Gene regulation is essential for life as it increases the versatility and adaptability of an organism by allowing it to express protein when needed. While aspects of gene regulation are well understood, many open research questions still remain [1]. Due to the wide availability of well-characterized components from biological gene networks, the stage has been set for mathematical modeling and computational simulation of gene regulatory networks (GRNs). The modeling of biomedical phenomena is inspired by the approach taken in physics. In physics, models (theories) are frequently constructed to explain existing data; predictions are then made and compared to new data. If a sufficient correspondence exists, it is claimed that the phenomenon has been understood. A model should not be a black box, but should be interpretable in some way. Ideally, the model components and elements should have an interpretation in the real world consistent with existing knowledge. At the same time, a model involves a simplification of the real world. The ability to construct, analyze, and interpret qualitative and quantitative GRN models is becoming an increasingly important tool in studying gene regulation [2]. Because of its potential to help improve our understanding of gene regulation, modeling and simulation of GRNs has received considerable interest in the bioinformatics and computational biology communities. Many mathematical and computational techniques have been proposed in recent years. In practice, different mathematical techniques generate models with different properties and features. Therefore, it is very important for users to understand the merits and limitations of these approaches.
The dynamic behavior and regulatory interactions of genes can be revealed by time-series experiments, that is, experiments that measure the expression of multiple genes over time [3–6]. As this type of experimental data becomes more readily available, mathematical modeling and computational simulation become important tools for investigating the structure and time-dependent behavior of GRNs. In contrast to static gene expression data, the modeling and simulation approach allows the determination of stable states in response to a condition or stimulus, as well as the identification of pathways and networks that are activated in the process [7]. Besides logical [8–10] and stochastic [11] modeling approaches, various continuous modeling methods capable of capturing such complex behavior deterministically are commonly used. A range of mathematical methods facilitating the reverse-engineering of quantitative, dynamic GRN models from time-series expression data have been reported in the literature [12].
Our study focused primarily on continuous deterministic simulation methods used to model and simulate GRNs on the basis of time-variant gene expression data. The approaches considered in our study concentrate on GRN abstractions that ignore intricate intermediate biological processes of cellular gene regulation, such as splicing, capping, translation, binding and unbinding [13]. Because of their importance in the field and their common use, we have compared the following three mathematical methods: the S-system (SS) [14], the artificial neural network (ANN) [15], and the general rate law of transcription (GRLOT) [16]. In comparing them, we sought answers to the following questions:
 1.
How do the methods fare in terms of their ability to estimate model parameters from dynamic gene expression data?
 2.
How do the methods fare in terms of their ability to accurately predict dynamic gene expression data?
 3.
To what extent are the methods interchangeable?
 4.
Are there any inherent biases or limitations in the individual methods?
To answer the above questions we developed artificial data sets based on a number of model networks that were selected because they either incorporate motifs frequently seen in GRNs, or because they have been specifically discussed in the literature in the context of the three mathematical methods of interest. The advantage of using artificial data sets lies in the ease, speed and flexibility with which different experiments can be performed and evaluated, allowing validation of the methods under a range of tightly controlled conditions that would be very difficult, expensive and time-consuming to reproduce with real biological data. However, computer-generated data sets feature important differences when compared to real biological data [17, 18], and so we have also included a recent biological data set generated by Cantone et al. specifically for in vivo assessment of reverse-engineering and gene network modelling approaches [19]. The reverse-engineering experiments we have performed on this biological network play an important role in demonstrating that the results we have obtained from our model networks are applicable to real biological networks. Our study comprised the following five steps:
 1.
To investigate a range of different conditions, we obtain two time-series data sets for a biological network from the literature [19], and we define and construct five artificial network models and set their parameters and initial expression levels.
 2.
We wish to explore how the quantity of available data influences the models produced by reverse-engineering. Therefore, for each artificial network model we generated two sets of data, sparse and detailed, using each mathematical method. Sparse data is based on a single model simulation; detailed data is based on five simulations, each with a different set of initial conditions.
 3.
Here we perform the reverse-engineering of dynamic GRN models from the artificial and biological data sets using an evolutionary algorithm [20]. In order to highlight differences and idiosyncrasies between and within the models, we apply each mathematical method to all the available data sets. A large number of models are generated in this step, investigating combinations of different data sets, experimental scenarios, and mathematical methods.
 4.
This step is designed to assess specific aspects of the models, and to elucidate the extent to which the reverse-engineered models are able to recover the original GRN models and their parameters, or to reproduce their dynamic behavior according to the relevant biological or artificial time-series data sets.
 5.
The final step is designed to investigate if the GRN models are able to predict how the network will behave under new conditions, and if the mathematical methods are all able to correctly capture the behavior of the GRNs: if this is true then all our reverse-engineered models should behave in a similar fashion under new conditions and make similar predictions about the network behavior.
This paper is structured as follows: first we outline the mathematical methods, then we discuss their different characteristics, strengths and weaknesses, followed by the different GRN models to which we apply them. We then outline our results and relate them to the experimental workflow. Before concluding we give a detailed discussion of differences between the mathematical methods as shown by our results.
1.1 GRN modeling methods
The nonlinear differential equations of the three modeling methods investigated in this study describe the mutual activating and repressing influences of genes in a GRN at a high level of abstraction. In particular, it is assumed that the rate of gene expression depends exclusively on the concentration of gene products arising from the nodes (genes) of the GRN. This means that the influence of other molecules (e.g., transcription factors) and cellular processes (e.g., translation) is not taken into account directly. Even with these limitations, dynamic GRN models of this kind can be useful in deciphering basic aspects of gene-regulatory interactions. The three methods we have studied have been widely used to model dynamic GRNs. One major advantage of all three methods lies in their simple homogeneous structures, as this allows the settings of parameter-discovering software to be easily customized for these structures. In addition, all three modeling methods either already have the potential to describe additional levels of detail, or their structures can be easily extended for this purpose.
The three methods describe dynamic GRN models by means of a system (or set) of ordinary differential equations. For a GRN comprising N genes, N differential equations are used to describe the dynamics of N gene product concentrations, X_{ i } with i = 1, ..., N. In all three methods, the expression rate dX_{ i }/dt of a gene product concentration may depend on the expression level of one or more gene products of the genes X_{ j } , with j = 1, ..., N. Thus, the gene product concentration X_{ i } may be governed by a selfregulatory mechanism (when i = j), or it may be regulated by products of other genes in the GRN. The three modeling methods differ in the way they represent and calculate expression rates. Before discussing such differences, we introduce the three modeling methods in some detail.
1.1.1 The artificial neural network (ANN) method
Vohradsky [15] introduced ANNs as a modeling method capable of describing the dynamic behavior of GRNs. The way this method represents and calculates expression rates depends on the weighted sum of multiple regulatory inputs. This additive input processing is capable of representing logical disjunctions. The expression rate is restricted to a certain interval, where a sigmoidal transformation maps the regulatory input to the expression interval. ANNs provide an additional external input which has an influence on this transformation in that it can regulate the sensitivity to the summed regulatory input. Finally, the ANN method defines the degradation of a gene product on the basis of standard mass-action kinetics.
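The rate equation implied by the parameter definitions below (the displayed equation did not survive extraction; this reconstruction follows Vohradsky's standard formulation and should be checked against the original) is:

```latex
\frac{dX_i}{dt} = v_i \, f\!\left(\sum_{j=1}^{N} w_{ij} X_j + \vartheta_i\right) - k_i X_i,
\qquad f(z) = \frac{1}{1 + e^{-z}}
```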

N: Number of genes in the GRN to be modeled. The genes of the GRN are indexed by i and j, where i, j = 1, ..., N.

v_{ i } : Maximal expression rate of gene i.

w_{ ij } : The connection weight or strength of control of gene j on gene i. Positive values of w_{ ij } indicate activating influences while negative values define repressing influences.

ϑ_{ i }: Influence of external input on gene i, which modulates the gene's sensitivity of response to activating or repressing influences.

f: Represents a nonlinear sigmoid transfer function modifying the influence of gene expression products X_{ j } and external input ϑ_{ i } to keep the activation from growing without bounds.

k_{ i } : Degradation of the ith gene expression product.
The mathematical properties of the ANN method have been well studied because it is a special case of a recurrent neural network [15]. In particular, the symmetry of the matrix of connection weights w_{ ij } influences whether the network dynamics are oscillatory or whether they converge on a steady (or even chaotic) state. High positive or negative values of the external input, ϑ_{ i } , reduce the effect of the connection weights. This is explored in Case D where ϑ_{ i } has been interpreted as a delay to the reaction kinetics of the transcriptional machinery [15].
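As an illustrative sketch (not the authors' implementation), the ANN model can be integrated numerically with a simple forward-Euler scheme. The logistic form of f, the "+ϑ" sign convention, and all parameter values below are assumptions for illustration only; they are not the case-study parameters.

```python
import math

def ann_rates(x, v, w, theta, k):
    """Right-hand side of the ANN model:
    dX_i/dt = v_i * f(sum_j w_ij * X_j + theta_i) - k_i * X_i,
    with f the logistic sigmoid."""
    n = len(x)
    rates = []
    for i in range(n):
        s = sum(w[i][j] * x[j] for j in range(n)) + theta[i]
        f = 1.0 / (1.0 + math.exp(-s))  # bounds the expression rate to [0, v_i]
        rates.append(v[i] * f - k[i] * x[i])
    return rates

def euler_simulate(x0, v, w, theta, k, dt=0.01, steps=2000):
    """Integrate the ANN ODE system with a simple forward-Euler scheme."""
    x = list(x0)
    for _ in range(steps):
        x = [xi + dt * ri for xi, ri in zip(x, ann_rates(x, v, w, theta, k))]
    return x

# A hypothetical 2-gene cascade: gene 1 decays, gene 2 is activated by gene 1.
final = euler_simulate(
    x0=[1.0, 0.0],
    v=[1.0, 1.0],
    w=[[0.0, 0.0], [5.0, 0.0]],  # w[1][0] > 0: gene 1 activates gene 2
    theta=[-2.0, -2.0],
    k=[0.3, 0.3],
)
```

Because the sigmoid caps the synthesis term at v_i while degradation is mass-action, each concentration remains bounded by max(X_i(0), v_i/k_i), which is the saturation behavior discussed above.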
1.1.2 The S-system (SS) method
Savageau [14] proposed the synergistic system or S-system (SS) as a method to model molecular networks. When modeling GRNs with the SS method, the expression rates are described by the difference of two products of power-law functions, where the first represents the activation term and the second the degradation term of a gene product X_{ i } . This multiplicative input processing can be used to define logical conjunctions for both the regulation of gene expression processes and for the regulation of degradation processes. The SS method has no restrictions on the gene expression rates and thus does not implicitly describe saturation.
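The parameter list below describes the standard S-system rate law; the displayed equation was lost in extraction, and is presumably:

```latex
\frac{dX_i}{dt} = \alpha_i \prod_{j=1}^{N} X_j^{\,g_{ij}} - \beta_i \prod_{j=1}^{N} X_j^{\,h_{ij}}
```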

N: Number of genes in the GRN to be modeled. The genes of the GRN are indexed by i and j, where i, j = 1, ..., N.

α_{ i } : Rate constant of activation term; in SS GRN models, all activation (upregulation) processes of a gene i are aggregated into a single activation term.

β_{ i } : Rate constant of degradation term; in SS GRN models, all degradation processes of a gene i are aggregated into a single degradation term.

g_{ ij } , h_{ ij } : Exponential parameters called kinetic order. These parameters describe the interactive influences of gene j on gene i. Positive values of g_{ ij } indicate an activating influence on the expression of gene i, whereas inhibiting influences are represented by negative values. Similarly, positive values of h_{ ij } indicate increasing degradation of the gene product X_{ i } , whereas decreasing degradation is represented by negative values.

α_{ ik } : Rate constant of activation process k.

β_{ ik } : Rate constant of degradation process k.

g_{ ijk } : Exponential parameter called kinetic order, describing the interactive influence of X_{ j } on gene i in the activation term of process k.

h_{ ijk } : Exponential parameter called kinetic order, describing the interactive influence of X_{ j } on gene i in the degradation term of process k.
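The per-process parameters above (indexed by k) suggest the generalized S-system form, in which several activation and degradation processes are summed; reconstructed here as a plausible reading of the lost equation:

```latex
\frac{dX_i}{dt} = \sum_{k} \alpha_{ik} \prod_{j=1}^{N} X_j^{\,g_{ijk}}
- \sum_{k} \beta_{ik} \prod_{j=1}^{N} X_j^{\,h_{ijk}}
```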
1.1.3 The general rate law of transcription (GRLOT) method
The GRLOT method has been used to generate benchmark time-series data sets to facilitate the evaluation of different reverse-engineering approaches [16, 24]. GRLOT models multiply individual regulatory inputs. Activation and inhibition are represented by different functional expressions that are similar to Hill kinetics, which allow the inclusion of cooperative binding events [16]. As in the ANN method, the degradation of gene products is defined via mass-action kinetics.
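The GRLOT rate law implied by the parameter definitions below (reconstructed from the Hill-type forms described above; the original displayed equation did not survive extraction) is:

```latex
\frac{dX_i}{dt} = v_i \prod_{j} \frac{Ki_j^{\,n_j}}{Ki_j^{\,n_j} + I_j^{\,n_j}}
\prod_{k} \frac{A_k^{\,n_k}}{Ka_k^{\,n_k} + A_k^{\,n_k}} - k_i X_i
```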

v_{ i } : Maximal expression rate of gene i.

I_{ j } : Inhibitor (repressor) j.

A_{ k } : Activator k; the number of inhibitors, I, and the number of activators, A, are related to the total number of genes by I + A = N.

Ki_{ j } : Concentration at which the influence of inhibitor j is half of its saturation value.

Ka_{ k } : Concentration at which the influence of activator k is half of its saturation value.

n_{ j } , n_{ k } : Regulate the sigmoidicity of the interaction behavior in the same way as Hill coefficients in enzyme kinetics.

k_{ i } : Degradation of the ith gene expression product.
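A minimal sketch of a single-gene GRLOT rate, assuming the Hill-type factors described above; the function name, argument layout, and all values are illustrative, not the authors' code:

```python
def grlot_rate(x_i, v, k, inhibitors, activators):
    """GRLOT expression rate for one gene product X_i.

    inhibitors: list of (I, Ki, n) tuples, each contributing Ki^n / (Ki^n + I^n)
    activators: list of (A, Ka, n) tuples, each contributing A^n / (Ka^n + A^n)
    Regulatory factors are multiplied; degradation is mass-action (k * X_i).
    """
    rate = v
    for conc, Ki, n in inhibitors:
        rate *= Ki**n / (Ki**n + conc**n)
    for conc, Ka, n in activators:
        rate *= conc**n / (Ka**n + conc**n)
    return rate - k * x_i

# With no regulators the rate is simply v - k * X_i; an activator at A == Ka
# contributes exactly its half-saturation factor of 0.5.
baseline = grlot_rate(1.0, 1.0, 0.3, inhibitors=[], activators=[])
half_sat = grlot_rate(0.0, 1.0, 0.3, inhibitors=[], activators=[(2.0, 2.0, 2.0)])
```

Note how each factor lies in [0, 1], so the synthesis term is bounded by v, unlike the unrestricted SS power-law terms.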
1.2 Systematic biases in the ANN, SS and GRLOT methods
Major differences among the three methods for GRN modeling can be found in the limitation of expression rates, regulation of degradation processes, in the processing of multiple inputs, and in the interpretation of model parameters.
Limitation of expression rates
The SS method has no restrictions on expression rates, dX_{ i }/dt, while the ANN and GRLOT methods restrict expression rates to the interval [0, v_{ i } ] through a sigmoid function. The three methods have in common that concentration changes are not synchronized, that is, the formation or synthesis of a molecule does not necessarily entail the reduction or degradation of other molecules. (In particular, this can quickly lead to unrealistic dynamics in SS models, as singularities may arise when modeling inhibition and the inhibitor reaches a near-zero concentration.) However, the expression concentration of a gene product must reach a saturation level at some point, and a sigmoid expression rate response is thought to act as the molecular switch that models this behavior [25].
Regulation of degradation processes
The SS method covers a wider spectrum of regulatory targets since SS models are able to describe the regulation of both transcription and degradation processes.
Multiple input processing
GRN models based on the ANN method weight and sum multiple regulatory inputs. This contrasts with dynamic GRN models based on the SS and GRLOT methods, where regulatory inputs are exponentiated and multiplied. The input processing in SS and GRLOT models certainly accords better with the fundamentals of reaction kinetics and collision theory (chemical reaction rates correlate with the multiplied concentrations of the reactants) than that of ANN models. Nevertheless, none of the three methods strictly follows the underlying chemical reaction processes, as a single kinetic equation typically comprises several processes. Instead, all methods are designed to be flexible enough to find phenomenological expressions for their approximations of experimentally observed behavior [15, 22]. In particular, for the SS method, we have found that the unrestricted expression rates coupled with multiple input processing based on multiplied exponentials is a serious flaw, frequently leading to extreme sensitivity and unstable dynamics (as we describe in Section 2).
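To make the additive/multiplicative contrast concrete, a tiny illustrative snippet (hypothetical weights and kinetic orders, not parameters from this study) shows how additive processing tolerates an absent input while multiplicative processing is nullified by it:

```python
# Hypothetical regulator concentrations: one active regulator, one absent (zero).
x = [2.0, 0.0]

# Additive processing (ANN-style): the absent regulator simply contributes nothing,
# so the summed input behaves like a logical OR over the regulators.
weights = [3.0, 1.0]
additive_input = sum(w * xi for w, xi in zip(weights, x))  # 3.0*2.0 + 1.0*0.0 = 6.0

# Multiplicative processing (SS/GRLOT-style): a single zero-concentration input
# drives the whole activation term to zero, i.e. logical AND behavior.
kinetic_orders = [1.0, 1.0]
multiplicative_input = 1.0
for g, xi in zip(kinetic_orders, x):
    multiplicative_input *= xi ** g  # 2.0 * 0.0 = 0.0
```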
Parameter interpretation
The different parameters used in the three methods describe activating (inducing) and inhibiting (repressing) interactions among the genes of a GRN:

In the ANN method, the parameters w_{ ij } can be intuitively used to define activation and inhibition by assigning positive or negative values, respectively. Here, small absolute values indicate a minor impact on the transcription process (or a missing regulatory interaction) and large absolute values indicate a correspondingly major impact on the transcription process. However, one should be aware that the multiple regulatory inputs required in the case of coregulation can compensate for each other due to their additive input processing.

The GRLOT method uses two inverse functional forms to describe activating and inhibiting influences, both of which involve two parameters. For example, in the case of activation, each individual dependency is described with a Hill equation where the Hill exponent n_{ ij } determines the sigmoid response curve to varying concentration levels of the regulating molecule. The second parameter, Ka_{ k }, specifies the concentration of the regulator molecule at which it exerts half of its maximal influence on the expression rate. Consequently, high exponent values together with low Ka_{ k } values indicate that even low concentration levels of a regulator molecule have a strong activating influence on the transcription process, while low exponent values in combination with high Ka_{ k } values require high concentration levels of the regulator molecule for effective activation.

The behavior of SS models is mainly determined by the exponent values g_{ ij } and h_{ ij }. Similar to the ANN method, high absolute values define strong influence, whereas small absolute values indicate weak influence. However, the dynamics can become particularly complicated when describing inhibiting dependencies using negative exponent values. Here, the effect depends strongly on the actual concentration range of the inhibitor, which can introduce singular behavior at near-zero concentrations.
Method characteristics
Characteristic | ANN | SS | GRLOT
number of parameters | N * (N + 3) | N * (2 * N + 2) | N * (2 * N + 1)
limitation (saturation and sigmoidicity) of expression rates | yes | no | yes
processing of multiple signals | additive; logical disjunction (OR) | multiplicative; logical conjunction (AND); corresponds well with collision theory | multiplicative; logical conjunction (AND); corresponds well with collision theory
regulation of degradation processes | no | yes | no
singularities | no | yes | no
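Taking the table's formulas at face value, a short sketch shows how quickly the parameter count grows with network size (all three methods scale quadratically in N, which is one reason reverse-engineering becomes underdetermined for larger networks):

```python
def parameter_counts(N):
    """Number of free parameters per method for an N-gene GRN,
    using the formulas from the characteristics table."""
    return {
        "ANN": N * (N + 3),        # N weights + v_i, theta_i, k_i per gene
        "SS": N * (2 * N + 2),     # N g-exponents + N h-exponents + alpha_i, beta_i
        "GRLOT": N * (2 * N + 1),  # per the table's formula
    }
```

For the 5-gene networks used in this study, this gives 40 (ANN), 60 (SS) and 55 (GRLOT) parameters.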
1.3 Manually constructing case study models
Due to the limited number of time points at which gene expression measurements are typically made, the reverse-engineering of GRNs usually constitutes an underdetermined problem. Essentially, this means that there are more parameters to estimate than there are measurements [26, 27]. Although hundreds of gene activities can be measured simultaneously with microarray experiments, the number of (time-dependent) data points established for each gene is typically small, leading to highly underdetermined systems. As a consequence, multiple solutions can be found through reverse-engineering which are able to fit the available data very well, yet which are very weak in their ability to predict dynamic activity under conditions different from those initially explored [28].
Therefore, in order to reduce the number of parameters when performing a modeling study, a first step may involve analyzing gene expression profiles and performing clustering of genes exhibiting similar dynamics [29]. However, the relationships between coregulated genes are very complex [30] and it is important to identify the key genes that act as regulators to the clusters of coexpressed genes [31]. Once the gene regulators are identified they may be used to construct simplified network models that are useful for reverse-engineering [32]. For instance, Kimura and colleagues [33] built a quantitative model comprising 24 gene groups found in 612 putative open reading frames measured with UvrA gene disruptant experiments in Thermus thermophilus HB8 strains, while Guthke and coworkers [34] clustered a total of 1336 genes into six clusters for dynamic network reconstruction from gene expression data. The latter example was used to investigate the immune response during bacterial infection of peripheral blood mononuclear cells.
1.3.1 Case studies A, B and C
Each of these three cases is modeled using the ANN, SS and GRLOT methods. The parameters of each model are manually chosen, though we ensured that the models produce plausible dynamics.
In all models describing cases A, B and C, we restricted regulatory operations to expression processes, since the regulation of degradation processes can only be described with the SS method. For each gene product X_{ i } we therefore set the degradation rate k_{ i } (β_{ i } in the SS method) to 0.3 in all constructed models. In all SS models we set the exponent values h_{ ij } to 1. This allows us to concentrate on the transcriptional part of the different methods.

Case A: This static GRN model describes a four-step gene activation cascade starting from gene X1 up to gene X5. To generate harmonic dynamics with models of case A, we defined in all three methods (ANN, SS and GRLOT) constant signal propagation in the cascades by using identical parameter values within the parameter sets.

Case B: With this model, we describe again a gene activation cascade, however, this case incorporates signal branching (multiple output) and coregulatory relationships (multiple input) into the GRN. Here, gene X2 activates gene X3 and X4, both of which stimulate (activate) the expression of gene X5. We defined asymmetric signal branching such that the regulatory impact between X2 and X3 is different from the regulatory impact between X2 and X4. Furthermore, the coregulated gene X5 is more sensitive to signals originating from X4 than to those of X3.

Case C: The gene network described in the static GRN model C extends the GRN described in scenario B by incorporating multi-component loops: a positive feedback loop between genes X2 and X3, and a negative feedback loop between genes X2 and X5. The models describing case C simply extend the parameter sets of case B with parameters defining stimulation between X3 and X2 and inhibition between X2 and X5. Thus, all parameter sets of case C contain two closed-loop relationships, one positive feedback loop between X2 and X3, and one negative feedback loop between X2 and X5.
 1.
Case A: Uniform degradation rate (0.3), and constant signal propagation.
 2.
Case B: Uniform degradation rate (0.3), asymmetric signal branching and asymmetric coregulation.
 3.
Case C: Uniform degradation rate (0.3), negative and positive feedback loops.
Method parameters for case studies
The SS method  The ANN method  

α  β  X1  X2  X3  X4  X5  v _{ i }  θ  k  X1  X2  X3  X4  X5  
Case A  
X1  0.0  0.3            1.0  4.0  0.3           
X2  1.0  0.3  2.0          1.0  4.0  0.3  5.0         
X3  1.0  0.3    2.0        1.0  4.0  0.3    5.0       
X4  1.0  0.3      2.0      1.0  4.0  0.3      5.0     
X5  1.0  0.3        2.0    1.0  4.0  0.3        5.0   
Case B  
X1  0.0  0.3            1.0  4.5  0.3           
X2  1.0  0.3  1.5          1.0  4.5  0.3  7.0         
X3  1.0  0.3    2        1.0  4.5  0.3    6.0       
X4  1.0  0.3    3        1.0  4.5  0.3    4.0       
X5  1.0  0.3      2  3    1.0  4.5  0.3      4.0  6.0   
Case C  
X1  0.0  0.3            1.0  4.5  0.3           
X2  1.0  0.3  1.5    0.6    0.3  1.0  4.5  0.3  7.0    7.0    8.0 
X3  1.0  0.3    2        1.0  4.5  0.3    6.0       
X4  1.0  0.3    3        1.0  4.5  0.3    4.0       
X5  1.0  0.3      2  3    1.0  4.5  0.3      4.0  6.0   
The GRLOT method  
V  k  X1 n  X1 ki  X2 n  X2 ki  X3 n  X3 ki  X4 n  X4 ki  X5 n  X5 ki  
Case A  
X1  0.0  0.3                      
X2  1.0  0.3  1.5  1.1                  
X3  1.0  0.3      1.5  1.1              
X4  1.0  0.3          1.5  1.1          
X5  1.0  0.3              1.5  1.1  
Case B  
X1  0.0  0.3                      
X2  1.0  0.3  1.5  0.8                  
X3  1.0  0.3      2.5  0.5    
X4  1.0  0.3      1.5  0.8    
X5  1.0  0.3          1.5  0.8  2.5  0.5  
Case C  
X1  0.0  0.3                      
X2  1.0  0.3  1.5  0.8      3.5  0.1      1.5  2.0  
X3  1.0  0.3      2.5  0.5              
X4  1.0  0.3      1.5  0.8              
X5  1.0  0.3          1.5  0.8  2.5  0.5     
1.3.2 Case studies D and E
The following two GRN scenarios (D and E) were taken from the literature [15, 21]. These are depicted in Figure 2:

Case D (V): The 3-gene static GRN model describing case D was used elsewhere to discuss the influence of the external input parameter ϑ_{ i } on the dynamics of an ANN GRN model [15]. In this example ϑ_{ i } is interpreted as delaying the reaction kinetics. Figure 3 shows two dynamic gene expression behaviors obtained by small variations in this parameter. This 3-gene GRN model comprises positive and negative feedback regulation where products of gene X1 activate the expression of gene X2 and products of gene X2 stimulate the expression of genes X3 and X1. Gene X3 inhibits the expression of gene X1.

Case E (HS): The fifth GRN system is the 5-gene static GRN model used by Hlavacek and Savageau [21]. This model is frequently used to evaluate reverse-engineering algorithms in combination with the SS method [27, 33, 37]. The dynamic behavior of this model is strongly influenced by singularities allowed by the SS method. In this scenario, singularities can be caused by the inhibitory dependency between genes X5 and X3 when the concentration level of X5 reaches zero.
We use the two cases (D and E), which were specifically generated for the ANN and SS methods, respectively, to provide additional evidence for the idiosyncratic behavior of the investigated modeling methods.
1.3.3 Case study F
This biological GRN was included in this study to ensure that the results we obtain from our artificial networks can be transferred to real biological systems. It was constructed synthetically in yeast by Cantone et al. [19] to facilitate an in vivo assessment of various reverse-engineering and gene network modelling approaches, including approaches based on ordinary differential equations.
Cantone et al. present expression profiles of the network genes after a shift from glucose to galactose-raffinose-containing medium (the switch-on time series) and after a shift from galactose-raffinose to glucose-containing medium (the switch-off time series). In this study we used the first 100 minutes of these two data sets, excluding the first 10-minute interval during which the washing steps and subsequent medium shift are performed. After 100 minutes the biological system is perturbed, and Cantone et al. use time-delay terms to model this perturbation. We have not explored time-delay terms in our mathematical systems and so we do not model this perturbation.
The network includes a variety of regulatory interactions, thus capturing the behavior of larger eukaryotic gene networks on a smaller scale (Figure 2). It was designed to be minimally affected by endogenous genes and to transcribe its genes in response to galactose. While the yeast GRN appears relatively simple, it is actually quite articulated in its interconnections, which include regulator chains, singleinput motifs, and multiple feedback loops, which are generated by the combination of transcriptional activators and repressors.
2 Results and Discussion
As we performed many different experiments, we first clarify how the different stages of our study contribute to our final set of results before discussing them in detail.
2.1 Results
Figure 1 describes the study design we adopted to compare the three modeling methods. Here we briefly describe the inputs and outputs of each step along with the relevant result tables and figures.
Step 1: Construction of reference models
Input: Three 5-gene static GRN models (models A, B, C), comprising regulatory cascades, single-input modules, multiple-input modules and multi-component loops [35, 36]. Three models taken from the literature: a 3-gene static GRN model (D) [15], a 5-gene static GRN model (E) [21], and a 5-gene biological GRN (model F) synthetically generated in yeast [19].
Output: 13 dynamic GRN models:

Nine 5-gene dynamic GRN models based on the static models A, B and C. For each static model, three dynamic models were constructed using the modeling methods ANN, SS and GRLOT, respectively.

One 3-gene dynamic GRN model based on static model D, using the ANN method. The model parameters were taken directly from the literature [15].

One 5-gene dynamic GRN model based on static model E, using the SS method. The model parameters were taken directly from the literature [21].

Two 5-gene dynamic GRN models based on the switch-on and switch-off time series data sets presented in the literature [19].
Step 2: Generation of the artificial GRN expression data
Input: The 11 artificial GRN models from Step 1 (based on models A-E).
Output: 20 dynamic gene expression data sets:

11 sparse data sets based on simulating each GRN model with a single set of initial conditions.

9 detailed data sets from the first nine models of Step 1 (corresponding to the static GRN models A, B and C). Each data set consists of 5 time series, each created with different initial conditions.
Step 3: Reverse-engineering of dynamic GRN models from generated data
Input: The output from Step 2 (20 dynamic gene expression data sets), plus the output from Step 1 for model F only (the switch-on and switch-off data sets), giving a total of 22 data sets.
Output: 66 dynamic GRN models. Each of the 22 data sets is reverse-engineered using the ANN, SS and GRLOT methods, respectively.
Step 4: Verification of reverse-engineered GRN models
Input: 66 dynamic GRN models from Step 3.
Output: Calculation of the training predictive power, P_{ ver }, the inferential power, P_{ inf }, and qualitative and visual comparison of GRN features:

Execution of the 60 dynamic GRN models for models A-E from Step 3 to determine 60 P_{ ver } values, each reflecting how well the model is able to predict (fit) the data from which it was generated.

Calculation of the inferential power, P_{ inf }, for 20 of the 60 models by comparing the reverse-engineered model parameters to those of the reference models from Step 1. This is possible only for model pairs, from Step 1 and Step 3, that share the same modeling method.

Qualitative analysis of GRN features.

Visual analysis of the models generated from the switch-on and switch-off data sets for the biological network (model F), as described in Section 2.2.1.
Table 3. Reverse-engineering verification and validation results for A, B and C (columns give the data-generation method; sub-columns A, B, C give the network case)

                    ANN data                  SS data                   GRLOT data
                    A       B       C         A       B       C         A       B       C
Sparse data set
  P_inf             0.9620  0.9701  0.8562    0.9308  0.9994  0.9947    0.9546  0.4820  0.6046
  P_ver             0.9996  0.9797  0.9982    0.9998  0.9988  0.9973    0.9985  0.9941  0.9999
  P_val             0.9972  0.9870  0.9976    0.9953  0.8635  0.7878    0.9982  0.9927  0.9999
  ΔP_fit            0.0024  0.0073  0.0016    0.0045  0.1353  0.2095    0.0003  0.0014  0.0000
Detailed data set
  P_inf             0.9896  0.9992  1.0000    1.0000  0.9999  1.0000    0.6704  0.7070  0.6432
  P_ver             0.9998  0.9999  1.0000    1.0000  1.0000  1.0000    0.9983  0.9978  0.9987
  P_val             0.9997  0.9999  1.0000    0.9978  0.9882  0.9891    0.9978  0.9972  0.9996
  ΔP_fit            0.0001  0.0000  0.0000    0.0022  0.0118  0.0109    0.0005  0.0006  0.0009
Table 4. Reverse-engineering results with data-fitting scores only (columns give the data-generation method; rows give the modeling method used for reverse-engineering)

                                ANN data  SS data  GRLOT data
Sparse data
  to ANN model     P_ver        0.9925    0.9909   0.9926
                   P_val        0.9939    0.9907   0.9900
                   ΔP_fit       0.0014    0.0002   0.0026
  to SS model      P_ver        0.9825    0.9986   0.9955
                   P_val        0.6769    0.8822   0.6847
                   ΔP_fit       0.3056    0.1164   0.3108
  to GRLOT model   P_ver        0.9737    0.9672   0.9975
                   P_val        0.9712    0.9333   0.9970
                   ΔP_fit       0.0025    0.0339   0.0005
Detailed data
  to ANN model     P_ver        0.9999    0.9846   0.9975
                   P_val        0.9998    0.9789   0.9967
                   ΔP_fit       0.0001    0.0057   0.0008
  to SS model      P_ver        0.9916    0.9999   0.9981
                   P_val        0.8294    0.9917   0.8321
                   ΔP_fit       0.1622    0.0082   0.1660
  to GRLOT model   P_ver        0.9936    0.9934   0.9986
                   P_val        0.9935    0.9879   0.9982
                   ΔP_fit       0.0001    0.0055   0.0004
Table 5. Parameter matrix characteristics (within each cell, the x's indicate which of the three reverse-engineered models reproduced the feature; "-" indicates that the feature was not reproduced or is absent from the case; top block: sparse data, bottom block: detailed data)

                               Case A              Case B              Case C
Data generation method         ANN   SS    GRLOT  ANN   SS    GRLOT  ANN   SS    GRLOT
Sparse data
  Uniform degradation rate     x     xxx   xxx    x     xxx   xxx    xxx   xxx   xxx
  Constant signal propagation  x     xx    xx     -     -     -      -     -     -
  Asymmetric signal branching  -     -     -      xxx   xx    xxx    x     xx    xx
  Asymmetric coregulation      -     -     -      -     x     -      -     xx    -
  Negative feedback            -     -     -      -     -     -      xxx   xxx   xxx
  Positive feedback            -     -     -      -     -     -      xx    xxx   xxx
Detailed data
  Uniform degradation rate     x     xxx   xxx    xxx   xxx   xxx    xxx   xxx   xxx
  Constant signal propagation  x     xx    xx     -     -     -      -     -     -
  Asymmetric signal branching  -     -     -      xxx   xx    xxx    x     xx    xxx
  Asymmetric coregulation      -     -     -      xx    xxx   xx     x     xx    xxx
  Negative feedback            -     -     -      -     -     -      xxx   xxx   xxx
  Positive feedback            -     -     -      -     -     -      xxx   xxx   xxx
Step 5: Validation of reverseengineered GRN models
Input:

54 of the 60 GRN models from Step 3, corresponding to the models (A, B and C) derived from the nine sparse and nine detailed data sets using ANN, SS and GRLOT, respectively. Plus the 9 reference GRN models constructed in Step 1, corresponding to three ANN, three SS and three GRLOT models.

The 6 remaining GRN models from Step 3: three for each of models D and E.

The 3 switch-on models from Step 3 for model F.
Output:

54 P_{ val } values indicating how well the reverse-engineered dynamic GRN models function under new conditions (i.e., reproduce unseen data). The results of the validation experiments with the reverse-engineered dynamic models are summarized in Table 3 and Table 4.

By reproducing data sets based on the 5-gene Hlavacek model (model E), described with the SS method, we analyzed whether the ANN and GRLOT methods can reverse-engineer these dynamics without supporting regulation of the degradation process. Similarly, we analyzed the Vohradsky model (model D) to see whether small variations of the external input parameter ϑ_{ i }, which lead to large effects in ANN models, could be reverse-engineered into models based on the SS and GRLOT methods. These results are summarized in Figure 4.

Table 6, describing the results of perturbation experiments performed on the 3 switch-on data sets of model F.
Table 6. Perturbation data for the models generated for the Cantone model F (a dash indicates that no result was obtained)

                          Comp. to unperturbed network    Comp. to perturbed network
Gene    Perturbation      ANN       GRLOT     SS          ANN to GRLOT  ANN to SS  GRLOT to SS
CBF1    0.0045            1         1         1           0.951         0.461      0.337
        0.0135            0.656     0.556     0.556       0.949         0.431      0.345
        0.054             0.059     0.0397    0.0397      0.795         0.613      0.808
        0.216             0.0035    0.0022    -           0.0072        -          -
GAL4    0.0324            1         1         1           0.951         0.461      0.337
        0.0972            0.940     0.549     -           0.542         -          -
        0.3888            0.059     0.0395    -           0.0606        -          -
        1.5552            0.00025   0.0023    -           0.0052        -          -
SWI5    0.0075            1         1         1           0.951         0.461      0.337
        0.0225            0.992     0.994     0.893       0.952         0.310      0.222
        0.09              0.800     0.838     0.217       0.953         0.0672     0.0506
        0.36              0.180     0.221     -           0.953         -          -
GAL80   0.0221            1         1         1           0.951         0.461      0.337
        0.0663            0.541     0.498     0.371       0.930         0.555      0.545
        0.2652            0.0208    0.0354    0.0192      0.915         0.581      0.545
        1.0608            2.09e-05  0.0021    -           0.0022        -          -
ASH1    0.012             1         1         1           0.951         0.461      0.337
        0.036             0.939     0.935     0.780       0.951         0.471      0.342
        0.144             0.338     0.328     0.212       0.950         0.476      0.344
        0.576             0.0272    0.0263    0.037       0.945         0.471      0.338
2.2 Discussion
To establish that our results based on the artificial data sets are applicable to biological systems, we first discuss the results from reverse-engineering the biological data (model F). We then discuss scenarios where the same method was used for both data set creation and reverse-engineering: this allows us to check whether the methods are able to accurately recover the original parameter matrices used to generate the data, and whether recovering the parameter matrices is a necessary condition for generating accurate and realistic dynamics. Next, we highlight idiosyncrasies of the methods by discussing the results obtained when different methods were used for data set creation and reverse-engineering. Finally, before concluding our discussion by considering specific features of the network models, we describe the results of our qualitative comparisons.
2.2.1 Reverse-engineering biological data
Table 6 summarizes the results of the perturbation experiments performed on the reverse-engineered models. The ANN and GRLOT methods are again more in accord with each other than with the SS method, and the SS method fails more frequently. Perturbing different genes influences the network dynamics in different ways, some perturbations producing a large change in the overall network dynamics and some leaving them relatively unchanged. This is shown in the table by comparing the perturbed network dynamics to the unperturbed state. When comparing the predicted network dynamics across the different methods, the results indicate that good agreement is possible between the ANN and GRLOT methods, especially for the smallest perturbations, and sometimes even for the largest perturbations we explored. However, this is not always the case: there was little agreement between the methods when the GAL4 gene was perturbed. We believe these results show that the mathematical methods are able to generate reasonable solutions for biological data sets. In subsequent sections we further elucidate the characteristics of the methods using our more detailed artificial data sets.
2.2.2 Within-method estimation of model parameters
We examined the accuracy of model parameter estimation for the situation where the reference GRN model (constructed in Step 1) and the reverse-engineered GRN model are based on the same modeling method (ANN, SS or GRLOT). To assess how accurately the original parameter matrices were discovered, we used the inferential power measure, P_{ inf }, defined by Equation 13.
Table 3 shows the results we obtained from the within-method estimation of model parameters for the reverse-engineered models. First, we observe that although the ANN and GRLOT methods achieved relatively low P_{ inf } scores (compared to the SS method), they still performed relatively well in terms of predicting the training data, as measured by the P_{ ver } scores. The GRLOT method is particularly notable here: for case C on the sparse data set, even though the P_{ inf } value is only 0.6046, the P_{ ver } value is 0.9999, indicating very accurate replication of the model dynamics despite significant errors in the parameter matrices. Indeed, this model also makes highly accurate predictions when it is perturbed by increasing and decreasing inputs. This is in marked contrast to models based on the SS method, which performed relatively well at discovering the original parameters (high P_{ inf }) but made poor predictions (low P_{ val }) when the model was perturbed. This indicates a lack of robustness of the SS method due to its extreme parameter sensitivity. Compared to the SS method, models based on the ANN and GRLOT methods are much more robust to variations in their model parameters and are thus able to make more accurate predictions (higher P_{ val }) of model behavior.
The experiments with the detailed data sets confirmed these results. As expected, all methods perform better with the more detailed data set (providing more training data reflecting different initial conditions), and even the SS method improved the quality of its predictions, but it was still outperformed by the ANN and GRLOT methods.
2.2.3 Reverse-engineering data generated by different methods
Turning to the situation in which each method reconstructed (predicted) the data sets generated by all of the methods, we relied on the P_{ ver } measure, as the methods were unable to reverse-engineer the original parameter matrices in all cases. The results of these experiments are summarized in Table 4.
As expected, the quality of the models is lower when the methods predicted data created by one of the other methods. The SS method is conspicuous in its poor performance. In Table 3, the P_{ inf } values for the SS method indicate that the parameter matrix was discovered with very high precision (0.9994 for sparse data and 0.9999 for detailed data). However, the ΔP_{ fit } values, which indicate the performance of the models when making predictions about network behavior under changes in input, are very poor: the values for cases B and C are 0.1353 and 0.2095, respectively, for the sparse data and 0.0118 and 0.0109, respectively, for the detailed data. Bearing in mind that a ΔP_{ fit } value of 0.1000 or above indicates poor quality, with almost no visible relation to the original or expected dynamics, and that accurate models would require a ΔP_{ fit } value of about 0.0100 or less (see Section 4.5.1), it becomes clear that the ability of the SS method to make satisfactory predictions about network behavior is much lower than that of the ANN and GRLOT methods, even when highly sampled data of near-perfect accuracy are used (i.e., artificial data created by the SS method). This inaccuracy is due to the power terms in the SS method, which are highly sensitive to their parameters even when these are accurate to three or four decimal places. A decrease in model input corresponds to an increase in the degradation rate k_{ 1 } (or β_{ 1 }) of gene X_{ 1 }, as described in Section 4.4. Due to the nature of the dynamics, the k_{ 1 } increase is a smaller perturbation than the decrease, corresponding to the differences in the average values for k_{ 1 } = 0.5000 and k_{ 1 } = 0.1000, respectively. The individual data for input-increase and input-decrease predictions show that the SS method actually performed as well as the other two methods when the input was decreased; the problems arose when the input was increased.
The results therefore indicate that the SS models may be reasonably robust to small perturbations, but are highly sensitive to larger perturbations and consequently liable to lose all predictive ability.
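This sensitivity can be made concrete with a small numeric sketch (the parameter values below are hypothetical illustrations, not taken from the study): a ~1% error in the exponents of a power-law production term, of the kind used by the SS method, shifts the term's output far more than the same relative error in the weights of a saturating, ANN-style term.

```python
import math

def ss_term(x, g, alpha=1.0):
    """S-system-style production term: alpha * prod(x_j ** g_j)."""
    out = alpha
    for xj, gj in zip(x, g):
        out *= xj ** gj
    return out

def ann_term(x, w, theta=0.0, k=1.0):
    """ANN-style production term: saturating sigmoid of a weighted sum."""
    s = sum(wj * xj for wj, xj in zip(w, x)) + theta
    return k / (1.0 + math.exp(-s))

x = [2.0, 3.0]                               # hypothetical expression levels
true_p, est_p = [1.5, 2.0], [1.52, 2.02]     # ~1% parameter error

# Relative output error caused by the same parameter error in each method
ss_rel = abs(ss_term(x, est_p) - ss_term(x, true_p)) / ss_term(x, true_p)
ann_rel = abs(ann_term(x, est_p) - ann_term(x, true_p)) / ann_term(x, true_p)
```

Here `ss_rel` comes out orders of magnitude larger than `ann_rel`, because errors in unrestricted exponents are amplified multiplicatively, while the sigmoid saturates.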
It is also worth noting that the SS method performed worse at reverse-engineering well-performing GRN models from data generated by the GRLOT and ANN methods than the GRLOT and ANN methods did at reverse-engineering models from data created by the SS method. The ANN and GRLOT methods also performed slightly worse when reverse-engineering from SS data than from data created by the GRLOT or ANN method, respectively.
2.2.4 Qualitative comparison of network features
The results of the qualitative comparison between the methods, as described in Section 4.5.3, are summarized in Table 5. They show that the GRN models reverse-engineered from the detailed data sets had a higher similarity to the original models, in terms of their underlying network features, than models based on the sparse data sets. This is the opposite of what is observed on the basis of P_{ ver } values, where the reverse-engineered models score higher with the sparse data than with the detailed data sets. This result indicates the problems that arise in discovering the underlying network features when data availability is low.
In the following we consider different network features individually:
Uniform degradation rate
With reference to the sparse data (Table 5, top block of rows), the results show that the ANN method was highly successful in discovering uniform degradation rates in each of the three cases and with each data set. However, the other two methods performed better in predicting data generated by the SS or GRLOT method than by the ANN method. Even with detailed data, the SS and GRLOT methods failed to discover the uniform degradation rate as simulated by the ANN method, especially in case A, although all methods discovered this feature when it was generated by the SS and GRLOT methods.
Constant signal propagation
Only case A has the constant signal propagation feature. There is a clear difference between the methods: the ANN method was able to reproduce its own data, but not that of the SS or GRLOT method, whereas the SS and GRLOT methods could reproduce their own and each other's data, but not the data generated by the ANN method.
These findings can be explained by the different ways in which the three methods process inputs. For example, constant signal propagation from one gene to another described with uniformly weighted input values (as realized by the ANN method) cannot, in general, be reproduced by constant signal propagation defined with uniformly exponentiated input values (SS method).
Asymmetry in signal branching and coregulation
Asymmetric signal branching and coregulation are features of case studies B and C. For sparse data, the methods were less successful at discovering asymmetric coregulation than asymmetric signal branching, but for detailed data both features were modeled with similar success rates. As with constant signal propagation, the different approaches to input processing of the three methods seem to explain this observation.
For sparse data, the asymmetric coregulation feature was not discovered at all when reverse-engineering from data generated by the ANN and GRLOT methods. In case B, coregulation involves the interplay between two genes, X3 and X4, that control the expression of gene X5. Looking into the details of the models (data not shown here), the ANN and GRLOT models either described coregulation with interchanged asymmetry or incorrectly contained an inhibitory dependency on gene X5. The sparse data set was not sufficient to ensure that this characteristic was reflected in the reverse-engineered models; for detailed data the methods were more successful, and in particular the ANN method could correctly reproduce coregulation for all the data sets.
Positive and negative feedback loops
These features are found only in the static GRN of case C. The methods performed relatively well in reverse-engineering these features. Specifically, all models reverse-engineered from one time series (Table 5, top block of rows) contain the positive and negative feedback loops of network case C, with the exception of one GRLOT model reverse-engineered from ANN data. This GRLOT model discovered an extra negative feedback loop instead of the positive feedback loop.
All models reverse-engineered from the detailed data rediscovered the feedback loops. Feedback loops play a key role in the model dynamics of network case C and are therefore contained in nearly all reverse-engineered models.
2.2.5 Considering the different case studies
This section analyzes the different case studies in some detail.
Cases A, B, and C
Generally, the increasing complexity of the static GRN models from A to C had no significant impact on the predictive power of the reverse-engineered models on the training data (P_{ ver } values). This, however, might be different in more complex reverse-engineering settings where the network topologies are not known.
The Vohradsky case D: Oscillatory dynamics
While models based on all three methods were able to obtain high P_{ ver } values when reproducing non-oscillatory dynamics, the SS and GRLOT models reverse-engineered from the time series did not perform as well for oscillatory dynamics. The data sets used for these experiments are shown in the left and right panels of Figure 3.
To better understand the data fitting verification method (based on the training data), it is useful to compare the time series shown in Figure 6 with the P_{ ver } values given in Figure 4: the P_{ ver } values are still relatively high for the SS and GRLOT methods, despite their failure to reproduce the oscillatory dynamics seen in the left diagram of Figure 6.
The Hlavacek case E: Singularities
The same conclusions can be drawn from the results obtained with the Hlavacek case. The SS time series used here was also strongly influenced by unique characteristics of the SS method. First, the underlying network model of the test data set [38] involves regulation of the degradation process of X3. More important, though, is the fact that singularities have a significant impact on the dynamics generated with this test model. As the P_{ ver } values in Figure 4 show, this time series could be reproduced only by the original SS method, whereas the ANN and GRLOT methods failed.
3 Conclusions
In this study we examined three systems of ordinary differential equations commonly applied in the modeling and reverse-engineering of GRNs. Our goal was to assess how the characteristics of the individual mathematical methods affect the quality of the reverse-engineered dynamic GRN models. Our results suggest that the methods are clearly not equivalent and interchangeable and that there are considerable differences in the way they process inputs. In particular, the SS method suffers from a relatively high degree of parameter sensitivity because it processes multiple inputs as products of power terms with unrestricted exponents. Thus, for the SS method, even when the original parameter matrix used to generate the artificial test data sets was discovered with high accuracy, the method sometimes failed to adequately reproduce the network dynamics.
Different methods seem suitable for modeling different types of dynamics or network features. For example, oscillatory dynamics may be modeled with the ANN method by choosing appropriate parameter values. However, this may not be possible with the SS and GRLOT models, which do not provide straightforward mechanisms for modeling oscillations.
Models reverse-engineered from undersampled data, based on only a single experiment, are unlikely to discover the real parameter matrix and should be regarded with considerable caution. Even when multiple, well-sampled time series data are available, it does not seem wise to rely on models based on just a single modeling method, as it is likely to have some bias. The SS method, in particular, is prone to generating unrealistic dynamics, especially for relatively large input perturbations, where our experiments showed its predicted output to be very inaccurate (see Section 2.2.2). Even for very small gene networks, considerable amounts of highly time-resolved (many sampling points) and highly condition-resolved (based on multiple perturbations) data may be necessary to produce reliable dynamic GRN models. Furthermore, as model complexity (number of genes and regulatory interactions) increases, the computational effort required to reverse-engineer dynamic GRN models becomes nontrivial and may require nonstandard computing solutions such as clusters, supercomputers or other large-scale computing resources.
In the experiments we performed, SS-based dynamic GRN models generally yielded significantly less accurate predictions than the ANN and GRLOT methods. While some features such as positive and negative feedback loops were correctly modeled in almost all situations we studied, it was often the case, especially when using the sparse data sets, that the methods failed to reverse-engineer features such as uniform degradation rates, constant signal propagation, asymmetry in signal branching and asymmetry in coregulation. Even with the detailed data sets, these features could not be consistently reverse-engineered. Of the three methods, perhaps the ANN method deserves more attention due to its reliability and the relatively small number of parameters it requires. Indeed, although in this paper we have not presented results for the amount of computation required for each method, it is important to note that if the amount of computational power is limited, then the ANN method generally produces the most accurate and robust models for the least computation.
4 Methods
Before we describe the experimental design we used to compare the GRN modeling methods, we introduce some terminology.
4.1 Terminology
Static GRN model
The static model of a GRN represents the regulatory interaction structure or topology of a GRN. It describes whether or not there is a regulatory relationship between genes and, if one exists, what the nature of the interaction is. The interaction nature is a binary property. It describes the influence of interacting genes, i.e., whether a gene has an activating (upregulating, inducing) or inhibiting (downregulating, repressing) influence on another gene. A static GRN model does not quantify gene expression amounts or regulatory influence. Static GRN models are easy to depict diagrammatically as a graph or network in which a node represents a gene and an edge represents a regulatory interaction between two genes. The type and direction of an interaction is typically depicted symbolically: an arrowhead represents activation, a bar represents inhibition. A diagrammatic depiction of the static GRN models used in our study is shown in Figure 2.
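In code, a static GRN model of this kind reduces to a signed adjacency matrix. The sketch below uses a hypothetical 3-gene topology (not one of the study's cases) to illustrate the representation:

```python
# Hypothetical 3-gene static GRN: entry [i][j] is +1 if gene j activates
# gene i, -1 if gene j inhibits gene i, and 0 if there is no interaction.
GENES = ["X1", "X2", "X3"]
STATIC_GRN = [
    [0,  0, -1],   # X1 is inhibited by X3
    [1,  0,  0],   # X2 is activated by X1
    [0,  1,  0],   # X3 is activated by X2
]

def regulators(i):
    """Return (regulator gene, sign) pairs for gene i."""
    return [(GENES[j], STATIC_GRN[i][j])
            for j in range(len(GENES)) if STATIC_GRN[i][j] != 0]
```

For example, `regulators(0)` reports that X1 has a single inhibitory regulator, X3. Note that the matrix captures only topology and interaction signs, not quantitative influence, matching the definition above.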
Dynamic GRN model
A dynamic GRN model refers to a representation that captures both the regulatory interaction structure (including the nature of the interactions) as well as quantitative rules that describe how gene expression amounts vary (in response to perturbations) over time.
Dynamic gene expression data
In general, dynamic gene expression data represent the time-variant expression levels (abundance of functional gene products: RNA, proteins) of the genes comprising a GRN. In our study we do not use dynamic gene expression data from real biological experiments. Instead, we generate dynamic gene expression data from dynamic GRN models with manually defined model parameters and (initial) expression levels.
Reverse-engineering of GRNs
Reverse-engineering of a GRN refers to the process of constructing (by automated means) a dynamic GRN model from dynamic gene expression data. In particular, the reverse-engineering of a dynamic GRN model requires the reconstruction (from data) of the underlying regulatory interaction structure as well as of the parameters governing the quantitative regulatory influence among genes. We simplified the reverse-engineering problem somewhat by not attempting to reverse-engineer regulatory interactions that are absent from the static GRN models used as case studies (Figure 2).
4.2 Model simulation: Generating the training data sets
Table 7. Initial model inputs

Starting set    X1    X2    X3    X4    X5
1               0.8   0.1   0.1   0.1   0.1
2               0.8   0.1   0.1   0.3   0.5
3               0.5   0.5   0.2   0.1   0.1
4               0.4   0.8   0.2   0.2   0.2
5               0.5   0.2   0.8   0.8   0.2
The dynamics generated by the different modeling methods for the same GRN differ considerably. There are two main reasons for this. First, we chose model parameters producing different dynamics to provide a variety of expression profiles to facilitate the comparison of the three modeling methods. Here, the idea is to assess to what extent the reverse-engineered dynamic GRN models are able to reproduce the dynamics generated by the reference models. This relates to the model verification Step 4 in our study design. Second, the fact that the different reference models generate different dynamics is also explained by idiosyncratic characteristics of the modeling methods.
1. Sparse data sets, comprising the dynamic gene product concentrations of all genes but based on only a single set of initial conditions, namely starting set 1 (see Table 7).

2. Detailed data sets, based on five sets of initial conditions, namely starting set 1 to starting set 5.

3. All time series consist of 200 data points, each data point describing the gene product concentration X_{ i }(t_{ k }) at time point t_{ k } for each gene i = 1, ..., n.
The model parameters for the reference models of the Vohradsky model (case D), the Hlavacek model (case E), and the Cantone model (case F) were taken directly from the literature [15, 19, 38].
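The data-generation step can be sketched as follows. The rate law, parameter values and integrator below are illustrative stand-ins (a forward-Euler scheme and a sigmoidal, ANN-style rate law for a hypothetical 2-gene network), not the study's reference models:

```python
import math

def ann_rhs(x, W, theta, k1, k2):
    """Illustrative ANN-style rate law: sigmoidal production driven by
    weighted regulatory inputs, minus linear degradation."""
    dx = []
    for i in range(len(x)):
        s = sum(W[i][j] * x[j] for j in range(len(x))) + theta[i]
        dx.append(k1[i] / (1.0 + math.exp(-s)) - k2[i] * x[i])
    return dx

def simulate(x0, n_points=200, dt=0.05):
    """Generate one time series of n_points samples by forward-Euler
    integration of a hypothetical 2-gene network (parameters are ours)."""
    W = [[0.0, -2.0], [3.0, 0.0]]        # X2 inhibits X1; X1 activates X2
    theta, k1, k2 = [1.0, -1.0], [1.0, 1.0], [0.5, 0.5]
    x, series = list(x0), [list(x0)]
    for _ in range(n_points - 1):
        dx = ann_rhs(x, W, theta, k1, k2)
        x = [xi + dt * di for xi, di in zip(x, dx)]
        series.append(list(x))
    return series

# Sparse data: a single initial condition; detailed data: several.
sparse = simulate([0.8, 0.1])
detailed = [simulate(s0) for s0 in ([0.8, 0.1], [0.5, 0.5], [0.2, 0.8])]
```

Each call to `simulate` yields one 200-point time series, mirroring the sparse (one starting set) versus detailed (several starting sets) distinction above.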
4.3 Reverse-engineering of dynamic GRN models from time-series data
Many approaches to the reverse-engineering of GRNs exist [12]. In our study, reverse-engineering came down to a parameter estimation or optimization problem (Step 3 in our study design). We adopted an evolutionary algorithm approach to address this task [20]. Different reverse-engineered dynamic GRN models may be generated with different optimization techniques. However, we are confident that the parameter optimization technique we adopted was successful because, for the ANN and SS methods, we could demonstrate that the parameters of the reference models could be reverse-engineered with high accuracy (see the P_{ inf } values in Table 3). Furthermore, the validation results (predictive power on unseen data) for some of the dynamic GRN models we reverse-engineered with the evolutionary algorithm technique show high levels of accuracy. Details of the optimization approach are discussed below.
4.3.1 The parameter optimization problem
The task is to find a parameter vector w that minimizes the deviation between the model output and the observed data, i.e., to minimize the sum-of-squares objective

$$f(w)=\sum\limits_{k=1}^{N}\sum\limits_{t=1}^{T}{\left({X}_{k}(t)-{\hat{X}}_{k}(t)\right)}^{2}$$

where
X_{ k } (t) denotes the experimentally observed gene product concentration level of gene k at time point t.
${\hat{X}}_{k}(t)$ denotes the gene product concentration level of gene k at time point t predicted by the dynamic GRN model.
N denotes the number of genes in the network and T the number of sampling points of observed data.
This approach is based on the assumption that the minimum of function f corresponds to the set of parameters (i.e., parameter matrix) of the dynamic GRN model for which the model's outputs optimally fit the experimental data [28].
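Given the definitions above, the function f can be sketched directly in code (a minimal sketch; the function and variable names are ours):

```python
def fitness(X_obs, X_pred):
    """Sum of squared deviations between observed and model-predicted
    gene product concentrations, accumulated over all T sampling points
    and N genes; smaller values mean a better fit."""
    T, N = len(X_obs), len(X_obs[0])
    return sum((X_obs[t][k] - X_pred[t][k]) ** 2
               for t in range(T) for k in range(N))

# A model that reproduces the observations exactly has fitness 0;
# any deviation adds its squared error (toy data for illustration).
obs = [[0.8, 0.1], [0.7, 0.3], [0.6, 0.5]]
pred = [[0.8, 0.2], [0.7, 0.3], [0.6, 0.5]]
```

Here `fitness(obs, pred)` is 0.01, the squared error of the single deviating value, while `fitness(obs, obs)` is exactly zero.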
We performed optimization using evolutionary algorithms [20, 39-41], which are stochastic search methods that mimic, at a highly abstract level, biological evolution and sexual reproduction. Each evolutionary algorithm works with a population of points in the problem space, where each point is represented by an individual. Transformations of the individuals tend to concentrate them in more favorable areas of the search space, i.e., areas of high evolutionary fitness. When all or most individuals are located within the same area, a local (and possibly global) optimum of the search space has been found.
Our approach was realized as follows. Individuals are represented by real-valued vectors, and each value can evolve within a defined variable-dependent interval. The different genetic operators acting on these vectors ensure that the values remain within the specified intervals. First, we generated a random initial set, or population, of individuals. Individuals are generated by assigning random values to the unknown model parameters of the differential equations representing the dynamic GRN models (in the three modeling formalisms ANN, SS and GRLOT). A simulation was then performed for each individual and used to evaluate its fitness by comparing how well the simulated model output matches the time-series expression data (see Equation 5). The individuals were then ranked according to their fitness, and in the next step we selected individuals for genetic operations, including cloning, recombination and mutation. These processes of selection, recombination and evaluation were repeated until a designated termination criterion was met, such as the attainment of an acceptable fitness level or a maximum number of generations.
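A minimal version of this loop can be sketched as follows. The population size, mutation scale and truncation-selection scheme are our illustrative choices, not the study's exact operators:

```python
import random

def evolve(error, bounds, pop_size=30, generations=80, seed=1):
    """Minimal real-valued evolutionary algorithm: individuals are
    parameter vectors confined to per-variable intervals; each
    generation keeps the fitter half (lower error) and refills the
    population with mutated, clipped copies of those parents."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=error)                      # lower error = fitter
        parents = pop[:pop_size // 2]
        children = [[min(hi, max(lo, v + rng.gauss(0.0, 0.05 * (hi - lo))))
                     for v, (lo, hi) in zip(p, bounds)]
                    for p in parents]
        pop = parents + children
    return min(pop, key=error)

# Toy objective: recover the hypothetical parameter vector (1.0, -0.5).
target = [1.0, -0.5]
error = lambda w: sum((wi - ti) ** 2 for wi, ti in zip(w, target))
best = evolve(error, bounds=[(-2.0, 2.0), (-2.0, 2.0)])
```

In the study's setting, `error` would be the data-fitting objective of Equation 5 evaluated by simulating the candidate model; the toy quadratic here simply keeps the sketch self-contained.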
4.3.2 The two-phase reverse-engineering strategy
In this study, we applied a two-phase strategy to reverse-engineer dynamic GRN models from the time-series gene expression data created in Step 2 from the reference models. We refer to the first phase as the bottom-up phase and to the second as the top-down phase. We also employ the notions of a network model, denoted by m, and a node model, denoted by m_{ k }, to structure the reverse-engineering problem [42].
The number of parameters to estimate in all three modeling methods ANN, SS and GRLOT grows as O(N^{2}), where N is the number of genes in the GRN. This makes it very difficult to reverse-engineer large-scale GRN models. Recently, a variation of the "network" model of reverse-engineering mentioned here was further developed, based on the ANN method, by Vohradsky and colleagues [43, 44]. Even the reverse-engineering of relatively small GRNs, as presented in this study, turns out to be challenging. The advantage of node models, however, is that each requires only $1/N$ of the parameters required for the network model of N genes. Thus, a considerable complexity reduction can be achieved by decomposing the parameter optimization problem of a GRN into N smaller problems, one per node model, each of complexity O(N). This problem decomposition approach was proposed as a step-by-step strategy by Maki and colleagues [45] and successfully applied elsewhere [27, 33, 46].
Each node model approximates the rate of change of a single gene product as a function of the observed data:

$$\frac{d\hat{X}_k}{dt} = f_k(X(t), w_k)$$

where

$\frac{d\hat{X}_k}{dt}$ denotes the approximated gene product concentration change of gene k,

X(t) denotes the concentration levels of the observed gene product concentrations at time t, as provided by the dynamic gene expression data, and

w_{ k } is a vector representing the parameters of the node model m_{ k }.

The network model, in contrast, feeds its own estimates back into the equations:

$$\frac{d\hat{X}}{dt} = F(\hat{X}(t), W), \qquad \hat{X}(0) = X(0)$$

Here $\hat{X}(t)$ describes the approximated concentration levels of the nodes (representing gene products) in the network and X(0) specifies the initial condition or state of the network model (as obtained from the time-series data).
In the bottom-up phase of our two-phase approach, we estimated the parameters w_{ k } of each individual node model, m_{ k }, independently of the parameters of the other node models. Subsequently, in the top-down phase, we reverse-engineered (optimized) the parameters, W, of the entire network model, m, taking the results of the bottom-up phase as a starting point.
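The operational difference between the two kinds of model can be sketched with a simple Euler integration; the linear rate functions below are hypothetical stand-ins for any of the three formalisms:

```python
def simulate_node(rate_k, observed, x0, dt):
    """Node model: the rate function is driven by the OBSERVED trajectories
    of all genes, as in the bottom-up phase."""
    xs = [x0]
    for state in observed[:-1]:
        xs.append(xs[-1] + dt * rate_k(state))
    return xs

def simulate_network(rates, x0, steps, dt):
    """Network model: each rate function sees the model's OWN current
    estimates, so errors can propagate, as in the top-down phase."""
    state = list(x0)
    traj = [state[:]]
    for _ in range(steps):
        state = [x + dt * f(state) for x, f in zip(state, rates)]
        traj.append(state[:])
    return traj

# hypothetical 2-gene example with linear rate functions
rates = [lambda s: -0.5 * s[0],          # gene 1: pure degradation
         lambda s: s[0] - 0.3 * s[1]]    # gene 2: activated by gene 1
traj = simulate_network(rates, [1.0, 0.0], steps=10, dt=0.1)
```

Because `simulate_network` consumes its own output, a network model assembled from independently fitted node models can drift away from the data the node models were fitted to, which is what motivates the top-down phase.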
Bottom-up phase
$$E(m_k) = \sum_{t=1}^{p} \left( \hat{X}_{kt} - X_{kt} \right)^2 \qquad (9)$$

This function corresponds to the fitness of the kth node model, where ${\hat{X}}_{kt}$ is the calculated gene expression level of gene k and X_{ kt } is the given expression level of gene k at time t from the data set D.
With several repeated iterations of the bottom-up process, the intention was to find one or more optimal parameter sets w_{ k } that produce small error values according to Equation 9. By combining the fittest of the N identified node models m_{ k }, it is then possible to construct a self-contained network model, m, which can simulate the dynamic behavior of the network independently of the gene expression matrix D. Consequently, and in contrast to the node models m_{ k }, a network model, m, is capable of simulating or predicting scenarios (gene product concentrations) that are not captured by the experimental data.
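A minimal sketch of such a node-model error function, assuming a plain sum of squared differences over the p sampling points:

```python
def node_error(simulated_k, observed_k):
    """Error of node model k: sum of squared differences between the
    simulated and observed expression levels of gene k over all p
    sampling points of the data set D."""
    assert len(simulated_k) == len(observed_k)
    return sum((xh - x) ** 2 for xh, x in zip(simulated_k, observed_k))

# hypothetical trajectories for one gene at p = 3 sampling points
err = node_error([1.0, 2.0, 3.0], [1.0, 1.5, 3.5])
```

In the evolutionary algorithm this value (or a normalized variant of it) is what each individual's fitness is derived from.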
Top-down phase
As already discussed, it is possible to construct a network model, m, by merging the node models, m_{ k }, found in the bottom-up phase. At this stage, however, the network model has not yet learned to work as a system; it is unaware of the interplay among the individual node models. Since the network model no longer derives its input from a static gene expression data matrix, D, but instead estimates the concentration levels of each network node itself, it can show dynamics entirely different from those generated by the individual node models. The reason for this discrepancy lies in error propagation: even small deviations at the beginning of a simulation can cause dramatic fluctuations at later stages. For this reason, a network model constructed from node models is not the final result but rather a good starting point for the global parameter optimization.
4.3.3 The parameter sets used
To identify optimal parameter sets for the different gene expression models, we applied an evolutionary algorithm in which the genotype is directly based on real-valued vectors [41]. The genetic operators applied in our study are BLX-α crossover [47], with a crossover rate of 0.6, and real-number creep mutation, with a mutation rate of 0.1. Individuals were selected for the generation changes by roulette-wheel selection, with probabilities proportional to the positions obtained in a rank-based fitness assignment. Each population contained 500 individuals, and the number of generations was 300. Each parameter optimization, for node models and for network models, was repeated five times. This reverse-engineering strategy was able to reproduce the original parameter matrices precisely (often to more than 4 decimal places) in situations where the modeling method of the reference model (Step 1) and of the reverse-engineered model (Step 3) were identical.
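The two operators named above can be sketched as follows. The α value of 0.5 is an assumption (the text does not state it), as is the creep step size; both are illustrative only:

```python
import random

random.seed(3)

def blx_alpha(parent_a, parent_b, alpha=0.5):
    """BLX-alpha crossover [47]: each child gene is drawn uniformly from
    [min - alpha*d, max + alpha*d], where d = |a_i - b_i|."""
    child = []
    for a, b in zip(parent_a, parent_b):
        lo, hi = min(a, b), max(a, b)
        d = hi - lo
        child.append(random.uniform(lo - alpha * d, hi + alpha * d))
    return child

def creep_mutate(genome, bounds, step=0.05):
    """Real-number creep mutation: nudge one randomly chosen gene by a small
    uniform step, clipped to its variable-dependent interval."""
    i = random.randrange(len(genome))
    lo, hi = bounds[i]
    mutated = genome[:]
    mutated[i] = min(hi, max(lo, mutated[i] + random.uniform(-step, step) * (hi - lo)))
    return mutated

child = blx_alpha([0.0, 1.0], [1.0, 1.0])
m = creep_mutate([0.5], [(0.0, 1.0)])
```

Unlike simple averaging, BLX-α can place a child gene outside the interval spanned by its parents, which helps the population retain diversity.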
For consistency, we used the exact same approach (i.e., the same evolutionary algorithm parameters, such as population size and number of generations) for all the reverse-engineering experiments. The only exception was the Cantone model, which we worked on at a time when our usual infrastructure, described in Section 4.6, was not available; instead we used a similar evolutionary algorithm on the Grid'5000 infrastructure [48].
The parameter space searched for the three methods was defined as follows:

The ANN method: w_{ ij } from -15.0 to 15.0, v_{ i } from 0.0 to 3.0, ϑ_{ i } from 0.0 to 7.0, and k_{ i } from 0.0 to 2.0.

The SS method: g_{ ij } and h_{ ij } from -3.0 to 3.0, and α_{ i } and β_{ i } from 0.0 to 15.0.

The GRLOT method: I_{ j } from -5.0 to 5.0, A_{ k } from 0.0 to 7.0, Ki_{ j } and Ka_{ k } from 0.0 to 40.0, and k_{ i } from 0.0 to 2.0.
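The search intervals above can be collected into a small bounds table from which random initial individuals are drawn. The symmetric negative lower bounds for w_{ ij }, g_{ ij }, h_{ ij } and I_{ j } are assumed here, and the dictionary layout is purely illustrative (in the models these parameters are matrices, not scalars):

```python
import random

# search intervals (lower, upper) for each parameter family
BOUNDS = {
    "ANN":   {"w": (-15.0, 15.0), "v": (0.0, 3.0), "theta": (0.0, 7.0), "k": (0.0, 2.0)},
    "SS":    {"g": (-3.0, 3.0), "h": (-3.0, 3.0), "alpha": (0.0, 15.0), "beta": (0.0, 15.0)},
    "GRLOT": {"I": (-5.0, 5.0), "A": (0.0, 7.0), "Ki": (0.0, 40.0), "Ka": (0.0, 40.0),
              "k": (0.0, 2.0)},
}

def random_individual(method):
    """Draw one random value per parameter family, within its interval."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in BOUNDS[method].items()}

ind = random_individual("SS")
```

The same bounds are also what the genetic operators clip against, so every individual stays inside the defined search space throughout the run.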
4.4 Model validation (Step 5)
Finally, we performed a small number of experiments in which we first altered the initial conditions of the manually constructed reference models (Step 1), simulated these models to generate dynamic gene expression data (serving as unseen validation data), and then investigated how the reverse-engineered dynamic GRN models (Step 3) behaved when identical perturbations were applied.
In all models, we varied the dynamics of gene product X1 by modifying the degradation rate k_{1} in the GRLOT and ANN models and the β_{1} value in the SS models. This manipulation can be viewed as a variation of the network input or network stimulus, since in all the case study models X1 did not depend on the products of other genes.
where ${\hat{X}}_{kt}$ and X_{ kt } are the simulated and observed mRNA levels, respectively, at time t of molecule k, N is the number of genes in the network, and p is the number of sampling points of the observed data.
4.5 Measures to verify and validate the reverseengineered GRN models
Wessels and coworkers [28] defined measures for assessing the accuracy of reverseengineered GRN models. Following their approach, we used the following measures (see also Step 4 and 5 of our study design):

Predictive power, P_{ ver }, on training data. This measures how well a reverse-engineered dynamic GRN model reproduces the dynamic gene expression data from which it was generated. The steps involved are: (a) create training data from the reference models (Step 2); (b) reverse-engineer dynamic GRN models from the training data (Step 3); (c) execute or simulate the reverse-engineered models (resulting in predicted dynamic gene expression data) and calculate P_{ ver } by comparing the training data with the predicted data (Step 4).

Predictive power, P_{ val }, on unseen test data. This measures how well a reverse-engineered dynamic GRN model reproduces unseen dynamic gene expression data generated by the reference models under initial conditions different from those used to generate the training data. The steps involved are: (a) rerun the reference models (Step 1) under altered initial conditions to generate unseen test data; (b) apply the same changed initial conditions to the dynamic GRN models generated in Step 3, rerun the models under these conditions to predict dynamic gene expression data, and calculate P_{ val } by comparing the unseen test data with the predicted data (Step 5).

Predictive power, P_{ fit }, for data fitting. We use P_{ fit } when referring, in a general way, to the predictive power of fitting a predicted data set to another (training or test) data set.

Inferential power, P_{ inf }, measures to what extent the parameters (representing GRN regulatory structure and quantitative influence) of a reverse-engineered dynamic GRN model (Step 3) correspond to the parameters of a reference model (Step 1). We also use terms like matrix correlation to refer to P_{ inf }, as its calculation involves comparing the matrices representing the parameters of the reference (Step 1) and reverse-engineered (Step 3) dynamic GRN models.

Qualitative comparison, Q_{ com }, refers to the extent to which a reverse-engineered GRN model corresponds with a reference model in terms of certain GRN features.
4.5.1 Data fitting and predictive power
where X_{ kt } denotes an experimentally observed gene product concentration level at time t of gene k and ${\hat{X}}_{kt}$ denotes a corresponding gene product concentration level estimated or predicted by the dynamic GRN model. N denotes the number of genes in the GRN and p the number of sampling points of the dynamic gene expression data.
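The RMS difference that the table below relates P_{ fit } values to can be computed as follows; this is a sketch of the RMS part only (the exact P_{ fit } formula of Wessels and coworkers [28] is not reproduced here):

```python
import math

def rms_difference(observed, predicted):
    """Root-mean-square difference between observed and predicted expression
    matrices, taken over all N genes (rows) and p sampling points (columns)."""
    flat = [(x, xh) for row_o, row_p in zip(observed, predicted)
                    for x, xh in zip(row_o, row_p)]
    return math.sqrt(sum((x - xh) ** 2 for x, xh in flat) / len(flat))

# hypothetical 1-gene, 2-sampling-point example
d = rms_difference([[1.0, 2.0]], [[1.0, 4.0]])
```

A perfect reproduction of the data gives an RMS difference of 0, corresponding to a P_{ fit } of 1.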
Meaning of P_{ fit }values
P_{ fit }value  Max. RMS diff. 

0.40  244.0 
0.75  76.9 
0.92  8.47 
0.97  2.34 
0.992  0.80 
0.997  0.42 
0.9992  0.13 
0.9997  0.067 
P_{ fit } values over 0.99 indicate dynamics that start to converge with those of the original model, and a P_{ fit } value over 0.999 indicates a highly accurate GRN model.
4.5.2 Matrix correlation: Inferential power
P_{ inf } is a deeper measure than P_{ fit } and its variants P_{ ver } and P_{ val } [28]: it describes more fundamental properties (the model parameters) of the process that generates the data on which P_{ fit } is calculated. In our study, we use P_{ inf } to compare the parameter matrix, W, of the reference models (used to generate the training and test data) with the parameter matrix, $\hat{W}$, of the reverse-engineered models.
where ρ denotes the Pearson product-moment correlation coefficient between W and $\hat{W}$.
P_{ inf } from Equation 13 assumes values from the unit interval; a value of 1 indicates an exact inference (estimation) of the model parameters. If there is an exact match between the parameter matrices of the reverseengineered and the reference GRN model, then the behavior of the two networks is identical under all conditions.
Since the three methods ANN, SS and GRLOT use different numbers of parameter values, we can only calculate P_{ inf } in cases where the method used for reverse-engineering equals the method used for generating the training data. In all other cases we make a qualitative estimate of P_{ inf }.
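The correlation between the two parameter matrices can be sketched as follows. Mapping the coefficient into the unit interval via the absolute value is an assumption made here; Equation 13 itself is not reproduced in this excerpt:

```python
import math

def pearson(xs, ys):
    """Pearson product-moment correlation coefficient of two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def p_inf(W, W_hat):
    """Inferential power: correlation between the flattened parameter matrix
    of the reference model (W) and of the reverse-engineered model (W_hat)."""
    flat_w = [v for row in W for v in row]
    flat_wh = [v for row in W_hat for v in row]
    return abs(pearson(flat_w, flat_wh))

# hypothetical 2 x 2 parameter matrices; W_hat is a scaled copy of W
W = [[1.0, -2.0], [0.5, 3.0]]
W_hat = [[2.0, -4.0], [1.0, 6.0]]
score = p_inf(W, W_hat)
```

Note that a correlation-based measure rewards matrices whose entries co-vary, so a uniformly rescaled matrix still scores 1 even though its absolute values differ.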
4.5.3 Qualitative comparison
Q_{ com } refers to the similarity of a model with the true underlying system in terms of network features.
Primarily, network features refer to the network connectivity, which captures the topology of a network and its connection "logic". In this study, however, the network structure is given, so that only features such as the type of an effect (inhibitory or stimulatory) and its degree of influence need to be deduced or estimated. Network features are represented by the model's parameters. Since the parameters of the three mathematical methods cannot be compared to each other directly, we determine Q_{ com } by means of a qualitative comparison.
The qualitative analysis of the derived model parameters is performed according to the characteristics listed in Section 1.3.1. For example, when manually constructing the reference GRN models, we chose uniform parameter values to define degradation rates. Based on this and other characteristics described in Section 1.3.1, it is possible to estimate the accuracy of each model.
In this study, the characteristic uniform degradation rate is present in the reverse-engineered parameter matrices if all degradation rates within a matrix lie within the interval [0.2, 0.4]; constant signal propagation is present when the ratio between the weakest and the strongest signal is smaller than 2; asymmetric signal branching and asymmetric co-regulation are present when there is more than a 20% difference between the signals arriving at or arising from the system variables. Finally, positive feedback and negative feedback are captured by the dependencies between the variables X3 and X2, and X5 and X2, respectively.
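These criteria translate directly into simple predicate checks on the recovered parameters. The threshold values are those stated above; how signals are extracted from the parameter matrices is model-specific and left abstract here:

```python
def uniform_degradation(rates, lo=0.2, hi=0.4):
    """Uniform degradation: all recovered degradation rates lie in [0.2, 0.4]."""
    return all(lo <= r <= hi for r in rates)

def constant_propagation(signals):
    """Constant signal propagation: strongest/weakest signal ratio below 2."""
    return max(signals) / min(signals) < 2

def asymmetric(signal_a, signal_b):
    """Asymmetric branching / co-regulation: more than a 20% difference
    between two signals coming to or arising from a variable."""
    return abs(signal_a - signal_b) / max(signal_a, signal_b) > 0.20
```

Applying such checks to each reverse-engineered matrix yields the per-feature entries that Q_{ com } summarizes.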
4.6 A note on the technical infrastructure and computational complexity of the experiments
To perform the GRN reverse-engineering experiments, we developed a module in Narrator [49] that provided an automatic interface to the Condor [50] distributed high-performance computing system. This allowed computing tasks defined by Narrator to be placed automatically into the Condor scheduling queue. We used two pools of Condor computing clusters dedicated to our experiments: Pool 1 consisted of 23 Fujitsu Siemens E600 machines, each with a single Pentium 4 HT 3.06 GHz processor and 1 GB RAM; Pool 2 consisted of 10 Dell Optiplex GX 620 machines, each with a single Pentium 4 HT 3.4 GHz processor and 1 GB RAM. A typical evolutionary optimization took about 35 minutes for the molecular (node) models and 150 minutes for the network models. With the Condor pools we could repeat these runs a number of times, which helped overcome problems with the search process getting stuck in local minima. In total, it therefore took about 27 hours of compute time to perform a single reverse-engineering experiment (for a 5-gene network with a detailed data set: 5 × 5 molecular model runs + 5 × 1 network model runs = 25 × 35 min + 5 × 150 min = 875 min + 750 min = 1625 min).
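The total above works out as follows (5 genes times 5 repeats of the molecular model, plus 5 repeats of the network model):

```python
node_runs = 5 * 5        # 5 genes, each node-model optimization repeated five times
network_runs = 5         # the network-model optimization, also repeated five times
total_min = node_runs * 35 + network_runs * 150   # 875 + 750 minutes
hours = total_min / 60                            # roughly 27 hours
```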
Declarations
Acknowledgments
The work in this paper was supported by EC grants DataMiningGrid IST FP6 004475 and QosCosGrid IST FP6 STREP 033883. Some of the experiments presented in this paper were carried out using the Grid'5000 experimental testbed, being developed under the INRIA ALADDIN development action with support from CNRS, RENATER and several Universities as well as other funding bodies (see https://www.grid5000.fr).
References
1. Davidson E, Levin M: Gene regulatory networks. Proceedings of the National Academy of Sciences of the United States of America. 2005, 102 (14): 4935. 10.1073/pnas.0502024102.
2. Hasty J, McMillen D, Isaacs F, Collins JJ: Computational studies of gene regulatory networks: In numero molecular biology. Nature Reviews Genetics. 2001, 2: 268-279. 10.1038/35066056.
3. Spellman PT, Sherlock G, Zhang MQ, Iyer VR, Anders K, Eisen MB, Brown PO, Botstein D, Futcher B: Comprehensive identification of cell cycle-regulated genes of the yeast Saccharomyces cerevisiae by microarray hybridization. Molecular Biology of the Cell. 1998, 9: 3273-3297.
4. Arbeitman MN, Furlong EEM, Imam F, Johnson E, Null BH, Baker BS, Krasnow MA, Scott MP, Davis RW, White KP: Gene expression during the life cycle of Drosophila melanogaster. Science. 2002, 297 (5590): 2270-2275. 10.1126/science.1072152.
5. Zvonic S, Ptitsyn AA, Conrad SA, Scott LK, Floyd ZE, Kilroy G, Wu X, Goh BC, Mynatt RL, Gimble JM: Characterization of peripheral circadian clocks in adipose tissues. Diabetes. 2006, 55: 962-970.
6. Ronen M, Rosenberg R, Shraiman BI, Alon U: Assigning numbers to the arrows: Parameterizing a gene regulation network by using accurate expression kinetics. Proceedings of the National Academy of Sciences of the United States of America. 2002, 99 (16): 10555-10560. 10.1073/pnas.152046799.
7. Bar-Joseph Z: Analyzing time series gene expression data. Bioinformatics. 2004, 20 (16): 2493-2503. 10.1093/bioinformatics/bth283.
8. Kauffman S: The origins of order: Self-organization and selection in evolution. 1993, Oxford University Press.
9. Liang S, Fuhrman S, Somogyi R: Reveal, a general reverse engineering algorithm for inference of genetic network architectures. Proceedings of Pacific Symposium on Biocomputing. 1998, 18-29.
10. Akutsu T: Identification of genetic networks from a small number of gene expression patterns under the Boolean network model. Proceedings of Pacific Symposium on Biocomputing. 1999, 17-28.
11. Perrin BE, Ralaivola L, Mazurie A, Bottani S, Mallet J, d'Alché-Buc F: Gene networks inference using dynamic Bayesian networks. Bioinformatics. 2003, 19 (2): 138-148. 10.1093/bioinformatics/btg1071.
12. Cho KH, Choo SM, Jung SH, Kim JR, Choi HS, Kim J: Reverse engineering of gene regulatory networks. IET Systems Biology. 2007, 1: 149-163. 10.1049/iet-syb:20060075.
13. Stetter M, Schurmann B, Dejori M: Systems level modeling of gene regulatory networks. Artificial Intelligence Methods and Tools for Systems Biology. Edited by: Dubitzky W, Azuaje F. 2004, Springer, Netherlands, 175-194.
14. Savageau MA: Biochemical systems analysis: A study of function and design in molecular biology. 1976, Addison-Wesley, Reading, Mass.
15. Vohradsky J: Neural network model of gene expression. The FASEB Journal. 2001, 15 (3): 846-854.
16. Mendes P, Sha W, Ye K: Artificial gene networks for objective comparison of analysis algorithms. Bioinformatics. 2003, 19 (Suppl 2): 122-129. 10.1093/bioinformatics/btg1069.
17. Quackenbush J: Computational analysis of microarray data. Nature Reviews Genetics. 2001, 2 (6): 418-427. 10.1038/35076576.
18. Knüpfer C, Dittrich P, Beckstein C: Artificial gene regulation: A data source for validation of reverse bioengineering. Proceedings of the 6th German Workshop on Artificial Life 2004. Edited by: Schaub H, Detje F, Brüggemann U. 2004, 66-75.
19. Cantone I, Marucci L, Iorio F, Ricci MA, Belcastro V, Bansal M, Santini S, di Bernardo M, di Bernardo D, Cosma MP: A yeast synthetic network for in vivo assessment of reverse-engineering and modeling approaches. Cell. 2009, 137: 172-181. 10.1016/j.cell.2009.01.055.
20. Herrera F, Lozano M, Verdegay JL: Tackling real-coded genetic algorithms: Operators and tools for behavioural analysis. Artificial Intelligence Review. 1998, 12 (4): 265-319. 10.1023/A:1006504901164.
21. Hlavacek W, Savageau M: Rules for coupled expression of regulator and effector genes in inducible circuits. Journal of Molecular Biology. 1996, 255: 121-139. 10.1006/jmbi.1996.0011.
22. Crampin EJ, Schnell S, McSharry PE: Mathematical and computational techniques to deduce complex biochemical reaction mechanisms. Progress in Biophysics and Molecular Biology. 2004, 86: 77-112. 10.1016/j.pbiomolbio.2004.04.002.
23. Almeida JS, Voit EO: Neural-network-based parameter estimation in S-system models of biological networks. Genome Informatics. 2003, 14: 114-123.
24. Wildenhain J, Crampin EJ: Reconstructing gene regulatory networks: From random to scale-free connectivity. IEE Proceedings - Systems Biology. 2006, 156 (4): 247-256. 10.1049/ip-syb:20050092.
25. Veitia R: A sigmoidal transcriptional response: cooperativity, synergy and dosage effects. Biological Reviews. 2003, 78 (1): 149-170. 10.1017/S1464793102006036.
26. D'haeseleer P, Liang S, Somogyi R: Genetic network inference: From co-expression clustering to reverse engineering. Bioinformatics. 2000, 16 (8): 707-726. 10.1093/bioinformatics/16.8.707.
27. Spieth C, Streichert F, Speer N, Zell A: A memetic inference method for gene regulatory networks based on S-systems. Proceedings of the IEEE Congress on Evolutionary Computation. 2004, 152-157.
28. Wessels L, van Someren E, Reinders M: A comparison of genetic network models. Proceedings of the Pacific Symposium on Biocomputing. 2001, 508-519.
29. van Someren EP, Wessels LFA, Reinders MJT: Linear modeling of genetic networks from experimental data. Proceedings of the 8th International Conference on Intelligent Systems for Molecular Biology. 2000, 55366.
30. Yuan S, Li KC: Context-dependent clustering for dynamic cellular state modeling of microarray gene expression. Bioinformatics. 2007, 23 (22): 3039-3047. 10.1093/bioinformatics/btm457.
31. Yang Y, Kim J, Song E, E K, Oh M, BG K: Finding new pathway-specific regulators by clustering method using threshold standard deviation based on DNA chip data of Streptomyces coelicolor. Applied Microbiology and Biotechnology. 2008, 80: 707-717. 10.1007/s00253-008-1574-3.
32. Nam D, Yoon SH, Kim JF: Ensemble learning of genetic networks from time-series expression data. Bioinformatics. 2007, 23 (23): 3225-3231. 10.1093/bioinformatics/btm514.
33. Kimura S, Ide K, Kashihara A, Kano M, Hatakeyama M, Masui R, Nakagawa N, Yokoyama S, Kuramitsu S, Konagaya A: Inference of S-system models of genetic networks using a cooperative coevolutionary algorithm. Bioinformatics. 2005, 21 (7): 1154-1163. 10.1093/bioinformatics/bti071.
34. Guthke R, Moller U, Hoffmann M, Thies F, Topfer S: Dynamic network reconstruction from gene expression data applied to immune response during bacterial infection. Bioinformatics. 2005, 21 (8): 1626-1634. 10.1093/bioinformatics/bti226.
35. Milo R, Shen-Orr S, Itzkovitz S, Kashtan N, Chklovskii D, Alon U: Network motifs: Simple building blocks of complex networks. Science. 2002, 298: 824-827. 10.1126/science.298.5594.824.
36. Lee TI, Rinaldi NJ, Robert F, Odom DT, Bar-Joseph Z, Gerber GK, Hannett NM, Harbison CT, Thompson CM, Simon I, Zeitlinger J, Jennings EG, Murray HL, Gordon DB, Ren B, Wyrick JJ, Tagne JB, Volkert TL, Fraenkel E, Gifford DK, Young RA: Transcriptional regulatory networks in Saccharomyces cerevisiae. Science. 2002, 298 (5594): 799-804. 10.1126/science.1075090.
37. Noman N, Iba H: Reverse engineering genetic networks using evolutionary computations. Genome Informatics. 2005, 16 (2): 205-214.
38. Iba H, Mimura A: Inference of a gene regulatory network by means of interactive evolutionary computing. Information Sciences. 2002, 145 (3-4): 225-236.
39. Schwefel HP: Evolution and optimum seeking. 1995, Wiley Interscience.
40. Koza JR, Andre D, Bennett FH, Keane M: Genetic programming 3: Darwinian invention and problem solving. 1999, Morgan Kaufmann.
41. Michalewicz Z: Genetic algorithms + data structures = evolution programs. 1992, Springer.
42. Aho T: Simulation tool for genetic regulatory networks. 2003, Master's thesis, Dept. of Information Technology, Tampere University of Technology, Finland.
43. Vu TT, Vohradsky J: Nonlinear differential equation model for quantification of transcriptional regulation applied to microarray data of Saccharomyces cerevisiae. Nucleic Acids Research. 2007, 35: 279-287. 10.1093/nar/gkl1001.
44. Vu TT, Vohradsky J: Inference of active transcriptional networks by integration of gene expression kinetics modeling and multisource data. Genomics. 2009, 93 (5): 426-433. 10.1016/j.ygeno.2009.01.006.
45. Maki Y, Ueda T, Okamoto M, Uematsu N, Inamura K, Uchida K, Takahashi Y, Eguchi Y: Inference of genetic network using the expression profile time course data of mouse P19 cells. Genome Informatics. 2002, 13: 382-383.
46. Kimura S, Hatakeyama M, Konagaya A: Inference of S-system models of genetic networks using a genetic local search. Proceedings of the 2003 Congress on Evolutionary Computation. Edited by: Sarker R, Reynolds R, Abbass H, Tan KC, McKay B, Essam D, Gedeon T. 2003, IEEE Press, 631-638.
47. Eshelman LJ, Schaffer JD: Real-coded genetic algorithms and interval schemata. Foundations of Genetic Algorithms 2. Edited by: Whitley LD. 1993, Morgan Kaufmann Publishers.
48. Bolze R, Cappello F, Caron E, Daydé M, Desprez F, Jeannot E, Jégou Y, Lanteri S, Leduc J, Melab N, Mornet G, Namyst R, Primet P, Quetier B, Richard O, Talbi EG, Touche I: Grid'5000: A large scale and highly reconfigurable experimental grid testbed. The International Journal of High Performance Computing Applications. 2006, 20 (4): 481-494. 10.1177/1094342006070078.
49. Mandel JJ, Fuss H, Palfreyman NM, Dubitzky W: Modeling biochemical transformation processes and information processing with Narrator. BMC Bioinformatics. 2007, 8 (103).
50. Thain D, Tannenbaum T, Livny M: Distributed computing in practice: The Condor experience. Concurrency - Practice and Experience. 2005, 17 (2-4): 323-356. 10.1002/cpe.938.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.