 Software
 Open Access
MEIGO: an open-source software suite based on metaheuristics for global optimization in systems biology and bioinformatics
BMC Bioinformatics, volume 15, Article number: 136 (2014)
Abstract
Background
Optimization is the key to solving many problems in computational biology. Global optimization methods provide a robust methodology, and metaheuristics in particular have proven to be the most efficient methods for many applications. Despite their utility, the availability of metaheuristic tools is limited.
Results
We present MEIGO, an R and Matlab optimization toolbox (also available in Python via a wrapper of the R version) that implements metaheuristics capable of solving diverse problems arising in systems biology and bioinformatics. The toolbox includes the enhanced scatter search method (eSS) for continuous nonlinear programming (cNLP) and mixed-integer nonlinear programming (MINLP) problems, and variable neighborhood search (VNS) for integer programming (IP) problems. Additionally, the R version includes BayesFit for parameter estimation by Bayesian inference. The eSS and VNS methods can be run on a single thread or in parallel using a cooperative strategy. The code is supplied under GPLv3 and is available at http://www.iim.csic.es/~gingproc/meigo.html. Documentation and examples are included. The R package has been submitted to BioConductor. We evaluate MEIGO against optimization benchmarks and illustrate its applicability to a series of case studies in bioinformatics and systems biology, where it outperforms other state-of-the-art methods.
Conclusions
MEIGO provides a free, open-source platform for optimization that can be applied to multiple domains of systems biology and bioinformatics. It includes efficient state-of-the-art metaheuristics, and its open and modular structure allows the addition of further methods.
Background
Mathematical optimization plays a key role in systematic decision-making processes, and is used in virtually all areas of science and technology where problems can be stated as finding the best among a set of feasible solutions. In bioinformatics and systems biology, there has been a plethora of successful applications of optimization during the last two decades (see reviews in [1–5]). Many problems in computational biology can be formulated as IP problems, such as sequence alignment, genome rearrangement and protein structure prediction [1, 3], or the design of synthetic biological networks [6]. Deterministic and stochastic/heuristic methods have been extensively applied to optimization problems in the area of machine learning [2]. In addition to combinatorial optimization, other important classes of optimization problems that have been extensively considered, especially in systems biology, are cNLP and mixed-integer dynamic optimization. Such problems arise in parameter estimation and optimal experimental design [5, 7].
A number of authors have stressed the need to use suitable global optimization methods due to the non-convex (multimodal) nature of many of these problems [4, 8, 9]. Roughly speaking, global optimization methods can be classified into exact and stochastic approaches. Exact methods can guarantee convergence to global optimality, but the associated computational effort is usually prohibitive for realistic applications. In contrast, stochastic methods are often able to locate the vicinity of the global solution in reasonable computation times, but without guarantees of convergence. Metaheuristics (i.e. guided heuristics) are a particular class of stochastic methods that have been shown to perform very well in a broad range of applications [5].
Motivated by this, we developed the software suite MEIGO (MEtaheuristics for systems biology and bIoinformatics Global Optimization), which provides state-of-the-art metaheuristics (eSS and VNS) in open-source R (here with the addition of the Bayesian inference method BayesFit) and Matlab versions (it is also available in Python via a wrapper for the R version). MEIGO covers the most important classes of problems, namely (i) problems with real-valued (cNLPs) and mixed-integer decision variables (MINLPs), and (ii) problems with integer and binary decision variables (IPs). Furthermore, MEIGO allows the user to apply parallel computation using cooperative strategies [10]. MEIGO can optimize arbitrary objective functions that are handled as black boxes. Thus, it is applicable to complex systems whose evaluation may involve solving inner problems (e.g. simulations or even other optimization problems) to obtain explicit values for the objective function and/or the possible constraints. For example, CellNOpt [11], SBToolbox [12], AMIGO [13] and Potterswheel [14] use eSS for dynamic model calibration. Some recent successful applications of eSS in the field of systems biology can be found in [15–26]. It has also been shown that eSS outperformed the various optimization methods available in the Systems Biology Toolbox [27].
Methods
Enhanced Scatter Search (eSS)
Scatter search [28] is a population-based metaheuristic which can be classified as an evolutionary optimization method. In contrast with other popular population-based metaheuristics such as genetic algorithms, the population size, N, in scatter search is small, and the combinations among its members are performed systematically rather than randomly. The current population is commonly named the “Reference Set” (RefSet). The improvement method, which consists of a local search to accelerate convergence to optimal solutions, can be applied with varying frequency to the members of this RefSet. A set of improvements has been implemented in the enhanced scatter search method. Among the most remarkable changes is the replacement method. Unlike the original scatter search scheme, which uses a μ+λ replacement (i.e. the new population or RefSet consists of the best N solutions selected from the previous RefSet members and the new offspring solutions), the enhanced scatter search uses a 1+1 replacement, similar to the strategy used in a very efficient evolutionary method, Differential Evolution [29]. This means that a RefSet member can only be replaced by a solution that was generated by combining that member with another RefSet member. In other words, an offspring solution can only replace the RefSet member that generated it, and no other. This strategy enhances diversity and prevents premature stagnation of the search by not allowing overly similar solutions to be present in the RefSet at the same time. The “go-beyond” strategy, which exploits combinations that explore promising directions, has also been implemented. This strategy analyzes the search directions defined by a RefSet member and its offspring. If an offspring solution outperforms its corresponding RefSet member (i.e. the RefSet member that generated it), then the method considers the explored direction promising and generates a new solution along that direction, exploring the area beyond the segment defined by the RefSet member and its offspring solution. The process is repeated until the newly generated solutions no longer outperform the previous ones, favouring intensification in the current iteration. Additionally, memory is exploited to select the most efficient initial points for local searches, to avoid premature convergence, and to perturb solution vectors which are stuck in stationary points. More details about the enhanced scatter search scheme can be found in [30].
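As an illustration, the 1+1 replacement and go-beyond mechanisms can be sketched in a few lines of Python. This is a minimal toy version written for this description, not MEIGO's actual implementation; the combination operator is a simplified assumption:

```python
import random

def clip(x, lb, ub):
    return [min(max(v, l), u) for v, l, u in zip(x, lb, ub)]

def combine(x, y, lb, ub, rng):
    # simplified combination between two RefSet members (illustrative only)
    return clip([xi + rng.uniform(-0.5, 1.5) * (yi - xi)
                 for xi, yi in zip(x, y)], lb, ub)

def go_beyond(f, parent, child, lb, ub):
    # While the offspring keeps improving, continue along the direction
    # parent -> child, exploring beyond the segment they define.
    while f(child) < f(parent):
        nxt = clip([2 * c - p for p, c in zip(parent, child)], lb, ub)
        parent, child = child, nxt
    return parent  # the last solution that improved

def ess_iteration(f, refset, lb, ub, rng):
    # 1+1 replacement: an offspring may only replace the RefSet member
    # that generated it, never any other member.
    new_refset = []
    for x in refset:
        y = rng.choice(refset)
        child = combine(x, y, lb, ub, rng)
        if f(child) < f(x):
            new_refset.append(go_beyond(f, x, child, lb, ub))
        else:
            new_refset.append(x)
    return new_refset
```

Because each member can only be replaced by an offspring that improves on it, the best objective value in the RefSet is non-increasing across iterations.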
Variable Neighbourhood Search (VNS)
Variable Neighbourhood Search is a trajectory-based metaheuristic for global optimization. It was introduced by Mladenović and Hansen [31] and has gained popularity in recent years in the field of global optimization. VNS performs a local search by evaluating the objective function around an incumbent solution and repeats the procedure across different neighbourhoods to locate different local optima, among which the global optimum is expected to be found. One of the key points of the algorithm is the strategy used to change the current neighbourhood. VNS usually seeks a new neighbourhood by perturbing a set of decision variables using a distance criterion. Once a new solution has been created in the new neighbourhood, a new local search is performed. The typical scheme consists of visiting neighbourhoods close to the current one (i.e. perturbing a small number of decision variables) until no further improvement is achieved; then, more distant neighbourhoods are explored. Apart from this basic scheme, we have implemented advanced strategies to avoid cycles in the search (e.g. not repeating the perturbed decision variables in consecutive neighbourhood searches) and to increase efficiency when dealing with large-scale problems (e.g. by allowing a maximum number of perturbed decision variables, as in the Variable Neighbourhood Decomposition Search strategy [32]). We have also made the search aggressiveness adjustable, so that high-quality solutions (even if they are not the global optimum) can be located in short computational times if required. Other heuristics, like the “go-beyond” strategy (explained above), which is used to exploit promising directions during the local search, have been adapted from other metaheuristics for continuous optimization [30].
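The basic VNS scheme can be sketched as follows for box-bounded integer problems. This is a toy illustration assuming a simple first-improvement local search, not the actual MEIGO code:

```python
import random

def vns(f, x0, lb, ub, k_max=None, iters=100, seed=0):
    """Basic VNS: shake k decision variables (the neighbourhood distance),
    run a local search, and move to a more distant neighbourhood when the
    current one yields no improvement."""
    rng = random.Random(seed)
    n = len(x0)
    k_max = k_max or n

    def local_search(x):
        # first-improvement search over unit moves in each coordinate
        improved = True
        while improved:
            improved = False
            for i in range(n):
                for step in (-1, 1):
                    y = list(x)
                    y[i] = min(max(y[i] + step, lb[i]), ub[i])
                    if f(y) < f(x):
                        x, improved = y, True
        return x

    best = local_search(list(x0))
    k = 1
    for _ in range(iters):
        y = list(best)
        for i in rng.sample(range(n), k):   # shake: perturb k variables
            y[i] = rng.randint(lb[i], ub[i])
        y = local_search(y)
        if f(y) < f(best):
            best, k = y, 1                  # improvement: return to the closest neighbourhood
        else:
            k = min(k + 1, k_max)           # no improvement: explore a more distant one
    return best
```

The counter k plays the role of the neighbourhood distance: it is reset after every improvement and grows while the search stagnates.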
BayesFit
BayesFit is a Bayesian inference method for parameter estimation that uses Markov chain Monte Carlo (MCMC) to sample the complete probability distributions of parameters. This accounts for both experimental error and model non-identifiability. It is available in the R version of MEIGO and has been adapted from the Python package BayesSB [33]. The sampling of the probability distributions uses a multi-start MCMC algorithm where the number of visits to a position in the parameter space is proportional to the posterior probability. The MCMC walk is punctuated by a Metropolis-Hastings (MH) criterion that allows more distant neighbourhoods to be explored, based on a probabilistic calculation.
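The Metropolis-Hastings acceptance rule at the core of such a sampler can be sketched as follows. This is a generic random-walk MH illustration, not the BayesFit code itself:

```python
import math
import random

def mh_sample(log_posterior, x0, n_samples=2000, step=0.5, seed=1):
    """Random-walk Metropolis-Hastings: an uphill proposal is always
    accepted; a downhill one is accepted with probability
    exp(log_posterior(y) - log_posterior(x)), so the chain visits each
    region of parameter space in proportion to its posterior probability."""
    rng = random.Random(seed)
    x, lp = list(x0), log_posterior(x0)
    chain = []
    for _ in range(n_samples):
        y = [xi + rng.gauss(0.0, step) for xi in x]   # symmetric proposal
        lp_y = log_posterior(y)
        if math.log(rng.random()) < lp_y - lp:        # MH criterion
            x, lp = y, lp_y
        chain.append(list(x))
    return chain
```

After discarding a burn-in period, histograms of the chain approximate the marginal posterior distributions of the parameters.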
Cooperation
The cooperation scheme implemented in MEIGO is based on the following idea: several threads of an optimization algorithm, which may have different settings and/or random initializations, run in parallel and exchange information. Since the optimization algorithms implemented in MEIGO are essentially different in nature, we distinguish between eSS (the population-based method) and VNS (the trajectory-based method), following the classification proposed in [34] (currently there is no cooperation scheme for BayesFit):
1. Information available for sharing: the best solution found and, optionally for eSS, the RefSet, which contains information about the diversity of solutions.
2. Threads that share information: all of them.
3. Frequency of information sharing: the threads exchange information at a fixed interval τ.
4. Number of concurrent programs: η.
Each of the η threads has a fixed degree of aggressiveness. “Conservative” threads emphasize diversification (global search) and are used to increase the probability of finding a feasible solution, even if the parameter space is rugged or weakly structured. “Aggressive” threads emphasize intensification (local search) and speed up the calculations in smoother areas. Communication, which takes place at fixed time intervals, enables each thread to benefit from the knowledge gathered by the others. This strategy thus has several degrees of freedom that must be fixed: the time between communications (τ), the number of threads (η), and the strategy adopted by each thread. These settings should be chosen carefully depending on the particular problem to be solved. Some guidelines for doing this can be found in [10] and in the Additional files 1, 2, 3, 4 and 5 accompanying this paper.
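The overall scheme can be illustrated with a toy sequential simulation of cooperating threads. This is hypothetical code written for this description; the real CeSS/CVNS implementations run the threads in parallel and use eSS or VNS rather than the simple stochastic hill climber below:

```python
import random

def cooperative_search(f, lb, ub, eta=4, tau=25, exchanges=10, seed=0):
    """eta 'threads' with different aggressiveness each run for tau
    iterations, then share the best solution found (the communication
    phase), and the cycle repeats."""
    rng = random.Random(seed)
    # conservative threads: large perturbations (diversification);
    # aggressive threads: small perturbations (intensification)
    radii = [2.0 / (i + 1) for i in range(eta)]
    states = [[rng.uniform(l, u) for l, u in zip(lb, ub)] for _ in range(eta)]
    for _ in range(exchanges):
        for t in range(eta):                           # work phase
            x = states[t]
            for _ in range(tau):
                y = [min(max(xi + rng.gauss(0.0, radii[t]), l), u)
                     for xi, l, u in zip(x, lb, ub)]
                if f(y) < f(x):
                    x = y
            states[t] = x
        best = min(states, key=f)                      # communication phase:
        states = [list(best) for _ in range(eta)]      # every thread receives it
    return min(states, key=f)
```

Here τ and η appear directly as the `tau` and `eta` arguments; choosing them, and the per-thread radii, corresponds to the tuning decisions discussed above.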
Implementation
MEIGO runs on Windows, Mac, and Linux, and provides implementations in both Matlab and R. So far, MEIGO includes: (i) eSS (Enhanced Scatter Search [30]), for solving cNLP and MINLP problems, and (ii) VNS (Variable Neighbourhood Search), following the implementation described in [35], for solving IP problems (see Figure 1). The R version of MEIGO also includes the Bayesian parameter inference method BayesFit. Cooperative parallel versions (CeSS, CVNS), which can run on multi-core PCs or clusters, are also included. Cooperation enhances the efficiency of the methods not only in terms of speed, but also in terms of range: the threads running in parallel are completely independent, so they can be customized to cover a wide range of search options, from aggressive to robust. In a sense, the cooperation scheme acts as a combination of different metaheuristics, since each thread may present a different search profile. Four different kernel functions per method are included, depending on the programming language chosen and the parallelization capabilities. Parallel computation in Matlab makes use of the jPar tool [36]; parallel computation in R can be performed using the package snowfall [37].
The methods implemented in MEIGO treat the objective functions to be optimized as black boxes, with no requirements regarding their structure. The user must provide a function that can be called externally for evaluation, accepting as input the variables to be estimated and returning as output the objective value, ϕ, as a function of the input parameters. For constrained problems, the values of the constraints are also returned as output so that penalization functions can be calculated. For eSS and VNS, the user must define a set of compulsory fields (e.g. the name of the objective function, the bounds on the parameters, the maximum number of function evaluations). Further options take default values or can be changed. After each optimization, all the necessary results are stored in data files for further analysis with the tools provided by the host platforms. BayesFit is similarly robust to the form of the problem; in this case the user provides the likelihood function, which is incorporated into the calculation of the posterior probability of the parameter set given the data.
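For illustration, a problem definition of the kind described above might look as follows in Python. The field names (`f`, `x_L`, `x_U`, `maxeval`) are illustrative assumptions for this sketch, not necessarily MEIGO's exact option names:

```python
def objective(x):
    # user-supplied black box: returns the objective value phi and, for
    # constrained problems, the values of the constraints g(x) <= 0
    phi = sum((xi - 1.0) ** 2 for xi in x)
    g = [x[0] + x[1] - 10.0]
    return phi, g

problem = {
    "f": objective,          # the objective function (compulsory)
    "x_L": [-5.0, -5.0],     # lower bounds on the parameters (compulsory)
    "x_U": [5.0, 5.0],       # upper bounds on the parameters (compulsory)
}
options = {"maxeval": 5000}  # maximum number of function evaluations
```

The solver only ever calls `problem["f"]`; whether the function wraps an analytic expression, a simulation, or an inner optimization problem is invisible to it.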
Importantly, MEIGO is an open optimization platform in which other optimization methods can be implemented regardless of their nature (e.g. exact, heuristic, probabilistic, singletrajectory, populationbased, etc.).
Illustrative examples
To illustrate the capabilities of the methods presented here, a set of optimization problems, including cases from systems biology and bioinformatics, have been solved and are presented as case studies. The examples include (i) a set of state-of-the-art benchmark cases for global optimization (from the Competition on Large Scale Global Optimization, 2012 IEEE World Congress on Computational Intelligence), (ii) a metabolic engineering problem based on a constraint-based model of E. coli, (iii) training of logic models of signaling networks to phosphoproteomic data [38], and (iv) an additional toy logic model [22] used to compare BayesFit with eSS. The corresponding code for these examples is included in the distribution of the MEIGO software.
Large-scale continuous global optimization benchmark
These benchmark functions were used in the Special Session on Evolutionary Computation for Large Scale Global Optimization, held as part of the 2012 IEEE World Congress on Computational Intelligence (CEC@WCCI2012). They can be regarded as state-of-the-art benchmarks for testing numerical methods for large-scale (continuous) optimization. Information about the functions, as well as computer codes, can be downloaded from http://staff.ustc.edu.cn/~ketang/cec2012/lsgo_competition.htm. Some of these functions were previously solved in [10] using CeSS, a cooperative version of the enhanced scatter search metaheuristic implemented in Matlab and available within MEIGO. Large-scale calibration problems for systems biology models were also presented and solved in that paper. Here we present the solution of three of these functions (f10, f17 and f20) using the R version of CeSS included in MEIGO. The convergence curves for the solution of these benchmark functions in R are consistent with those presented in [10], which were obtained with Matlab, and the results are also competitive with the reference results for these functions presented at http://staff.ustc.edu.cn/~ketang/cec2012/lsgo_competition.htm. The convergence curves corresponding to these results are presented in Figures 2, 3 and 4.
Integer optimization benchmark problems
A set of integer optimization problems arising in process engineering and coded in AMPL (A Modeling Language for Mathematical Programming) were solved using the Matlab version of VNS, making use of the AMPL-Matlab interface files provided by Dr. Sven Leyffer, available at http://www.mcs.anl.gov/~leyffer/macminlp/. VNS solved all the problems and, in some cases, achieved a better solution than the best reported one. A summary of the tested problems is presented in Table 1. These benchmarks were solved using the Matlab version of MEIGO under Windows only, since the dynamic library used to access AMPL files runs only on Windows.
Metabolic engineering example
In this section we illustrate the application of the VNS algorithm to a metabolic engineering problem. Here, VNS was used to find a set of potential gene knockouts that maximize the production of a given metabolite of interest. The objective function is given by flux-balance analysis (FBA), where a steady-state model is simulated by means of linear programming (LP). The mathematical formulation is similar to that presented in [41]. FBA assumes that cells have a biological objective, often taken to be growth-rate maximization, minimization of ATP consumption, or both.
In this example we considered a small steady-state model of E. coli central carbon metabolism, available at http://gcrg.ucsd.edu/Downloads/EcoliCore. Here the metabolite of interest is succinate, and the biological objective is biomass maximization. To solve the inner FBA problem we used openCOBRA (http://opencobra.sourceforge.net/) with Gurobi as the LP solver (http://www.gurobi.com/). For the problem encoding, 5 integer variables were chosen as decision variables, one for each possible gene knockout. Each of these variables was allowed to vary from 0 (no knockout) to 52, the total number of genes that can be knocked out. Repeated knockouts were filtered by the objective function.
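This encoding can be sketched as a thin objective wrapper. The code below is illustrative; `simulate_fba` stands in for the inner openCOBRA/Gurobi LP, which we do not reproduce here:

```python
def knockout_objective(ko, simulate_fba):
    """ko: 5 integers in 0..52, where 0 means 'no knockout' and i > 0 means
    'knock out gene i'. Zeros and repeated knockouts are filtered before
    simulation; the wrapper returns the negative production of the
    metabolite of interest, so that minimisation maximises it."""
    genes = sorted({g for g in ko if g > 0})  # drop zeros and duplicates
    return -simulate_fba(genes)

# toy stand-in for the FBA simulation, for illustration only
toy_fba = lambda genes: float(len(genes))
```

With this wrapper, the decision vector seen by VNS is simply five bounded integers, and all the biology is hidden inside the black-box inner problem.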
Additionally, we implemented and solved the problem with a genetic algorithm (GA) from the Matlab Global Optimization Toolbox. The point here was to cross-check the VNS results, not to perform an extensive comparison between the performances of GA and VNS. However, we found that for our particular problem and encoding, VNS achieved the optimal solution more often (see Figures 5 and 6). The Wilcoxon rank-sum test with continuity correction for comparing means gives a p-value of 0.068 (or 0.021 if we remove the outlier VNS solution), suggesting that the solutions provided by VNS are better. Please note that the GA was used out of the box (with default settings); results can vary when using other encodings and after further tuning of the search parameters. In any case, the purpose was to illustrate how this class of problems can be easily solved using VNS.
Training of logic models of signalling networks to phosphoproteomic data
In this section we compare the performance of variable neighborhood search (VNS) and a discrete genetic algorithm (GA) implementation in training a logic model of a signalling network to phosphoproteomic data [38].
The problem is formulated as follows: one starts from a signed directed graph containing the prior knowledge about a signaling network of interest. This graph contains directed edges among nodes (typically proteins) together with their signs (activating or inhibitory). From this graph, one generates all possible AND and OR gates compatible with the graph; that is, if more than one edge arrives at a node, these edges are combined as OR and AND gates. Mathematically, this is encoded as a hypergraph, where edges with two or more inputs (hyperedges) represent a logical conjunction (AND gate). OR gates are encoded implicitly, by means of edges with only one input arriving at a node. See [38] for details.
To calibrate such models, the authors formulated the inference problem as a binary multi-objective problem, where the first objective measures how well the model describes the experimental data and the second is a complexity penalty to avoid overfitting:

$\theta(P) = \theta_f(P) + \alpha\,\theta_s(P)$

where $\theta_f(P)=\frac{1}{n_E}\sum_{k=1}^{s}\sum_{l=1}^{m}\sum_{t=1}^{n}\left(B_{k,l,t}^{M}(P)-B_{k,l,t}^{E}\right)^2$ and $\theta_s(P)=\frac{1}{\nu_e^s}\sum_{e=1}^{r}\nu_e P_e$, such that $B_{k,l,t}^{M}(P)\in\{0,1\}$ is the value (0 or 1) predicted by computing the model’s logical steady state [42] and $B_{k,l,t}^{E}\in[0,1)$ is the data value for readout l at time t under the k-th experimental condition. $\theta_f(P)$ is the mean squared error and $\alpha\cdot\theta_s(P)$ is the product of a tunable parameter α and a function measuring model complexity: each hyperedge receives a penalty proportional to its number of inputs (e.g. an AND gate with 3 inputs is penalised three times as much as a single edge; OR gates arise implicitly from the combination of single-input edges).
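The two terms can be computed directly from their definitions, as in the following straightforward Python transcription written for this description (the normalising constants, n_E and the total penalty weight, are passed in as given inputs):

```python
def theta_f(B_model, B_data, n_E):
    """Mean squared error between the Boolean predictions B^M (0/1) and the
    normalised data B^E in [0,1), summed over experiments k, readouts l
    and time points t, divided by the normalising constant n_E."""
    return sum((m - e) ** 2
               for pk, dk in zip(B_model, B_data)
               for pl, dl in zip(pk, dk)
               for m, e in zip(pl, dl)) / n_E

def theta_s(P, nu, nu_total):
    """Complexity penalty: each selected hyperedge e (P_e = 1) contributes
    a penalty proportional to its number of inputs nu_e."""
    return sum(v * p for v, p in zip(nu, P)) / nu_total

def objective(P, B_model, B_data, n_E, nu, nu_total, alpha):
    # combined fit + complexity objective
    return theta_f(B_model, B_data, n_E) + alpha * theta_s(P, nu, nu_total)
```

Here `B_model[k][l][t]` and `B_data[k][l][t]` are nested lists indexed by experiment, readout and time, and `P` is the binary vector selecting hyperedges.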
Notably, the binary implementation of this problem contains redundant solutions in the search space. This can be addressed by compressing the search space into a reduced set containing only the smallest non-redundant combinations of hyperedges [38] (equivalent to the Sperner hypergraph). By doing this, the problem is transformed from a binary into an integer programming problem, which was solved in [38] using a genetic algorithm.
Here, we implemented this benchmark using the Matlab version of CellNetOptimizer (CNO or CellNOpt, available at http://www.cellnopt.org/downloads.html). The prior-knowledge network and dataset are also publicly available and thoroughly described at http://www.ebi.ac.uk/~cokelaer/cellnopt/data/ExtLiverPCB.html.
To assess the performance of both algorithms, we solved the problem 100 times using VNS and the GA implementation from CNO. Within the allowed time budget, VNS returned solutions that were on average better than those found by the GA (see Figure 7). The Welch two-sample t-test for comparing means gives a p-value of 3.5·10^{−14}, which clearly shows that VNS outperforms the GA for this problem. Since both methods are sensitive to their tuning parameters, we tried to tune both algorithms fairly. We also note that this problem, in its original binary implementation, can be solved using deterministic methods based either on integer linear programming [43, 44] or answer set programming [45].
Training of logic ODEs to in silico generated data
Here, in order to demonstrate the additional information that can be derived from the probability distributions of optimized parameters, we compared the R implementations of BayesFit and eSS. The problem is again based on a logic model where, this time, the topology of the model is known and the goal is to optimize the parameters of the transfer functions used to generate a continuous simulation of the model. The parameters were optimized to reduce the distance between the model simulation and in silico generated data. This example is as described in section 6 of [22]; the only difference is that the model used here is the compressed model used to generate the in silico data in [22]. BayesFit produced a good fit to the data, comparable to that of eSS (mean squared error: BayesFit, 0.007; eSS, 0.005). One of the advantages of estimating parameters by Bayesian inference is that parameter identifiability can be deduced from the marginal distributions of each parameter. For example, Figure 8 shows two parameters of a single interaction between the species “egf” and “sos” in the model; these parameters, n and k, control the shape of the transfer function between the two species [22]. From this figure, the covariation between the two parameters is evident. The best-fit parameters (red line) lie in one region of high probability. However, there are additional correlated peaks in the marginal distributions of the two parameters, suggesting that different parameter values could also produce a strong fit to the data.
Conclusions
Here, we present MEIGO, a free, open-source and flexible package for global optimization in R, Matlab, and Python. It includes advanced metaheuristic methods. Furthermore, its modular nature (Figure 1) enables the connection of additional optimization methods.
Availability and requirements
Project name: Metaheuristics for global optimization in systems biology and bioinformatics (MEIGO)
Project home page: http://www.iim.csic.es/~gingproc/meigo.html
Operating system(s): Windows, Linux, Mac OS X
Programming language: Matlab 7.5 or higher and R 2.15 or higher
Licence: GPLv3
References
1. Greenberg HJ, Hart WE, Lancia G: Opportunities for combinatorial optimization in computational biology. Informs J Computing. 2004, 16 (3): 211-231. 10.1287/ijoc.1040.0073.
2. Larrañaga P, Calvo B, Santana R, Bielza C, Galdiano J, Inza I, Lozano JA, Armañanzas R, Santafé G, Pérez A, Robles V: Machine learning in bioinformatics. Brief Bioinform. 2006, 7: 86-112. 10.1093/bib/bbk007.
3. Festa P: On some optimization problems in molecular biology. Math Biosci. 2007, 207 (2): 219-234. 10.1016/j.mbs.2006.11.012.
4. Banga JR: Optimization in computational systems biology. BMC Syst Biol. 2008, 2: 47. 10.1186/1752-0509-2-47.
5. Sun J, Garibaldi JM, Hodgman C: Parameter estimation using metaheuristics in systems biology: a comprehensive review. IEEE/ACM Trans Comput Biol Bioinform. 2012, 9: 185-202.
6. Marchisio M, Stelling J: Computational design tools for synthetic biology. Curr Opin Biotechnol. 2009, 20 (4): 479-485. 10.1016/j.copbio.2009.08.007.
7. Banga JR, Balsa-Canto E: Parameter estimation and optimal experimental design. Essays Biochem. 2008, 45: 195. 10.1042/BSE0450195.
8. Moles CG, Mendes P, Banga JR: Parameter estimation in biochemical pathways: a comparison of global optimization methods. Genome Res. 2003, 13 (11): 2467-2474. 10.1101/gr.1262503.
9. Ashyraliyev M, Fomekong-Nanfack Y, Kaandorp JA, Blom JG: Systems biology: parameter estimation for biochemical models. FEBS J. 2008, 276 (4): 886-902.
10. Villaverde AF, Egea JA, Banga JR: A cooperative strategy for parameter estimation in large scale systems biology models. BMC Syst Biol. 2012, 6: 75. 10.1186/1752-0509-6-75.
11. Terfve C, Cokelaer T, MacNamara A, Henriques D, Goncalves E, Morris MK, van Iersel M, Lauffenburger DA, Saez-Rodriguez J: CellNOptR: a flexible toolkit to train protein signaling networks to data using multiple logic formalisms. BMC Syst Biol. 2012, 6: 133. 10.1186/1752-0509-6-133.
12. Schmidt H, Jirstrand M: Systems biology toolbox for MATLAB: a computational platform for research in systems biology. Bioinformatics. 2006, 22 (4): 514-515. 10.1093/bioinformatics/bti799.
13. Balsa-Canto E, Banga JR: AMIGO, a toolbox for advanced model identification in systems biology using global optimization. Bioinformatics. 2011, 27 (16): 2311-2313. 10.1093/bioinformatics/btr370.
14. Maiwald T, Eberhardt O, Blumberg J: Mathematical modeling of biochemical systems with PottersWheel. Computational Modeling of Signaling Networks, Series: Methods in Molecular Biology, Volume 880. Edited by: Liu X, Betterton MD. 2012, New York: Humana Press, 119-138.
15. Balsa-Canto E, Alonso A, Banga JR: An iterative identification procedure for dynamic modeling of biochemical networks. BMC Syst Biol. 2010, 4: 11. 10.1186/1752-0509-4-11.
16. Skanda D, Lebiedz D: An optimal experimental design approach to model discrimination in dynamic biochemical systems. Bioinformatics. 2010, 26 (7): 939-945. 10.1093/bioinformatics/btq074.
17. Yuraszeck TM, Neveu P, Rodriguez-Fernandez M, Robinson A, Kosik KS, Doyle FJ III: Vulnerabilities in the Tau network and the role of ultrasensitive points in Tau pathophysiology. PLoS Comput Biol. 2010, 6 (11): e1000997. 10.1371/journal.pcbi.1000997.
18. Jia G, Stephanopoulos G, Gunawan R: Parameter estimation of kinetic models from metabolic profiles: two-phase dynamic decoupling method. Bioinformatics. 2011, 27 (14): 1964-1970. 10.1093/bioinformatics/btr293.
19. Heldt F, Frensing T, Reichl U: Modeling the intracellular dynamics of influenza virus replication to understand the control of viral RNA synthesis. J Virol. 2012, 86 (15): 7806-7817. 10.1128/JVI.00080-12.
20. Higuera C, Villaverde AF, Banga JR, Ross J, Morán F: Multi-criteria optimization of regulation in metabolic networks. PLoS ONE. 2012, 7 (7): e41122. 10.1371/journal.pone.0041122.
21. Jia G, Stephanopoulos G, Gunawan R: Incremental parameter estimation of kinetic metabolic network models. BMC Syst Biol. 2012, 6: 142. 10.1186/1752-0509-6-142.
22. MacNamara A, Terfve C, Henriques D, Peñalver-Bernabé B, Saez-Rodriguez J: State-time spectrum of signal transduction logic models. Phys Biol. 2012, 9 (4): 045003. 10.1088/1478-3975/9/4/045003.
23. Sriram K, Rodriguez-Fernandez M, Doyle FJ III: Modeling cortisol dynamics in the neuroendocrine axis distinguishes normal, depression, and post-traumatic stress disorder (PTSD) in humans. PLoS Comput Biol. 2012, 8 (2): e1002379. 10.1371/journal.pcbi.1002379.
24. Sriram K, Rodriguez-Fernandez M, Doyle FJ: A detailed modular analysis of heat-shock protein dynamics under acute and chronic stress and its implication in anxiety disorders. PLoS ONE. 2012, 7 (8): e42958. 10.1371/journal.pone.0042958.
25. Freund S, Rath A, Barradas OP, Skerhutt E, Scholz S, Niklas J, Sandig V, Rose T, Heinzle E, Noll T, Pörtner R, Zeng AP, Reichl U: Batch-to-batch variability of two human designer cell lines – AGE1.HN and AGE1.HN.AAT – carried out by different laboratories under defined culture conditions using a mathematical model. Eng Life Sci. 2013, 00: 1-13.
26. Francis F, García MR, Middleton RH: A single compartment model of pacemaking in dissasociated Substantia nigra neurons. J Comput Neurosci. 2013, 35 (3): 295-316. 10.1007/s10827-013-0453-9.
27. Egea JA, Schmidt H, Banga JR: A new tool for parameter estimation in nonlinear dynamic biological systems using global optimization. Poster, 9th International Conference on Systems Biology (ICSB 2008), Goteborg (Sweden), 22-28 August 2008. Available at http://www.iim.csic.es/gingproc/Poster_ICSB2008.pdf.
28. Glover F, Laguna M, Martí R: Fundamentals of scatter search and path relinking. Control Cybernet. 2000, 39 (3): 653-684.
29. Storn R, Price K: Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces. J Global Optimization. 1997, 11 (4): 341-359. 10.1023/A:1008202821328.
30. Egea JA, Martí R, Banga JR: An evolutionary method for complex-process optimization. Comput Oper Res. 2010, 37 (2): 315-324. 10.1016/j.cor.2009.05.003.
31. Mladenović N, Hansen P: Variable neighborhood search. Comput Oper Res. 1997, 24: 1097-1100. 10.1016/S0305-0548(97)00031-2.
32. Hansen P, Mladenović N, Perez-Brito D: Variable neighborhood decomposition search. J Heuristics. 2001, 7: 335-350. 10.1023/A:1011336210885.
33. Eydgahi H, Chen WW, Muhlich JL, Vitkup D, Tsitsiklis JN, Sorger PK: Properties of cell death models calibrated and compared using Bayesian approaches. Mol Syst Biol. 2013, 9: 644.
34. Toulouse M, Crainic TG, Sanso B: Systemic behavior of cooperative search algorithms. Parallel Comput. 2004, 30: 57-79. 10.1016/j.parco.2002.07.001.
35. Hansen P, Mladenović N, Moreno-Pérez JA: Variable neighbourhood search: methods and applications. Ann Oper Res. 2010, 175: 367-407. 10.1007/s10479-009-0657-6.
36. Karbowski A, Majchrowski M, Trojanek P: jPar - a simple, free and lightweight tool for parallelizing Matlab calculations on multicores and in clusters. 9th International Workshop on State-of-the-Art in Scientific and Parallel Computing (PARA 2008), May 13-16 2008, Trondheim (Norway).
37. Knaus J: Developing parallel programs using snowfall. 2010, [http://cran.r-project.org/web/packages/snowfall/vignettes/snowfall.pdf].
38. Saez-Rodriguez J, Alexopoulos LG, Epperlein J, Samaga R, Lauffenburger DA, Klamt S, Sorger PK: Discrete logic modelling as a means to link protein signalling networks with functional analysis of mammalian signal transduction. Mol Syst Biol. 2009, 5: 331.
39. Sandgren E: Nonlinear integer and discrete programming in mechanical design optimization. J Mech Des. 1990, 112 (2): 223-229. 10.1115/1.2912596.
40. Harjunkoski I, Westerlund T, Pörn R, Skrifvars H: Different transformations for solving non-convex trim loss problems by MINLP. Eur J Oper Res. 1998, 105: 594-603. 10.1016/S0377-2217(97)00066-0.
41. Burgard AP, Pharkya P, Maranas CD: Optknock: a bilevel programming framework for identifying gene knockout strategies for microbial strain optimization. Biotechnol Bioeng. 2003, 84 (6): 647-657. 10.1002/bit.10803.
42. Klamt S, Saez-Rodriguez J, Lindquist J, Simeoni L, Gilles ED: A methodology for the structural and functional analysis of signaling and regulatory networks. BMC Bioinformatics. 2006, 7: 56. 10.1186/1471-2105-7-56.
43. Mitsos A, Melas IN, Siminelakis P, Chairakaki AD, Saez-Rodriguez J, Alexopoulos LG: Identifying drug effects via pathway alterations using an integer linear programming optimization formulation on phosphoproteomic data. PLoS Comput Biol. 2009, 5 (12): e1000591. 10.1371/journal.pcbi.1000591.
44. Sharan R, Karp RM: Reconstructing boolean models of signaling. Proceedings of the 16th Annual International Conference on Research in Computational Molecular Biology (RECOMB'12). 2012, Berlin, Heidelberg: Springer-Verlag, 261-271.
45. Guziolowski C, Videla S, Eduati F, Thiele S, Cokelaer T, Siegel A, Saez-Rodriguez J: Exhaustively characterizing feasible logic models of a signaling network using Answer Set Programming. Bioinformatics. 2013, 29 (18): 2320-2326. 10.1093/bioinformatics/btt393.
Acknowledgements
The authors thank Alexandra Vatsiou for testing MEIGO. We acknowledge the funding received from the EU through projects “BioPreDyn” (FP7-KBBE grant 289434) and “NICHE” (FP7-ITN grant 289384), from Ministerio de Economía y Competitividad (and the FEDER) through the projects “MultiScales” (DPI2011-28112-C04-03 and DPI2011-28112-C04-04), and from the CSIC intramural project “BioREDES” (PIE-201170E018).
Author information
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
JSR and JRB conceived, designed and coordinated the project. JAE implemented the metaheuristic methods included in MEIGO. AFV and DH developed and implemented the methods for parallel computation. JAE, AFV and DH performed the numerical computations for the metaheuristic methods. AMN and DPD implemented and performed the computational tests of the Bayesian method. TC designed and developed the Python wrapper and helped with many technical aspects. All authors contributed to the writing of the manuscript. All authors read and approved the final manuscript.
Electronic supplementary material
Authors’ original submitted files for images
Below are the links to the authors’ original submitted files for images.
Rights and permissions
About this article
Received
Accepted
Published
DOI
Keywords
 Local Search
 Markov Chain Monte Carlo
 Variable Neighbourhood Search
 Benchmark Function
 Scatter Search