Biochemical systems identification by a random drift particle swarm optimization approach.

BACKGROUND
Finding an efficient method to solve the parameter estimation problem (inverse problem) for nonlinear biochemical dynamical systems could help promote the functional understanding of signalling pathways at the system level. The problem is stated as a data-driven nonlinear regression problem, which is converted into a nonlinear programming problem with many nonlinear differential and algebraic constraints. Due to the typical ill-conditioning and multimodality of the problem, it is in general difficult for gradient-based local optimization methods to obtain satisfactory solutions. To surmount this limitation, many stochastic optimization methods have been employed to find the global solution of the problem.


RESULTS
This paper presents an effective search strategy for a particle swarm optimization (PSO) algorithm that enhances the ability of the algorithm to estimate the parameters of complex dynamic biochemical pathways. The proposed algorithm is a new variant of random drift particle swarm optimization (RDPSO), which is used to solve the above-mentioned inverse problem and is compared with other well-known stochastic optimization methods. Two case studies on estimating the parameters of two nonlinear biochemical dynamic models were taken as benchmarks, under both noise-free and noisy simulation data scenarios.


CONCLUSIONS
The experimental results show that the novel variant of the RDPSO algorithm is able to solve the problem successfully and to obtain solutions of better quality than the other global optimization methods used for solving the inverse problems in this study.


Background
Evolutionary algorithms (EAs) have been widely used for data mining tasks in Bioinformatics and Computational Biology [1,2]. They are random search methods inspired by natural mechanisms existing in the biological world [1,2]. EAs originally comprised four types of paradigms, namely, genetic algorithms (GAs), genetic programming (GP), evolution strategies (ES), and evolutionary programming (EP), with GAs being the most popular one. Data analysis tools traditionally used in Bioinformatics were mainly based on statistical techniques, such as regression and estimation, and EAs played significant roles in handling large biological data sets in a robust and computationally efficient manner [2].
Currently, evolutionary computing techniques comprise conventional EAs (GAs, GP, ES and EP), swarm intelligence algorithms, artificial immune systems, and differential evolution as the main representative classes of approaches [3]. Swarm intelligence is a class of evolutionary computing techniques simulating natural systems composed of many individuals that coordinate with one another using decentralized control and self-organization. The two most influential and classical examples of swarm intelligence approaches are particle swarm optimization (PSO) and ant colony optimization (ACO) algorithms, which have been widely used in many different fields [3][4][5][6][7]. In particular, PSO algorithms have shown their effectiveness in data mining tasks in bioinformatics due to their performance in solving difficult optimization tasks [8][9][10].
Biochemical modelling can be considered a generic data-driven regression problem on the given experimental data. The goal of biochemical modeling is to build the mathematical formulations that quantitatively describe the dynamical behaviour of biochemical processes. For example, metabolic reactions are formulated as rate laws and described as a system of differential equations, the kinetic parameters of which are identified from a set of experimental data. Finding the solution of the parameter estimation problem, thus, plays a key role in building a dynamic model for a biochemical process, which, in turn, can help understand the functionality of the signalling pathways at the system level [11,12].
Since solving the inverse problem in biochemical process modelling involves a task of nonlinear programming, many numerical optimization methods have been used to determine the parameters of biochemical models. These methods can be generally classified into two categories, namely, local optimization methods and global optimization methods [13]. The widely used local optimization tools for inverse problems are those based on gradient descent, the most popular being the Newton method [14]. This type of approach, however, cannot be applied to non-smooth problems, in which the objective functions are discontinuous or have discontinuous derivatives. Direct search methods, such as the Hooke-Jeeves method and the Nelder-Mead (downhill) simplex algorithm, are also local optimization techniques that can find a local minimum without derivative information [13]. Normally, most local optimization approaches are used as single shooting methods. For each of them, the path of its optimization process leading to a final solution is determined by the initial conditions for the state variables. Therefore, the algorithm may converge to a wrong minimum, particularly if the initial conditions depend on model parameters. To overcome this shortcoming, one can adopt multiple shooting methods, in which the time interval is partitioned and new initial conditions are introduced at the start of each subinterval [15]. These methods offer the possibility of circumventing local optima by enlarging the parameter space during the optimization process.
The aforementioned local search methods are generally less efficient for the inverse problems of biochemical models, which are multimodal and high-dimensional. In order to solve these hard inverse problems efficiently, one can turn to global optimization methods, most of which incorporate stochastic search strategies to prevent the search process from being stuck in local optimal or suboptimal solutions. The branch-and-bound approach is a global optimization method that converts the inverse problem into a convex optimization problem so that a globally optimal solution can be obtained [16]. This method requires a finite search space that can be divided into smaller subspaces. A remarkable disadvantage is that it is applicable only if the lower and upper bounds of the objective function can be computed. Simulated annealing (SA) can be effectively used for parameter estimation from time-course biochemical data, as shown in [17]. However, it has a slow convergence speed and high computational cost, and is not easy to parallelize. Genetic algorithms (GAs) represent a widely used global search technique that can be employed to predict the parameters of dynamic models [18]. Nevertheless, GAs are often criticized for their slow convergence speed and high computational cost. The evolution strategy (ES) approach showed its ability to successfully solve inverse problems in a performance comparison made by Moles et al. [19] among a number of global optimization techniques on biochemical system identification problems. In contrast to SA, evolutionary algorithms, including ES and GAs, can be implemented as self-tuning methods and can be parallelized, with the stochastic ranking evolutionary strategy (SRES) method being a very successful example [19][20][21]. Scatter search (SS) is a population-based random search approach that was proposed to identify the appropriate parameters for nonlinear dynamic biological systems [22,23].
As an evolutionary algorithm, the SS method, as well as its hybrid with a local search step after the recombination operation, was shown to be efficient in solving inverse problems. Particle swarm optimization (PSO), also a population-based optimization technique from the swarm intelligence and evolutionary computation area, has demonstrated better performance than GAs in solving inverse problems [24,25]. Hybrids of PSO with other methods have also shown their effectiveness in modelling biochemical dynamic systems [26][27][28]. However, PSO has been shown to be sensitive to the neighbourhood topology of the swarm, as commented in [29].
Other methods for parameter estimation include the Newton-flow analysis [30], the alternating regression technique [31], decoupling approaches [32,33], the collocation method [20,34], and the decomposing method [35,36]. These approximation techniques, when incorporated into an optimization algorithm, can help reduce the number of objective function evaluations, which are computationally very expensive. Additionally, radial basis function neural networks [37] and a quantitative inference method [38] have also been employed to solve inverse problems in biochemical process modelling.
In all of the above cases, the optimization approach is used to minimize the residual error of an inferred model against experimental data. A smaller error means that the model describes the dynamic behaviour of the biochemical system better and has more justification to be accepted as a valid mathematical representation of the system. Theoretically, the prediction error diminishes as the accuracy of the model increases. Since parameter estimation problems of complex dynamic systems (generally with many parameters and many differential equations) are high-dimensional, multimodal and challenging to solve, yet allow more complex biochemical processes to be depicted, our goal in this study is to develop an efficient global optimization method for solving such inverse problems of complex biochemical dynamic systems.
After an extensive and in-depth study, we selected the PSO algorithm as the candidate to be modified in order to achieve our goal of solving complex inverse problems. PSO attracted us because it has many advantages, such as a fast convergence speed, low computational needs, being easily parallelizable, and having few parameters to adjust. However, PSO has the following shortcomings. First of all, it was theoretically proven that PSO is not a globally convergent algorithm, nor even a locally convergent one, according to the convergence criteria given in [39]. Practically, the algorithm is prone to being trapped in local optimal or suboptimal points for a high-dimensional problem, due to the weakening of its global search ability during the mid and later stages of the search process. Next, PSO is widely known to be sensitive to its search parameters, including the upper limits of the velocity, and even to the "swarm topology", so that users may find it difficult to select the parameters and the topology when using the algorithm [40]. Finally, the performance of PSO appears to be very sensitive to the setting of the upper and lower bounds of the search scope [40]. If the global optimal solution is located near the boundary of the search scope, the algorithm may have little chance of finding it. We have found that these shortcomings are mainly attributable to the velocity update equation, which is the essence of the PSO algorithm, and where there seems to be much room for improvement to boost the global search ability of PSO.
In this study, inspired by the free electron model of metal conductors placed in an external electric field [41], we propose a variant of the PSO algorithm, called random drift particle swarm optimization (RDPSO), in order to achieve our goal of effectively estimating the parameters of complex biochemical dynamical systems. The motivation of the RDPSO algorithm is to improve the search ability of the PSO algorithm by fundamentally modifying the update equation of the particle's velocity, rather than by revising the algorithm on top of the original equation, which would probably increase the complexity of the algorithmic implementation as well as its computational cost. It differs from the drift particle swarm optimization (DPSO) we proposed in [42,43] in that it can strike a better balance between the global search and the local search of the particle swarm.
The original and basic RDPSO version was recently introduced by us in [44] and was used for solving other problems in [45,46]. A novel variant of the RDPSO algorithm is proposed in this work to solve the parameter identification problem for two biochemical systems. The novel variant differs from the original one in that it employs a double exponential distribution for sampling the velocity of the particles, whilst the original one used the Gaussian distribution.
The novel RDPSO variant is used for estimating the parameters of two benchmark models, one of which describes the thermal isomerization of α-pinene with 5 parameters [22,47], while the other is a three-step pathway model with 36 parameters [19]. The results of RDPSO and some other well-known global optimization algorithms are then compared and discussed. It should be noted that although this paper is focused on parameter estimation for biochemical modelling, just as PSO and other EAs, the proposed RDPSO variant can be employed as a general-purpose tool for optimization problems in data mining tasks, such as clustering, classification, regression, and so forth, which widely exist in bioinformatics and computational biology [1,2,8,48,49].

Problem statement
The inverse problem of a nonlinear dynamic system involves finding proper parameters so as to minimize the cost function of the model with respect to an experimental data set, subject to some given differential equality constraints as well as other algebraic constraints. Such a data-driven regression problem can be approached with statistical techniques, using the given experimental data and the proposed models with unknown parameters. As stated by Moles et al. [19], the problem can be mathematically formulated as a nonlinear programming problem (NLP) with differential-algebraic constraints, whose goal is to find θ so as to minimize

J = ∫_{t_0}^{t_f} [y_msd(t) − y(θ, t)]^T W(t) [y_msd(t) − y(θ, t)] dt,   (1)

subject to

f(dx/dt, x, y, v, θ, t) = 0,   (2)

x(t_0) = x_0,   (3)

h(x, y, v, θ, t) = 0,   (4)

g(x, y, v, θ, t) ≤ 0,   (5)

θ^L ≤ θ ≤ θ^U,   (6)

where J is the cost function of the model, θ is a vector of model parameters to be estimated, y_msd(t) is the experimental measure of a subset of the output state variables, y(θ, t) is the prediction of those outputs by the model, x is the vector of differential state variables and v is a vector of other (usually time-invariant) parameters that are not to be estimated. In Equation (1), W(t) is the weighting (or scaling) matrix, and the equation can be discretized into a weighted least-square estimator. In Equation (2), f is the set of differential and algebraic equality constraints describing the system dynamics (i.e., the nonlinear process model). Equation (3) gives the initial value of x. In Equations (4) and (5), h and g are equality and inequality path and point constraints on system performance. In addition, θ is subject to upper and lower bounds, which are described by the inequality constraints (6).
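To make the structure of this NLP concrete, the following minimal Python sketch (our own illustration, not the authors' code) evaluates a discretized weighted least-squares cost for a hypothetical one-parameter toy model standing in for the dynamic constraints, with the bound constraints handled by rejecting infeasible candidates:

```python
import numpy as np

def simulate(theta, y0, ts):
    """Forward problem: explicit-Euler integration of the toy model
    dy/dt = -theta * y, a stand-in for the dynamic constraints."""
    ys = [y0]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        ys.append(ys[-1] + (t1 - t0) * (-theta * ys[-1]))
    return np.array(ys)

def cost(theta, ts, y_msd, w, bounds):
    """Discretized weighted least-squares cost; the bound constraints
    theta_L <= theta <= theta_U are enforced by returning infinity."""
    lo, hi = bounds
    if not (lo <= theta <= hi):
        return float('inf')          # infeasible candidate parameter
    y = simulate(theta, y_msd[0], ts)   # assumes a known initial state
    return float(np.sum(w * (y_msd - y) ** 2))
```

Any of the global optimization methods discussed below can then be run on `cost` directly, treating the embedded simulation as a black box.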
The above-defined inverse problem is generally a multimodal (non-convex) optimization problem with multiple local optima, due to the nonlinearity and constraints of the system dynamics. Even though many local and global optimization methods have been proposed to solve the problem, as mentioned above, it is still challenging and necessary to develop efficient optimization algorithms for parameter estimation problems, especially those of dynamic systems with many parameters and many equations. Therefore, this study focuses on the optimization approach for the inverse problem using the proposed variant of random drift particle swarm optimization (RDPSO) and other global optimization methods.

Particle swarm optimization
The original PSO algorithm was introduced by Kennedy and Eberhart in [50]. The algorithm was inspired by the observed social behaviour of bird flocking and fish schooling, and it roots its methodology both in evolutionary computing and artificial life. It shares many similarities with EAs, in that both PSO and EAs are initialized randomly with a population of candidate solutions and then update the population iteratively, in order to approximate the global optimal solution to the given problem. However, unlike EAs, PSO has no evolution operators such as crossover and mutation, but performs optimization tasks by updating the particles' positions (potential solutions) according to a set of discrete-time update equations. It was shown that the PSO algorithm has comparable and even better performance than GAs [51].
In the PSO with m particles, each particle i (1 ≤ i ≤ m), representing a potential solution of the given problem in a D-dimensional space, has three vectors at the k-th iteration, namely, the current position X^k_i = (X^k_{i,1}, X^k_{i,2}, ..., X^k_{i,D}), the velocity V^k_i = (V^k_{i,1}, V^k_{i,2}, ..., V^k_{i,D}) and its personal best (pbest) position P^k_i = (P^k_{i,1}, P^k_{i,2}, ..., P^k_{i,D}), which is defined as the position with the best objective function value found by the particle since initialization. A vector G^k = (G^k_1, G^k_2, ..., G^k_D), called the global best (gbest) position, is used to record the position with the best objective function value found by all the particles in the swarm since initialization. With the above specification, the update equations for each particle's velocity and current position are given by:

V^{k+1}_{i,j} = w V^k_{i,j} + c_1 r^k_{i,j} (P^k_{i,j} − X^k_{i,j}) + c_2 R^k_{i,j} (G^k_j − X^k_{i,j}),   (7)

X^{k+1}_{i,j} = X^k_{i,j} + V^{k+1}_{i,j},   (8)

for i = 1, 2, ..., m; j = 1, 2, ..., D, where c_1 and c_2 are known as acceleration coefficients and w is called the inertia weight, which can be adjusted to balance the exploration and exploitation abilities of each particle [52]. Without loss of generality, we assume that the PSO is used to solve the following minimization problem:

minimize f(X), subject to X ∈ S,   (9)

where f(X) is an objective function (or fitness function) and S denotes the feasible space. Consequently, P^k_i can be updated by:

P^k_i = X^k_i if f(X^k_i) < f(P^{k−1}_i), and P^k_i = P^{k−1}_i otherwise,   (10)

and G^k can be found by:

G^k = P^k_g, where g = arg min_{1 ≤ i ≤ m} f(P^k_i).   (11)

In Equation (7), r^k_{i,j} and R^k_{i,j} are sequences of two different random numbers with uniform distribution on the interval (0, 1), namely, r^k_{i,j}, R^k_{i,j} ∼ U(0, 1). In order to prevent the particles from flying out of the search scope, each velocity component is restricted to the interval [−V_max, V_max], where V_max is also a user-specified algorithmic parameter.
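The update rules above can be condensed into a short, self-contained implementation. This is our own minimal sketch, not the code used in the experiments; the parameter defaults are illustrative, and positions as well as velocities are clamped to the search bounds:

```python
import numpy as np

def pso(f, bounds, m=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal inertia-weight PSO for box-constrained minimization.
    f maps a D-vector to a float; bounds = (lo, hi) arrays."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    D = lo.size
    X = rng.uniform(lo, hi, (m, D))      # current positions
    V = np.zeros((m, D))                 # velocities
    vmax = hi - lo                       # per-dimension velocity clamp
    P = X.copy()                         # pbest positions
    pf = np.apply_along_axis(f, 1, X)    # pbest objective values
    g = pf.argmin()                      # index of the gbest particle
    for _ in range(iters):
        r, R = rng.random((m, D)), rng.random((m, D))
        V = w * V + c1 * r * (P - X) + c2 * R * (P[g] - X)   # velocity update
        V = np.clip(V, -vmax, vmax)
        X = np.clip(X + V, lo, hi)                           # position update
        fx = np.apply_along_axis(f, 1, X)
        better = fx < pf
        P[better], pf[better] = X[better], fx[better]        # pbest update
        g = pf.argmin()                                      # gbest update
    return P[g], pf[g]
```

For example, calling `pso` on the 2-D sphere function with bounds [−5, 5] drives the best objective value close to zero within a few hundred iterations.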

The proposed random drift particle swarm optimization
In [57], it was demonstrated that if the acceleration coefficients are properly valued, each particle converges to its local attractor, p^k_i = (p^k_{i,1}, p^k_{i,2}, ..., p^k_{i,D}), so that the convergence of the whole particle swarm can be achieved. Each coordinate of p^k_i is given by:

p^k_{i,j} = (c_1 r^k_{i,j} P^k_{i,j} + c_2 R^k_{i,j} G^k_j) / (c_1 r^k_{i,j} + c_2 R^k_{i,j}),   (12)

which can be restated as

p^k_{i,j} = φ^k_{i,j} P^k_{i,j} + (1 − φ^k_{i,j}) G^k_j,   (13)

where

φ^k_{i,j} = c_1 r^k_{i,j} / (c_1 r^k_{i,j} + c_2 R^k_{i,j}).   (14)

In the PSO algorithm, c_1 and c_2 are set to be equal, and thus φ^k_{i,j} is a random number with uniform distribution on the interval (0, 1), i.e., φ^k_{i,j} ∼ U(0, 1). During the search process of the PSO, as the particles' current positions are converging to their own local attractors, their current positions, pbest positions, local attractors and the gbest position are all converging to one single point. The directional movement of each particle i towards p^k_i resembles the drift motion of an electron in a metal conductor placed in an external electric field. According to the free electron model [41], the electron has not only a drift motion caused by the external electric field, but also a thermal motion, which appears as a random movement. The superposition of the drift and thermal motions makes the electron careen towards the location of the minimum potential energy. Thus, if the position of an electron in the metal is regarded as a candidate solution and the potential energy function as the objective function to be minimized, the movement of the electron resembles the process of finding the minimum solution of the minimization problem.
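The local attractor described above is a random convex combination of a particle's pbest coordinates and the gbest coordinates. A minimal sketch (our own illustration, with a hypothetical helper name; c1 = c2 = 2.0 is only a common default) is:

```python
import numpy as np

def local_attractor(P_i, G, c1=2.0, c2=2.0, rng=None):
    """Sample the local attractor coordinate-wise:
    p = phi * P_i + (1 - phi) * G, with phi = c1*r / (c1*r + c2*R),
    where r, R ~ U(0, 1) independently per coordinate."""
    rng = rng if rng is not None else np.random.default_rng()
    r, R = rng.random(P_i.shape), rng.random(P_i.shape)
    phi = c1 * r / (c1 * r + c2 * R)
    return phi * P_i + (1 - phi) * G
```

Because phi lies in (0, 1), every sampled attractor coordinate lies between the corresponding pbest and gbest coordinates, which is what makes the subsequent drift motion directional.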
The above facts can lead to a novel variant of PSO if the particle in PSO is assumed to behave like an electron moving in a metal conductor in an external electric field. More specifically, it can be assumed that at the k-th iteration, each particle i has a thermal motion as well as a drift motion towards p^k_i, with their velocities in each dimension j denoted as V1^{k+1}_{i,j} and V2^{k+1}_{i,j}, respectively. As a result, the velocity of the particle is given by V^{k+1}_{i,j} = V1^{k+1}_{i,j} + V2^{k+1}_{i,j}. In the drift particle swarm optimization proposed in [42,43], we assumed that V1^{k+1}_{i,j} follows a Maxwell distribution, i.e., a Gaussian probability distribution, and that V2^{k+1}_{i,j} is given by the social part plus the cognitive part in Equation (7). This velocity update equation appears to add some effectiveness to the search performance of the particle swarm, but it has some shortcomings. Firstly, the Gaussian distribution has a thin tail, so it has less opportunity to generate outliers. As a result, the thermal motion of the particle has less randomness and cannot significantly improve the particle's global search ability. Secondly, although the update equation of V2^{k+1}_{i,j},

V2^{k+1}_{i,j} = c_1 r^k_{i,j} (P^k_{i,j} − X^k_{i,j}) + c_2 R^k_{i,j} (G^k_j − X^k_{i,j}),   (15)

can guarantee that the particle converges towards its local attractor, the two random scaling coefficients add randomness to its motion, which means that the particle's position is sampled at uniformly random positions within the hyper-rectangle around the gbest position and its personal best position. This is not able to enhance the particle's global search ability because of the finite scope of the hyper-rectangle, but it may weaken the particle's local search ability, which is the responsibility of the directional motion. Therefore, the velocity of the particle, which is given by the sum of V1^{k+1}_{i,j} and V2^{k+1}_{i,j}, may not be able to strike a good balance between the global search and the local search of the particle. In the present study, we employ a new way of determining V1^{k+1}_{i,j} and V2^{k+1}_{i,j}.
Here, we assume that the velocity of the thermal motion, V1^{k+1}_{i,j}, follows a double exponential distribution, whose probability density function and probability distribution function are

f(v) = (1 / (√2 σ^k_{i,j})) exp(−√2 |v| / σ^k_{i,j})   (16)

and

F(v) = (1/2) exp(√2 v / σ^k_{i,j}) for v < 0; F(v) = 1 − (1/2) exp(−√2 v / σ^k_{i,j}) for v ≥ 0,   (17)

respectively, where v represents the value of the random variable V1^{k+1}_{i,j} and σ^k_{i,j} is the standard deviation of the distribution. By employing a stochastic simulation method, we can express V1^{k+1}_{i,j} as

V1^{k+1}_{i,j} = σ^k_{i,j} Φ^k_{i,j},   (18)

where

Φ^k_{i,j} = (1/√2) ln(1/u^k_{i,j}) if s < 0.5, and Φ^k_{i,j} = −(1/√2) ln(1/u^k_{i,j}) otherwise,   (19)

and s and u^k_{i,j} are two different random numbers uniformly distributed on the interval (0, 1), i.e., s, u^k_{i,j} ∼ U(0, 1). As for the value of σ^k_{i,j}, an adaptive strategy is adopted to determine it by

σ^k_{i,j} = α |C^k_j − X^k_{i,j}|,   (20)

where C^k = (C^k_1, C^k_2, ..., C^k_D) is called the mean best (mbest) position, defined as the mean of the pbest positions of all the particles, i.e.
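The double exponential (Laplace) thermal velocity can be drawn by inverse-transform sampling from two uniform streams: one supplies the sign, the other the exponentially distributed magnitude. The sketch below is our own illustration with a hypothetical function name; with the magnitude scaled by sigma/√2, the resulting distribution has standard deviation exactly sigma:

```python
import numpy as np

def sample_thermal(sigma, size, rng):
    """Draw samples from a zero-mean double exponential (Laplace)
    distribution with standard deviation sigma, using two independent
    U(0, 1) streams: a sign from s and a magnitude from u."""
    s = rng.random(size)
    u = 1.0 - rng.random(size)                      # in (0, 1], avoids log(0)
    mag = (sigma / np.sqrt(2.0)) * np.log(1.0 / u)  # exponential magnitude
    return np.where(s < 0.5, mag, -mag)             # random sign
```

Compared with Gaussian sampling, the heavier exponential tails occasionally produce large thermal jumps, which is exactly the extra randomness the variant relies on for global search.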
C^k_j = (1/m) Σ_{i=1}^{m} P^k_{i,j}, for j = 1, 2, ..., D. The velocity of the drift motion, V2^{k+1}_{i,j}, may have many possible forms. However, the following simple linear expression is adopted in this study:

V2^{k+1}_{i,j} = β (p^k_{i,j} − X^k_{i,j}),   (21)

where p^k_{i,j} is determined by Equation (13). Equation (21) can indeed guarantee that the particle moves directionally towards p^k_i as an overall result. With the definitions of the thermal motion and the drift motion of the particle, we can obtain a novel set of update equations for the particle:

V^{k+1}_{i,j} = α |C^k_j − X^k_{i,j}| Φ^k_{i,j} + β (p^k_{i,j} − X^k_{i,j}),   (22)

X^{k+1}_{i,j} = X^k_{i,j} + V^{k+1}_{i,j},   (23)

where α is called the thermal coefficient and β is called the drift coefficient. The PSO with Equations (22) and (23) is a novel variant of RDPSO, which employs a double exponential distribution instead of a Gaussian one. The procedure of this RDPSO variant is outlined below.
Step 0: Randomly initialize the current positions and the pbest positions of all the particles;
Step 1: Set k = 0;
Step 2: While the termination condition is not met, do the following steps;
Step 3: Set k = k + 1 and compute the mbest position C^k, which is the mean (centroid) of the pbest positions of all the particles at iteration k;
Step 4: From i = 1, carry out the following steps;
Step 5: Evaluate the objective function value f(X^k_i), and update P^k_i and G^k according to Equation (10) and Equation (11), respectively;
Step 6: Update the velocity components and the current position of particle i in each dimension according to Equations (21), (22) and (23);
Step 7: Set i = i + 1, and return to Step 5 until i = m;
Step 8: Return to Step 2.

In the RDPSO algorithm, in addition to the population size m, α and β are two very important user-specified algorithmic parameters, which play the same roles as the inertia weight w and the acceleration coefficients in the basic PSO algorithm. That is, they can be tuned to balance the exploration and exploitation abilities of the particles. How to select the values of these parameters so as to prevent the particles from exploding is an open problem. Here, we performed stochastic simulations for the one-dimensional case, in which the local attractor was fixed at the origin and the mbest position was at x = 0.1. The results of two simulations are visualized in Figure 1 and Figure 2, in which the logarithm of the absolute value of X^k was recorded as the ordinate, and the iteration number as the abscissa. Figure 1 shows that the particle's position was bounded when α = 1 and β = 1.5. However, as shown in Figure 2, when α = 1.8 and β = 1.5, the particle diverged to infinity. To obtain the sufficient and necessary condition for the particle's position to be bounded, we will focus our attention on a theoretical analysis in terms of probability measure in future work.
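Putting the pieces together, the outlined procedure can be sketched as a compact implementation for box-constrained minimization. This is our own minimal version, not the authors' experimental code; the velocity clamping to the interval width and the defaults are illustrative assumptions:

```python
import numpy as np

def rdpso(f, bounds, m=40, iters=300, alpha=0.75, beta=1.0, seed=0):
    """RDPSO sketch: thermal velocity drawn from a double exponential
    distribution scaled by alpha*|mbest - X|, plus a drift velocity
    beta*(p - X) towards the local attractor p."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    D = lo.size
    X = rng.uniform(lo, hi, (m, D))
    vmax = hi - lo                       # per-dimension velocity clamp
    P = X.copy()                         # pbest positions
    pf = np.apply_along_axis(f, 1, X)    # pbest objective values
    g = pf.argmin()                      # index of the gbest particle
    for _ in range(iters):
        C = P.mean(axis=0)                           # mbest position
        r, R = rng.random((m, D)), rng.random((m, D))
        phi = r / (r + R)                            # the c1 = c2 case
        p = phi * P + (1 - phi) * P[g]               # local attractors
        s = rng.random((m, D))
        u = 1.0 - rng.random((m, D))                 # in (0, 1], avoids log(0)
        mag = np.log(1.0 / u) / np.sqrt(2.0)         # exponential magnitude
        therm = alpha * np.abs(C - X) * np.where(s < 0.5, mag, -mag)
        V = np.clip(therm + beta * (p - X), -vmax, vmax)
        X = np.clip(X + V, lo, hi)
        fx = np.apply_along_axis(f, 1, X)
        better = fx < pf
        P[better], pf[better] = X[better], fx[better]
        g = pf.argmin()
    return P[g], pf[g]
```

Note how the thermal term shrinks automatically as the swarm contracts around the mbest position, so exploration fades into exploitation without a schedule.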
Setting large values for α and β implies a better global search ability of the algorithm, while setting small values means a better local search. When the RDPSO is used for solving a problem, a good balance between the global search and the local search is crucial for its performance. However, in order to find out how to tune the parameters to obtain generally good algorithmic performance, a large number of experiments on benchmark functions is needed, which will be performed in our future work. Here, we recommend that when the RDPSO is used, α should be set to be no larger than 1.0 and β no larger than 1.5. More specifically, when the problem at hand is complex, the values of the two parameters should be set relatively large, in order to make the particles search more globally; on the other hand, when the problem is simple, relatively smaller values should be selected, for the purpose of a faster convergence speed of the algorithm. In the present study, the values of α and β were set to 0.75 and 1.0, respectively.
In addition, the population size and the maximum number of iterations (MaxIter) can also affect the performance of a population-based technique. Just as for other PSO variants, it is suggested that the population size should be larger than 20 for the RDPSO as well. The value of the MaxIter depends on the complexity of the problem. Generally, a smaller MaxIter value is used for simple problems, while a larger one is used for complex problems.
Moreover, V^k_{i,j} is also restricted within the interval [−V_max, V_max] during the search process of the RDPSO algorithm, just as in the original PSO algorithm.

The optimization methods compared
Besides the PSO and RDPSO algorithms, Differential Evolution (DE), the Scatter Search (SS) method and two versions of Evolution Strategies were also used to solve the selected inverse problems, for performance comparison purposes. The DE method, as presented by Storn and Price [58], is an evolutionary computing method which has a faster convergence speed than GAs and can find the global optimal solution of a multidimensional and multimodal function effectively [58].

Figure 1 The result of the stochastic simulation with α = 1 and β = 1.5. It is shown that, with this parameter setting, the particle's position is bounded.

Figure 2 The result of the stochastic simulation with α = 1.8 and β = 1.5. It is shown that, with this parameter setting, the particle's position diverges as the iteration number increases.

The SS method is also a population-based search technique, originally developed by Glover [59]. In [22], a novel meta-heuristic method, which is the combination of the original SS method with a local search technique, was proposed to solve inverse problems. It was shown that the local search operator can accelerate the convergence speed significantly. Thus, in our experiments, we used this novel SS method for performance comparison.
Evolution strategy (ES) is an important paradigm of EAs, which imitates the effects that genetics produces on the phenotype, rather than on the genotype as in GAs [60]. The two canonical versions of ES used in this study are denoted by (μ, l)-ES and (μ + l)-ES, where μ denotes the number of parents and l the number of offspring. In the (μ, l)-ES, the parents are deterministically selected from the offspring only (μ < l must hold), while in the (μ + l)-ES, the parents are selected from both the parents and the offspring.
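The difference between the two survivor-selection schemes can be shown with a small sketch (our own illustration; `select` is a hypothetical helper, and individuals are represented as (fitness, genome) pairs with lower fitness being better):

```python
def select(parents, offspring, mu, plus=False):
    """Deterministic survivor selection for evolution strategies:
    (mu, lambda) keeps the mu best offspring only, discarding all
    parents; (mu + lambda) keeps the mu best of parents and offspring
    pooled together. Individuals are (fitness, genome) tuples."""
    pool = parents + offspring if plus else offspring
    return sorted(pool, key=lambda ind: ind[0])[:mu]
```

Under the plus scheme an elite parent can survive indefinitely, whereas the comma scheme forgets parents every generation, which helps escape outdated optima at the cost of possibly discarding the best solution found so far.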
In addition, the performances of the above-mentioned algorithms, including the RDPSO, are also compared with that of the SRES method. The SRES is a version of (μ, l)-ES that uses stochastic ranking to handle the constraints, by adjusting the balance between the objective function and the penalty function over the course of the search [19,61].

Case studies
Two case studies involving two benchmark systems were carried out. For each system, we performed two groups of numerical experiments, one with noise-free simulation data, and the other with noisy simulation data.

Case study 1
The goal of this case study is to estimate the five rate constants of the homogeneous biochemical reaction describing the thermal isomerization of α-pinene, an organic compound of the terpene class and one of the two isomers of pinene [22,23]. The mathematical model of this process is formulated with the following linear ordinary differential equations:

dy_1/dt = −(p_1 + p_2) y_1,   (24)

dy_2/dt = p_1 y_1,   (25)

dy_3/dt = p_2 y_1 − (p_3 + p_4) y_3 + p_5 y_5,   (26)

dy_4/dt = p_3 y_3,   (27)

dy_5/dt = p_4 y_3 − p_5 y_5,   (28)

where (p_1, p_2, ..., p_5) is the vector of unknown rate constants to be estimated, and y_1, y_2, y_3, y_4 and y_5 denote the concentrations of α-pinene, dipentene, allo-ocimene, β-pyronene and a dimer, respectively.
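Assuming the standard formulation of this benchmark, the model's right-hand side and a fixed-step fourth-order Runge-Kutta integrator (the scheme used elsewhere in the paper to generate pseudo-data) can be sketched as follows; function names are our own:

```python
import numpy as np

def alpha_pinene_rhs(y, p):
    """Right-hand side of the alpha-pinene isomerization model;
    y = concentrations (y1..y5), p = rate constants (p1..p5)."""
    y1, y2, y3, y4, y5 = y
    p1, p2, p3, p4, p5 = p
    return np.array([
        -(p1 + p2) * y1,                      # alpha-pinene consumed
        p1 * y1,                              # dipentene formed
        p2 * y1 - (p3 + p4) * y3 + p5 * y5,   # allo-ocimene balance
        p3 * y3,                              # beta-pyronene formed
        p4 * y3 - p5 * y5,                    # dimer balance
    ])

def rk4(rhs, y0, p, t_end, n_steps):
    """Fixed-step fourth-order Runge-Kutta integration of dy/dt = rhs(y, p)."""
    h = t_end / n_steps
    y = np.array(y0, dtype=float)
    for _ in range(n_steps):
        k1 = rhs(y, p)
        k2 = rhs(y + 0.5 * h * k1, p)
        k3 = rhs(y + 0.5 * h * k2, p)
        k4 = rhs(y + h * k3, p)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y
```

A useful sanity check is mass conservation: the five rate terms cancel in the sum, so the total concentration stays constant along any trajectory.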

Case study 2
This case study involves the inverse problem of identifying the 36 kinetic parameters of a nonlinear biochemical dynamic model formed by the following 8 ordinary differential equations, which describe the variation of the metabolite concentrations with time [19]:

dG_1/dt = V_1 / (1 + (P/Ki_1)^{ni_1} + (Ka_1/S)^{na_1}) − k_1 G_1,   (29)

dG_2/dt = V_2 / (1 + (P/Ki_2)^{ni_2} + (Ka_2/M_1)^{na_2}) − k_2 G_2,   (30)

dG_3/dt = V_3 / (1 + (P/Ki_3)^{ni_3} + (Ka_3/M_2)^{na_3}) − k_3 G_3,   (31)

dE_1/dt = V_4 G_1 / (K_4 + G_1) − k_4 E_1,   (32)

dE_2/dt = V_5 G_2 / (K_5 + G_2) − k_5 E_2,   (33)

dE_3/dt = V_6 G_3 / (K_6 + G_3) − k_6 E_3,   (34)

dM_1/dt = kcat_1 E_1 (1/Km_1)(S − M_1) / (1 + S/Km_1 + M_1/Km_2) − kcat_2 E_2 (1/Km_3)(M_1 − M_2) / (1 + M_1/Km_3 + M_2/Km_4),   (35)

dM_2/dt = kcat_2 E_2 (1/Km_3)(M_1 − M_2) / (1 + M_1/Km_3 + M_2/Km_4) − kcat_3 E_3 (1/Km_5)(M_2 − P) / (1 + M_2/Km_5 + P/Km_6),   (36)
where M 1 , M 2 , E 1 , E 2 , E 3 , G 1 , G 2 and G 3 are the state variables representing the concentrations of the species involved in different biochemical reactions, and S and P are controlling parameters which are kept fixed at the initial values for each experiment. The inverse problem is then reduced to the optimization problem that fits the remaining 36 parameters represented by θ = (θ 1 , θ 2 , · · · , θ 36 ).

Objective functions
The objective function (or fitness function) for the inverse problem in either of the two case studies is the discretization of Equation (1), which is formulated as the weighted sum of squares of the differences between the experimental and the predicted values of the state variables:

J = Σ_{j=1}^{l} Σ_{i=1}^{n} w_{ij} [y_exp(i, j) − y_pred(i, j)]²,   (37)

where n is the number of data points for each experiment, l is the number of experiments, y_exp is the vector of experimental values of the state variables, and y_pred is the vector of the values of the state variables predicted by the model with a given set of parameters. In Case Study 1, each w_{ij} was set to 1 [22], while in Case Study 2, it was set as w_{ij} = {1 / max[y_pred(i)]_j}², which was used to normalize the contributions of each term [19].
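The weighted objective and a normalizing weight scheme can be sketched as follows (our own illustration; in the second helper the maximum is taken per state variable over the available data, which is an assumption about how the normalization is applied):

```python
import numpy as np

def weighted_sse(y_exp, y_pred, w):
    """Weighted sum-of-squares objective over an array of data points:
    J = sum_ij w_ij * (y_exp_ij - y_pred_ij)^2."""
    y_exp, y_pred, w = map(np.asarray, (y_exp, y_pred, w))
    return float(np.sum(w * (y_exp - y_pred) ** 2))

def normalizing_weights(y):
    """Weights of the form (1 / max over time of each state variable)^2,
    so that every state contributes on a comparable scale; y is a
    (time points x state variables) array."""
    return (1.0 / np.max(np.abs(y), axis=0, keepdims=True)) ** 2
```

With unit weights this reduces to the plain sum of squared residuals used in Case Study 1.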

Obtaining simulation data
In order to evaluate the performances of the global optimization methods in finding the solution of the inverse problems, we chose a set of parameters for each model, which are considered as the true or nominal values. For Case Study 1, the true values of the parameters are p 1 = 5.93e-5, p 2 = 2.96e-5, p 3 = 2.05e-5, p 4 = 27.5e-5 and p 5 = 4.00e-5. For Case Study 2, the nominal values of the model parameters are shown in Table 1.
The pseudo-experimental data (essentially simulation data) in either case were generated by substituting the chosen parameters into the dynamic model and applying the fourth-order Runge-Kutta method to the corresponding differential equations. For Case Study 2, the pseudo-measurements of the concentrations of metabolites, proteins, and messenger RNA species were the results of 16 different pseudo-experiments, in which, with the given nominal values for the parameters, the initial concentrations of the pathway substrate S and product P were varied and combined for each experiment (simulation) as shown in Table 1, generating a total of 16 sets of pseudo-experimental (simulation) measurements. These simulated data represent exact experimental results devoid of measurement noise, and they were used as the noise-free data for the first group of numerical experiments in each case study. In order to test the optimization methods on noisy data, we added white noise to each of the original noise-free data points:

z̃ = z + σ ε,   (38)

where z and z̃ represent the original noise-free datum and the resulting noisy datum, respectively, ε is a random number with standard normal distribution, namely ε ∼ N(0, 1), and σ is the standard deviation of the white noise. In our case studies, σ was set to 0.01 for both systems.
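The noise model of Equation (38) amounts to a one-liner; the sketch below (with a hypothetical function name) corrupts an array of noise-free data with additive white noise:

```python
import numpy as np

def add_white_noise(z, sigma=0.01, seed=0):
    """Corrupt noise-free simulation data z with additive white noise:
    z_noisy = z + sigma * eps, with eps ~ N(0, 1) i.i.d. per entry."""
    rng = np.random.default_rng(seed)
    return z + sigma * rng.standard_normal(np.shape(z))
```

Fixing the seed keeps the noisy data set identical across the compared algorithms, so performance differences are due to the optimizers alone.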

Initial value problem solver used
During the search of each global optimization algorithm, each potential solution (i.e., a set of estimated values for the parameters) was substituted into the dynamic model. Then, the fourth-order Runge-Kutta method was applied to the corresponding system of differential equations to generate a set of predicted values of the output state variables, from which the objective function value (or fitness value) of the potential solution could be evaluated according to Equation (37) against the obtained pseudo-experimental (simulation) data. This process, known as solving the forward problem, was embedded in each iteration of the search when solving the inverse problem with the algorithm.
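The forward-problem evaluation embedded in the search can be illustrated with a toy one-state model; the model dy/dt = -p*y and all names below are assumptions for demonstration, not the paper's pathway models:

```python
import numpy as np

def simulate(p, y0, ts):
    """Toy forward model dy/dt = -p*y, integrated with one classical RK4
    step per sampling interval (the paper's pathways are much larger)."""
    ys = [y0]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        h, y = t1 - t0, ys[-1]
        k1 = -p * y
        k2 = -p * (y + h / 2 * k1)
        k3 = -p * (y + h / 2 * k2)
        k4 = -p * (y + h * k3)
        ys.append(y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(ys)

def fitness(p, y_exp, y0, ts, w=1.0):
    """Forward problem embedded in the search: integrate the candidate
    parameter set, then score it with the weighted SSE of Equation (37)."""
    y_pred = simulate(p, y0, ts)
    return float(np.sum(w * (y_exp - y_pred) ** 2))
```

Every candidate solution proposed by the optimizer is scored this way, so one iteration of a population-based search costs one forward integration per individual.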

Experimental settings
For the sake of performance comparison, all the tested global optimization methods except the SS method (i.e., PSO, RDPSO, DE, (μ, λ)-ES, and (μ + λ)-ES) were programmed in C++ on the VC++ 6.0 platform under Windows XP, and run on Intel Pentium Dual-Core E5300 2.6 GHz PCs, each with 2 MB cache and 4 GB main memory. The SS method was implemented in Matlab 7.0 on the same platform, so that the local solver SQP in Matlab could be called during the search process. The SS software for inverse problems can be found at http://www.iim.csic.es/~gingproc/ssmGO.html.
The configuration of the algorithm parameters, including the population sizes, is listed in Table 2 (F is the algorithmic control parameter governing the size of the perturbation in DE's mutation operator, CR is the crossover constant in DE, and φ is the parameter determining the standard deviation of the mutation in the evolution strategies). In Case Study 1, each optimization algorithm ran 20 times, with each run executed for 500 iterations; that is, the maximum number of iterations (MaxIter) was 500. In Case Study 2, each algorithm also ran 20 times, with each run executed for 2,250,000 function evaluations, the same budget as that used for the DE in [19]. Since the population size of each algorithm was 100, the value of MaxIter was 22,500 in Case Study 2. For the SS method, the initial population size was 100, and 10 individuals were selected to perform the iterative search after initialization. Other parameters were selected according to recommendations from the corresponding references and/or our preliminary runs. For all the algorithms tested on the inverse problems, the statistical values of the objective function J over the multiple runs were recorded for comparison.
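The budget bookkeeping and multi-run protocol can be sketched as follows; pure random search stands in here as a placeholder for PSO/RDPSO/DE/ES/SS, and all function names are assumptions:

```python
import numpy as np

def random_search(obj, bounds, pop_size, max_iter, rng):
    """Placeholder global optimizer (pure random search) honouring the same
    budget bookkeeping: pop_size evaluations per iteration, so the total
    budget is pop_size * max_iter function evaluations."""
    lo, hi = np.asarray(bounds[0]), np.asarray(bounds[1])
    best = np.inf
    for _ in range(max_iter):
        pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
        best = min(best, min(obj(x) for x in pop))
    return best

def benchmark(obj, bounds, runs=20, pop_size=100, evals=2_250_000, rng=None):
    """Repeat independent runs and report statistics of J, as done for each
    algorithm in the experiments (MaxIter = evals / pop_size)."""
    rng = np.random.default_rng() if rng is None else rng
    max_iter = evals // pop_size  # 2,250,000 / 100 = 22,500 in Case Study 2
    results = [random_search(obj, bounds, pop_size, max_iter, rng)
               for _ in range(runs)]
    return {"best": min(results), "mean": float(np.mean(results)),
            "worst": max(results)}
```

Fixing the evaluation budget rather than the iteration count keeps the comparison fair across algorithms with different per-iteration costs.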

Results
For Case Study 1, the statistical values of J from 20 search runs with 500 iterations by each algorithm are listed in Table 3. The best value of J (J = 1.3740e-014) for the numerical experiments with noise-free simulation data was obtained by the proposed RDPSO algorithm after running for 0.0348 h (about 2 minutes). For the experiment with noisy data, the RDPSO also generated the best J value (J = 0.2023). The proposed algorithm likewise showed the best average performance among all tested methods, as indicated by the mean value of J over the 20 runs. In this case study, the basic PSO algorithm also performed well, suggesting that it is competitive on low-dimensional inverse problems.
The convergence process of each tested algorithm, averaged over 20 runs in the numerical experiment with noisy data in Case Study 1, is shown by the convergence curves in Figure 3, plotted on a log-log scale with objective function values versus iteration number. Evidently, the SS method showed a better convergence property than the other algorithms. Figure 4 and Figure 5 show good fits between the experimental (simulation) data and the predicted data, using the best parameters obtained in the experiments with noise-free and noisy data, respectively.

For Case Study 2, the values of J resulting from 20 runs with 22,500 iterations by each algorithm are listed in Table 4. For the numerical experiments with noise-free data, the best result (J = 0.7358e-06) was obtained by the SS method, which also had a better average performance than any other compared algorithm, as shown by the mean value of J over its 20 runs. The second-best method was the RDPSO algorithm, which converged to J = 0.009124 and had a mean value of J = 0.178881 over 20 runs. Table 5 lists the estimated values of the model parameters corresponding to the best J value (J = 0.009124) found by the RDPSO algorithm. The results also show that (μ, λ)-ES was the winner on this inverse problem compared to (μ + λ)-ES, whose best and mean results of J were 0.123209 and 2.141820, respectively. PSO and DE, two well-known efficient population-based optimization methods, nevertheless could not reach the vicinity of the aforementioned solutions. When the experimental (simulation) data were noisy, the SS method and the RDPSO obtained similar results for the best J value over 20 runs, but the former had a better average algorithmic performance. Table 6 shows the identified model parameters corresponding to the best J value (J = 0.2313) obtained by the RDPSO for the noisy data. The convergence curve of each method, averaged over 20 runs, is plotted in Figure 6.
It can be seen that the SS method had a remarkably better convergence rate than the others, probably owing to its local solver, which significantly enhances the local search ability of the algorithm. Figure 7a and Figure 8 show comparisons between the predicted data and the experimental (simulation) data for the decision vectors found by the RDPSO in the two groups of numerical experiments (with noise-free and noisy data, respectively). Good agreement between the experimental and predicted data can be observed.