Biochemical systems identification by a random drift particle swarm optimization approach

Abstract

Background

Finding an efficient method to solve the parameter estimation problem (inverse problem) for nonlinear biochemical dynamical systems could help promote functional understanding at the system level of signalling pathways. The problem is stated as a data-driven nonlinear regression problem, which is converted into a nonlinear programming problem with many nonlinear differential and algebraic constraints. Due to the typical ill-conditioning and multimodality of the problem, it is in general difficult for gradient-based local optimization methods to obtain satisfactory solutions. To surmount this limitation, many stochastic optimization methods have been employed to find the global solution of the problem.

Results

This paper presents an effective search strategy for a particle swarm optimization (PSO) algorithm that enhances the ability of the algorithm to estimate the parameters of complex dynamic biochemical pathways. The proposed algorithm is a new variant of random drift particle swarm optimization (RDPSO), which is used to solve the above-mentioned inverse problem and compared with other well-known stochastic optimization methods. Two case studies on estimating the parameters of two nonlinear biochemical dynamic models were taken as benchmarks, under both noise-free and noisy simulation data scenarios.

Conclusions

The experimental results show that the novel variant of the RDPSO algorithm is able to successfully solve the problem and obtain solutions of better quality than the other global optimization methods used for finding the solution to the inverse problems in this study.

Background

Evolutionary algorithms (EAs) have been widely used for data mining tasks in Bioinformatics and Computational Biology [1, 2]. They are random search methods inspired by natural mechanisms existing in the biological world [1, 2]. EAs originally comprised four types of paradigms, namely, genetic algorithms (GAs), genetic programming (GP), evolution strategies (ES), and evolutionary programming (EP), with GAs being the most popular one. Data analysis tools traditionally used in Bioinformatics were mainly based on statistical techniques, such as regression and estimation, and EAs played significant roles in handling large biological data sets in a robust and computationally efficient manner [2].

Evolutionary computing techniques currently comprise conventional EAs (GAs, GP, ES and EP), swarm intelligence algorithms, artificial immune systems, and differential evolution as the main representative classes of approaches [3]. Swarm intelligence is a class of evolutionary computing techniques simulating natural systems composed of many individuals that coordinate with one another using decentralized control and self-organization. The two most influential and classical examples of swarm intelligence approaches are particle swarm optimization (PSO) and ant colony optimization (ACO), which have been widely used in many different fields [3–7]. In particular, PSO algorithms have shown their effectiveness in data mining tasks in bioinformatics, due to their performance in solving difficult optimisation tasks [8–10].

Biochemical modelling can be considered a generic data-driven regression problem on the given experimental data. The goal of biochemical modeling is to build the mathematical formulations that quantitatively describe the dynamical behaviour of biochemical processes. For example, metabolic reactions are formulated as rate laws and described as a system of differential equations, the kinetic parameters of which are identified from a set of experimental data. Finding the solution of the parameter estimation problem, thus, plays a key role in building a dynamic model for a biochemical process, which, in turn, can help understand the functionality of the signalling pathways at the system level [11, 12].

Since solving the inverse problem in biochemical process modelling involves a nonlinear programming task, many numerical optimization methods have been used to determine the parameters of biochemical models. These methods can be generally classified into two categories, namely, local optimization methods and global optimization methods [13]. The widely used local optimization tools for inverse problems are those based on gradient descent, the most popular being the Newton method [14]. This type of approach, however, cannot be applied to non-smooth problems, whose objective functions are discontinuous or have discontinuous derivatives. Direct search methods, such as the Hooke-Jeeves method and the Nelder-Mead (downhill) simplex algorithm, are another kind of local optimization technique that can find a local minimum without derivative information [13]. Normally, most local optimization approaches are used as single shooting methods, in which the path of the optimization process leading to a final solution is determined by the initial conditions for the state variables. Therefore, the algorithm may converge to a wrong minimum, particularly if the initial conditions depend on the model parameters. To overcome this shortcoming, one can adopt multiple shooting methods, in which the time interval is partitioned and new initial conditions are introduced at the start of each subinterval [15]. These methods offer the possibility of circumventing local optima by enlarging the parameter space during the optimization process.

The aforementioned local search methods are generally less efficient for the inverse problems of biochemical models, which are multimodal and high-dimensional. In order to solve these hard inverse problems efficiently, one can turn to global optimization methods, most of which incorporate stochastic search strategies to prevent the search process from getting stuck in local optimal or suboptimal solutions. The branch-and-bound approach is a global optimization method that converts the inverse problem into a convex optimization problem so that a global optimal solution can be obtained [16]. This method requires a finite search space that can be divided into smaller subspaces; a notable disadvantage is that it is applicable only if the lower and upper bounds of the objective function can be computed. Simulated annealing (SA) can be effectively used for parameter estimation from time-course biochemical data, as shown in [17]. However, it has a slow convergence speed and high computational cost, and is not easy to parallelize. Genetic algorithms (GAs) represent a widely used global search technique that can be employed to predict the parameters of dynamic models [18]. Nevertheless, GAs are often criticized for their slow convergence speed and high computational cost. The evolution strategy (ES) approach showed its ability to successfully solve inverse problems in a performance comparison made by Moles et al. [19] among a number of global optimization techniques on biochemical system identification problems. In contrast to SA, evolutionary algorithms, including ES and GAs, can be implemented as self-tuning methods and can be parallelized, with the stochastic ranking evolutionary strategy (SRES) method being a very successful example [19–21]. Scatter search (SS) is a population-based random search approach that was proposed to identify the appropriate parameters of nonlinear dynamic biological systems [22, 23]. As an evolutionary method, SS, as well as its hybrid with a local search step after the recombination operation, was shown to be efficient in solving inverse problems. Particle swarm optimization (PSO), also a population-based optimization technique from the swarm intelligence and evolutionary computation area, has demonstrated better performance than GAs in solving inverse problems [24, 25]. Hybrids of PSO with other methods have also shown their effectiveness in modelling biochemical dynamic systems [26–28]. However, PSO has been shown to be sensitive to the neighbourhood topology of the swarm, as commented in [29].

Other methods for parameter estimation include Newton-flow analysis [30], the alternating regression technique [31], decoupling approaches [32, 33], the collocation method [20, 34], and the decomposition method [35, 36]. These approximation techniques, when incorporated into an optimization algorithm, can help reduce the number of objective function evaluations, which are computationally expensive. Additionally, radial basis function neural networks [37] and a quantitative inference method [38] have also been employed to solve inverse problems in biochemical process modelling.

In all of the above cases, the optimization approach is used to minimize the residual error of an inferred model against experimental data. A smaller error means that the model describes the dynamic behaviour of the biochemical system better and has more justification to be accepted as a valid mathematical representation of the system; theoretically, the prediction error diminishes as the accuracy of the model increases. This study focuses on developing an efficient optimization method for parameter estimation of a given dynamic biochemical model. Since the parameter estimation problems of complex dynamic systems (generally with many parameters and many differential equations, which allow more complex biochemical processes to be depicted) are high-dimensional, multimodal and challenging to solve, our goal in this study is to develop an efficient global optimization method for solving such inverse problems.

After extensive study, we selected the PSO algorithm as the candidate to be modified in order to achieve our goal of solving complex inverse problems. PSO attracted us because it has many advantages, such as a fast convergence speed, low computational cost, easy parallelization, and few parameters to adjust. However, PSO has the following shortcomings. First of all, it was theoretically proven that PSO is not a globally convergent algorithm, or even a locally convergent one, according to the convergence criteria given in [39]. In practice, the algorithm is prone to being trapped in local optimal or suboptimal points for high-dimensional problems, due to the weakening of its global search ability during the middle and later stages of the search process. Next, PSO is widely known to be sensitive to its search parameters, including the upper limit of the velocity, and even to the swarm topology, so that users may find it awkward to select the parameters and topologies when applying the algorithm [40]. Finally, the performance of PSO appears to be very sensitive to the setting of the upper and lower bounds of the search scope [40]: if the global optimal solution is located near the boundary of the search scope, the algorithm may have little chance of finding it. We have found that these shortcomings are mainly attributable to the velocity update equation, which is the essence of the PSO algorithm and where there seems to be much room for improvement to boost the global search ability of PSO.

In this study, inspired by the free electron model of metal conductors placed in an external electric field [41], we propose a variant of the PSO algorithm, called random drift particle swarm optimization (RDPSO), in order to effectively estimate the parameters of complex biochemical dynamical systems. The motivation of the RDPSO algorithm is to improve the search ability of PSO by fundamentally modifying the update equation of the particle's velocity, instead of revising the algorithm on top of the original equation, which would likely increase the complexity of the algorithmic implementation as well as its computational cost. It differs from the drift particle swarm optimization (DPSO) we proposed in [42, 43] in that it strikes a better balance between the global search and the local search of the particle swarm.

The original and basic RDPSO version was recently introduced by us in [44] and was used to solve other problems in [45, 46]. A novel variant of the RDPSO algorithm is proposed in this work to solve the parameter identification problem for two biochemical systems. The variant proposed here differs from the original one in that it samples the velocities of the particles from a double exponential distribution, whereas the original one used a Gaussian distribution.

The novel RDPSO variant is used for estimating the parameters of two benchmark models, one describing the thermal isomerization of α-pinene with 5 parameters [22, 47], and the other a three-step pathway with 36 parameters [19]. The results of RDPSO and some other well-known global optimization algorithms are then compared and discussed. It should be noted that, although this paper focuses on parameter estimation for biochemical modelling, the proposed RDPSO variant, just like PSO and other EAs, can be employed as a general-purpose tool for optimization problems in data mining tasks, such as clustering, classification and regression, which widely exist in bioinformatics and computational biology [1, 2, 8, 48, 49].

Methods

Problem statement

The inverse problem of a nonlinear dynamic system involves finding proper parameters so as to minimize the cost function of the model with respect to an experimental data set, subject to given differential equality constraints as well as other algebraic constraints. Such a data-driven regression problem can be approached with statistical techniques, using the given experimental data and the proposed model with unknown parameters. As stated by Moles et al. [19], the problem can be mathematically formulated as a nonlinear programming problem (NLP) with differential-algebraic constraints, whose goal is to find θ so as to minimize

$$J = \int_0^{t_f} \left[ y_{\mathrm{msd}}(t) - y(\theta, t) \right]^{T} W(t) \left[ y_{\mathrm{msd}}(t) - y(\theta, t) \right] dt$$
(1)

subject to

$$f\!\left( \frac{dx}{dt}, x, y, \theta, v, t \right) = 0$$
(2)
$$x(t_0) = x_0$$
(3)
$$h(x, y, \theta, t) = 0$$
(4)
$$g(x, y, \theta, t) \le 0$$
(5)
$$\theta_L \le \theta \le \theta_U$$
(6)

where $J$ is the cost function of the model, $\theta$ is the vector of model parameters to be estimated, $y_{\mathrm{msd}}(t)$ is the experimental measurement of a subset of the output state variables, $y(\theta, t)$ is the model prediction of those outputs, $x$ is the vector of differential state variables, and $v$ is a vector of other (usually time-invariant) parameters that are not estimated. In Equation (1), $W(t)$ is a weighting (or scaling) matrix, and the equation can be discretized into a weighted least-squares estimator. In Equation (2), $f$ is the set of differential and algebraic equality constraints describing the system dynamics (i.e., the nonlinear process model). Equation (3) gives the initial value of $x$. In Equations (4) and (5), $h$ and $g$ are equality and inequality path and point constraints on the system performance. In addition, $\theta$ is subject to the upper and lower bounds described by the inequality constraints (6).

The inverse problem defined above is generally a multimodal (non-convex) optimization problem with multiple local optima, due to the nonlinearity and constraints of the system dynamics. Even though many local and global optimization methods have been proposed for the problem, as reviewed in the Background section, it is still challenging and very necessary to develop efficient optimization algorithms for parameter estimation problems, especially those of dynamic systems with many parameters and many equations. Therefore, this study focuses on solving the inverse problem using the proposed variant of random drift particle swarm optimization (RDPSO) and other global optimization methods.

Particle swarm optimization

The original PSO algorithm was introduced by Kennedy and Eberhart in [50]. The algorithm was inspired by the observed social behaviour of bird flocking and fish schooling, and it roots its methodology in both evolutionary computing and artificial life. It shares many similarities with EAs, in that both PSO and EAs are initialized randomly with a population of candidate solutions and then update the population iteratively in order to approximate the global optimal solution of the given problem. However, unlike EAs, PSO has no evolution operators such as crossover and mutation, but performs the optimization task by updating the particles' positions (potential solutions) according to a set of discrete-time update equations. It was shown that the PSO algorithm has comparable, and even better, performance than GAs [51].

In a PSO with $m$ particles, each particle $i$ ($1 \le i \le m$), representing a potential solution of the given problem in a $D$-dimensional space, has three vectors at the $k$th iteration, namely, the current position $X_i^k = (X_{i,1}^k, X_{i,2}^k, \cdots, X_{i,D}^k)$, the velocity $V_i^k = (V_{i,1}^k, V_{i,2}^k, \cdots, V_{i,D}^k)$ and its personal best (pbest) position $P_i^k = (P_{i,1}^k, P_{i,2}^k, \cdots, P_{i,D}^k)$, which is defined as the position with the best objective function value found by the particle since initialization. A vector $G^k = (G_1^k, G_2^k, \cdots, G_D^k)$, called the global best (gbest) position, records the position with the best objective function value found by all the particles in the swarm since initialization. With the above specification, the update equations for each particle's velocity and current position are given by:

$$V_{i,j}^{k+1} = w V_{i,j}^{k} + c_1 r_{i,j}^{k} \left( P_{i,j}^{k} - X_{i,j}^{k} \right) + c_2 R_{i,j}^{k} \left( G_{j}^{k} - X_{i,j}^{k} \right)$$
(7)
$$X_{i,j}^{k+1} = X_{i,j}^{k} + V_{i,j}^{k+1}$$
(8)

for $i = 1, 2, \cdots, m$ and $j = 1, 2, \cdots, D$, where $c_1$ and $c_2$ are known as the acceleration coefficients and $w$ is called the inertia weight, which can be adjusted to balance the exploration and exploitation abilities of each particle [52]. Without loss of generality, we assume that the PSO is used to solve the following minimization problem:

$$\text{Minimize } f(X), \quad \text{s.t. } X \in S \subseteq \mathbb{R}^{D}$$
(9)

where $f(X)$ is the objective function (or fitness function) and $S$ denotes the feasible space. Consequently, $P_i^k$ can be updated by:

$$P_i^k = \begin{cases} X_i^k & \text{if } f(X_i^k) < f(P_i^{k-1}) \\ P_i^{k-1} & \text{if } f(X_i^k) \ge f(P_i^{k-1}) \end{cases}$$
(10)

and G k can be found by:

$$G^k = P_g^k, \quad \text{where } g = \arg\min_{1 \le i \le m} \left[ f(P_i^k) \right]$$
(11)

In Equation (7), $r_{i,j}^k$ and $R_{i,j}^k$ are sequences of two different random numbers uniformly distributed on the interval (0, 1), namely, $r_{i,j}^k, R_{i,j}^k \sim U(0,1)$. In order to prevent the particle from flying out of the search scope, $V_{i,j}^k$ is restricted to the interval $[-V_{\max}, V_{\max}]$, where $V_{\max}$ is a user-specified algorithmic parameter.
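For concreteness, the following minimal Python sketch implements Equations (7), (8), (10) and (11) for the minimization problem (9). The inertia weight and acceleration coefficients shown are common illustrative settings, not values prescribed by this paper.

```python
import numpy as np

def pso_minimize(f, bounds, m=20, max_iter=500, w=0.7298, c1=1.496, c2=1.496):
    """Minimal PSO sketch following Equations (7), (8), (10) and (11).
    bounds: (lower, upper) arrays of length D."""
    lo, hi = map(np.asarray, bounds)
    v_max = hi - lo                                # illustrative velocity limit
    X = lo + np.random.rand(m, lo.size) * (hi - lo)   # current positions
    V = np.zeros_like(X)                           # velocities
    P = X.copy()                                   # pbest positions
    fP = np.apply_along_axis(f, 1, P)              # pbest objective values
    g = fP.argmin()                                # gbest index, Eq. (11)
    for _ in range(max_iter):
        r, R = np.random.rand(*X.shape), np.random.rand(*X.shape)
        V = w * V + c1 * r * (P - X) + c2 * R * (P[g] - X)   # Eq. (7)
        V = np.clip(V, -v_max, v_max)              # velocity clamping
        X = X + V                                  # Eq. (8)
        fX = np.apply_along_axis(f, 1, X)
        better = fX < fP                           # Eq. (10)
        P[better], fP[better] = X[better], fX[better]
        g = fP.argmin()                            # Eq. (11)
    return P[g], fP[g]
```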

Many researchers have proposed different variants of PSO in order to improve the search performance of the algorithm, and have demonstrated these improvements through empirical simulation [3–7, 52–56].

The proposed random drift particle swarm optimization

In [57], it was demonstrated that, if the acceleration coefficients are properly chosen, each particle converges to its local attractor, $p_i^k = (p_{i,1}^k, p_{i,2}^k, \cdots, p_{i,D}^k)$, so that the convergence of the whole particle swarm can be achieved. Each coordinate of $p_i^k$ is given by:

$$p_{i,j}^{k} = \frac{c_1 r_{i,j}^{k} P_{i,j}^{k} + c_2 R_{i,j}^{k} G_{j}^{k}}{c_1 r_{i,j}^{k} + c_2 R_{i,j}^{k}}, \quad 1 \le j \le D$$
(12)

which can be restated as

$$p_{i,j}^{k} = \phi_{i,j}^{k} P_{i,j}^{k} + \left(1 - \phi_{i,j}^{k}\right) G_{j}^{k}, \quad 1 \le j \le D$$
(13)

where

$$\phi_{i,j}^{k} = \frac{c_1 r_{i,j}^{k}}{c_1 r_{i,j}^{k} + c_2 R_{i,j}^{k}}$$
(14)

In the PSO algorithm, $c_1$ and $c_2$ are set to be equal, and thus $\phi_{i,j}^k$ is treated as a random number uniformly distributed on the interval (0, 1), i.e., $\phi_{i,j}^k \sim U(0,1)$.

During the search process of the PSO, as the particles' current positions converge to their own local attractors, the current positions, the pbest positions, the local attractors and the gbest position all converge to a single point. The directional movement of each particle $i$ towards $p_i^k$ resembles the drift motion of an electron in a metal conductor placed in an external electric field. According to the free electron model [41], the electron has not only a drift motion caused by the external electric field, but also a thermal motion, which appears as a random movement. The superposition of the drift and thermal motions makes the electron careen towards the location of minimum potential energy. Thus, if the position of an electron in the metal is regarded as a candidate solution and the potential energy function as the objective function to be minimized, the movement of the electron resembles the process of finding the minimum solution of the minimization problem.

The above facts lead to a novel variant of PSO if the particle is assumed to behave like an electron moving in a metal conductor under an external electric field. More specifically, it can be assumed that, at the $k$th iteration, each particle $i$ has a drift motion towards $p_i^k$ as well as a thermal motion, with their velocities in each dimension $j$ denoted by $V2_{i,j}^{k+1}$ and $V1_{i,j}^{k+1}$, respectively. As a result, the velocity of the particle is given by $V_{i,j}^{k+1} = V1_{i,j}^{k+1} + V2_{i,j}^{k+1}$. In the drift particle swarm optimization proposed in [42, 43], we assumed that $V1_{i,j}^{k+1}$ follows a Maxwell distribution, that is, a Gaussian probability distribution, and that $V2_{i,j}^{k+1}$ is given by the social part plus the cognitive part in Equation (7). This velocity update equation adds some effectiveness to the search performance of the particle swarm, but has some shortcomings. Firstly, the Gaussian distribution has a thin tail, so that it has less opportunity to generate outliers. As a result, the thermal motion of the particle has less randomness and cannot significantly improve the particle's global search ability. Secondly, although the update equation of $V2_{i,j}^{k+1}$:

$$V2_{i,j}^{k+1} = c_1 r_{i,j}^{k} \left( P_{i,j}^{k} - X_{i,j}^{k} \right) + c_2 R_{i,j}^{k} \left( G_{j}^{k} - X_{i,j}^{k} \right)$$
(15)

guarantees that the particle converges towards its local attractor, the two random scaling coefficients add randomness to its motion, which means that the particle's position is sampled at uniformly random positions within a hyper-rectangle bounded by the gbest position and its pbest position. This cannot enhance the particle's global search ability, because of the finite extent of the hyper-rectangle, but it may weaken the particle's local search ability, which is the responsibility of the directional motion. Therefore, the velocity of the particle, given by the sum of $V1_{i,j}^{k+1}$ and $V2_{i,j}^{k+1}$, may not achieve a good balance between the global search and the local search of the particle. In the present study, we employ a new way of determining $V1_{i,j}^{k+1}$ and $V2_{i,j}^{k+1}$.

Here, we assume that the velocity of the thermal motion $V1_{i,j}^{k+1}$ follows a double exponential distribution, whose probability density function and probability distribution function are

$$f_{V1_{i,j}^{k+1}}(v) = \frac{1}{\sigma_{i,j}^{k}} e^{-2|v| / \sigma_{i,j}^{k}}$$
(16)

and

$$F_{V1_{i,j}^{k+1}}(v) = 1 - e^{-2|v| / \sigma_{i,j}^{k}}$$
(17)

respectively, where $v$ represents the value of the random variable $V1_{i,j}^{k+1}$ and $\sigma_{i,j}^k$ is the standard deviation of the distribution. By employing a stochastic simulation method, we can express $V1_{i,j}^{k+1}$ as

$$V1_{i,j}^{k+1} = \frac{\sigma_{i,j}^{k}}{2} \phi_{i,j}^{k}$$
(18)
$$\phi_{i,j}^{k} = \begin{cases} +\ln(1/u_{i,j}^{k}) & \text{if } s > 0.5 \\ -\ln(1/u_{i,j}^{k}) & \text{if } s \le 0.5 \end{cases}$$
(19)

where $s$ and $u_{i,j}^k$ are two different random numbers uniformly distributed on the interval (0, 1), i.e., $s, u_{i,j}^k \sim U(0,1)$. As for the value of $\sigma_{i,j}^k$, an adaptive strategy is adopted, setting $\sigma_{i,j}^k = 2\alpha |C_j^k - X_{i,j}^k|$, where $C^k = (C_1^k, C_2^k, \cdots, C_D^k)$ is called the mean best (mbest) position, defined as the mean of the pbest positions of all the particles, i.e., $C_j^k = (1/m) \sum_{i=1}^{m} P_{i,j}^k$ ($1 \le j \le D$).
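As an illustration, the sampling rule of Equations (18) and (19), together with the adaptive choice of $\sigma_{i,j}^k$, can be sketched in Python as follows, vectorized over the whole swarm; the value of α shown is only an example.

```python
import numpy as np

def thermal_velocity(X, P, alpha=0.75):
    """Sample the thermal velocity V1 via Equations (16)-(19).
    X: (m, D) current positions; P: (m, D) pbest positions."""
    C = P.mean(axis=0)                         # mbest position
    sigma = 2.0 * alpha * np.abs(C - X)        # adaptive sigma
    u = np.random.rand(*X.shape)
    s = np.random.rand(*X.shape)
    phi = np.where(s > 0.5, 1.0, -1.0) * np.log(1.0 / u)  # Eq. (19)
    return (sigma / 2.0) * phi                 # Eq. (18)
```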

The velocity of the drift motion $V2_{i,j}^{k+1}$ could take many possible forms; however, the following simple linear expression is adopted in this study:

$$V2_{i,j}^{k+1} = \beta \left( p_{i,j}^{k} - X_{i,j}^{k} \right)$$
(20)

where $p_{i,j}^k$ is determined by

$$p_{i,j}^{k} = \phi_{i,j}^{k} P_{i,j}^{k} + \left(1 - \phi_{i,j}^{k}\right) G_{j}^{k}, \quad \phi_{i,j}^{k} \sim U(0,1), \quad 1 \le j \le D$$
(21)

It can be readily shown that, if $V_{i,j}^{k+1} = V2_{i,j}^{k+1}$ alone, then $X_{i,j}^{k+1} = (1-\beta) X_{i,j}^{k} + \beta p_{i,j}^{k}$, so that $X_{i,j}^k \to p_{i,j}^k$ as $k \to \infty$ (for a fixed attractor and $0 < \beta < 2$). Therefore, the expression for $V2_{i,j}^{k+1}$ in Equation (20) can indeed guarantee that, as an overall result, the particle moves directionally towards $p_i^k$.

With the definitions of the thermal motion and the drift motion of the particle, we can obtain a novel set of update equations for the particle:

$$V_{i,j}^{k+1} = \alpha \left| C_{j}^{k} - X_{i,j}^{k} \right| \phi_{i,j}^{k} + \beta \left( p_{i,j}^{k} - X_{i,j}^{k} \right)$$
(22)
$$X_{i,j}^{k+1} = X_{i,j}^{k} + V_{i,j}^{k+1}$$
(23)

where α is called the thermal coefficient and β is called the drift coefficient. The PSO with Equations (22) and (23) is a novel variant of RDPSO, which employs a double exponential distribution instead of a Gaussian one. The procedure of this RDPSO variant is outlined below.

Step 0: Randomly initialize the current positions and the pbest position of all the particles;

Step 1: Set k = 0;

Step 2: While the termination condition is not met, do the following steps;

Step 3: Set k=k+1 and compute the mbest position C k , which is the centroid of the pbest positions of all the particles at iteration k;

Step 4: For i = 1, carry out the following steps;

Step 5: Evaluate the objective function value f ( X i k ) , and update P i k and G k according to Equation (10) and Equation (11), respectively;

Step 6: Update the components of the velocity and current position of particle i in each dimension, respectively, according to Equations (21), (22) and (23);

Step 7: Set i=i+1, and return to Step 5 until i = m ;

Step 8: Return to Step 2;
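A compact Python sketch of the above procedure is given below; it follows Steps 0-8 and Equations (19) and (21)-(23), with velocity clamping as described later. It is a minimal illustration under the paper's recommended parameter values, not the authors' C++ implementation provided in Additional file 1.

```python
import numpy as np

def rdpso_minimize(f, bounds, m=100, max_iter=500, alpha=0.75, beta=1.0):
    """Sketch of the RDPSO procedure (Steps 0-8), Equations (21)-(23)."""
    lo, hi = map(np.asarray, bounds)
    v_max = hi - lo                                    # illustrative limit
    X = lo + np.random.rand(m, lo.size) * (hi - lo)    # Step 0
    P = X.copy()                                       # pbest positions
    fP = np.apply_along_axis(f, 1, P)
    g = fP.argmin()                                    # gbest index
    for _ in range(max_iter):                          # Steps 1-2
        C = P.mean(axis=0)                             # Step 3: mbest
        fX = np.apply_along_axis(f, 1, X)              # Step 5
        better = fX < fP
        P[better], fP[better] = X[better], fX[better]
        g = fP.argmin()
        # Step 6: velocity and position updates
        phi_attr = np.random.rand(*X.shape)
        p = phi_attr * P + (1.0 - phi_attr) * P[g]     # Eq. (21)
        u, s = np.random.rand(*X.shape), np.random.rand(*X.shape)
        phi_therm = np.where(s > 0.5, 1.0, -1.0) * np.log(1.0 / u)  # Eq. (19)
        V = alpha * np.abs(C - X) * phi_therm + beta * (p - X)      # Eq. (22)
        V = np.clip(V, -v_max, v_max)                  # velocity clamping
        X = X + V                                      # Eq. (23)
    return P[g], fP[g]
```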

In the RDPSO algorithm, in addition to the population size $m$, $\alpha$ and $\beta$ are two very important user-specified algorithmic parameters, which play the same roles as the inertia weight $w$ and the acceleration coefficients in the basic PSO algorithm; that is, they can be tuned to balance the exploration and exploitation abilities of the particles. How to select the values of these parameters to prevent the particles from exploding is an open problem. Here, we performed stochastic simulations for the one-dimensional case, in which the local attractor was fixed at the origin and the mbest position at $x = 0.1$. The results of two simulations are visualized in Figure 1 and Figure 2, in which the logarithm of the absolute value of $X^k$ is plotted against the iteration number. Figure 1 shows that the particle's position was bounded when $\alpha = 1$ and $\beta = 1.5$. However, when $\alpha = 1.8$ and $\beta = 1.5$, the particle diverged to infinity. To obtain the sufficient and necessary condition for the particle's position to be bounded, we will focus our attention on a theoretical analysis in terms of probability measure in future work.

Figure 1

The result of the stochastic simulation with α = 1 and β = 1.5. It is shown that, with this parameter setting, the particle's position is bounded.

Figure 2

The result of the stochastic simulation with α = 1.8 and β = 1.5. It is shown that, with this parameter setting, the particle's position diverges as the iteration number increases.
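The following Python fragment reproduces this one-dimensional experiment under the stated setting (attractor at the origin, mbest at x = 0.1); the initial position and iteration count are arbitrary choices of this sketch.

```python
import numpy as np

def simulate_particle(alpha, beta, iters=3000, p=0.0, c=0.1, x0=1.0):
    """1-D boundedness test: local attractor fixed at p, mbest at c.
    Returns log10|X^k| at each iteration, as plotted in Figures 1 and 2."""
    x, trace = x0, []
    for _ in range(iters):
        u, s = np.random.rand(), np.random.rand()
        phi = np.log(1.0 / u) if s > 0.5 else -np.log(1.0 / u)  # Eq. (19)
        x = x + alpha * abs(c - x) * phi + beta * (p - x)       # Eqs. (22)-(23)
        trace.append(np.log10(abs(x) + 1e-300))                 # guard log(0)
    return np.array(trace)

bounded  = simulate_particle(alpha=1.0, beta=1.5)   # stays bounded (Figure 1)
diverges = simulate_particle(alpha=1.8, beta=1.5)   # blows up (Figure 2)
```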

Setting large values for α and β implies better global search ability of the algorithm, while setting small values means better local search. When the RDPSO is used for solving a problem, a good balance between the global search and the local search of the algorithm is crucial for its performance. However, finding out how to tune the parameters for generally good algorithmic performance requires a large number of experiments on benchmark functions, which will be performed in our future work. Here, we recommend that, when the RDPSO is used, α should be set no larger than 1.0 and β no larger than 1.5. More specifically, when the problem at hand is complex, the values of the two parameters should be set relatively large in order to make the particles search more globally; on the other hand, when the problem is simple, relatively small values should be selected, for the purpose of a faster convergence speed of the algorithm. In the present study, the values of α and β were set to 0.75 and 1.0, respectively.

In addition, the population size and the maximum number of iterations (MaxIter) can also affect the performance of a population-based technique. Just as for other PSO variants, it is suggested that the population size should be larger than 20 for the RDPSO as well. The value of the MaxIter depends on the complexity of the problem. Generally, a smaller MaxIter value is used for simple problems, while a larger one is used for complex problems.

Moreover, $V_{i,j}^k$ is also restricted to the interval $[-V_{\max}, V_{\max}]$ during the search process of the RDPSO algorithm, just as in the original PSO algorithm.

The optimization methods compared

Besides the PSO and RDPSO algorithms, differential evolution (DE), the scatter search (SS) method and two versions of evolution strategies were also used to solve the selected inverse problems, for performance comparison purposes. The DE method, as presented by Storn and Price [58], is an evolutionary computing method that has a faster convergence speed than GAs and can effectively find the global optimal solution of a multidimensional and multimodal function [58].

The SS method is also a population-based search technique, originally developed by Glover [59]. In [22], a novel meta-heuristic method, combining the original SS method with a local search technique, was proposed to solve inverse problems; it was shown that the local search operator can accelerate the convergence speed significantly. Thus, in our experiments, we used this novel SS method for performance comparison.

Evolution strategy (ES) is an important paradigm of EAs, which imitates the effects that genetics produces on the phenotype, rather than on the genotype as in GAs [60]. The two canonical versions of ES used in this study are denoted by (μ, λ)-ES and (μ + λ)-ES, where μ denotes the number of parents and λ the number of offspring. In the (μ, λ)-ES, the parents are deterministically selected from the offspring (μ < λ must hold), while in the (μ + λ)-ES, the parents are selected from both the parents and the offspring.

In addition, the performance of the above-mentioned algorithms, including the RDPSO, is also compared with that of the SRES method. SRES is a version of (μ, λ)-ES that uses stochastic ranking to handle the constraints, by adjusting the balance between the objective function and the penalty function over the course of the search [19, 61].

Case studies

Two case studies involving two benchmark systems were carried out. For each system, we performed two groups of numerical experiments, one with noise-free simulation data, and the other with noisy simulation data.

Case study 1

The goal of this case study is to estimate the five rate constants of the homogeneous biochemical reaction describing the thermal isomerization of α-pinene, an organic compound of the terpene class and one of the two isomers of pinene [22, 23]. The mathematical model of this process is formulated as the following system of linear differential equations:

$$\frac{dy_1}{dt} = -(p_1 + p_2) y_1$$
(24)
$$\frac{dy_2}{dt} = p_1 y_1$$
(25)
$$\frac{dy_3}{dt} = p_2 y_1 - (p_3 + p_4) y_3 + p_5 y_5$$
(26)
$$\frac{dy_4}{dt} = p_3 y_3$$
(27)
$$\frac{dy_5}{dt} = p_4 y_3 - p_5 y_5$$
(28)

where $(p_1, p_2, \cdots, p_5)$ is the vector of unknown coefficients to be estimated, and $y_1$, $y_2$, $y_3$, $y_4$ and $y_5$ denote the concentrations of α-pinene, dipentene, alloocimene, β-pyronene and a dimer, respectively.
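For reference, the right-hand side of Equations (24)-(28) can be coded and integrated as follows. The nominal rate constants are those given later in the "Obtaining simulation data" subsection, while the initial state (pure α-pinene) and the time horizon shown here are assumptions modelled on the classical data set of Box et al. [47].

```python
from scipy.integrate import solve_ivp

def alpha_pinene_rhs(t, y, p):
    """Right-hand side of Equations (24)-(28)."""
    y1, y2, y3, y4, y5 = y
    p1, p2, p3, p4, p5 = p
    return [-(p1 + p2) * y1,                      # Eq. (24)
            p1 * y1,                              # Eq. (25)
            p2 * y1 - (p3 + p4) * y3 + p5 * y5,   # Eq. (26)
            p3 * y3,                              # Eq. (27)
            p4 * y3 - p5 * y5]                    # Eq. (28)

# Nominal rate constants from "Obtaining simulation data"; initial state
# and ~36,000-minute horizon are assumed, following Box et al. [47].
p_true = [5.93e-5, 2.96e-5, 2.05e-5, 27.5e-5, 4.00e-5]
sol = solve_ivp(alpha_pinene_rhs, (0.0, 36420.0),
                [100.0, 0.0, 0.0, 0.0, 0.0],
                args=(p_true,), dense_output=True, rtol=1e-10)
```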

Case study 2

This case study involves the inverse problem of identifying the 36 kinetic parameters of a nonlinear biochemical dynamic model formed by the following 8 ordinary differential equations, which describe the variation of the metabolite concentrations with time [19]:

$$\frac{dG_1}{dt} = \frac{V_1}{1 + \left(\frac{P}{K_{i1}}\right)^{n_{i1}} + \left(\frac{K_{a1}}{S}\right)^{n_{a1}}} - k_1 G_1$$
(29)
$$\frac{dG_2}{dt} = \frac{V_2}{1 + \left(\frac{P}{K_{i2}}\right)^{n_{i2}} + \left(\frac{K_{a2}}{M_1}\right)^{n_{a2}}} - k_2 G_2$$
(30)
$$\frac{dG_3}{dt} = \frac{V_3}{1 + \left(\frac{P}{K_{i3}}\right)^{n_{i3}} + \left(\frac{K_{a3}}{M_2}\right)^{n_{a3}}} - k_3 G_3$$
(31)
$$\frac{dE_1}{dt} = \frac{V_4 G_1}{K_4 + G_1} - k_4 E_1$$
(32)
$$\frac{dE_2}{dt} = \frac{V_5 G_2}{K_5 + G_2} - k_5 E_2$$
(33)
$$\frac{dE_3}{dt} = \frac{V_6 G_3}{K_6 + G_3} - k_6 E_3$$
(34)
$$\frac{dM_1}{dt} = \frac{k_{cat1} E_1 \frac{1}{K_{m1}} (S - M_1)}{1 + \frac{S}{K_{m1}} + \frac{M_1}{K_{m2}}} - \frac{k_{cat2} E_2 \frac{1}{K_{m3}} (M_1 - M_2)}{1 + \frac{M_1}{K_{m3}} + \frac{M_2}{K_{m4}}}$$
(35)
$$\frac{dM_2}{dt} = \frac{k_{cat2} E_2 \frac{1}{K_{m3}} (M_1 - M_2)}{1 + \frac{M_1}{K_{m3}} + \frac{M_2}{K_{m4}}} - \frac{k_{cat3} E_3 \frac{1}{K_{m5}} (M_2 - P)}{1 + \frac{M_2}{K_{m5}} + \frac{P}{K_{m6}}}$$
(36)

where $M_1$, $M_2$, $E_1$, $E_2$, $E_3$, $G_1$, $G_2$ and $G_3$ are the state variables representing the concentrations of the species involved in the different biochemical reactions, and $S$ and $P$ are controlling parameters that are kept fixed at their initial values in each experiment. The inverse problem then reduces to the optimization problem of fitting the remaining 36 parameters, represented by $\theta = (\theta_1, \theta_2, \cdots, \theta_{36})$.
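A Python sketch of the right-hand side of Equations (29)-(36) is given below. The ordering of the 36 entries of θ is our own assumption for illustration; the paper does not fix one.

```python
def three_step_pathway_rhs(t, x, th, S=2.1544, P=1.0):
    """Right-hand side of Equations (29)-(36). th is the 36-parameter
    vector theta, unpacked in an assumed order; S and P are the controlling
    parameters fixed per experiment (values here are one of the settings
    used in the figures)."""
    G1, G2, G3, E1, E2, E3, M1, M2 = x
    (V1, Ki1, ni1, Ka1, na1, k1,
     V2, Ki2, ni2, Ka2, na2, k2,
     V3, Ki3, ni3, Ka3, na3, k3,
     V4, K4, k4, V5, K5, k5, V6, K6, k6,
     kcat1, Km1, Km2, kcat2, Km3, Km4, kcat3, Km5, Km6) = th
    dG1 = V1 / (1 + (P / Ki1)**ni1 + (Ka1 / S)**na1) - k1 * G1   # Eq. (29)
    dG2 = V2 / (1 + (P / Ki2)**ni2 + (Ka2 / M1)**na2) - k2 * G2  # Eq. (30)
    dG3 = V3 / (1 + (P / Ki3)**ni3 + (Ka3 / M2)**na3) - k3 * G3  # Eq. (31)
    dE1 = V4 * G1 / (K4 + G1) - k4 * E1                          # Eq. (32)
    dE2 = V5 * G2 / (K5 + G2) - k5 * E2                          # Eq. (33)
    dE3 = V6 * G3 / (K6 + G3) - k6 * E3                          # Eq. (34)
    # Reversible Michaelis-Menten rates shared by Eqs. (35) and (36)
    r1 = kcat1 * E1 * (S - M1) / Km1 / (1 + S / Km1 + M1 / Km2)
    r2 = kcat2 * E2 * (M1 - M2) / Km3 / (1 + M1 / Km3 + M2 / Km4)
    r3 = kcat3 * E3 * (M2 - P) / Km5 / (1 + M2 / Km5 + P / Km6)
    return [dG1, dG2, dG3, dE1, dE2, dE3, r1 - r2, r2 - r3]
```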

Objective functions

The objective function (or fitness function) for the inverse problem in either of the two case studies is the discretization of Equation (1), which is formulated as the weighted sum of squares of the differences between the experimental and the predicted values of the state variables:

$$J = \sum_{i=1}^{n} \sum_{j=1}^{l} w_{ij} \left\{ \left[ y_{\mathrm{pred}}(i) - y_{\mathrm{exp}}(i) \right]_j \right\}^2$$
(37)

where $n$ is the number of data points for each experiment, $l$ is the number of experiments, $y_{\mathrm{exp}}$ is the vector of experimental values of the state variables, and $y_{\mathrm{pred}}$ is the vector of state-variable values predicted by the model with a given set of parameters. In Case Study 1, each $w_{ij}$ was set to 1 [22], while in Case Study 2, $w_{ij}$ was set as $w_{ij} = \{1 / \max[y_{\mathrm{pred}}(i)]_j\}^2$, which normalizes the contribution of each term [19].
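In code, Equation (37) amounts to a weighted sum of squared residuals. The helper below is a sketch; the weight computation for Case Study 2 reflects one interpretation of the normalization of Moles et al. [19].

```python
import numpy as np

def objective_J(y_pred, y_exp, w=None):
    """Discretized weighted least-squares objective of Equation (37).
    y_pred and y_exp are arrays of identical shape holding the predicted
    and pseudo-experimental state-variable values over all experiments."""
    y_pred, y_exp = np.asarray(y_pred), np.asarray(y_exp)
    w = np.ones_like(y_exp) if w is None else w    # Case Study 1: unit weights
    return float(np.sum(w * (y_pred - y_exp) ** 2))

def case2_weights(y_pred):
    """Case Study 2 weighting, w = (1 / max y)^2 per state variable; the
    maximum is taken over the time axis (assumed shape: ..., time, states)."""
    return (1.0 / np.max(np.abs(y_pred), axis=-2, keepdims=True)) ** 2
```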

Obtaining simulation data

In order to evaluate the performance of the global optimization methods in finding the solution of the inverse problems, we chose a set of parameters for each model, which are considered the true or nominal values. For Case Study 1, the true values of the parameters are p1 = 5.93e-5, p2 = 2.96e-5, p3 = 2.05e-5, p4 = 27.5e-5 and p5 = 4.00e-5. For Case Study 2, the nominal values of the model parameters are shown in Table 1.

Table 1 Experiment generation (simulation) and the nominal value.

The pseudo-experimental data (essentially simulation data) in either case were generated by substituting the chosen parameters into the dynamic model and applying the fourth-order Runge-Kutta method to the corresponding differential equations. For Case Study 2, the pseudo-measurements of the concentrations of metabolites, proteins, and messenger RNA species were the results of 16 different pseudo-experiments, in which, with the given nominal values of the parameters, the initial concentrations of the pathway substrate S and product P were varied for each experiment (simulation), as shown in Table 1. These simulated data represent exact experimental results devoid of measurement noise, and they were used as the noise-free data for the first group of numerical experiments in each case study. In order to test the optimization methods on noisy data, we added white noise to each of the original noise-free data points:

$$z' = z + \sigma \varepsilon$$
(38)

where $z$ and $z'$ represent the original noise-free datum and the resulting noisy datum, respectively, $\varepsilon$ is a random number with standard normal distribution, namely, $\varepsilon \sim N(0,1)$, and $\sigma$ is the standard deviation of the white noise. In our case studies, $\sigma$ was set to 0.01 for both systems.
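The data-generation step can be sketched as follows; scipy's adaptive RK45 integrator stands in here for the fixed-step fourth-order Runge-Kutta method described above, and the random seed is an arbitrary choice.

```python
import numpy as np
from scipy.integrate import solve_ivp

def make_pseudo_data(rhs, y0, t_samples, theta, sigma=0.01, seed=0):
    """Integrate the model at the nominal parameters to obtain noise-free
    pseudo-experimental data, then corrupt a copy per Equation (38)."""
    t_samples = np.asarray(t_samples, dtype=float)
    sol = solve_ivp(rhs, (t_samples[0], t_samples[-1]), y0,
                    t_eval=t_samples, args=(theta,), rtol=1e-10, atol=1e-12)
    z = sol.y.T                                   # noise-free, (n, states)
    rng = np.random.default_rng(seed)
    z_noisy = z + sigma * rng.standard_normal(z.shape)   # Eq. (38)
    return z, z_noisy
```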

Initial value problem solver used

During the search of each global optimization algorithm, each potential solution (i.e., a set of estimated values for the parameters) was substituted into the dynamic model. Then, the fourth-order Runge-Kutta method was applied to the corresponding system of differential equations to generate a set of predicted values of the output state variables, from which the objective function value (or fitness value) of the potential solution was evaluated according to Equation (37), using the obtained pseudo-experimental (simulation) data. This process is known as solving the forward problem, and it was embedded in the iterations of the search when solving the inverse problem with each algorithm.
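Putting the pieces together, the forward solve can be wrapped into a fitness function and handed to any of the population-based optimizers sketched earlier; the penalty value for failed integrations and the example search scope are assumptions of this sketch.

```python
import numpy as np
from scipy.integrate import solve_ivp

def make_fitness(rhs, y0, t_samples, y_exp, w=None):
    """Return a fitness function theta -> J that embeds the forward solve.
    rhs(t, y, theta) is the model right-hand side, e.g. alpha_pinene_rhs."""
    t_samples = np.asarray(t_samples, dtype=float)
    def fitness(theta):
        sol = solve_ivp(rhs, (t_samples[0], t_samples[-1]), y0,
                        t_eval=t_samples, args=(theta,), rtol=1e-8)
        if not sol.success:
            return 1e12                    # penalize failed integrations
        y_pred = sol.y.T                   # shape (n_samples, n_states)
        ww = np.ones_like(np.asarray(y_exp)) if w is None else w
        return float(np.sum(ww * (y_pred - y_exp) ** 2))   # Eq. (37)
    return fitness

# Example (assumed search scope [0, 1e-3]^5 for the alpha-pinene constants):
# f = make_fitness(alpha_pinene_rhs, y0, t_samples, y_exp)
# best_theta, best_J = rdpso_minimize(f, (np.zeros(5), 1e-3 * np.ones(5)))
```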

Experimental settings

For the sake of performance comparison, all the tested global optimization methods except the SS method (i.e., PSO, RDPSO, DE, (μ, λ)-ES, and (μ + λ)-ES) were programmed in C++ on a VC++ 6.0 platform in a Windows XP environment, and run on Intel Pentium Dual-Core E5300 2.6 GHz PCs, each with a 2 MB cache and 4 GB of main memory. The SS method was implemented in Matlab 7.0 on the same platform, in order to call the local SQP solver in Matlab during the search process. The SS software for inverse problems can be found at http://www.iim.csic.es/~gingproc/ssmGO.html.

The configuration of the algorithmic parameters, including the population sizes, is listed in Table 2. In Case Study 1, each optimization algorithm was run 20 times, with each run executed for 500 iterations; that is, the maximum number of iterations (MaxIter) was 500. In Case Study 2, each algorithm was also run 20 times, with each run executed for 2,250,000 function evaluations, the same as for the DE in [19]. Since the population size of each algorithm was 100, the value of MaxIter was 22,500 in Case Study 2. For the SS method, the initial population size was 100, and 10 individuals were selected to perform the iterative search after initialization. The other parameters were selected according to the recommendations in the corresponding references and/or our preliminary runs. For all the algorithms tested on the inverse problems, the statistical values of J were computed, and the results with the best values of J were selected, processed and visualized with Matlab 7.0.

Table 2 Configuration of search parameters in the different algorithms (F is an algorithmic control parameter used to control the size of the perturbation in the mutation operator of DE, CR is the crossover constant of DE, and φ is a parameter determining the standard deviation of the mutation in the evolution strategies).

Results

For Case Study 1, the statistical values of J from 20 search runs of 500 iterations for each algorithm are listed in Table 3. The best value of J (J = 1.3740e-014) in the numerical experiments with noise-free simulation data was obtained by our proposed RDPSO algorithm after running for 0.0348 h (about 2 minutes). For the experiment with noisy data, the RDPSO generated the best J value (J = 0.2023) as well. The proposed algorithm also showed the best performance on average among all the tested methods, as shown by the mean value of J over the 20 runs. In this case study, the basic PSO algorithm also showed good performance on this low-dimensional inverse problem.

Table 3 J values and computational time for the global optimization methods in Case Study 1

The convergence process of each tested algorithm, averaged over 20 runs, in the numerical experiment with noisy data in Case Study 1 is shown by the convergence curves in Figure 3, plotted on a log-log scale with the objective function value versus the iteration number. Evidently, the SS method showed a better convergence property than the other algorithms. The best solution vector corresponding to the best value of J (J = 1.3740e-014) obtained by the RDPSO in the numerical experiment with noise-free data was p1 = 5.930000e-005, p2 = 2.960000e-005, p3 = 2.750002e-004, p4 = 2.750002e-004, p5 = 4.000561e-005, extremely close to the real values of the parameters, and the best solution vector obtained by the RDPSO in the numerical experiment with noisy data (when J = 0.2023) was p1 = 5.928818e-005, p2 = 2.959918e-005, p3 = 1.712580e-005, p4 = 3.147252e-004, p5 = 1.537512e-003. Figure 4 and Figure 5 show good fits between the experimental data (simulation data) and the predicted data, with the best obtained parameters in the experiments with both noise-free and noisy data.

Figure 3

The figure visualizes the convergence curves of the objective function values of all the algorithms, averaged over 20 runs, in the numerical experiments with noisy data for Case Study 1. It is shown that the RDPSO, PSO and SS methods had better convergence properties than the other methods.

Figure 4

The figure shows the noise-free experimental data (markers) versus the predicted model (continuous line) for Case Study 1. It is shown that the predicted model obtained by the RDPSO fits the experimental data well.

Figure 5

The figure shows the noisy experimental data (markers) versus the predicted model (continuous line) for Case Study 1. It is also shown that the predicted model obtained by the RDPSO fits the experimental data well.

For Case Study 2, the values of J resulting from 20 runs of 22,500 iterations for each algorithm are listed in Table 4. For the numerical experiments with noise-free data, the best value of J (J = 0.7358e-06) was obtained by the SS method, which also had a better average performance than any other compared algorithm, as shown by the mean value of J over its 20 runs. The second best method was the RDPSO algorithm, which converged to a value of J = 0.009124 and had a mean value of J = 0.178881 over 20 runs. Table 5 lists the estimated values of the model parameters corresponding to the best J value (J = 0.009124) found by the RDPSO algorithm. The results also show that the (μ, λ)-ES was the winner on this inverse problem compared to the (μ + λ)-ES, whose best and mean values of J were 0.123209 and 2.141820, respectively. The PSO and DE are two well-known efficient population-based optimization methods, which, however, could not reach the vicinity of the aforementioned solutions. When the experimental data (simulation data) were noisy, the SS method and the RDPSO obtained similar results for the best J value over 20 runs, but the former had a better average algorithmic performance. Table 6 shows the identified model parameters corresponding to the best J value (J = 0.2313) obtained by the RDPSO for the noisy data.

Table 4 J values and computational time of the global optimization methods for Case Study 2, including the results for the noise-free and noisy data.
Table 5 Decision vector for the solution found by RDPSO in the experiment with noise-free data for Case Study 2.
Table 6 Decision vector for the solution found by RDPSO in the experiment with noisy data for Case Study 2

The convergence curve of each method, averaged over 20 runs, is plotted in Figure 6. It shows that the SS method had a remarkably better convergence rate than the others, probably due to its local solver, which can enhance the local search ability of the algorithm significantly. Figure 7 and Figure 8 show comparisons between the predicted data and the experimental (simulation) data for the decision vectors found by the RDPSO in both groups of numerical experiments (with the noise-free and noisy data, respectively). It can be observed that there is good correlation between the experimental and predicted data.

Figure 6

The figure visualizes the convergence curves of the objective function values of all the algorithms, averaged over 20 runs, in the numerical experiments with noisy data for Case Study 2. It is shown that the SS method had the fastest convergence speed and the RDPSO the second fastest.

Figure 7

The figure shows the predicted concentrations provided by the RDPSO (continuous lines) and the noise-free experimental data (markers) in Case Study 2 when P = 1.0 and S = 2.1544. It is shown that the predicted concentrations had good correlation with the experimental data.

Figure 8

The figure shows the predicted concentrations provided by the RDPSO (continuous lines) and the noisy experimental data (markers) in Case Study 2 when P = 1.0 and S = 2.1544. It is shown that the predicted concentrations had good correlation with the experimental data.

Conclusions

In this paper, a variant of the RDPSO algorithm was proposed and shown to be able to successfully solve two inverse problems, associated with the thermal isomerization of α-pinene and a three-step pathway, respectively. The results indicate that the proposed RDPSO algorithm outperformed its PSO predecessors and the other competitors on the first problem, and had the second best algorithmic performance among all the compared algorithms on the second problem.

Like other stochastic optimization methods, a possible drawback of the RDPSO method is the computational effort required, mainly because most of the computational time is spent on solving the forward problem. One measure that can be taken is to incorporate a local search technique into the algorithm in order to accelerate its convergence. Another is to develop a parallelized RDPSO implementation to solve inverse problems on computer clusters, reducing the computational cost to a reasonable level. Our future work will focus on these two ways of improving the algorithmic effectiveness of the RDPSO.

Availability and requirements

Additional file 1 provides the source code of five of the tested algorithms for the two benchmark systems. It includes two folders, one for benchmark system 1 and the other for benchmark system 2. All the algorithms are programmed in C++ with Microsoft Visual C++ 6.0.

Additional file 2 provides the data files for the five algorithms used in the two case studies. For each case, the data corresponding to the best results generated by 20 runs of each algorithm are provided in a .txt file.

References

  1. Fogel GB, Corne DW: Evolutionary Computation in Bioinformatics. Morgan Kaufmann. 2002.
  2. Pal SK, Bandyopadhyay S, Ray SS: Evolutionary computation in bioinformatics: A review. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews. 2006, 36 (5): 601-615.
  3. Sun J, Lai C-H, Wu X: Particle Swarm Optimisation: Classical and Quantum Perspectives. CRC Press. 2011.
  4. Sun J, Fang W, Wu X, Palade V, Xu W: Quantum-behaved particle swarm optimization: Analysis of individual particle behaviour and parameter selection. Evolutionary Computation. 2012, 20 (3): 349-393.
  5. Sun J, Wu X, Fang W, Palade V, Lai CH, Xu W: Convergence analysis and improvements of quantum-behaved particle swarm optimization. Information Sciences. 2012, 193: 81-103.
  6. Sun J, Fang W, Palade V, Wu X, Xu W: Quantum-behaved particle swarm optimization with Gaussian distributed local attractor point. Applied Mathematics and Computation. 2011, 218 (7): 3763-3775.
  7. Sun J, Fang W, Wu X, Xie Z, Xu W: QoS multicast routing using a quantum-behaved particle swarm optimization algorithm. Engineering Applications of Artificial Intelligence. 2011, 24 (1): 123-131.
  8. Das S, Abraham A, Konar A: Swarm intelligence algorithms in bioinformatics. Computational Intelligence in Bioinformatics, Studies in Computational Intelligence. 2008, 94: 113-147.
  9. Sun J, Chen W, Fang W, Wu X, Xu W: Gene expression data analysis with the clustering method based on an improved quantum-behaved particle swarm optimization. Engineering Applications of Artificial Intelligence. 2012, 25 (2): 376-391.
  10. Sun J, Wu X, Fang W, Ding Y, Long H, Xu W: Multiple sequence alignment using the Hidden Markov Model trained by an improved quantum-behaved particle swarm optimization. Information Sciences. 2012, 182 (1): 93-114.
  11. Cho KH, Shin SY, Kim HW, Wolkenhauer O, McFerran B, Kolch W: Mathematical modelling of the influence of RKIP on the ERK signalling pathway. Proceedings of the First International Workshop on Computational Methods in Systems Biology. 2003, Springer-Verlag, 127-141.
  12. Swameye I, Muller TG, Timmer J, Sandra O, Klingmuller U: Identification of nucleocytoplasmic cycling as a remote sensor in cellular signalling by databased modelling. Proc Natl Acad Sci USA. 2003, 100: 1028-1033.
  13. Ashyraliyev M, Fomekong-Nanfack Y, Kaandorp JA, Blom JG: Systems biology: parameter estimation for biochemical models. FEBS J. 2009, 276: 886-902.
  14. Mendes P, Kell DB: Non-linear optimization of biochemical pathways: applications to metabolic engineering and parameter estimation. Bioinformatics. 1998, 14: 869-883.
  15. Peifer M, Timmer J: Parameter estimation in ordinary differential equations for biochemical processes using the method of multiple shooting. IET Syst Biol. 2007, 1: 78-88.
  16. Polisetty PK, Voit EO, Gatzke EP: Yield optimization of Saccharomyces cerevisiae using a GMA model and a MILP-based piecewise linear relaxation method. Santa Barbara, CA. 2005.
  17. Gonzalez OR, Kuper C, Jung K, Naval PC, Mendoza E: Parameter estimation using Simulated Annealing for S-system models of biochemical networks. Bioinformatics. 2007, 23: 480-486.
  18. Kikuchi S, Tominaga D, Arita M, Takahashi K, Tomita M: Dynamic modelling of genetic networks using genetic algorithm and S-system. Bioinformatics. 2003, 19: 643-650.
  19. Moles CG, Mendes P, Banga JR: Parameter estimation in biochemical pathways: a comparison of global optimization methods. Genome Res. 2003, 13: 2467-2474.
  20. Tsai KY, Wang FS: Evolutionary optimization with data collocation for reverse engineering of biological networks. Bioinformatics. 2005, 21: 1180-1188.
  21. Zi Z, Klipp E: SBML-PET: a systems biology markup language-based parameter estimation tool. Bioinformatics. 2006, 22: 2704-2705.
  22. Rodriguez-Fernandez M, Egea JA, Banga JR: Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems. BMC Bioinformatics. 2006, 7.
  23. Egea JA, Rodriguez-Fernandez M, Banga JR, Marti R: Scatter search for chemical and bioprocess optimization. Journal of Global Optimization. 2007, 37: 481-530.
  24. Besozzi D, Cazzaniga P, Mauri G, Pescini D, Vanneschi L: A comparison of genetic algorithms and particle swarm optimization for parameter estimation in stochastic biochemical systems. Proceedings of the 7th European Conference on Evolutionary Computation, Machine Learning and Data Mining in Bioinformatics. Edited by Pizzuti C, Ritchie MD, Giacobini M. 2009, Tübingen, Germany, 116-127.
  25. Dräger A, Kronfeld M, Ziller MJ, Supper J, Planatscher H, Magnus JB, Oldiges M, Kohlbacher O, Zell A: Modeling metabolic networks in C. glutamicum: a comparison of rate laws in combination with various parameter optimization strategies. BMC Systems Biology. 2009, 3: 5.
  26. Nakatsui M, Horimoto K, Okamoto M, Tokumoto Y, Miyake J: Parameter optimization by using differential elimination: a general approach for introducing constraints into objective functions. BMC Systems Biology. 2010, 4 (Suppl 2): S9.
  27. Zhang Y, Xuan J, de los Reyes BG, Clarke R, Ressom HW: Reverse engineering module networks by PSO-RNN hybrid modelling. BMC Genomics. 2009, 10 (Suppl 1): S15.
  28. Xu R, Venayagamoorthy GK, Wunsch DC II: Modeling of gene regulatory networks with hybrid differential evolution and particle swarm optimization. Neural Networks. 2007, 20 (8): 917-927.
  29. Katare S, Kalos A, West D: A hybrid swarm optimizer for efficient parameter estimation. Proceedings of the Congress on Evolutionary Computation. 2004, 309-315.
  30. Kutalik Z, Tucker W, Moulton V: S-system parameter estimation for noisy metabolic profiles using Newton-flow analysis. IET Systems Biology. 2007, 1: 174-180.
  31. Chou IC, Martens H, Voit EO: Parameter estimation in biochemical systems models with alternating regression. Theor Biol Med Model. 2006, 3: 25.
  32. Vilela M, Borges CCH, Vinga S, Vasconcelos ATR, Santos H, Voit EO, Almeida JS: Automated smoother for the numerical decoupling of dynamics models. BMC Bioinformatics. 2007, 8.
  33. Voit EO, Almeida J: Decoupling dynamical systems for pathway identification from metabolic profiles. Bioinformatics. 2004, 20: 1670-1681.
  34. Liu PK, Wang FS: Inference of biochemical network models in S-system using multiobjective optimization approach. Bioinformatics. 2008, 24: 1085-1092.
  35. Gennemark P, Wedelin D: Efficient algorithms for ordinary differential equation model identification of biological systems. IET Systems Biology. 2007, 1: 120-129.
  36. Maki Y, Ueda T, Okamoto M, Uematsu N, Inamura K, Uchida K, Takahashi Y, Eguchi Y: Inference of genetic network using the expression profile time course data of mouse P19 cells. Genome Informatics. 2002, 13: 382-383.
  37. Matsubara Y, Kikuchi S, Sugimoto M, Tomita M: Parameter estimation for stiff equations of biosystems using radial basis function networks. BMC Bioinformatics. 2006, 7.
  38. Chang WC, Li CW, Chen BS: Quantitative inference of dynamic regulatory pathways via microarray data. BMC Bioinformatics. 2005, 6: 44.
  39. van den Bergh F: An Analysis of Particle Swarm Optimizers. PhD dissertation, University of Pretoria, Pretoria, South Africa. 2002.
  40. Kennedy J: Some issues and practices for particle swarms. Proceedings of the IEEE Swarm Intelligence Symposium. 2007, 801-808.
  41. Omar MA: Elementary Solid State Physics: Principles and Applications. Addison-Wesley. 1993.
  42. Sun J, Zhao J, Wu X, Fang W, Cai Y, Xu W: Parameter estimation for chaotic systems with a drift particle swarm optimization method. Physics Letters A. 2010, 374 (28): 2816-2822.
  43. Sun J, Fang W, Lai C-H, Xu W: Solving the multi-stage portfolio optimization problem with a novel particle swarm optimization. Expert Systems with Applications. 2011, 38 (6): 6727-6735.
  44. Sun J, Wu X, Palade V, Fang W, Shi Y: Random drift particle swarm optimization. arXiv preprint. 2013, arXiv:1306.2863.
  45. Sun J, Palade V, Wu X, Fang W, Wang Z: Solving the power economic dispatch problem with generator constraints by random drift particle swarm optimization. IEEE Transactions on Industrial Informatics. 2013, 10 (1): 222-232.
  46. Sun J, Palade V, Wu X, Fang W: Multiple sequence alignment with hidden Markov models learned by random drift particle swarm optimization. IEEE/ACM Transactions on Computational Biology and Bioinformatics. 2014, doi:10.1109/TCBB.2013.148.
  47. Box GEP, Hunter WG, MacGregor JF, Erjavec J: Some problems associated with the analysis of multiresponse data. Technometrics. 1973, 15: 33-51.
  48. Holzinger A: On knowledge discovery and interactive intelligent visualization of biomedical data: Challenges in human-computer interaction & biomedical informatics. DATA - International Conference on Data Technologies and Applications. 2012, 5-16.
  49. Holzinger A, Yildirim P, Geier M, Simonic KM: Quality-based knowledge discovery from medical text on the Web. Example of computational methods in Web intelligence. Quality Issues in the Management of Web Information, ISRL. 2013, 50: 145-158.
  50. Kennedy J, Eberhart R: Particle swarm optimization. Proceedings of the IEEE International Conference on Neural Networks (ICNN 95). 1995, Perth, Australia, 1942-1948.
  51. Angeline PJ: Evolutionary optimization versus particle swarm optimization: philosophy and performance differences. Proceedings of the 7th International Conference on Evolutionary Programming VII. 1998, Springer-Verlag, 601-610.
  52. Shi Y, Eberhart R: A modified particle swarm optimizer. Proceedings of the IEEE International Conference on Evolutionary Computation. 1998, Anchorage, AK, 69-73.
  53. Clerc M: The swarm and the queen: towards a deterministic and adaptive particle swarm optimization. Proceedings of the 1999 Congress on Evolutionary Computation. 1999, 1951-1957.
  54. Suganthan PN: Particle swarm optimiser with neighbourhood operator. Proceedings of the 1999 Congress on Evolutionary Computation. 1999, 1962-1967.
  55. Kennedy J: Bare bones particle swarms. Proceedings of the IEEE Swarm Intelligence Symposium. 2003, 80-87.
  56. Janson S, Middendorf M: A hierarchical particle swarm optimizer and its adaptive variant. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics. 2005, 35: 1272-1282.
  57. Clerc M, Kennedy J: The particle swarm - explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation. 2002, 6: 58-73.
  58. Storn R, Price K: Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization. 1997, 11: 341-359.
  59. Glover F: Heuristics for integer programming using surrogate constraints. Decision Sciences. 1977, 8 (1): 156-166.
  60. Beyer HG, Schwefel HP: Evolution strategies - a comprehensive introduction. Natural Computing. 2002, 1: 3-52.
  61. Runarsson TP, Yao X: Stochastic ranking for constrained evolutionary optimization. IEEE Transactions on Evolutionary Computation. 2000, 4: 284-294.

Acknowledgements

This work is supported by the Natural Science Foundation of Jiangsu Province, China (Project Number: BK2010143), by the Natural Science Foundation of China (Project Numbers 61170119, 61105128, 61373055), by the Program for New Century Excellent Talents in University (Project Number: NCET-11-0660), by the RS-NSFC International Exchange Programme (Project Number: 61311130141), and by the Key grant Project of Chinese Ministry of Education (Project Number: 311024).

Declarations

Funding for the publication of this article comes from the Natural Science Foundation of China (NSFC) grant 61170119.

This article has been published as part of BMC Bioinformatics Volume 15 Supplement 6, 2014: Knowledge Discovery and Interactive Data Mining in Bioinformatics. The full contents of the supplement are available online at http://www.biomedcentral.com/bmcbioinformatics/supplements/15/S6.

Author information


Correspondence to Jun Sun or Vasile Palade.

Additional information

Authors' contributions

JS and VP designed the algorithm and drafted the manuscript. YC processed the experimental results. WF and XW revised the manuscript critically. All of the authors have read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Electronic supplementary material

Additional file 1: Source code. This file includes the source code of all tested algorithms on the two benchmark problems, programmed in C++ on Microsoft Visual C++ 6.0. All the source codes are compressed into a single archive. (ZIP 683 KB)


Additional file 2: Data. This file includes the data of the best solution out of 20 runs of each tested algorithm. (ZIP 63 KB)

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Sun, J., Palade, V., Cai, Y. et al. Biochemical systems identification by a random drift particle swarm optimization approach. BMC Bioinformatics 15 (Suppl 6), S1 (2014). https://doi.org/10.1186/1471-2105-15-S6-S1
