- Research
- Open Access
Biochemical systems identification by a random drift particle swarm optimization approach
- Jun Sun^{1},
- Vasile Palade^{2},
- Yujie Cai^{3},
- Wei Fang^{1} and
- Xiaojun Wu^{1}
https://doi.org/10.1186/1471-2105-15-S6-S1
© Sun et al.; licensee BioMed Central Ltd. 2014
Published: 16 May 2014
Abstract
Background
Finding an efficient method to solve the parameter estimation problem (inverse problem) for nonlinear biochemical dynamical systems could help promote functional understanding of signalling pathways at the system level. The problem is stated as a data-driven nonlinear regression problem, which is converted into a nonlinear programming problem with many nonlinear differential and algebraic constraints. Due to the typically ill-conditioned and multimodal nature of the problem, it is in general difficult for gradient-based local optimization methods to obtain satisfactory solutions. To surmount this limitation, many stochastic optimization methods have been employed to find the global solution of the problem.
Results
This paper presents an effective search strategy for a particle swarm optimization (PSO) algorithm that enhances the ability of the algorithm to estimate the parameters of complex dynamic biochemical pathways. The proposed algorithm is a new variant of random drift particle swarm optimization (RDPSO), which is used to solve the above mentioned inverse problem and compared with other well-known stochastic optimization methods. Two case studies on estimating the parameters of two nonlinear biochemical dynamic models were taken as benchmarks, under both noise-free and noisy simulation data scenarios.
Conclusions
The experimental results show that the novel variant of RDPSO algorithm is able to successfully solve the problem and obtain solutions of better quality than other global optimization methods used for finding the solution to the inverse problems in this study.
Background
Evolutionary algorithms (EAs) have been widely used for data mining tasks in Bioinformatics and Computational Biology [1, 2]. They are random search methods inspired by natural mechanisms existing in the biological world [1, 2]. EAs originally comprised four types of paradigms, namely, genetic algorithms (GAs), genetic programming (GP), evolution strategies (ES), and evolutionary programming (EP), with GAs being the most popular one. Data analysis tools traditionally used in Bioinformatics were mainly based on statistical techniques, such as regression and estimation, and EAs played significant roles in handling large biological data sets in a robust and computationally efficient manner [2].
Currently, evolutionary computing techniques mostly comprise conventional EAs (GAs, GP, ES and EP), swarm intelligence algorithms, artificial immune systems and differential evolution, as the main representative classes of evolutionary computing approaches [3]. Swarm intelligence is a class of evolutionary computing techniques simulating natural systems composed of many individuals that coordinate with one another through decentralized control and self-organization. The two most influential and classical examples of swarm intelligence approaches are particle swarm optimization (PSO) and ant colony optimization (ACO) algorithms, which have been widely used in many different fields [3–7]. In particular, PSO algorithms have shown their effectiveness in data mining tasks in bioinformatics due to their performance in solving difficult optimisation tasks [8–10].
Biochemical modelling can be considered a generic data-driven regression problem on the given experimental data. The goal of biochemical modeling is to build the mathematical formulations that quantitatively describe the dynamical behaviour of biochemical processes. For example, metabolic reactions are formulated as rate laws and described as a system of differential equations, the kinetic parameters of which are identified from a set of experimental data. Finding the solution of the parameter estimation problem, thus, plays a key role in building a dynamic model for a biochemical process, which, in turn, can help understand the functionality of the signalling pathways at the system level [11, 12].
Since solving the inverse problem in biochemical process modelling involves a task of nonlinear programming, many numerical optimization methods have been used to determine the parameters of biochemical models. These methods can be generally classified into two categories, namely, local optimization methods and global optimization methods [13]. The widely used local optimization tools for inverse problems are those based on gradient descent methods, the most popular being the Newton method [14]. This type of approach, however, cannot be applied to non-smooth problems, where the objective functions are discontinuous or have discontinuous derivatives. Direct search methods, such as the Hooke-Jeeves method, the Nelder-Mead simplex algorithm and the Downhill simplex algorithm, are another class of local optimization techniques that can find a local minimum without derivative information [13]. Normally, most local optimization approaches are used as single shooting methods. For each of them, the path of the optimization process leading to a final solution is determined by the initial conditions for the state variables. Therefore, such an algorithm may converge to a wrong minimum, particularly if the initial conditions depend on model parameters. To overcome this shortcoming, one can adopt multiple shooting methods, in which the time interval is partitioned and new initial conditions are used at the start of each time interval part [15]. These methods offer the possibility of circumventing local optima by enlarging the parameter space during the optimization process.
The aforementioned local search methods are generally less efficient for the inverse problems of biochemical models, which are multimodal and high-dimensional. In order to solve these hard inverse problems efficiently, one can turn to global optimization methods, most of which incorporate stochastic search strategies to prevent the search process from getting stuck in local optimal or suboptimal solutions. The branch-and-bound approach is a global optimization method that converts the inverse problem into a convex optimization problem so that a global optimal solution can be obtained [16]. This method requires a finite search space that can be divided into smaller subspaces. A notable disadvantage is that it is applicable only if the lower and upper bounds of the objective function can be computed. Simulated annealing (SA) can be effectively used for parameter estimation from time-course biochemical data, as shown in [17]. However, it has a slow convergence speed and high computational cost, and is not easy to parallelize. Genetic algorithms (GAs) represent a widely used global search technique that can be employed to predict the parameters of dynamic models [18]. Nevertheless, GAs are often criticized for their slow convergence speed and high computational cost. The evolution strategy (ES) approach demonstrated its ability to successfully solve inverse problems in a performance comparison made by Moles et al. [19] among a number of global optimization techniques on biochemical system identification problems. In contrast to SA, evolutionary algorithms, including ES and GAs, can be implemented as self-tuning methods and are parallelizable, with the stochastic ranking evolutionary strategy (SRES) method being a very successful example [19–21]. Scatter search (SS) is a population-based random search approach that was proposed to identify the appropriate parameters for nonlinear dynamic biological systems [22, 23].
As an evolutionary algorithm method, the SS method, as well as its hybrid with a local search step after the recombination operation, was shown to be efficient in solving inverse problems. Particle swarm optimization (PSO), also a population-based optimization technique from the swarm intelligence and evolutionary computation area, has demonstrated better performance than GAs in solving inverse problems [24, 25]. Hybrids of PSO with other methods have also shown their effectiveness in modelling biochemical dynamic systems [26–28]. However, PSO is sensitive to the neighbourhood topology of the swarm, as commented in [29].
Other methods for parameter estimation include the Newton-flow analysis [30], the alternating regression technique [31], decoupling approaches [32, 33], the collocation method [20, 34], and the decomposing method [35, 36]. These approximation techniques, when incorporated into an optimization algorithm, can help reduce the number of objective function evaluations, which are computationally very expensive. Additionally, radial basis function neural networks [37] and a quantitative inference method [38] have also been employed to solve inverse problems in biochemical process modelling.
In all of the above cases, the optimization approach is used to minimize the residual error of an inferred model against experimental data. A smaller error means that the model describes the dynamic behaviour of the biochemical system better and has more justification to be accepted as a valid mathematical representation of the system. Theoretically, the prediction error diminishes as the accuracy of the model increases. Since parameter estimation problems of complex dynamic systems (generally with many parameters and many differential equations) are high-dimensional, multimodal and more challenging to solve, yet allow more complex biochemical processes to be depicted, our goal in this study is to develop an efficient global optimization method for solving such inverse problems of complex biochemical dynamic systems.
After extensive and in-depth study, we selected the PSO algorithm as a candidate to be modified in order to achieve our goal of solving complex inverse problems. PSO attracted us because of its many advantages, such as fast convergence speed, low computational cost, easy parallelization and few parameters to adjust. However, PSO has the following shortcomings. First of all, it has been theoretically proven that PSO is not a globally convergent algorithm, nor even a locally convergent one, against the convergence criteria given in [39]. In practice, the algorithm is prone to being trapped in local optimal or suboptimal points for a high-dimensional problem, due to the weakening of its global search ability during the middle and later stages of the search process. Next, PSO is widely known to be sensitive to its search parameters, including the upper limit of the velocity, and even to the "swarm topology", so that users may find it difficult to select the parameters and the topology when using the algorithm [40]. Finally, the performance of PSO appears to be very sensitive to the setting of the upper and lower bounds of the search scope [40]. If the global optimal solution is located near the boundary of the search scope, the algorithm may have little chance of finding it. We have found that these shortcomings are mainly attributable to the velocity update equation, which is the essence of the PSO algorithm and where there seems to be much room for improvement to boost the global search ability of PSO.
In this study, inspired by the free electron model of metal conductors placed in an external electric field [41], we propose to use a variant of the PSO algorithm, called random drift particle swarm optimization (RDPSO), in order to achieve our goal of effectively estimating the parameters of complex biochemical dynamical systems. The motivation of the RDPSO algorithm is to improve the search ability of the PSO algorithm by fundamentally modifying the update equation of the particle's velocity, rather than by adding mechanisms on top of the original equation, which would likely increase the complexity of the algorithmic implementation as well as its computational cost. It differs from the drift particle swarm optimization (DPSO) proposed by us in [42, 43] in that it can strike a better balance between the global search and the local search of the particle swarm.
The original and basic RDPSO version was recently introduced by us in [44], and was used for solving other problems in [45, 46]. A novel variant of the RDPSO algorithm is proposed in this work to solve the parameter identification problem for two biochemical systems. The novel variant proposed here differs from the original one in that it employs an exponential distribution for sampling the velocity of the particles, whilst the original one used the Gaussian distribution.
The novel RDPSO variant is used for estimating the parameters of two benchmark models, one of which describes the thermal isomerization of α-pinene with 5 parameters [22, 47], and the other of which describes a three-step pathway with 36 parameters [19]. The results of RDPSO and some other well-known global optimization algorithms are then compared and discussed. It should be noted that although this paper focuses on parameter estimation for biochemical modelling, just as PSO and other EAs, the proposed RDPSO variant can be employed as a general-purpose tool for optimization problems in data mining tasks, such as clustering, classification, regression, and so forth, which widely exist in bioinformatics and computational biology [1, 2, 8, 48, 49].
Methods
Problem statement
The parameter estimation (inverse) problem can be formulated as the minimization of a cost function subject to the system dynamics:

$$J={\int}_{{t}_{0}}^{{t}_{f}}{\left[{y}_{\text{msd}}\left(t\right)-y\left(\theta ,t\right)\right]}^{\text{T}}W\left(t\right)\left[{y}_{\text{msd}}\left(t\right)-y\left(\theta ,t\right)\right]dt \quad (1)$$

subject to

$$f\left(\frac{dx}{dt},x,y,v,\theta ,t\right)=0 \quad (2)$$

$$x\left({t}_{0}\right)={x}_{0} \quad (3)$$

$$h\left(x,y,v,\theta ,t\right)=0 \quad (4)$$

$$g\left(x,y,v,\theta ,t\right)\le 0 \quad (5)$$

$${\theta}_{\text{L}}\le \theta \le {\theta}_{\text{U}} \quad (6)$$

where J is the cost function of the model, $\theta $ is the vector of model parameters to be estimated, ${y}_{\text{msd}}\left(t\right)$ is the experimental measure of a subset of the output state variables, $y\left(\theta ,t\right)$ is the prediction of those outputs by the model, x is the vector of differential state variables, and v is a vector of other (usually time-invariant) parameters that are not to be estimated. In Equation (1), $W\left(t\right)$ is the weighting (or scaling) matrix, and the equation can be discretized into a weighted least-squares estimator. In Equation (2), f is the set of differential and algebraic equality constraints describing the system dynamics (i.e., the nonlinear process model). Equation (3) gives the initial value of x. In Equations (4) and (5), h and g are equality and inequality path and point constraints on system performance. In addition, $\theta $ is subject to the upper and lower bounds described by the inequality constraints (6).
The above defined inverse problem is generally a multimodal (non-convex) optimization problem with multiple local optima, due to the nonlinearity and constraints of the system dynamics. Even though many local and global optimization methods have been proposed to solve the problem, as mentioned in the Background section, it is still challenging and necessary to develop efficient optimization algorithms for parameter estimation problems, especially those of dynamic systems with many parameters and many equations. Therefore, this study focuses on the optimization approach for the inverse problem, using the proposed variant of random drift particle swarm optimization (RDPSO) and other global optimization methods.
Particle swarm optimization
The original PSO algorithm was introduced by Kennedy and Eberhart in [50]. The algorithm was inspired by the observed social behaviour of bird flocks and fish schools, and it roots its methodology in both evolutionary computing and artificial life. It shares many similarities with EAs, in that both PSO and EAs are initialized randomly with a population of candidate solutions and then update the population iteratively, in order to approximate the global optimal solution to the given problem. However, unlike EAs, PSO has no evolution operators such as crossover and mutation, but performs optimization tasks by updating the particles' positions (potential solutions) according to a set of discrete difference equations. It was shown that the PSO algorithm has comparable and even better performance than GAs [51].
The velocity and position of particle i in the j-th dimension are updated according to

$${V}_{i,j}^{k+1}=w{V}_{i,j}^{k}+{c}_{1}{r}_{i,j}^{k}\left({P}_{i,j}^{k}-{X}_{i,j}^{k}\right)+{c}_{2}{R}_{i,j}^{k}\left({G}_{j}^{k}-{X}_{i,j}^{k}\right) \quad (7)$$

$${X}_{i,j}^{k+1}={X}_{i,j}^{k}+{V}_{i,j}^{k+1} \quad (8)$$

where $w$ is the inertia weight, ${c}_{1}$ and ${c}_{2}$ are the acceleration coefficients, ${P}_{i}^{k}$ is the personal best (pbest) position of particle i, and ${G}^{k}$ is the global best (gbest) position of the swarm. In Equation (7), ${r}_{i,j}^{k}$ and ${R}_{i,j}^{k}$ are sequences of two different random numbers with uniform distribution on the interval (0, 1), namely, ${r}_{i,j}^{k},{R}_{i,j}^{k}\sim U\left(0,1\right)$. In order to prevent the particle from flying out of the search scope, ${V}_{i,j}^{k}$ is restricted to the interval $\left[-{V}_{\text{max}},{V}_{\text{max}}\right]$, where ${V}_{\text{max}}$ is also a user-specified algorithmic parameter.
Many researchers have proposed different variants of PSO in order to improve the search performance of the algorithm, and have demonstrated this through empirical simulation [3–7, 52–56].
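To make the update rule above concrete, the following is a minimal Python sketch (not the authors' code) of one velocity-and-position update of the basic PSO with inertia weight; the parameter values w = 0.729 and c1 = c2 = 1.49 are the ones listed later in the experimental settings, while the function name and array shapes are our own:

```python
import numpy as np

def pso_step(X, V, P, G, w=0.729, c1=1.49, c2=1.49, vmax=1.0, rng=None):
    """One synchronous PSO update for a swarm.
    X, V, P: arrays of shape (m, D) holding current positions, velocities
    and pbest positions; G: gbest position of shape (D,)."""
    rng = rng or np.random.default_rng(0)
    r = rng.uniform(size=X.shape)           # r ~ U(0, 1), per particle and dimension
    R = rng.uniform(size=X.shape)           # R ~ U(0, 1), independent of r
    V = w * V + c1 * r * (P - X) + c2 * R * (G - X)
    V = np.clip(V, -vmax, vmax)             # restrict velocities to [-Vmax, Vmax]
    return X + V, V
```

If a particle sits exactly at both its pbest and the gbest with zero velocity, the update leaves it in place, which is one way PSO loses exploratory pressure late in the search.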
The proposed random drift particle swarm optimization
In the PSO algorithm, c_{1} and c_{2} are set to be equal, and thus ${\varphi}_{i,j}^{k}$ is a random number with uniform distribution on the interval (0, 1), i.e., ${\varphi}_{i,j}^{k}\sim U\left(0,1\right)$.
During the search process of PSO, as each particle's current position converges to its own local attractor, the current positions, the pbest positions, the local attractors and the gbest position all converge to one single point. The directional movement of each particle i towards ${P}_{i}^{k}$ resembles the drift motion of an electron in a metal conductor placed in an external electric field. According to the free electron model [41], the electron has not only a drift motion caused by the external electric field, but also a thermal motion, which appears as a random movement. The superposition of the drift and thermal motions drives the electron towards the location of minimum potential energy. Thus, if the position of an electron in the metal is regarded as a candidate solution and the potential energy function as the objective function to be minimized, the movement of the electron resembles the process of finding the minimum solution of the minimization problem.
can guarantee that the particle converges towards its local attractor, while the two random scaling coefficients add randomness to its motion, which means that the particle's position is sampled at uniformly random positions within the hyper-rectangle spanned by the gbest position and its personal best position. This randomness cannot enhance the particle's global search ability, because of the finite scope of the hyper-rectangle, but it may weaken the local search ability that the directional motion is responsible for. Therefore, the velocity of the particle, which is given by the sum of $V{1}_{i,j}^{k+1}$ and $V{2}_{i,j}^{k+1}$, may not strike a good balance between the global search and the local search of the particle. In the present study, we employ a new way of determining $V{1}_{i,j}^{k+1}$ and $V{2}_{i,j}^{k+1}$.
where
where s and ${u}_{i,j}^{k}$ are two different random numbers uniformly distributed on the interval (0, 1), i.e., $s,{u}_{i,j}^{k}\sim U\left(0,1\right)$. As for the value of ${\sigma}_{i,j}^{k}$, an adaptive strategy is adopted, determining it by ${\sigma}_{i,j}^{k}=2\alpha |{C}_{j}^{k}-{X}_{i,j}^{k}|$, where ${C}^{k}=\left({C}_{1}^{k},{C}_{2}^{k},\cdots ,{C}_{D}^{k}\right)$ is called the mean best (mbest) position, defined as the mean of the pbest positions of all the particles, i.e., ${C}_{j}^{k}=\frac{1}{m}{\sum}_{i=1}^{m}{P}_{i,j}^{k}$ $\left(1\le j\le D\right)$.
It can be immediately proven that if ${V}_{i,j}^{k+1}=V{2}_{i,j}^{k+1}$, then ${X}_{i,j}^{k}\to {P}_{i,j}^{k}$ as $k\to \infty $. Therefore, the expression of $V{2}_{i,j}^{k+1}$ in Equation (20) can indeed guarantee that the particle moves directionally towards ${P}_{i}^{k}$ as an overall result.
where $\alpha $ is called the thermal coefficient and $\beta $ is called the drift coefficient. The PSO with Equations (22) and (23) is a novel variant of RDPSO, which employs a double exponential distribution instead of a Gaussian one. The procedure of this RDPSO variant is outlined below.
Step 0: Randomly initialize the current positions and the pbest position of all the particles;
Step 1: Set k = 0;
Step 2: While the termination condition is not met, do the following steps;
Step 3: Set k=k+1 and compute the mbest position ${C}^{k}$, which is the centroid of the pbest positions of all the particles at iteration k;
Step 4: For each particle, starting from $i=1$, carry out the following steps;
Step 5: Evaluate the objective function value $f\left({X}_{i}^{k}\right)$, and update ${P}_{i}^{k}$ and ${G}^{k}$ according to Equation (10) and Equation (11), respectively;
Step 6: Update the components of the velocity and current position of particle i in each dimension, respectively, according to Equations (21), (22) and (23);
Step 7: Set i=i+1, and return to Step 5 until $i=m$;
Step 8: Return to Step 2;
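The procedure in Steps 0–8 can be sketched in Python as below. This is our own minimal reconstruction, not the authors' implementation, and it makes explicit assumptions: the local attractor is taken as a per-dimension convex combination of the pbest and gbest positions with a uniform weight φ, the thermal component V1 is drawn from a double-exponential (Laplace) distribution whose scale we set to σ/2, and the drift component is V2 = β(p − X); the exact sampling forms of Equations (21)–(23) may differ.

```python
import numpy as np

def rdpso(f, lo, hi, m=20, max_iter=500, alpha=0.75, beta=1.0, seed=1):
    """Minimize f over the box [lo, hi]; returns (gbest position, gbest value)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    D = lo.size
    X = rng.uniform(lo, hi, (m, D))           # Step 0: random initial positions
    P = X.copy()                              # pbest positions
    fP = np.array([f(x) for x in X])
    g = int(np.argmin(fP))
    G, fG = P[g].copy(), fP[g]                # gbest position and value
    vmax = hi - lo                            # velocity clamp (assumed equal to the range)
    for _ in range(max_iter):                 # Steps 2-8
        C = P.mean(axis=0)                    # Step 3: mbest = mean of all pbest positions
        for i in range(m):
            phi = rng.uniform(0, 1, D)
            p = phi * P[i] + (1 - phi) * G    # local attractor (assumed form)
            sigma = 2 * alpha * np.abs(C - X[i])
            # Thermal component: double-exponential sample; scale sigma/2 is an assumption
            V1 = rng.laplace(0.0, sigma / 2 + 1e-12)
            V2 = beta * (p - X[i])            # drift component toward the local attractor
            V = np.clip(V1 + V2, -vmax, vmax)
            X[i] = np.clip(X[i] + V, lo, hi)
            fx = f(X[i])                      # Step 5: evaluate and update pbest/gbest
            if fx < fP[i]:
                fP[i], P[i] = fx, X[i].copy()
                if fx < fG:
                    G, fG = X[i].copy(), fx
    return G, fG
```

Note how the thermal scale σ shrinks automatically as the swarm contracts around the mbest position, which is what lets the same two coefficients α and β govern both the global and the local phases of the search.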
Setting large values for $\alpha $ and $\beta $ implies better global search ability of the algorithm, while setting small values means better local search. When the RDPSO is used for solving a problem, a good balance between the global search and the local search of the algorithm is crucial for the algorithmic performance. However, finding out how to tune the parameters for generally good algorithmic performance requires a large number of experiments on benchmark functions, which will be performed in our future work. Here, we recommend that when the RDPSO is used, $\alpha $ should be set no larger than 1.0 and $\beta $ no larger than 1.5. More specifically, when the problem at hand is complex, the values of the two parameters should be set relatively large in order to make the particles search more globally; on the other hand, when the problem is simple, relatively smaller values should be selected, for the purpose of faster convergence of the algorithm. In the present study, the values of $\alpha $ and $\beta $ were set to 0.75 and 1.0, respectively.
In addition, the population size and the maximum number of iterations (MaxIter) can also affect the performance of a population-based technique. Just as for other PSO variants, it is suggested that the population size should be larger than 20 for the RDPSO as well. The value of the MaxIter depends on the complexity of the problem. Generally, a smaller MaxIter value is used for simple problems, while a larger one is used for complex problems.
Moreover, ${V}_{i,j}^{k}$ is also restricted within the interval $\left[-{V}_{\text{max}},{V}_{\text{max}}\right]$ during the search process of the RDPSO algorithm, just as in the original PSO algorithm.
The optimization methods compared
Besides the PSO and RDPSO algorithms, the Differential Evolution (DE) method, the Scatter Search (SS) method and two versions of Evolution Strategies were also used to solve the selected inverse problems, for performance comparison purposes. The DE method, presented by Storn and Price [58], is an evolutionary computing method that has a faster convergence speed than GAs and can effectively find the global optimal solution of a multidimensional, multimodal function [58].
The SS method is also a population-based search technique, originally developed by Glover [59]. In [22], a novel meta-heuristic method, which combines the original SS method with a local search technique, was proposed to solve inverse problems. It was shown that the local search operator can accelerate the convergence speed significantly. Thus, in our experiments, we used this novel SS method for performance comparison.
Evolution strategy (ES) is an important paradigm of EAs, which imitates the effects that genetics produces on the phenotype, rather than on the genotype as in GAs [60]. The two canonical versions of ES we used in this study are denoted by (μ, λ)-ES and (μ + λ)-ES, where μ denotes the number of parents and λ the number of offspring. In the (μ, λ)-ES, the parents are deterministically selected from the offspring (μ < λ must hold), while in the (μ + λ)-ES, the parents are selected from both the parents and the offspring.
In addition, the performances of the above mentioned algorithms, including the RDPSO, are also compared with those of the SRES method. The SRES is a version of (μ, λ)-ES that uses stochastic ranking to handle the constraints, by adjusting the balance between the objective function and the penalty function over the course of the search [19, 61].
Case studies
Two case studies involving two benchmark systems were carried out. For each system, we performed two groups of numerical experiments, one with noise-free simulation data, and the other with noisy simulation data.
Case study 1
where $\left({p}_{1},{p}_{2},\cdots ,{p}_{5}\right)$ is the vector of unknown coefficients to be estimated, and ${y}_{1}$, ${y}_{2}$, ${y}_{3}$, ${y}_{4}$ and ${y}_{5}$ denote the concentrations of α-pinene, dipentene, allo-ocimene, β-pyronene and a dimer, respectively.
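The model equations themselves are not reproduced above; in the standard formulation of this benchmark used in [22, 47], the right-hand side is commonly written as in the following Python sketch. This is our reconstruction from the literature (the function name is our own) and should be checked against the original references; one supporting property of this form is that it conserves total concentration, since every consumption term reappears as a production term:

```python
import numpy as np

def alpha_pinene_rhs(y, t, p):
    """Right-hand side dy/dt of the alpha-pinene isomerization model
    (assumed form): y = (y1..y5) concentrations, p = (p1..p5) rate constants."""
    y1, y2, y3, y4, y5 = y
    p1, p2, p3, p4, p5 = p
    return np.array([
        -(p1 + p2) * y1,                     # alpha-pinene consumed by two routes
        p1 * y1,                             # dipentene produced
        p2 * y1 - (p3 + p4) * y3 + p5 * y5,  # allo-ocimene: produced, consumed, re-formed
        p3 * y3,                             # beta-pyronene produced
        p4 * y3 - p5 * y5,                   # dimer formed reversibly from allo-ocimene
    ])
```

Because the derivatives sum to zero, the total concentration stays equal to its initial value along any trajectory, which is a cheap sanity check for any implementation of the forward problem.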
Case study 2
where ${M}_{1}$, ${M}_{2}$, ${E}_{1}$, ${E}_{2}$, ${E}_{3}$, ${G}_{1}$, ${G}_{2}$ and ${G}_{3}$ are the state variables representing the concentrations of the species involved in the different biochemical reactions, and S and P are controlling parameters, which are kept fixed at their initial values for each experiment. The inverse problem is then reduced to the optimization problem of fitting the remaining 36 parameters, represented by $\theta =\left({\theta}_{1},{\theta}_{2},\cdots ,{\theta}_{36}\right)$.
Objective functions
where n is the number of data points for each experiment, l is the number of experiments, ${y}_{\text{exp}}$ is the vector of experimental values of the state variables, and ${y}_{\text{pred}}$ is the vector of the values of the state variables predicted by the model with a given set of parameters. In Case Study 1, each ${w}_{ij}$ was set to 1 [22], while in Case Study 2, ${w}_{ij}$ was set as ${w}_{ij}={\left\{1/\text{max}{\left[{y}_{\text{pred}}\left(i\right)\right]}_{j}\right\}}^{2}$, which normalizes the contributions of each term [19].
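As a minimal sketch of the weighted least-squares cost described above (the function name and array layout are our own assumptions), the objective can be computed as:

```python
import numpy as np

def weighted_lsq_cost(y_exp, y_pred, w):
    """Weighted least-squares cost J summed over all experiments, sample
    times and state variables. y_exp and y_pred have shape
    (l, n, n_states); w is a scalar or any array that broadcasts against
    them (e.g. per-experiment, per-state weights of shape (l, 1, n_states))."""
    r = np.asarray(y_exp) - np.asarray(y_pred)
    return float(np.sum(w * r ** 2))
```

For Case Study 2, the per-state weights could be built as `w = (1.0 / y_pred.max(axis=1, keepdims=True)) ** 2`, mirroring the normalization from [19]; for Case Study 1 a scalar `w = 1.0` suffices.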
Obtaining simulation data
Experiment generation (simulation): values of the controlling parameters S and P.

P | 0.05 | 0.13572 | 0.36840 | 1.0
---|---|---|---|---
S | 0.1 | 0.46416 | 2.1544 | 10
Parameter | Element of decision variables vector | Nominal value | Parameter | Element of decision variables vector | Nominal value
---|---|---|---|---|---
V _{1} | θ _{1} | 1 | V _{4} | θ _{19} | 0.1 |
Ki _{1} | θ _{2} | 1 | K _{4} | θ _{20} | 1 |
ni _{1} | θ _{3} | 2 | k _{4} | θ _{21} | 0.1 |
Ka _{1} | θ _{4} | 1 | V _{5} | θ _{22} | 0.1 |
na _{1} | θ _{5} | 2 | K _{5} | θ _{23} | 1 |
k _{1} | θ _{6} | 1 | k _{5} | θ _{24} | 0.1 |
V _{2} | θ _{7} | 1 | V _{6} | θ _{25} | 0.1 |
Ki _{2} | θ _{8} | 1 | K _{6} | θ _{26} | 1 |
ni _{2} | θ _{9} | 2 | k _{6} | θ _{27} | 0.1 |
Ka _{2} | θ _{10} | 1 | kcat _{1} | θ _{28} | 1
na _{2} | θ _{11} | 2 | Km _{1} | θ _{29} | 1 |
k _{2} | θ _{12} | 1 | Km _{2} | θ _{30} | 1 |
V _{3} | θ _{13} | 1 | kcat _{2} | θ _{31} | 1 |
Ki _{3} | θ _{14} | 1 | Km _{3} | θ _{32} | 1 |
ni _{3} | θ _{15} | 2 | Km _{4} | θ _{33} | 1 |
Ka _{3} | θ _{16} | 1 | kcat _{3} | θ _{34} | 1
na _{3} | θ _{17} | 2 | Km _{5} | θ _{35} | 1 |
k _{3} | θ _{18} | 1 | Km _{6} | θ _{36} | 1 |
where z and ${z}^{\prime}$ represent the original noise-free data and the resulting noisy data, respectively, $\epsilon $ is a random number with standard normal distribution, namely, $\epsilon \sim N\left(0,1\right)$, and $\sigma $ is the standard deviation of the white noise. In our case studies, $\sigma $ was set to 0.01 for both systems.
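As a minimal sketch of the noisy-data generation (assuming the additive form $z^{\prime}=z+\sigma \epsilon $, since the exact equation is elided above; the function name is our own):

```python
import numpy as np

def add_noise(z, sigma=0.01, rng=None):
    """Corrupt noise-free simulation data z with Gaussian white noise of
    standard deviation sigma, assuming z' = z + sigma * eps, eps ~ N(0, 1)."""
    rng = rng or np.random.default_rng(0)
    return z + sigma * rng.standard_normal(np.shape(z))
```

With sigma = 0.01, the corrupted series has the original values as its mean and a 0.01 standard deviation around them, matching the setting used for both systems.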
Initial value problem solver used
During the search of each global optimization algorithm, each potential solution (i.e., a set of estimated values for the parameters) was substituted into the dynamic model. Then, the fourth-order Runge-Kutta method was applied to the corresponding system of differential equations to generate a set of predicted values of the output state variables, from which the objective function value (or fitness value) of the potential solution could be evaluated according to Equation (37) with the obtained pseudo-experimental (simulation) data. This process is known as the solution of the forward problem, and it was embedded in each iteration of the search when solving the inverse problem with the algorithm.
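The classic fourth-order Runge-Kutta step used for the forward problem can be sketched as below; this is a generic fixed-step implementation (names and interface are our own), with one step per sampling interval:

```python
import numpy as np

def rk4_solve(f, x0, ts, theta):
    """Integrate dx/dt = f(x, t, theta) with the classic 4th-order
    Runge-Kutta method, returning the state at each time in ts
    (one fixed step per consecutive pair of sample times)."""
    xs = [np.asarray(x0, dtype=float)]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        h = t1 - t0
        x = xs[-1]
        k1 = f(x, t0, theta)
        k2 = f(x + h / 2 * k1, t0 + h / 2, theta)
        k3 = f(x + h / 2 * k2, t0 + h / 2, theta)
        k4 = f(x + h * k3, t1, theta)
        xs.append(x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(xs)
```

Since the fitness of every candidate parameter vector requires one such integration per experiment, the forward solver dominates the runtime, which is consistent with the CPU times reported in the Results tables.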
Experimental settings
For the sake of performance comparison, all the tested global optimization methods except the SS method (i.e., PSO, RDPSO, DE, (μ, λ)-ES, and (μ + λ)-ES) were programmed in C++ on a VC++6.0 platform in Windows XP environment, and implemented on Intel Pentium Dual-Core E5300 2.6GHz PCs, each with 2 MB cache and 4 GB main memory. The SS method was implemented in Matlab 7.0, on the same platform, for the purpose of calling the local solver SQP in Matlab during the search process. The software for SS for inverse problems can be found on http://www.iim.csic.es/~gingproc/ssmGO.html.
Configuration of search parameters in different algorithms (F is an algorithmic control parameter used to control the size of the perturbation in the mutation operator for DE, CR is the crossover constant in DE, varphi is a parameter determining the standard deviation of the mutation in the evolution strategy).
Algorithm | RDPSO | SS method | PSO | DE | (μ, λ)-ES | (μ+ λ)-ES |
---|---|---|---|---|---|---|
Search parameters | Population Size = 100, α = 0.75, β = 1.0 | Initial Population Size = 100, 10 individuals selected after initialization | Population Size = 100, w = 0.729, c1 = 1.49, c2 = 1.49 | Population Size = 100, F = 0.5, CR = 0.55 | λ = 100, μ = 10, varphi = 1 | λ = 100, μ = 10, varphi = 1
Results
J values and computational time for the global optimization methods in Case Study 1
Results for the Experiments (simulations) with Noise-free Data | ||||||
---|---|---|---|---|---|---|
Algorithms | RDPSO | SS method | ( μ, λ )-ES | PSO | DE | ( μ+ λ )-ES |
Best Value of J | 1.3740e-14 | 0.3930 | 301.9941 | 3.4461e-05 | 258.892 | 339.4941
Mean Value of J | 2.5845e-04 | 0.5703 | 2.8522e+06 | 0.0225 | 2.3197e+03 | 1.9103e+06
Standard Deviation of J | 3.5978e-04 | 0.1348 | 2.2673e+06 | 0.0183 | 704.7776 | 2.2118e+06
CPU time(h) | 0.0347 | -- | 0.0419 | 0.0330 | 0.0353 | 0.0419 |
Results for the Experiments (simulations) with Noisy Data | ||||||
Algorithms | RDPSO | SS method | ( μ , λ )-ES | PSO | DE | ( μ + λ )-ES |
Best Value of J | 0.2023 | 0.7195 | 388.9242 | 0.2028 | 1.0273e+03 | 52.4817
Mean Value of J | 0.2026 | 0.8361 | 2.1952e+05 | 0.2083 | 2.5488e+03 | 8.2498e+05
Standard Deviation of J | 4.2918e-04 | 0.0743 | 3.2308e+05 | 0.0126 | 5393.6832 | 1.6245e+06
CPU time(h) | 0.0349 | -- | 0.0423 | 0.0338 | 0.0351 | 0.0422 |
J values and computational time of the global optimization methods for Case Study 2, including the results for the noise-free and noisy data.
Results for the Experiments with Noise-free Data | ||||||
---|---|---|---|---|---|---|
Algorithms | RDPSO | SS method | ( μ, λ )-ES | PSO | DE | ( μ + λ )-ES |
Best Value of J | 0.009124 | 7.1358e-07 | 0.022858 | 7.140163 | 10.168989 | 0.123209 |
Mean Value of J | 0.178881 | 3.4274e-06 | 0.736311 | 10.3859 | 17.701876 | 2.141820
Standard Deviation of J | 0.252749 | 1.3649e-06 | 0.960729 | 3.1927 | 4.112377 | 1.692726 |
CPU time(h) | 52.4 | -- | 54.9 | 49.2 | 53.8 | 53.3 |
Results for the Experiments with Noisy Data | ||||||
Algorithms | RDPSO | SS method | ( μ , λ )-ES | PSO | DE | ( μ + λ )-ES |
Best Value of J | 0.2313 | 0.2337 | 2.5957 | 7.7433 | 11.7900 | 5.1490 |
Mean Value of J | 0.3459 | 0.3106 | 3.6029 | 11.2353 | 18.5928 | 10.8691 |
Standard Deviation of J | 0.1268 | 0.1325 | 0.1730 | 3.2921 | 3.3616 | 3.7065 |
CPU time(h) | 52.4 | -- | 54.9 | 49.2 | 53.8 | 53.3 |
Decision vector for the solution found by RDPSO in the experiment with noise-free data for Case Study 2.
Elements of best vector | | | | |
---|---|---|---|---|
θ_{1}-θ_{4} | 0.890001 | 0.996928 | 1.990488 | 1.000000 |
θ_{5}-θ_{8} | 1.998655 | 0.885005 | 1.000085 | 1.000213 |
θ_{9}-θ_{12} | 1.898600 | 0.999900 | 2.002120 | 1.010091 |
θ_{13}-θ_{16} | 0.998763 | 1.005851 | 2.000021 | 0.995512 |
θ_{17}-θ_{20} | 2.000150 | 0.996318 | 0.100025 | 1.000013 |
θ_{21}-θ_{24} | 0.100211 | 0.099875 | 1.000361 | 0.100021 |
θ_{25}-θ_{28} | 0.100004 | 1.000300 | 0.100008 | 1.000750 |
θ_{29}-θ_{32} | 1.000321 | 0.987620 | 1.000041 | 1.000035 |
θ_{33}-θ_{36} | 0.997855 | 0.998856 | 1.000000 | 1.000001 |
Decision vector for the solution found by RDPSO in the experiment with noisy data for Case Study 2.

Elements of best vector | | | | |
---|---|---|---|---|
θ_{1}-θ_{4} | 1.0128 | 0.9962 | 1.9923 | 1.0220 |
θ_{5}-θ_{8} | 1.9698 | 1.0057 | 1.4290 | 0.7919 |
θ_{9}-θ_{12} | 1.8388 | 1.6173 | 1.4905 | 0.9384 |
θ_{13}-θ_{16} | 1.2683 | 1.0180 | 1.3832 | 0.9271 |
θ_{17}-θ_{20} | 2.0248 | 1.3612 | 0.1242 | 1.4658 |
θ_{21}-θ_{24} | 0.0956 | 0.1101 | 0.5970 | 0.1482 |
θ_{25}-θ_{28} | 0.0979 | 1.0012 | 0.0977 | 0.9916 |
θ_{29}-θ_{32} | 1.3351 | 1.9900 | 1.3957 | 1.9209 |
θ_{33}-θ_{36} | 1.5504 | 1.3801 | 1.6637 | 1.1668 |
Conclusions
In this paper, a variant of the RDPSO algorithm was proposed and shown to be able to successfully solve two inverse problems, associated with the thermal isomerization of α-pinene and with a three-step biochemical pathway, respectively. The results indicate that the proposed RDPSO algorithm outperformed its PSO predecessors and the other compared methods on the first problem, and achieved the second best performance among all the compared algorithms on the second.
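The core of the RDPSO search strategy can be illustrated in a short sketch. This is a minimal, hedged rendition based on the RDPSO formulation reported in the literature, in which each coordinate receives a Gaussian random component scaled by the distance to the swarm's mean best position plus a drift toward a local attractor lying between the personal and global bests; the identifier names and coefficient values below are illustrative assumptions, not the tuned settings used in the experiments.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// One particle: current position and personal best position.
struct RdpsoParticle {
    std::vector<double> x;
    std::vector<double> pbest;
};

// One RDPSO position update. Per coordinate j:
//   p      = phi * pbest + (1 - phi) * gbest   (local attractor, phi ~ U(0,1))
//   v      = alpha * |meanBest - x| * N(0,1)   (random "thermal" motion)
//          + beta  * (p - x)                   (drift toward the attractor)
//   x     += v
void rdpsoStep(RdpsoParticle& prt, const std::vector<double>& gbest,
               const std::vector<double>& meanBest, double alpha, double beta,
               std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    std::normal_distribution<double> n(0.0, 1.0);
    for (std::size_t j = 0; j < prt.x.size(); ++j) {
        double phi = u(rng);
        double p = phi * prt.pbest[j] + (1.0 - phi) * gbest[j];
        double v = alpha * std::fabs(meanBest[j] - prt.x[j]) * n(rng)
                 + beta * (p - prt.x[j]);
        prt.x[j] += v;
    }
}
```

In a full optimizer this step would be applied to every particle each iteration, with `meanBest` recomputed as the average of all personal bests and `alpha` typically decreased over the run.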
Like other stochastic optimization methods, a possible drawback of the RDPSO method is the computational effort it requires, most of which is spent on solving the forward problem, i.e., numerically simulating the model for each candidate parameter vector. One remedy is to incorporate a local search technique into the algorithm in order to accelerate its convergence. Another is to develop a parallelized RDPSO implementation that solves inverse problems on computer clusters, reducing the computational cost to a reasonable level. Our future work will focus on these two ways of improving the effectiveness of the RDPSO algorithm.
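The parallelization idea rests on the observation that, within one iteration, the forward simulations for the swarm's candidate parameter vectors are mutually independent. The sketch below distributes them with an OpenMP loop; a toy sum-of-squares function stands in for the ODE-based cost J, and all names here are illustrative assumptions rather than the paper's implementation.

```cpp
#include <cassert>
#include <vector>

// Toy stand-in for the real objective J(theta): in the actual inverse problem
// this would integrate the ODE model and sum squared residuals against data.
double toyObjective(const std::vector<double>& theta) {
    double j = 0.0;
    for (double t : theta) j += (t - 1.0) * (t - 1.0);  // minimum at theta = 1
    return j;
}

// Evaluate the whole swarm. The iterations are independent, so the loop can be
// parallelized; the pragma is a no-op when compiled without OpenMP, and on a
// cluster the same structure maps to MPI ranks instead of threads.
std::vector<double> evaluateSwarm(const std::vector<std::vector<double>>& swarm) {
    std::vector<double> cost(swarm.size());
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(swarm.size()); ++i)
        cost[i] = toyObjective(swarm[i]);
    return cost;
}
```

Because the forward solve dominates the runtime, speedup from this scheme is close to linear in the number of workers as long as the swarm size is a multiple of the worker count.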
Availability and requirements
In additional file 1, the source code of five of the tested algorithms for the two benchmark systems is provided. It includes two folders, one for benchmark system 1 and the other for benchmark system 2. All the algorithms are implemented in C++ using Microsoft Visual C++ 6.0.
In additional file 2, the data files for the five algorithms used in the two case studies are provided. For each case, the data corresponding to the best results generated by 20 runs of each algorithm are provided in a .txt file.
Declarations
Acknowledgements
This work is supported by the Natural Science Foundation of Jiangsu Province, China (Project Number: BK2010143), by the Natural Science Foundation of China (Project Numbers 61170119, 61105128, 61373055), by the Program for New Century Excellent Talents in University (Project Number: NCET-11-0660), by the RS-NSFC International Exchange Programme (Project Number: 61311130141), and by the Key grant Project of Chinese Ministry of Education (Project Number: 311024).
Funding for the publication of this article comes from the Natural Science Foundation of China (NSFC) grant 61170119.
This article has been published as part of BMC Bioinformatics Volume 15 Supplement 6, 2014: Knowledge Discovery and Interactive Data Mining in Bioinformatics. The full contents of the supplement are available online at http://www.biomedcentral.com/bmcbioinformatics/supplements/15/S6.
References
- Fogel GB, Corne DW: Evolutionary computation in bioinformatics. Morgan Kaufmann. 2002
- Pal SK, Bandyopadhyay S, Ray SS: Evolutionary computation in bioinformatics: A review. IEEE Transactions on Systems, Man, and Cybernetics-Part C: Applications and Reviews. 2006, 36 (5): 601-615
- Sun J, Lai C-H, Wu X: Particle swarm optimization: Classical and quantum perspectives. CRC Press. 2011
- Sun J, Fang W, Wu X, Palade V, Xu W: Quantum-behaved particle swarm optimization: Analysis of individual particle behaviour and parameter selection. Evolutionary Computation. 2012, 20 (3): 349-393. 10.1162/EVCO_a_00049
- Sun J, Wu X, Fang W, Palade V, Lai CH, Xu W: Convergence analysis and improvements of quantum-behaved particle swarm optimization. Information Sciences. 2012, 193: 81-103
- Sun J, Fang W, Palade V, Wu X, Xu W: Quantum-behaved particle swarm optimization with Gaussian distributed local attractor point. Applied Mathematics and Computation. 2011, 218 (7): 3763-3775. 10.1016/j.amc.2011.09.021
- Sun J, Fang W, Wu X, Xie Z, Xu W: QoS multicast routing using a quantum-behaved particle swarm optimization algorithm. Engineering Applications of Artificial Intelligence. 2011, 24 (1): 123-131. 10.1016/j.engappai.2010.08.001
- Das S, Abraham A, Konar A: Swarm intelligence algorithms in bioinformatics. Computational Intelligence in Bioinformatics, Studies in Computational Intelligence. 2008, 94: 113-147. 10.1007/978-3-540-76803-6_4
- Sun J, Chen W, Fang W, Wu X, Xu W: Gene expression data analysis with the clustering method based on an improved quantum-behaved particle swarm optimization. Engineering Applications of Artificial Intelligence. 2012, 25 (2): 376-391. 10.1016/j.engappai.2011.09.017
- Sun J, Wu X, Fang W, Ding Y, Long H, Xu W: Multiple sequence alignment using the Hidden Markov Model trained by an improved quantum-behaved particle swarm optimization. Information Sciences. 2012, 182 (1): 93-114. 10.1016/j.ins.2010.11.014
- Cho KH, Shin SY, Kim HW, Wolkenhauer O, McFerran B, Kolch W: Mathematical modelling of the influence of RKIP on the ERK signalling pathway. In Proceedings of the First International Workshop on Computational Methods in Systems Biology. Springer-Verlag. 2003, 127-141
- Swameye I, Muller TG, Timmer J, Sandra O, Klingmuller U: Identification of nucleocytoplasmic cycling as a remote sensor in cellular signalling by databased modelling. Proc Natl Acad Sci USA. 2003, 100: 1028-1033
- Ashyraliyev M, Fomekong-Nanfack Y, Kaandorp JA, Blom JG: Systems biology: parameter estimation for biochemical models. FEBS J. 2009, 276: 886-902. 10.1111/j.1742-4658.2008.06844.x
- Mendes P, Kell DB: Non-linear optimization of biochemical pathways: applications to metabolic engineering and parameter estimation. Bioinformatics. 1998, 14: 869-883. 10.1093/bioinformatics/14.10.869
- Peifer M, Timmer J: Parameter estimation in ordinary differential equations for biochemical processes using the method of multiple shooting. IET Syst Biol. 2007, 1: 78-88. 10.1049/iet-syb:20060067
- Polisetty PK, Voit EO, Gatzke EP: Yield optimization of Saccharomyces cerevisiae using a GMA model and a MILP-based piecewise linear relaxation method. Santa Barbara, CA. 2005
- Gonzalez OR, Kuper C, Jung K, Naval PC, Mendoza E: Parameter estimation using Simulated Annealing for S-system models of biochemical networks. Bioinformatics. 2007, 23: 480-486. 10.1093/bioinformatics/btl522
- Kikuchi S, Tominaga D, Arita M, Takahashi K, Tomita M: Dynamic modelling of genetic networks using genetic algorithm and S-system. Bioinformatics. 2003, 19: 643-650. 10.1093/bioinformatics/btg027
- Moles CG, Mendes P, Banga JR: Parameter estimation in biochemical pathways: a comparison of global optimization methods. Genome Res. 2003, 13: 2467-2474. 10.1101/gr.1262503
- Tsai KY, Wang FS: Evolutionary optimization with data collocation for reverse engineering of biological networks. Bioinformatics. 2005, 21: 1180-1188. 10.1093/bioinformatics/bti099
- Zi Z, Klipp E: SBML-PET: A systems biology markup language-based parameter estimation tool. Bioinformatics. 2006, 22: 2704-2705. 10.1093/bioinformatics/btl443
- Rodriguez-Fernandez M, Egea JA, Banga JR: Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems. BMC Bioinformatics. 2006, 7
- Egea JA, Rodriguez-Fernandez M, Banga JR, Marti R: Scatter search for chemical and bioprocess optimization. Journal of Global Optimization. 2007, 37: 481-530. 10.1007/s10898-006-9075-3
- Besozzi D, Cazzaniga P, Mauri G, Pescini D, Vanneschi L: A comparison of genetic algorithms and particle swarm optimization for parameter estimation in stochastic biochemical systems. In Proceedings of the 7th European Conference on Evolutionary Computation, Machine Learning and Data Mining in Bioinformatics. Edited by: Pizzuti C, Ritchie MD, Giacobini M. 2009, Tubingen, Germany, 116-127
- Dräger A, Kronfeld M, Ziller MJ, Supper J, Planatscher H, Magnus JB, Oldiges M, Kohlbacher O, Zell A: Modeling metabolic networks in C. glutamicum: a comparison of rate laws in combination with various parameter optimization strategies. BMC Systems Biology. 2009, 3: 5. 10.1186/1752-0509-3-5
- Nakatsui M, Horimoto K, Okamoto M, Tokumoto Y, Miyake J: Parameter optimization by using differential elimination: a general approach for introducing constraints into objective functions. BMC Systems Biology. 2010, 4 (Suppl 2): S9. 10.1186/1752-0509-4-S2-S9
- Zhang Y, Xuan J, de los Reyes BG, Clarke R, Ressom HW: Reverse engineering module networks by PSO-RNN hybrid modelling. BMC Genomics. 2009, 10 (Suppl 1): S15
- Xu R, Venayagamoorthy GK, Wunsch DC: Modeling of gene regulatory networks with hybrid differential evolution and particle swarm optimization. Neural Networks. 2007, 20 (8): 917-927. 10.1016/j.neunet.2007.07.002
- Katare S, Kalos A, West D: A hybrid swarm optimizer for efficient parameter estimation. In Proceedings of Congress on Evolutionary Computation. 2004, 309-315
- Kutalik Z, Tucker W, Moulton V: S-system parameter estimation for noisy metabolic profiles using Newton-flow analysis. IET Systems Biology. 2007, 1: 174-180. 10.1049/iet-syb:20060064
- Chou IC, Martens H, Voit EO: Parameter estimation in biochemical systems models with alternating regression. Theor Biol Med Model. 2006, 3: 25. 10.1186/1742-4682-3-25
- Vilela M, Borges CCH, Vinga S, Vasconcelos ATR, Santos H, Voit EO, Almeida JS: Automated smoother for the numerical decoupling of dynamics models. BMC Bioinformatics. 2007, 8
- Voit EO, Almeida J: Decoupling dynamical systems for pathway identification from metabolic profiles. Bioinformatics. 2004, 20: 1670-1681. 10.1093/bioinformatics/bth140
- Liu PK, Wang FS: Inference of biochemical network models in S-system using multiobjective optimization approach. Bioinformatics. 2008, 24: 1085-1092. 10.1093/bioinformatics/btn075
- Gennemark P, Wedelin D: Efficient algorithms for ordinary differential equation model identification of biological systems. IET Systems Biology. 2007, 1: 120-129. 10.1049/iet-syb:20050098
- Maki Y, Ueda T, Okamoto M, Uematsu N, Inamura K, Uchida K, Takahashi Y, Eguchi Y: Inference of genetic network using the expression profile time course data of mouse P19 cells. Genome Informatics. 2002, 13: 382-383
- Matsubara Y, Kikuchi S, Sugimoto M, Tomita M: Parameter estimation for stiff equations of biosystems using radial basis function networks. BMC Bioinformatics. 2006, 7
- Chang WC, Li CW, Chen BS: Quantitative inference of dynamic regulatory pathways via microarray data. BMC Bioinformatics. 2005, 6: 44. 10.1186/1471-2105-6-44
- van den Bergh F: An analysis of particle swarm optimizers. Ph.D. dissertation, University of Pretoria, Pretoria, South Africa. 2002
- Kennedy J: Some issues and practices for particle swarms. In Proceedings of the IEEE Swarm Intelligence Symposium. 2007, 801-808
- Omar MA: Elementary solid state physics: principles and applications. Addison Wesley. 1993
- Sun J, Zhao J, Wu X, Fang W, Cai Y, Xu W: Parameter estimation for chaotic systems with a drift particle swarm optimization method. Physics Letters A. 2010, 374 (28): 2816-2822. 10.1016/j.physleta.2010.04.071
- Sun J, Fang W, Lai C-H, Xu W: Solving the multi-stage portfolio optimization problem with a novel particle swarm optimization. Expert Systems with Applications. 2011, 38 (6): 6727-6735. 10.1016/j.eswa.2010.11.061
- Sun J, Wu X, Palade V, Fang W, Shi Y: Random drift particle swarm optimization. arXiv preprint. 2013, arXiv:1306.2863
- Sun J, Palade V, Wu X, Fang W, Wang Z: Solving the power economic dispatch problem with generator constraints by random drift particle swarm optimization. IEEE Transactions on Industrial Informatics. 2013, 10 (1): 222-232
- Sun J, Palade V, Wu X, Fang W: Multiple sequence alignment with Hidden Markov Models learned by random drift particle swarm optimization. IEEE/ACM Transactions on Computational Biology and Bioinformatics. 2014, 10.1109/TCBB.2013.148
- Box GEP, Hunter WG, MacGregor JF, Erjavec J: Some problems associated with the analysis of multiresponse data. Technometrics. 1973, 15: 33-51. 10.1080/00401706.1973.10489009
- Holzinger A: On knowledge discovery and interactive intelligent visualization of biomedical data: Challenges in human-computer interaction & biomedical informatics. In DATA - International Conference on Data Technologies and Applications. 2012, 5-16
- Holzinger A, Yildirim P, Geier M, Simonic KM: Quality-based knowledge discovery from medical text on the Web. Example of computational methods in Web intelligence. Quality Issues in the Management of Web Information, ISRL. 2013, 50: 145-158. 10.1007/978-3-642-37688-7_7
- Kennedy J, Eberhart R: Particle swarm optimization. In Proceedings of IEEE International Conference on Neural Networks (ICNN 95). 1995, Perth, Australia, 1942-1948
- Angeline PJ: Evolutionary optimization versus particle swarm optimization: philosophy and performance differences. In Proceedings of the 7th International Conference on Evolutionary Programming VII. Springer-Verlag. 1998, 601-610
- Shi YH, Eberhart R: A modified particle swarm optimizer. In Proceedings of IEEE International Conference on Evolutionary Computation. 1998, Anchorage, AK, 69-73
- Clerc M: The swarm and the queen: towards a deterministic and adaptive particle swarm optimization. In Proceedings of the 1999 Congress on Evolutionary Computation. 1999, 1951-1957
- Suganthan PN: Particle swarm optimiser with neighbourhood operator. In Proceedings of the 1999 Congress on Evolutionary Computation. 1999, 1962-1967
- Kennedy J: Bare bones particle swarms. In Proceedings of the IEEE Swarm Intelligence Symposium. 2003, 80-87
- Janson S, Middendorf M: A hierarchical particle swarm optimizer and its adaptive variant. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics. 2005, 35: 1272-1282
- Clerc M, Kennedy J: The particle swarm - explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation. 2002, 6: 58-73. 10.1109/4235.985692
- Storn R, Price K: Differential evolution - A simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization. 1997, 11: 341-359. 10.1023/A:1008202821328
- Glover F: Heuristics for integer programming using surrogate constraints. Decision Sciences. 1977, 8 (1): 156-166. 10.1111/j.1540-5915.1977.tb01074.x
- Beyer HG, Schwefel HP: Evolution strategies - A comprehensive introduction. Natural Computing. 2002, 1: 3-52. 10.1023/A:1015059928466
- Runarsson TP, Yao X: Stochastic ranking for constrained evolutionary optimization. IEEE Transactions on Evolutionary Computation. 2000, 4: 284-294. 10.1109/4235.873238
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.