Dynamic sensitivity analysis of biological systems
BMC Bioinformatics volume 9, Article number: S17 (2008)
Abstract
Background
A mathematical model to understand, predict, control, or even design a real biological system is a central theme in systems biology. A dynamic biological system is typically modeled as a nonlinear ordinary differential equation (ODE) system, and simulating the dynamic behavior and dynamic parameter sensitivities of such systems efficiently and accurately is a critical task. In many practical applications, e.g., fed-batch fermentation systems, the admissible system input (corresponding to the independent variables of the system) can be time-dependent. The main difficulty in investigating the dynamic log gains of these systems is the infinite dimensionality introduced by the time-dependent input. Classical dynamic sensitivity analysis does not cover this case for dynamic log gains.
Results
We present an algorithm with adaptive step size control that computes the solution and dynamic sensitivities of an autonomous ODE system simultaneously. Although our algorithm is one of the decoupled direct methods for computing dynamic sensitivities of an ODE system, the step size determined by the model equations can be used to compute the time profile and dynamic sensitivities with moderate accuracy even when the sensitivity equations are stiffer than the model equations. To show that this algorithm can perform dynamic sensitivity analysis on very stiff ODE systems with moderate accuracy, it is implemented and applied to two sets of chemical reactions: pyrolysis of ethane and oxidation of formaldehyde. The accuracy of this algorithm is demonstrated by comparing the dynamic parameter sensitivities obtained from this new algorithm with those obtained from the direct method with a Rosenbrock stiff integrator and from the indirect method. The same dynamic sensitivity analysis was performed on an ethanol fed-batch fermentation system with a time-varying feed rate to evaluate the applicability of the algorithm to realistic models with time-dependent admissible input.
Conclusion
By combining the demonstrated accuracy with the efficiency of a decoupled direct method, our algorithm is an excellent method for computing dynamic parameter sensitivities in stiff problems. We extend the scope of classical dynamic sensitivity analysis to the investigation of dynamic log gains of models with time-dependent admissible input.
Background
A mathematical model to understand, predict, control, or even design a real biological system is a central theme in systems biology. The most often used mathematical models for dynamic biological systems are formulated as nonlinear ordinary differential equations (ODEs). The critical challenges in obtaining an ODE model are structure identification and parameter estimation. To identify the structure and parameters of a dynamic model, the most essential task is to find the solution of the ODEs efficiently and accurately. This task can be treated by analytical or numerical methods. Analytical methods are limited to certain special forms of ODEs, whereas numerical methods have no such limitations. Several numerical methods can be used to solve ODEs [1–3], e.g., Taylor-series methods, modified Euler methods, and Runge-Kutta methods with variable step size control. The Taylor-series method takes more computation time to solve ODEs if the various derivatives are complicated, and its error is difficult to determine for arbitrary functions. The modified Euler method is a special case of a second-order Runge-Kutta method and is more efficient than the Taylor-series method [4]. The fourth-order Runge-Kutta method is the most widely used ODE solver meeting requirements on both efficiency and accuracy. Collocation methods [5] are another common class of algorithms for solving ODEs and have been used for more than forty years. Wang [6] proposed a modified collocation method that transforms ODEs into algebraic equations and solves them by the Newton-Raphson method or by an iteration method with step length restriction. The restricted step size is fixed and computed by trial and error. To overcome this drawback, we propose an adaptive step size control approach based on the Banach fixed point theorem for the modified collocation method in this paper.
Different types of gains and dynamic sensitivities are defined for sensitivity analysis [7–9]. The relative change of a dependent variable in response to a relative change in an independent variable is called a logarithmic gain, or log gain. Log gains describe the change of dependent variables due to a change in the environment and are very useful for assessing the robustness of a model and for parameter estimation. The change of a dependent variable in response to a change in a parameter is called a parameter sensitivity. In contrast to log gains, parameter sensitivities describe the change of dependent variables corresponding to a structural change in the model. Biochemical Systems Theory (BST) [10] and Metabolic Control Analysis (MCA) [11–13] have achieved great success in addressing sensitivities at a steady state. However, the transient or periodic behavior is the primary interest in many systems (e.g., oscillating systems and fermentation systems that do not have a steady state). In these systems, the parameter sensitivities and log gains change with time, so the calculation methods for steady-state responses cannot be applied. Dynamic sensitivity analysis is used to study time-varying sensitivities in dynamic biological systems. A dynamic biological system can be characterized by logarithmic gains and by sensitivities with respect to parameters and initial conditions. Several methods have been published to evaluate dynamic sensitivities [14–24]. They can be divided into the indirect methods (IDMs) and the direct methods (DMs). In the IDMs, the value of one dedicated parameter is varied infinitesimally while the values of the other parameters are fixed. The model equations are solved anew for these sets of parameter values that differ in the value of the dedicated parameter only.
The sensitivity of each variable with respect to this dedicated parameter is computed as the difference between the solutions of that variable for the two sets of parameter values divided by the infinitesimal difference of the dedicated parameter. In the DMs, the most common approach is to use an ODE solver to solve the model equations and sensitivity equations simultaneously. Shiraishi et al. [25] published a variable-order, variable-step Taylor-series method that can be used as an ODE solver providing a highly accurate computation of dynamic sensitivities. This method is limited to general mass action (GMA) models described by power-law differential equations. Runge-Kutta methods with variable step size control can compute dynamic sensitivities for most nonlinear differential equations, but determining the step size is inefficient in a high-dimensional system comprising both the model differential equations and the sensitivity differential equations. For efficiency, Dunker [15] proposed the decoupled direct method (DDM), in which the sensitivity equations are solved separately from the model equations. He stated: "the decoupled method has advantages in simplicity, stability, accuracy, efficiency, storage requirements, and program size over other methods". Despite these advantages, the step size for the time profile determined by the error control of the model equations cannot be used for the sensitivities when the sensitivity equations are stiffer than the model equations, and doing so will generate inaccurate results.
Dynamic sensitivity analysis evaluates the influence on dependent variables of variations in parameters, initial conditions, and independent variables. In many practical applications, e.g., fed-batch fermentation systems, the admissible system input (corresponding to the independent variables of the system) can be time-dependent. The main difficulty in investigating the dynamic log gains of these systems is the infinite dimensionality introduced by the time-dependent input. Shiraishi et al. [26] proposed an efficient method, based on a combination of the recasting technique and the Taylor-series method, for calculating the time courses of log gains to investigate the dynamic behavior of log gains in oscillation models with a limit cycle. The method is limited to computing dynamic log gains with respect to constant independent variables. We extend the computation of dynamic log gains to a model with continuous time-varying admissible input based on the finite parameterization method (PM). The classical PM was created for numerical solutions of optimal control problems [27]. The central idea of the method relies on a simple approximation mechanism: the whole time domain of a continuous admissible input is partitioned into several subintervals, and the input on each subinterval is approximated by a piecewise constant function. The dynamic log gains with respect to the continuous admissible input can then be computed from the partial derivatives of the dependent variables with respect to the piecewise constant input [28–30].
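The piecewise-constant approximation at the heart of the PM can be sketched in a few lines of Python. The segment count, the midpoint sampling rule, and all function names here are illustrative assumptions, not part of the cited method:

```python
import numpy as np

def parameterize(u, t0, tf, n_seg):
    """Approximate a continuous input u(t) on [t0, tf] by n_seg
    piecewise-constant segments (here: the value of u at each
    segment midpoint, one simple choice of level)."""
    edges = np.linspace(t0, tf, n_seg + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    levels = np.array([u(t) for t in mids])

    def u_pc(t):
        # locate the segment containing t and return its constant level
        i = min(np.searchsorted(edges, t, side="right") - 1, n_seg - 1)
        return levels[max(i, 0)]

    return edges, levels, u_pc

# usage: approximate a ramp input u(t) = 2t on [0, 1] by 10 constants
edges, levels, u_pc = parameterize(lambda t: 2.0 * t, 0.0, 1.0, 10)
```

The finite vector `levels` then plays the role of the piecewise constant input parameters with respect to which the dynamic log gains are computed.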
In this paper, we present an algorithm with adaptive step size control that computes the solution and dynamic sensitivities of an autonomous ODE system simultaneously. The algorithm is the modified collocation method proposed by Wang [6], extended with an adaptive step size control approach. Although our algorithm is one of the decoupled direct methods for computing dynamic sensitivities of an ODE system, the step size determined by the model equations can be used to compute the time profile and dynamic sensitivities with moderate accuracy even when the sensitivity equations are stiffer than the model equations. In the algorithm, the modified collocation method transforms the model and sensitivity equations into algebraic equations, and the approximate solution is obtained by an iteration method. The algorithm can easily be extended to problems of mixed differential and algebraic equations (DAEs) by combining the algebraic equations with those transformed from the differential equations. In our algorithm for computing dynamic sensitivities of an ODE system, the model equations and sensitivity equations are solved alternately in two stages. First, the model equations are advanced from t_i to t_i + η using the iteration method, where η is the step size decided by the model equations based on the fixed-point theorem. Second, the solution of the model equations at t_i + η and the same step size are used to propagate the sensitivity equations from t_i to t_i + η. For dynamic systems with continuous time-dependent admissible input, the dynamic log gains are computed by parameterization techniques. The PM is used to approximate the original infinite-dimensional problem by a finite-dimensional one with piecewise constant input. The dynamic log gain for this approximate problem is defined as the percentage change of a dependent variable in response to an infinitesimal percentage change of each piecewise constant input.
To show that this algorithm can perform dynamic sensitivity analysis on very stiff ODE systems with moderate accuracy, it is implemented and applied to two sets of chemical reactions: pyrolysis of ethane and oxidation of formaldehyde. The accuracy of this algorithm is demonstrated by comparing the dynamic parameter sensitivities from this new method with those from the direct method with a Rosenbrock stiff integrator and from the indirect method. The same dynamic sensitivity analysis is performed on an ethanol fed-batch fermentation system with a time-varying feed rate to evaluate the applicability of the algorithm to realistic models with time-dependent admissible input.
Results and discussion
To illustrate the accuracy of our algorithm, it is implemented and applied to stiff chemical mechanisms for the pyrolysis of ethane and the oxidation of formaldehyde. These systems have been shown to be unstable using both the DM and the Green's function method [15]. The same dynamic sensitivity analysis is performed on an ethanol fed-batch fermentation system with a time-varying feed rate to evaluate the applicability of the algorithm to realistic models with time-dependent admissible input.
Pyrolysis of ethane
The chemical mechanism for the pyrolysis of ethane is a very stiff system and consists of seven species in five reactions. The chemical reactions and rate constants are shown in Table 1 and are described by GMA model equations as follows:
where [x] is the concentration of species x and k_i is the rate constant. The initial concentration of C_{2}H_{6} is 5.951 × 10^{-6} mol/cm^{3} and all other initial concentrations are zero. All sets of sensitivity coefficients with respect to all rate constants and initial conditions are computed simultaneously without any difficulty using our algorithm with a tolerance of 10^{-7}. The normalized sensitivity coefficients for the pyrolysis of ethane at 1 s and 20 s calculated by our algorithm are shown in Table 2. The results obtained by the indirect method (IDM) based on the finite difference approximation and by the direct method with Rosenbrock stiff integrator (R/DM) [24] are also given in Table 2 for comparison. Measured against the IDM, the results of our algorithm are of equal accuracy to those of the R/DM, and the maximum relative error is 0.58%.
Oxidation of formaldehyde
The formaldehyde oxidation mechanism is a larger system, involving 15 species in 25 reactions. The chemical reactions and rate constants are shown in Table 3 and are described by GMA model equations as follows:
where [x] is the concentration of species x and k_i is the rate constant. The initial concentrations in mol/cm^{3} are [CH_{2}O] = 1.124 × 10^{-7}, [O_{2}] = 2.109 × 10^{-6}, [CO] = 4.699 × 10^{-6}, [M] = 1.1772 × 10^{-5}, and all other initial concentrations are zero. Sensitivity coefficients with respect to all rate constants and initial conditions are computed successfully using our algorithm with a tolerance of 10^{-9}. The normalized sensitivity coefficients for O and H_{2}O at 0.005 s calculated by our algorithm are presented in Table 4. The results obtained by the IDM and by the direct method with Rosenbrock stiff integrator (R/DM) are also given in Table 4 for comparison. Our results are in good agreement with those of the R/DM when measured against the IDM, and the maximum relative error is 0.25%. The discrepancies between the results of our algorithm and the R/DM are sufficiently small to show that this new method can perform dynamic sensitivity analysis for stiff differential equations as accurately as the direct methods.
Ethanol fedbatch fermentation
The dynamic sensitivity analysis of an ethanol fed-batch fermentation process, a real dynamic biological system that never reaches a steady state, is used to elucidate the applicability of our algorithm. Wang et al. [31] built a mathematical kinetic model of fermentation for ethanol and glycerol production using Saccharomyces diastaticus LORRE 316, a yeast with high ethanol tolerance. The mathematical kinetic model for the fed-batch process describes the dynamic behavior of biomass, glucose, ethanol, and glycerol, and its dynamic mass balance equations are expressed as follows:
where x is the concentration of cell mass, s is the concentration of glucose, p_{1} is the concentration of ethanol, p_{2} is the concentration of glycerol, V is the working volume of the fermenter, t_{ f }is the final fermentation time, τ = t/t_{ f }is the normalized fermentation time, s_{ F }is the feed concentration of glucose, F is the feed rate, ${Y}_{{p}_{1}/s}$ is the ethanol yield factor, and ${Y}_{{p}_{2}/s}$ is the glycerol yield factor. The unstructured kinetic models for the specific cell growth and product formation are respectively expressed as follows:
Using a batch fermentation model, Wang et al. obtained the optimal values of 19 parameters [31]. The initial and feed concentrations of glucose are set to 10 and 200 g/L respectively, the initial concentration of biomass is set to 2 g/L, and the starting working volume is set to 1.5 L in the computation of the optimal feed rate and optimal fermentation time that maximize the ethanol production rate J = p_{1}V/t_{f} under physical constraints such as the residual glucose restriction s(t_{f}) ≤ s_{r} for reducing the separation cost, where s_{r} is the desired residual glucose concentration. The optimal final fermentation time is 12.836 hours and the optimal feed rate F* for the fed-batch fermentation model [31] is as follows:
Figure 1 shows the computational time profile of the ethanol fedbatch fermentation model with the optimal feed rate.
Our algorithm is applied to the ethanol fed-batch fermentation model using the initial conditions described above. All dynamic sensitivities with respect to the 22 parameters (including s_{F}, F and t_{f}) and the initial conditions, and the dynamic log gains with respect to the time-varying feed rate, are computed simultaneously without any difficulty. Figure 2 shows the dynamic relative sensitivities with respect to μ_{m}, ${Y}_{{p}_{1}/s}$, ${Y}_{{p}_{2}/s}$, and s_{F}. When the maximum specific growth rate μ_{m} increases, the rate of glucose consumption increases, so the concentration of residual glucose decreases. This is consistent with the trend in Figure 2(a). Increases in the ethanol and glycerol yield factors increase the production of ethanol and glycerol, and more glucose remains at the final time. As Figures 2(b) and 2(c) show, increasing the production of ethanol and glycerol by improving the ethanol yield factor is more effective than increasing the glycerol yield factor. Figure 2(d) shows that if the feed concentration of glucose increases, the cell growth and the production of ethanol and glycerol increase. Under this condition, S. diastaticus LORRE 316 is unable to consume the glucose completely to produce ethanol during the fermentation time, and more glucose remains at the final time.
The relative sensitivity with respect to t_{f} is shown in Figure 3(a). As expected, an increase in t_{f} causes a small relative increase in the concentration of cell mass and a large relative decrease in the concentration of residual glucose. Figures 3(b), 3(c) and 3(d) show the dynamic relative sensitivities with respect to the initial conditions x, s, and V. When the initial concentration of cell mass increases, the residual glucose decreases, the production of ethanol increases slightly, and the production of glycerol decreases slightly at the final fermentation time. Starting the fermentation process with more glucose causes more glucose to remain and the production of ethanol and glycerol to increase slightly at the final fermentation time, as shown in Figure 3(c). Figure 3(d) shows that if the initial working volume increases, the concentrations of cell mass, glucose, ethanol, and glycerol all decrease at the final fermentation time.
We are interested in the ethanol production rate J of the fermentation process. The effects on J of μ_{m}, ${Y}_{{p}_{1}/s}$, ${Y}_{{p}_{2}/s}$, s_{F}, and t_{f} are shown in Figure 4. To increase J, an increase in ${Y}_{{p}_{1}/s}$ or s_{F} clearly has more impact than an equal relative increase in ${Y}_{{p}_{2}/s}$ or μ_{m}. The negative relative sensitivity of J with respect to t_{f} means that a decrease in the fermentation time yields a higher J at the expense of more residual glucose. Although the relative sensitivity of J with respect to s_{F} is higher than that with respect to ${Y}_{{p}_{1}/s}$ at the final fermentation time, increasing s_{F} to increase J leaves more glucose at the final time and increases the cost of separating the residual glucose from the ethanol product. We conclude that increasing ${Y}_{{p}_{1}/s}$ is a better way to increase J than increasing s_{F} or ${Y}_{{p}_{2}/s}$ or decreasing the fermentation time.
The feed rate F(t) of the fed-batch fermentation model is a time-dependent input control variable, so the computation of the effect of F(t) on J is an infinite-dimensional problem. The fermentation time is divided into ten equal partitions, and the optimal feed rate F* for the fed-batch fermentation model [31] is approximated by ten piecewise constant functions. The ten input control parameters, denoted by F_{i}, i = 1,..., 10, are shown in equation (1). The proposed algorithm computes the dynamic log gains based on the parameterization method. All dynamic log gains with respect to F_{i}, i = 1,..., 10, are computed (data not shown) together with the parameter sensitivities. The dynamic log gains of J with respect to F_{i}, i = 1,..., 10, are computed by
Because the optimal values of F_{i}, i = 6,..., 10, are equal or very close to zero, the dynamic log gains of J with respect to them are small and can be ignored. The dynamic log gains of J with respect to F_{i}, i = 1,..., 5, are shown in Figure 5. The effects on J are in decreasing order from F_{1} to F_{5}. Increasing the feed rate at an early stage yields a higher J at the final fermentation time than increasing it at a later stage, if the residual glucose is not considered.
Conclusion
To study the dynamic behavior of a biological system in depth, one approach is to describe it by a mathematical model. The most widely used mathematical model for simulating biological systems is the ODE model. The essential task in modeling and simulating a biological system is to find the solution of an ODE model efficiently and accurately. We present an algorithm with adaptive step size control that computes the solution and dynamic sensitivities of an ODE system simultaneously. Instead of using error control to decide the step size for solving the model equations, our algorithm computes the step size based on the fixed-point theorem, and the same step size can be used for solving the sensitivity equations.
Dynamic sensitivity analysis is a useful tool for investigating the behavior of dynamic systems. In the direct methods for solving the dynamic sensitivities, the sensitivity equations and model equations are coupled and solved together at the expense of more computation time. In contrast, the sensitivity equations and model equations are solved separately in the decoupled direct methods. The DDMs are more efficient than the DMs because the dimension of the ODE system solved at each stage is smaller. The chief disadvantage of the DDMs is the requirement of error control on both the model equations and the sensitivity equations. Our algorithm, with an efficient step size control approach based on the fixed-point theorem, addresses this disadvantage. Analogously to the DMs, the same step size obtained from the model equations is used for both the model and sensitivity equations. The algorithm has been implemented and applied to well-known stiff problems and achieves the same accuracy as the direct method with Rosenbrock stiff integrator (R/DM). As one of the DDMs, it has the efficiency of the DDMs and the accuracy of the DMs, as presented in the Results section. Combining efficiency and accuracy, our algorithm is an excellent method for computing dynamic parameter sensitivities in stiff problems.
We extend the scope of classical dynamic sensitivity analysis to the investigation of dynamic log gains of models with time-dependent admissible input. The parameterization method approximates the infinite-dimensional problem of computing dynamic log gains for models with time-dependent admissible input by a classical finite-dimensional one. All dynamic log gains and parameter sensitivities can then be obtained simultaneously from our algorithm. Appropriate parameterization yields a more efficient way to compute the dynamic log gains with respect to a continuous time-dependent input than the finite difference approximation. Finally, the proposed algorithm is applied to the ethanol fed-batch fermentation system, a real dynamic biological system that never reaches a steady state, with a time-varying feed rate, to elucidate its applicability to realistic models with time-dependent admissible input. Through the dynamic sensitivity analysis of the ethanol fed-batch fermentation model, we conclude that increasing the ethanol yield factor is a good choice for achieving a higher ethanol production rate.
Methods
A dynamic biological system is always modeled as a nonlinear ODE system:
where x(t) ∈ ℝ^{n} is a vector of dependent variables, y(t) ∈ ℝ^{m} is a vector of independent variables, θ ∈ ℝ^{p} is a vector of parameters, v ∈ ℝ^{q} is a vector of fluxes between the variables, N ∈ ℝ^{(n+m) × q} is the stoichiometric matrix describing the interconnecting fluxes, and x_{0} and y_{0} are the initial concentrations of x and y, respectively. Using a different kinetic description for v results in a different mathematical model. In the Michaelis-Menten model, each element v_{i} of v is of the form
where x_{j} is the substrate, ${V}_{i}^{max}$ is the maximum flux for v_{i}, and K_{i} is the half-saturation constant for v_{i}. For GMA systems, the kinetic equation for v_{i} is expressed as a power-law function
where x_{j} ∈ x for j ≤ n, x_{j} ∈ y for j > n, γ_{i} is the rate constant, and g_{ij} is the kinetic order for each x_{j}. Equation (2) can be expressed concisely as:
where the function f is assumed to be continuous and differentiable in all its arguments x, y, and θ. This assumption on f is satisfied by both equations (3) and (4).
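As an illustration of the power-law form of equation (4), the GMA fluxes and their analytic partial derivatives can be sketched as follows; the function names and the small example system are ours, not from the original implementation:

```python
import numpy as np

def gma_flux(x, gamma, G):
    """Power-law (GMA) fluxes: v_i = gamma_i * prod_j x_j**G[i, j].
    x: concentrations (assumed positive), gamma: rate constants,
    G: matrix of kinetic orders g_ij."""
    return gamma * np.prod(x ** G, axis=1)

def gma_flux_jacobian(x, gamma, G):
    """Analytic partial derivatives dv_i/dx_j = g_ij * v_i / x_j,
    which follow directly from differentiating the power-law form."""
    v = gma_flux(x, gamma, G)
    return G * (v[:, None] / x[None, :])

# usage: a single flux v = 2 * x1^1 * x2^0.5 at x = (4, 9)
x = np.array([4.0, 9.0])
gamma = np.array([2.0])
G = np.array([[1.0, 0.5]])
v = gma_flux(x, gamma, G)            # 2 * 4 * 3 = 24
J = gma_flux_jacobian(x, gamma, G)   # [[6.0, 4/3]]
```

Because the derivative of a power law is again proportional to the flux itself, the Jacobian costs essentially one extra elementwise division once v is known.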
For a model described by a nonautonomous dynamic system as follows:
we let x_{n+1} = t and dx_{n+1}/dt = 1, so that equation (6) can be rewritten as equation (5) with x(t) ∈ ℝ^{n+1}. This is an (n + 1)-dimensional autonomous dynamic system. Similarly, an n-dimensional time-dependent equation is a special case of an (n + 1)-dimensional autonomous dynamic system. Using this trick, we can always remove any time dependence by adding an extra dimension to the system. Thus, without loss of generality, we consider the autonomous dynamic systems expressed by equation (5) unless stated otherwise.
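The augmentation trick above takes only a few lines to implement; `autonomize` is an illustrative helper name, not part of any published code:

```python
import numpy as np

def autonomize(f):
    """Turn a non-autonomous RHS f(t, x) into an autonomous RHS
    F(z) on the extended state z = (x_1, ..., x_n, t), using
    x_{n+1} = t and dx_{n+1}/dt = 1."""
    def F(z):
        x, t = z[:-1], z[-1]
        return np.append(f(t, x), 1.0)   # extra component: dt/dt = 1
    return F

# usage: dx/dt = -t * x becomes a 2-dimensional autonomous system
F = autonomize(lambda t, x: -t * x)
z = np.array([2.0, 3.0])   # x = 2 at t = 3
dz = F(z)                  # (-6.0, 1.0)
```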
ODE solver
Given a set of ODEs expressed as equation (5) and a set of time points T = {t_{i} | i = 1,..., k}, an ODE solver finds the value of x(t_{i}), t_{i} ∈ T, for a given θ. Many ODE solvers with variable step size control can be used to solve equation (5). Wang [6] proposed a modified collocation method with Lagrange polynomials as shape functions to transform the ODEs into algebraic equations. The whole time domain of the problem is divided into a number of non-overlapping intervals [t_{i-1}, t_{i}], i = 1,..., k. The unified formulas of the modified collocation method for the subinterval [t_{j-1}, t_{j}], t_{i-1} ≤ t_{j-1} < t_{j} ≤ t_{i}, can be expressed as
where η_{j} is the step size at time t_{j}, Î is an identity-like matrix, and the coefficient matrices D and $\widehat{D}$ depend on the shape functions. The accuracy and efficiency of collocation methods depend largely on the degree of the shape functions. The modified collocated equations with piecewise linear polynomials transformed from equation (5) for each subinterval [t_{j-1}, t_{j}], t_{i-1} ≤ t_{j-1} < t_{j} ≤ t_{i}, have the same form as the modified Euler method:
Instead of solving the ODEs in equation (5) directly, we find the solution of the algebraic equations in equation (8) step by step for each time interval [t_{i-1}, t_{i}], i = 1,..., k. The solution obtained from equation (8) is a good approximation to the solution of the ODEs in equation (5) when the step size η_{j} is small enough. How to decide the step size is an important problem for all ODE solvers: too large a step size can make the solution inaccurate or divergent, while too small a step size is computationally inefficient. Wang [6] used the Newton-Raphson method with step length restriction to solve equation (8), where the restricted step size is fixed and computed by trial and error. To overcome this drawback, we propose an adaptive step size control approach based on the Banach fixed point theorem for the modified collocation method. We will show that this approach can determine the step size automatically and efficiently when computing the solutions and dynamic sensitivities of equation (5) simultaneously.
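For concreteness, a fixed-point iteration for the implicit equation (8) with piecewise linear shape functions (i.e., the trapezoidal/modified-Euler form) might look like the following sketch; the tolerance, predictor, and names are illustrative choices:

```python
import numpy as np

def trapezoidal_fixed_point(f, x_prev, eta, tol=1e-10, max_iter=100):
    """Advance x by one step of size eta by iterating the implicit
    trapezoidal (modified-Euler) equation
        x = x_prev + 0.5*eta*(f(x_prev) + f(x))
    to a fixed point."""
    c = x_prev + 0.5 * eta * f(x_prev)   # constant part of g(x)
    x = x_prev + eta * f(x_prev)         # explicit Euler predictor
    for _ in range(max_iter):
        x_new = c + 0.5 * eta * f(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# usage: dx/dt = -x from x(0) = 1, one step of size 0.1
x1 = trapezoidal_fixed_point(lambda x: -x, np.array([1.0]), 0.1)
```

For this linear test problem the iteration converges to the exact trapezoidal value (1 − 0.05)/(1 + 0.05), since the iteration map is a contraction with factor 0.5·η = 0.05.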
The Banach fixed point theorem and some terminologies for describing the theorem are defined below.
Definition 1. Metric space [32]
A metric space (X, d) is a set X where a notion of distance d (called a metric) between elements of the set is defined.
Definition 2. Cauchy sequence [32]
A sequence (x_{ n }) in a metric space (X, d) is said to be Cauchy if for every ε > 0 there is a positive integer N such that for all natural numbers m, n >N, the distance d(x_{ m }, x_{ n }) is less than ε.
Definition 3. Complete metric space [32]
A metric space (X, d) in which every Cauchy sequence has a limit in X is called complete.
Theorem 4. Banach fixed point theorem [32]
Let (X, d) be a nonempty complete metric space. A mapping ψ: X → X is called a contraction on X if there is a nonnegative real number q < 1 such that for all a, b in X
d(ψ(a), ψ(b)) ≤ q·d(a, b). (9)
The contraction ψ on X admits one and only one fixed point x* ∈ X such that ψ(x*) = x*.
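A minimal numeric illustration of the theorem: ψ(x) = cos x is a contraction on [0, 1] (|ψ'(x)| ≤ sin 1 < 1), so iterating it converges to its unique fixed point. The helper name and tolerances below are our own choices:

```python
import math

def banach_iterate(psi, x0, tol=1e-12, max_iter=1000):
    """Iterate x <- psi(x); for a contraction, the Banach fixed point
    theorem guarantees convergence to the unique fixed point."""
    x = x0
    for _ in range(max_iter):
        x_new = psi(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence within max_iter")

# usage: the unique fixed point of cos on [0, 1]
x_star = banach_iterate(math.cos, 0.5)
```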
We now show how to decide the step size η_{ j }in equation (8) based on the Banach fixed point theorem. Equation (8) is an implicit expression of x, and it can be rewritten as
where c = x(t_{j-1}) + 0.5η_{j} f(x(t_{j-1}), y(t_{j-1}); θ) is a constant vector. When the values of the independent variables y and of θ are given, the solution of x = g(x) can be found by an iteration process. A sequence of values of x(t_{j}) is obtained using the iterative rule. If x^{i}(t_{j}) tends to a limit x*(t_{j}) as i → ∞, this limit is the solution of x = g(x) and is called a fixed point of the function g(x). Let X be the set of x^{i}(t_{j}), i = 1,..., ∞, and d(a, b) = ‖a − b‖_{p}, where a and b are arbitrary elements x^{i}(t_{j}) and ‖·‖_{p} is the p-norm. Then (X, d) forms a nonempty complete metric space. If g is a contraction on X, the Banach fixed point theorem guarantees the existence of a fixed point and the convergence of the iteration process to that fixed point. By equation (9) and the definition of the distance function d, for a, b ∈ X we obtain
‖g(a) − g(b)‖_{p} ≤ q‖a − b‖_{p}, q < 1. (11)
We suppose that g(x) is a continuous and differentiable function on X. By the generalized mean value theorem and the definition of matrix norm, we have
Comparing equations (11) with (12), we obtain
where ‖·‖_{p} is the p-norm of a matrix. Substituting the right-hand side of equation (10) for g, equation (13) can be rewritten as
This equation is used to compute the maximum η_{j} with p = 2 while the process of finding the solution of equation (8) is in progress. The Jacobian matrix ∂f/∂x in equation (14) must be evaluated at each time t_{j}. The Jacobian matrix can be computed by evaluating the analytic formula of the partial derivative of f with respect to x, which is user-provided, or by the finite difference approximation. For GMA systems, the model equations are expressed in power-law form and the value of the Jacobian matrix can be obtained straightforwardly using the analytic formula as follows:
where N is the stoichiometric matrix, v_{i} is the i-th element of v ∈ ℝ^{q}, and g_{ij} is the kinetic order for each x_{j} ∈ x in v_{i}. For efficiency, we bound the 2-norm of the Jacobian matrix ∂f/∂x from above by $\sqrt{n(m+n)}{\left\Vert \partial f/\partial x\right\Vert}_{\Delta}$, where n and m are the dimensions of x and y, and ${\left\Vert \partial f/\partial x\right\Vert}_{\Delta}$ is the maximum absolute value of the elements of the Jacobian matrix. The proposed algorithm AMCM, Adaptive Modified Collocation Method, is shown as follows
Algorithm AMCM
Input:

1.
A set of n ordinary differential equations $\dot{x}$ = f(x, y) with n dependent variables x_{ i }, i = 1,..., n and m independent variables y_{ i }, i = 1,..., m.

2.
Two ordered sets x_0 = {x_i(t_0) | i = 1,..., n} of initial values of x and y_0 = {y_i(t_0) | i = 1,..., m} of initial values of y.

3.
An ordered set T = {t_1,..., t_k} of sampling points, where t_i, 1 ≤ i ≤ k, is the sampling time for the solution of each ODE and k is the number of sampling points.

4.
A tolerance ε
Output: The set of solutions of dependent variables at each sampling time.

For each sampling time t_{ i }in T.

1.
η_j ← t_i − t_{i-1}, d_t ← 0, x^c ← x(t_{i-1}), y^c ← y(t_{i-1})

2.
Repeat the following steps until d_t = t_i − t_{i-1}.

(a)
x^{p}← x^{c}, y^{p}← y^{c}

(b)
Evaluate the Jacobian matrix $A\leftarrow \frac{\partial f({x}^{p},{y}^{p})}{\partial x}$

(c)
Compute the upper bound μ of the value of ‖A‖_2 by $\mu \leftarrow \sqrt{n(m+n)}{\Vert A\Vert}_{\Delta}$, where ${\Vert A\Vert}_{\Delta}\equiv \underset{i,j}{\mathrm{max}}|{a}_{ij}|$

(d)
If μ * ε ≥ 1, the ODEs are too stiff for the requested tolerance; exit this algorithm.

(e)
If μ * η_{ j }> 1, then update η_{ j }with 0.9/μ

(f)
Call the iteration algorithm to compute x^c by stepping forward η_j from x^p.

(g)
If the iteration algorithm succeeds in computing x^c, then d_t ← d_t + η_j and η_j ← t_i − t_{i-1} − d_t; otherwise exit this algorithm.

3.
x(t_{ i }) ← x^{c}, y(t_{ i }) ← y^{c}.

return x(t_{ i }), i = 1,..., k.
End of Algorithm AMCM
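As a concrete illustration of the driver loop above, the sketch below implements steps 1-3 in Python for an autonomous system with no independent variables and a user-supplied Jacobian. The names `amcm` and `step` are ours, the inner iteration is inlined for brevity, and fixed tolerances replace the user-supplied limits; this is a sketch of the control flow, not the authors' implementation.

```python
import numpy as np

def amcm(f, jac, x0, t_grid, eps=1e-8, safety=0.9):
    """AMCM sketch: between sampling times, adapt the internal step eta
    from the max-element bound on the Jacobian's 2-norm, then advance x
    by a trapezoidal fixed-point step."""

    def step(x, eta):
        # Algorithm Iteration: Euler predictor + trapezoidal corrections
        f0 = f(x)
        xn = x + eta * f0
        for _ in range(50):
            xk = x + 0.5 * eta * (f0 + f(xn))
            if np.linalg.norm(xk - xn) < 1e-12:
                return xk
            xn = xk
        raise RuntimeError("fixed-point iteration did not converge")

    x = np.asarray(x0, dtype=float)
    xs = [x]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        dt, eta = 0.0, t1 - t0
        while dt < t1 - t0:
            A = jac(x)
            mu = np.sqrt(A.size) * np.abs(A).max()  # bound on ||A||_2
            if mu * eps >= 1:                       # step (d): too stiff
                raise RuntimeError("ODEs too stiff for this tolerance")
            if mu * eta > 1:                        # step (e): shrink step
                eta = safety / mu
            x = step(x, eta)                        # steps (f)-(g)
            dt += eta
            eta = (t1 - t0) - dt
        xs.append(x)
    return np.array(xs)

# Example: dx/dt = -x with x(0) = 1, sampled at t = 0 and t = 1
sol = amcm(lambda x: -x, lambda x: np.array([[-1.0]]), [1.0], [0.0, 1.0])
```

Note that for this example the single trapezoidal step of size 1 converges to the fixed point x = 1/3, the trapezoidal-rule value; accuracy at the sampling points improves as the step size criterion forces smaller internal steps on stiffer Jacobians.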
Algorithm Iteration
Input:

1.
A set of n ordinary differential equations $\dot{x}$ = f(x, y).

2.
x(t), y(t), η_j, and the iteration limit.
Output: x(t + η_{ j }).

1.
Evaluate the value of f(x(t), y(t)).

2.
x(t + η_{ j }) ← x(t) + f(x(t), y(t)) * η_{ j }.

3.
y(t + η_{ j }) ← y(t) + f(x(t), y(t)) * η_{ j }.

4.
Repeat the following steps until the iteration limit is reached or the value of x(t + η_j) converges.

(a)
Evaluate the value of f(x(t + η_{ j }), y(t + η_{ j })).

(b)
x(t + η_{ j }) ← x(t) + 0.5 * η_{ j }* (f(x(t), y(t)) + f(x(t + η_{ j }), y(t + η_{ j })))

5.
If the iteration limit was reached without convergence, exit this algorithm with failure; otherwise, return x(t + η_j).
End of Algorithm Iteration
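The predictor-corrector scheme above can be written compactly. The following sketch (our own naming, assuming y is held constant over the step) shows one step of Algorithm Iteration.

```python
import numpy as np

def trapezoid_step(f, x, y, eta, max_iter=50, tol=1e-10):
    """One step of Algorithm Iteration: an explicit Euler predictor
    followed by fixed-point corrections of the trapezoidal rule
    x_new = x + 0.5*eta*(f(x, y) + f(x_new, y)).
    Sketch only; y is treated as constant over the step."""
    f0 = f(x, y)
    x_new = x + eta * f0                                # Euler predictor
    for _ in range(max_iter):
        x_next = x + 0.5 * eta * (f0 + f(x_new, y))     # step 4(b)
        if np.linalg.norm(x_next - x_new) < tol:
            return x_next                               # converged fixed point
        x_new = x_next
    raise RuntimeError("iteration limit reached without convergence")

# Example: dx/dt = -x; one step of size 0.1 from x = 1
x1 = trapezoid_step(lambda x, y: -x, np.array([1.0]), None, 0.1)
```

For this linear example the fixed point can be checked by hand: x1 = (1 − 0.05)/(1 + 0.05) ≈ 0.90476, the trapezoidal-rule value, and each correction shrinks the error by the contraction factor 0.5η = 0.05.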
Dynamic sensitivity Solver
For a model described by equation (5), the absolute parameter sensitivity s(x_{ i }, θ_{ j }) of dependent variable x_{ i }∈ x with respect to parameter θ_{ j }∈ θ is defined as
where x_{ i }(t; θ_{ j }+ Δθ_{ j }) is the i^{th}component of the solution of equation (5) with an increment Δθ_{ j }on the j^{th}parameter. The function x_{ i }(t; θ_{ j }+ Δθ_{ j }) can be expanded into a Taylor series as follows:
where 0 <ξ < 1. If Δθ_{ j }is sufficiently small, the last term of equation (16) can be truncated, leading to a linear approximation of x_{ i }(t; θ_{ j }+ Δθ_{ j }). Substituting the linear approximation of x_{ i }(t; θ_{ j }+ Δθ_{ j }) into equation (15) leads to
This is defined as the first-order local sensitivity of x_i with respect to θ_j [33]. The relative parameter sensitivity S(x_i, θ_j) of x_i with respect to θ_j is defined as
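The finite-difference definition above can be checked numerically for any model whose solution is known in closed form. The sketch below (the helper `fd_sensitivity` is our own, not from the paper) compares the forward-difference quotient against the exact first-order sensitivity for the scalar model x(t; θ) = e^{−θt}.

```python
import numpy as np

def fd_sensitivity(x_of_theta, theta, dtheta=1e-6):
    """Forward-difference approximation to the first-order local
    sensitivity s(x, theta) = dx/dtheta, i.e. the difference quotient
    of equation (15) with a small finite increment.  Illustrative
    sketch only."""
    return (x_of_theta(theta + dtheta) - x_of_theta(theta)) / dtheta

# Example: x(t; theta) = exp(-theta*t) at t = 2, whose exact
# sensitivity is dx/dtheta = -t*exp(-theta*t)
t, theta = 2.0, 0.5
approx = fd_sensitivity(lambda th: np.exp(-th * t), theta)
exact = -t * np.exp(-theta * t)
```

The truncation error of the forward difference is O(Δθ), which is why the text requires Δθ_j to be sufficiently small before dropping the higher-order Taylor terms.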
Similar to the parameter sensitivities, the absolute log gain l(x_i, y_j) and the relative log gain L(x_i, y_j) of x_i ∈ x with respect to y_j ∈ y are expressed respectively as follows:
Once the local sensitivity is known, the calculation of the relative sensitivity is straightforward; for brevity, we therefore restrict the explanation below to the absolute sensitivities.
The absolute dynamic sensitivity of x_{ i }with respect to θ_{ j }is given as
where f_i is the i-th element of f [34]. The absolute dynamic log gain of x_i with respect to y_j is obtained from equation (17) by replacing θ_j and s(x_i, θ_j) with y_j and l(x_i, y_j), respectively, when y_j(t) is constant. In the case in which y_j(t) is a continuous time-dependent function, the whole time domain of y_j(t) is partitioned into N_u time intervals (t_{k-1}, t_k), k = 1,..., N_u. The function y_j(t) is parameterized by piecewise constant functions ω_k(t), k = 1,..., N_u, as follows:
where u_{ k }, k = 1,..., N_{ u }, are constant input control parameters and
The continuous time-dependent function y_j(t) is approximated by equation (18). The dynamic log gain l(x_i, y_j) of infinite dimension is thereby transformed into N_u one-dimensional dynamic log gains with respect to the input control parameters, expressed as
where j = 1,..., N_{ u }.
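The piecewise constant parameterization of equation (18) is straightforward to realize in code. The sketch below (our own helper, not from the paper) returns a callable y_j(t) that takes the value u_k on the interval (t_{k-1}, t_k].

```python
import numpy as np

def piecewise_constant(t_knots, u_values):
    """Parameterize a time-dependent input as in equation (18):
    y(t) = u_k for t in (t_{k-1}, t_k].  t_knots has N_u + 1 entries
    and u_values has N_u entries (the input control parameters).
    Illustrative sketch only."""
    interior = np.asarray(t_knots, dtype=float)[1:-1]   # interior knots
    u_values = np.asarray(u_values, dtype=float)

    def y(t):
        # index k of the interval (t_{k-1}, t_k] containing t
        return u_values[np.searchsorted(interior, t, side="left")]
    return y

# Example: a feed profile with three constant phases over [0, 3]
y = piecewise_constant([0.0, 1.0, 2.0, 3.0], [1.0, 2.5, 0.5])
```

With the input frozen to the finitely many parameters u_k, each dynamic log gain with respect to u_k is an ordinary one-dimensional sensitivity and can be computed by the same machinery as the parameter sensitivities.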
We extended the proposed algorithm AMCM to compute the dynamic sensitivities. The dynamic sensitivities with respect to parameters (including the initial conditions) and the absolute dynamic log gains can be computed simultaneously. Let u be the vector of input control parameters and r be an N_r-dimensional vector of constants which contains the constant independent variables in y and the input control parameters in u. When all components of y are constant, the vector r is equal to y. Let z be an (n + p + N_r)-dimensional vector which contains the model parameters θ, the initial conditions x_0 of the dependent variables, and the constant inputs in r. Φ denotes a matrix of size n × (n + p + N_r) which contains the absolute sensitivities with respect to the model parameters and initial conditions, and the absolute log gains with respect to the constant independent variables and input control parameters. z and Φ have the following form:
The model equations are rewritten as
The sensitivity equations in matrix form can be derived by applying the chain rule to the derivative of Φ:
Let ϕ_{ i }be the i^{th}column vector of Φ and z_{ i }be the i^{th}element of z. The matrix sensitivity equations in equation (19) are rearranged into a vector of linear ODEs:
where
In order to find the solution of equation (20), the whole time domain is divided into a number of nonoverlapping time intervals [t_{i-1}, t_i], i = 1,..., k. The sensitivity equations in equation (20) are transformed to algebraic equations using the modified collocation method with piecewise linear polynomials as shape functions for each subinterval [t_{j-1}, t_j], t_{i-1} ≤ t_{j-1} < t_j ≤ t_i:
where η_{ j }is the step size in time t_{ j }. This equation can be rewritten as:
where the constant vector
The value of w(t_j) can be evaluated once the values of x(t_j) have been computed by the ODE solver when solving equation (8). The value of M(t_j) in equation (22) must be calculated before the solution of equation (21) can be obtained by an iteration method. When the value of the Jacobian matrix ∂f/∂x at time t_j is known, the value of M(t_j) can be obtained straightforwardly. There is no need to compute the Jacobian matrix again here, because it has already been computed for step size control when solving equation (8) with the ODE solver.
Since equation (22) is similar to equation (10), we can apply the fixed point theorem to ϕ = h(ϕ) to obtain the maximum η_j satisfying the following condition:
According to the definition of the matrix norm, it is easy to verify that ‖M‖_p = ‖∂f/∂x‖_p, and we have
Equation (23) is the same as equation (14), so the same criterion is used to determine the step size η_{ j }for computing the time course and sensitivity profile with the guarantee of convergence and existence of a fixed point. The dynamic sensitivities can be obtained directly by solving equation (21) after the time profile of x has been computed and the step size η_{ j }has been decided by the ODE solver.
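The decoupled scheme can be illustrated on the scalar model dx/dt = −θx, whose sensitivity s = ∂x/∂θ obeys ds/dt = (∂f/∂x)s + ∂f/∂θ = −θs − x. In the sketch below (our own simplification: because the system is linear, each trapezoidal update is solved in closed form rather than by fixed-point iteration) the state is advanced first and the same step size is reused for the sensitivity, mirroring how equation (21) reuses the η_j chosen by the ODE solver.

```python
def solve_with_sensitivity(theta, x0, t_end, eta=1e-3):
    """Decoupled direct-method sketch for dx/dt = -theta*x.
    Each step advances the state x by the trapezoidal rule, then the
    sensitivity s = dx/dtheta with the same step size, using
    ds/dt = -theta*s - x.  Illustrative only."""
    x, s, t = x0, 0.0, 0.0
    while t < t_end - 1e-12:
        h = min(eta, t_end - t)
        # trapezoidal state step (linear, so solved in closed form)
        x_new = x * (1 - 0.5 * h * theta) / (1 + 0.5 * h * theta)
        # trapezoidal sensitivity step, reusing the known state values
        s = (s + 0.5 * h * (-theta * s - x - x_new)) / (1 + 0.5 * h * theta)
        x, t = x_new, t + h
    return x, s

# Example: theta = 0.5, x(0) = 1, integrate to t = 2; the exact values
# are x = exp(-1) and s = -2*exp(-1)
x, s = solve_with_sensitivity(theta=0.5, x0=1.0, t_end=2.0)
```

Because the sensitivity equations are linear in ϕ once x(t_j) is known, this second sweep costs only one linear update per step, which is the efficiency argument behind solving equation (21) after the time profile of x.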
References
 1.
Albrecht P: A New Theoretical Approach to Runge-Kutta Methods. SIAM Journal on Numerical Analysis. 1987, 24: 391-406. 10.1137/0724030.
 2.
Albrecht P: The Runge-Kutta Theory in a Nutshell. SIAM Journal on Numerical Analysis. 1996, 33: 1712-1735. 10.1137/S0036142994260872.
 3.
Inc M, Bildik N, Bulut H: A comparison of numerical ODE solvers based on Euler methods. Math Comput Appl. 1998, 3: 153-159.
 4.
Gerald CF, Wheatley PO: Applied numerical analysis. 1994, Addison-Wesley
 5.
Villadsen JV, Stewart WE: Solution of boundary-value problems by orthogonal collocation. Chem Eng Sci. 1967, 22: 1483-1501. 10.1016/0009-2509(67)80074-5.
 6.
Wang FS: A modified collocation method for solving differential-algebraic equations. Applied Mathematics and Computation. 2000, 116: 257-278. 10.1016/S0096-3003(99)00138-1.
 7.
Savageau MA: Parameter sensitivity as a criterion for evaluating and comparing the performance of biochemical systems. Nature. 1971, 229: 542-544. 10.1038/229542a0.
 8.
Savageau MA: The behavior of intact biochemical control systems. Current Topics in Cellular Regulation. 1972, 6: 63-129.
 9.
Voit EO: Computational analysis of biochemical systems: a practical guide for biochemists and molecular biologists. 2000, Cambridge, UK: Cambridge University Press
 10.
Savageau M: Biochemical Systems Analysis. A Study of Function and Design in Molecular Biology. 1976, Reading, MA: Addison-Wesley
 11.
Kacser H, Burns JA: The control of flux. Symp Soc Exp Biol. 1973, 27: 65-104.
 12.
Fell DA: Metabolic control analysis: a survey of its theoretical and experimental development. Biochem J. 1992, 286: 313-330.
 13.
Heinrich R, Schuster S: The Regulation of Cellular Systems. 1996, New York: Chapman & Hall
 14.
Caracotsios M, Stewart WE: Sensitivity Analysis of Initial Value Problems with Mixed ODEs and Algebraic Equations. Comput Chem Eng. 1985, 9: 359-365. 10.1016/0098-1354(85)85014-6.
 15.
Dunker AM: The decoupled direct method for calculating sensitivity coefficients in chemical kinetics. J Chem Phys. 1984, 81: 2385-2393. 10.1063/1.447938.
 16.
Dougherty EP, Hwang JT, Rabitz H: Further developments and applications of the Green's function method of sensitivity analysis in chemical kinetics. J Chem Phys. 1979, 71: 1794-1808. 10.1063/1.438530.
 17.
Dickinson RP, Gelinas RJ: Sensitivity analysis of ordinary differential equation systems. A direct method. J Comput Phys. 1976, 21: 123-143. 10.1016/0021-9991(76)90007-3.
 18.
Leis JR, Kramer MA: The simultaneous solution and sensitivity analysis of systems described by ordinary differential equations. ACM Trans Math Softw. 1988, 14: 45-60. 10.1145/42288.46156.
 19.
Mauch K, Arnold S, Reuss M: Dynamic sensitivity analysis for metabolic systems. Chemical Engineering Science. 1997, 52: 2589-2598. 10.1016/S0009-2509(97)00075-4.
 20.
Ghosh A, Miller D, Zou R, Pais H, Sokhansanj B, Kriete A: Integrated Spatiotemporal Model of Cell Signaling. FOSBE. 2005
 21.
Hwang JT, Dougherty EP, Rabitz S, Rabitz H: The Green's function method of sensitivity analysis in chemical kinetics. J Chem Phys. 1978, 69: 5180-5191. 10.1063/1.436465.
 22.
Ingalls BP, Sauro HM: Sensitivity analysis of stoichiometric networks: an extension of metabolic control analysis to non-steady state trajectories. Journal of Theoretical Biology. 2003, 222: 23-36. 10.1016/S0022-5193(03)00011-0.
 23.
Shen J: A direct method of calculating sensitivity coefficients of chemical kinetics. J Chem Phys. 1999, 111: 7209-7214. 10.1063/1.480049.
 24.
Zou R, Ghosh A: Automated sensitivity analysis of stiff biochemical systems using a fourth-order adaptive step size Rosenbrock integration method. IEE Proc Syst Biol. 2006, 153: 79-90. 10.1049/ip-syb:20050058.
 25.
Shiraishi F, Takeuchi H, Hasegawa T, Nagasue H: A Taylor-series solution in Cartesian space to GMA-system equations and its application to initial-value problems. Applied Mathematics and Computation. 2002, 127: 103-123. 10.1016/S0096-3003(01)00007-8.
 26.
Shiraishi F, Hatoh Y, Irie T: An efficient method for calculation of dynamic logarithmic gains in biochemical systems theory. Journal of Theoretical Biology. 2005, 234: 79-85. 10.1016/j.jtbi.2004.11.015.
 27.
Gorbunov VK: The parameterization method for optimal control problems. Comput Math Math Phys. 1979, 19: 212-224.
 28.
Gorbunov VK, Lutoshkin IV: The parameterization method in optimal control problems and differential-algebraic equations. Journal of Computational and Applied Mathematics. 2006, 185: 377-390. 10.1016/j.cam.2005.03.017.
 29.
Li R, Teo KL, Wong KH, Duan GR: Control parameterization enhancing transform for optimal control of switched systems. Mathematical and Computer Modelling. 2006, 43: 1393-1403. 10.1016/j.mcm.2005.08.012.
 30.
Teo KL, Jennings LS, Lee HW, Rehbock V: The control parameterization enhancing transform for constrained optimal control problems. J Austral Math Soc Ser B. 1999, 40: 314-335.
 31.
Wang FS, Su TL, Jang HJ: Hybrid differential evolution for problems of kinetic parameter estimation and dynamic optimization of an ethanol fermentation process. Ind Eng Chem Res. 2001, 40: 2876-2885. 10.1021/ie000544+.
 32.
Kreyszig E: Introductory Functional Analysis with Applications. 1989, Wiley
 33.
Varma A, Morbidelli M, Wu H: Parameter sensitivity in chemical systems. 1999, Cambridge: Cambridge University Press
 34.
Tomovic R, Vukobratovic M: General Sensitivity Theory. 1972, New York: American Elsevier
Acknowledgements
The financial support from the National Science Council, Taiwan, ROC (Grant NSC972221E194010MY3 and NSC972627B194001), is highly appreciated.
This article has been published as part of BMC Bioinformatics Volume 9 Supplement 12, 2008: Asia Pacific Bioinformatics Network (APBioNet) Seventh International Conference on Bioinformatics (InCoB2008). The full contents of the supplement are available online at http://www.biomedcentral.com/1471-2105/9?issue=S12.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
WHW developed and implemented the algorithm and drafted the manuscript. FSW conceived of the study, participated in its design and coordination, and helped to draft the manuscript. MSC assisted in developing the algorithm and finalizing the manuscript. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Wu, W.H., Wang, F.S. & Chang, M.S. Dynamic sensitivity analysis of biological systems. BMC Bioinformatics 9, S17 (2008). https://doi.org/10.1186/1471-2105-9-S12-S17
Keywords
 Sensitivity Equation
 Ordinary Differential Equation Model
 Ordinary Differential Equation System
 Dynamic Sensitivity
 Banach Fixed Point Theorem