
Using optimal control to understand complex metabolic pathways

Abstract

Background

Optimality principles have been used to explain the structure and behavior of living matter at different levels of organization, from basic phenomena at the molecular level, up to complex dynamics in whole populations. Most of these studies have assumed a single-criterion approach. Such optimality principles have been justified from an evolutionary perspective. In the context of the cell, previous studies have shown how the dynamics of gene expression in small metabolic models can be explained assuming that cells have developed optimal adaptation strategies. Most of these works have considered rather simplified representations, such as small linear pathways or reduced networks with a single branching point, and a single objective as the optimality criterion.

Results

Here we consider the extension of this approach to more realistic scenarios, i.e. biochemical pathways of arbitrary size and structure. We first show that exploiting optimality principles for these networks poses great challenges due to the complexity of the associated optimal control problems. Second, in order to surmount such challenges, we present a computational framework which has been designed with scalability and efficiency in mind, including mechanisms to avoid the most common pitfalls. Third, we illustrate its performance with several case studies considering the central carbon metabolism of S. cerevisiae and B. subtilis. In particular, we consider metabolic dynamics during nutrient shift experiments.

Conclusions

We show how multi-objective optimal control can be used to predict temporal profiles of enzyme activation and metabolite concentrations in complex metabolic pathways. Further, we also show how to consider general cost/benefit trade-offs. In this study we have considered metabolic pathways, but this computational framework can also be applied to analyze the dynamics of other complex pathways, such as signal transduction or gene regulatory networks.

Background

This paper is based on two main concepts: (i) genetic and biochemical dynamics are key to understand biological function, and (ii) optimality hypotheses enable predictions in biology. We start by briefly reviewing the relevant previous literature, with emphasis on early works. Then, we discuss the integration of these two main ideas in an optimal control framework which can then be used to analyze and understand complex biochemical pathways. We finally illustrate this approach with case studies related with metabolic networks.

Dynamics

Mathematical modelling allows us to understand complex biological systems [1,2,3]. Wolkenhauer and Mesarovic [4] argue that the central dogma of systems biology is that system dynamics give rise to the functioning and function of cells. In this view, the language of dynamical systems is used to represent mechanisms at different levels (metabolic, signalling and gene expression) in order to describe the observed biochemical and biological phenomena. This mechanistic dynamic modelling is usually carried out using systems of coupled ordinary differential equations [5, 6]. Dynamical systems theory has a long history in physiology [2, 7, 8], and is receiving increasing attention in molecular systems biology [9,10,11,12,13,14,15,16,17].

The dynamic behaviour of biological systems is often explained in terms of feedback regulation mechanisms [4, 18]. Feedback is also a pillar in control theory, and in this context it is important to remember that the early concepts of feedback control were inspired by the study of regulation in biosystems [2]. Not surprisingly, a number of researchers have suggested that molecular systems biology can greatly benefit from the powerful methods of modern systems and control theory [1, 19,20,21,22,23,24]. Similar arguments have been made for synthetic biology [25,26,27,28,29].

Optimality

Can biology be predictive? It has been argued that the systems approach to biology will enable us to predict biological outcomes despite the complexity of the organism under study [30,31,32]. According to Sutherland [33], optimality is the only approach biology has for making predictions from first principles.

The optimality hypothesis as an underlying principle in living matter has a long history. It was already used in the early 1900s to explain structural and functional organization in physiology [34, 35]. A few decades later, Rosen [36] published what seems to be the first monograph dedicated to optimality in biology, reviewing a number of applications. In this book, Rosen started by highlighting the successful use of optimality principles in physics (where they are usually called variational principles), and their interrelationships with other disciplines including biology, mathematics and the social sciences (especially economics). Rosen then proceeded to discuss how evolution via natural selection explains the appearance of optimality principles in biology, and how these principles can explain biological systems. These ideas were updated in a later publication [37], noting the two distinct (yet related) roles that optimality can play:

  (i) analytic (in the sense of explanatory), i.e. helping us to understand, or even predict, the way in which a biological system occurs. Many examples can be found in ecology [38] and in evolutionary [39, 40] and behavioral biology [41].

  (ii) synthetic (in the sense of design and/or decision making), where it helps us to decide how to optimally manipulate, change or even build a bioprocess or biosystem. Examples can be found in biomedicine [42], metabolic engineering [43,44,45] and synthetic biology [45, 46].

Optimality in cellular systems

In the remainder of this paper, we will focus on the explanatory role of optimality in the context of cellular systems. Savageau [47] and Heinrich and collaborators [48,49,50,51] developed many of the early theoretical applications of optimality principles in this domain, mostly to analyze metabolic networks and their regulation. Other notable examples of optimization studies in the context of biochemical pathways can be found in [43,44,45,46, 52,53,54,55,56,57,58,59,60,61,62] and references cited therein. Optimization can also be applied at the genome-scale level with steady-state models (acting as algebraic constraints), as done with constraint-based models [63, 64].

Chapters specifically devoted to the interrelation between evolution and optimality in the context of biochemical pathways can be found in [65, 66]. From all these studies, two key ideas can be distilled: (i) evolution via natural selection can be understood as a fitness optimization process; (ii) therefore fitness optimality should allow us to understand, explain and even predict the evolution of the design of biochemical networks provided we can characterize their function(s) and how such function(s) impact on the organism fitness.

Heinrich et al [50] noted that the optimality hypothesis finds support in the fact that perturbations (e.g. mutations) or changes in the structure of enzymes usually lead to worse functioning of the metabolism. These authors also reviewed the cost functions most frequently considered in metabolic networks, concluding that they were mostly related with fluxes, concentrations of intermediates, transition times and thermodynamic efficiencies.

In this context, it is worth remembering that any optimization problem involves at least two elements: the objective (or cost) function, i.e. the criterion being maximized or minimized, and the decision variables, i.e. the degrees of freedom of the system which can vary to seek the optimal cost. Additionally, in most realistic situations the optimization problem also needs to incorporate constraints, i.e. relationships describing what is feasible or acceptable (i.e. requirements and fundamental limitations). Both in biology and physics, the set of constraints include physicochemical laws (e.g. conservation of mass, thermodynamics). But a main difference between physics and biology is the presence of additional functional constraints in the latter [1]. That is, biological systems evolve to fulfill functions, with insufficient performance leading to extinction.

Optimality and dynamics: optimal control theory

Most of the above mentioned early studies of optimality in biology considered stationary (i.e. steady state) systems. However, as remarked at the beginning of this paper, biological function is closely linked to dynamics. Thus we now focus on the question of explaining and/or predicting dynamics (i.e. function) exploiting optimality principles.

The optimization of dynamical systems is studied by optimal control theory [67, 68]. Optimal control considers the optimization of a dynamic system, that is, one seeks the optimal time-dependent input (control) to minimize or maximize a certain performance index (cost function). Optimal control is sometimes also called dynamic optimization when the system is open loop. The basic elements of an optimal control problem are the performance index (cost function to be optimized), the control variables (time-varying decision variables), and the constraints (which can be inequalities or equalities, dynamic or static, and can be active during all or part of the time horizon considered). Typically, equality dynamic constraints constitute the time-varying model of the system under consideration.

Rosen was probably the first to recognize the importance of optimal control as a unifying framework that could bring together important areas of theoretical biology. Although in his works [36, 37] he outlined the potential of optimal control theory for biological research, he did not present any illustrative example. Successful applications of optimal control in biology and biomedicine started to appear in the 1970s, with notable impetus in the case of optimal decision making in biomedical engineering (e.g. drug scheduling [69]). Around the same time, explanatory uses of optimal control in biology started to appear (an introductory book discussing several early examples can be found in [70]).

In recent years, optimal control has been increasingly used to explain cellular phenomena (e.g. [71,72,73]). In the case of metabolic systems and their regulation, Klipp et al [74] were the first to predict temporal gene expression profiles in a metabolic pathway assuming optimal function under a constraint limiting total enzyme amount. They used an optimization approach similar to control parameterization methods in numerical optimal control. Notably, these researchers considered a simplified model of the central metabolism of yeast undergoing a diauxic shift, and they were able to predict time-dependent enzyme profiles which agreed well with experimental gene expression data [75]. Further, considering a simple linear pathway model, they found a wave-like enzyme activation profile which agreed with previous observations of gene expression during the cell cycle [76, 77], indicating that for this pathway topology, genes involved in a certain function are activated when such function is needed. Interestingly, these results were later experimentally confirmed by Alon and co-workers [78], who named this kind of activation profile “just-in-time” (a term originated in industrial manufacturing to describe a methodology aimed at reducing production times and inventory sizes). As noted by Ewald et al [79], this timing pattern had also been previously observed in the flagellum assembly pathway [80]. Similar activity motifs were also found in the timing in transcriptional control of the yeast metabolic network [81].

There are several important messages in the pioneering work of Klipp et al [74]: (i) it is possible to predict the dynamics of biological pathways without knowing the underlying regulatory mechanisms, confirming the concepts proposed by Heinrich and collaborators a decade earlier [50]; (ii) the timing of gene expression allows the cell metabolism to optimally adapt to varying external conditions; (iii) the optimal operation of the pathway needs to take into account constraints that arise from physico-chemical limitations of the cell (e.g. total enzyme concentration is limited by protein synthesis capacity); (iv) for each case study, the authors adopted a different (single) objective function, which encapsulates the optimality hypothesis.

The work of Klipp et al [74] spurred the application of optimal control to characterize cellular dynamics related with metabolism [82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98]. Ewald et al [79] have recently reviewed many of these studies, illustrating how dynamic optimization is a powerful approach that can be used to decipher the activation and regulation of metabolism. Furthermore, Ewald et al [79] have also reviewed other important types of applications, including dynamic resource allocation in cells (as studied by e.g. [93]), the genomic organization of metabolic pathways (e.g. [87]), the development of effective treatments against pathogens (e.g. [95, 99]), and other manifold applications in metabolic engineering and synthetic biology.

Cellular trade-offs and multicriteria optimality

Trade-offs are ubiquitous in evolutionary biology: very often one trait can not improve without a decrease in another, as already noted by Darwin [100]. Numerous works have studied cellular trade-offs considering e.g. the design of microbial metabolism [101, 102] and strategies for resource allocation, storage and growth [103,104,105,106]. Similarly, trade-offs between economy and effectiveness have been found in biological regulatory systems [107]. Analyses of different trade-offs have been used to uncover design principles in cell signalling networks [108,109,110].

A mathematical framework that combines in a natural way the concepts of optimality and trade-offs is multicriteria optimization, where one seeks the best (optimal) trade-offs (the so called Pareto optimal set) that correspond to the simultaneous optimization of several objectives. The concept of Pareto optimality was originally developed in the field of economics, but it was already suggested for applications to ecology in the early 1970s [111]. In the case of biochemical networks, multicriteria optimality was discussed by Heinrich and Schuster in the 1990s for metabolic steady-state conditions [49, 65, 112]. During the 2000s, the approach gained traction: El-Samad et al [113] studied the Pareto optimality of gene regulatory network associated with the heat shock response in bacteria; multi-objective approaches were used to perform metabolic network optimization [114, 115] and flux balance analysis [116,117,118]. More recently, Alon and co-workers applied Pareto optimality to explain evolutionary trade-offs [119] and biological homeostasis systems [107]. Higuera et al [120] analyzed optimal trade-offs for the allosteric regulation of enzymes in a simple model of a metabolic substrate-cycle. Different optimal trade-offs in molecular networks capable of regulation and adaptation have also been studied in the context of synthetic biology [121,122,123,124,125]. Many other applications of multicriteria optimality in biology are reviewed in [126,127,128].

Considering the prediction of dynamics in biochemical pathways, to the best of our knowledge the study of de Hijas-Liste et al [88] was the first to apply a multicriteria dynamic optimization framework. These authors proposed multi-objective formulations for several metabolic case studies, showing how this framework provided biologically meaningful results in terms of the best trade-offs between conflicting objectives.

Both in single and multicriteria formulations there is an underlying key question that needs to be addressed: the selection of the objective functions. In other words, we need to formulate objective functions which encapsulate the fitness and the trade-offs of the biological system under study. Heinrich and Schuster [65] suggested following heuristic arguments and checking their validity by comparing the predictions derived from the associated optimality principle with experimental observations. This study-hypothesize-test approach has been the one followed by the vast majority of the works cited above. Recently, we proposed an alternative based on an inverse optimal control formulation [129] that aims to find the optimality criteria that, given a dynamic model, can explain a set of given dynamic (time series) measurements. In other words, inverse optimal control can be used to systematically infer optimality principles in complex pathways from measurements and a prior dynamic model.

Challenges and motivation

As discussed above, optimal control can provide important insights in the domain of systems biology. So far, existing studies have made use of rather simple dynamic models (see the review of Ewald et al. [79]). In many situations, using a simple model might in fact be a perfectly satisfactory approach: an excellent illustration is the study by Giordano et al [93], where optimal control of coarse-grained dynamic models was used to explain resource allocation strategies in microbial physiology.

However, for many applications, more complex models need to be used. Can optimal control be applied at these larger scales? This is one of the key questions considered here. The answer ultimately relies on two key issues: (i) the availability of the detailed (time-series) measurements needed to build these more complex dynamic models, and (ii) the existence of numerical optimal control methods that can solve these problems in a reliable and efficient way. Regarding (i), Ewald et al. [79] mention recent advances in experimental techniques (like large-scale quantitative proteomic data) that should make it possible to apply these optimality principles at more complex molecular levels.

In contrast, regarding (ii), the situation remains unresolved, and this is the main motivation for this work. Despite the many significant advances during the last decades, reliably solving nonlinear optimal control problems can be very challenging, even for small problems, due to the existence of local solutions and high sensitivity to initial guesses [130]. This is true even in areas with a long optimal control tradition, such as aerospace [131, 132]. In the case of kinetic models, it has been shown that small (and apparently simple) problems can exhibit hundreds of unique local solutions [133]. The existence of local solutions, possibly exacerbated in problems with path constraints, also poses a challenge for the application of standard optimal control methods in areas such as robotics and (bio)chemistry [134]. It is important to note that finding the global solution of nonlinear optimal control problems with guarantees (i.e. a certificate of global optimality) remains an open research question [135].

In the case of biochemical pathways, the application of optimal control faces important challenges and pitfalls, in line with those identified for chemical reaction systems [133, 134], including: multimodality (local solutions, due to nonlinear dynamics), complicated control profiles with many switching points and singular arcs, path constraints, possible discontinuous dynamics, sensitivity to initial guesses, and scaling issues. More details regarding these issues are given below.

Our objectives and approach

Here, our goal is to provide a multi-objective optimal control approach that can be applied to dynamic models of complex biochemical pathways, surmounting the above mentioned challenges. In particular, we present a computational workflow that is reliable (robust), avoiding convergence to local solutions, but efficient enough (in terms of reasonable computation times) to handle realistic network topologies and arbitrary nonlinear kinetics. We will show how this workflow is capable of scaling up well with network size, and how it can handle multi-criteria formulations.

We illustrate the performance of this workflow considering three different case studies, based on dynamic models of the central carbon metabolism of S. cerevisiae and B. subtilis. In particular, we use our workflow to explain metabolic dynamics during nutrient shift experiments.

Methods

Optimal control problem: general formulation

We consider dynamic systems described by nonlinear ordinary differential equations (ODEs). The optimal control problem (OCP) consists of computing the optimal decision variables (also called time-varying inputs, or controls) and time-invariant parameters that minimize (or maximize) a given cost functional (performance index), subject to the set of ODEs and possibly algebraic inequality constraints. Mathematically, the OCP is usually stated as follows:

$$\begin{aligned} \min _{\mathbf {u}(t), t_f}{\mathbf {J[x,u,p]}} \end{aligned}$$
(1)

Subject to:

$$\begin{aligned} \frac{\text {d}\mathbf {x}}{\text {d}t}&=\mathbf {f}[\mathbf {x}(t,\mathbf {p}),\mathbf {u}(t),\mathbf {p},t],\nonumber \\&\mathbf {x}(t_0,\mathbf {p})=\mathbf {x_0} \end{aligned}$$
(2)
$$\begin{aligned} \mathbf {g}[\mathbf {x}(t,\mathbf {p}),\mathbf {u}(t),\mathbf {p}]\le&0 \end{aligned}$$
(3)
$$\begin{aligned} \mathbf {g}_{\iota }[\mathbf {x}(t_{\iota },\mathbf {p}),\mathbf {u}(t_{\iota }),\mathbf {p}]\le&0 \end{aligned}$$
(4)
$$\begin{aligned} \mathbf {u}^{\mathbf {L}} \le&\mathbf {u}(t) \le \mathbf {u}^{\mathbf {U}} \end{aligned}$$
(5)

where \(\mathbf {J[x,u,p]}\) is the objective functional (sometimes called performance index, or cost functional), encapsulating the optimality criteria; \(\mathbf {u}(t)\) are the time-dependent control variables which must be computed in order to minimize (or maximize) the objective functional \(\mathbf {J[x,u,p]}\). The problem is subject to constraints, including the dynamics of the system described by Eq. (2), i.e. the set of ordinary differential equations and their corresponding initial values (\(\mathbf {x}(t_0)\)), forming the so-called initial value problem (IVP). The inequality path constraints (\(\mathbf {g}\)) are encoded in Eq. (3), representing inequality relationships that must be enforced during the whole time horizon considered (e.g. total enzyme capacity, critical thresholds for specific concentrations, etc.). In some cases we also need to consider inequality time-point constraints (\(\mathbf {g}_\iota\)), encoded in Eq. (4). Finally, \(\mathbf {u}^L\) and \(\mathbf {u}^U\) are the lower and upper bounds for the control variables, as stated in Eq. (5).

In the case of a multicriteria formulation, the cost functional \(\mathbf {J[x,u,p]}\) is a set of objective functions corresponding to the N different criteria considered:

$$\begin{aligned} \mathbf {J}[\mathbf {x},\mathbf {u},\mathbf {p}]=\begin{bmatrix} \mathrm {J}_{1}[\mathbf {x},\mathbf {u},\mathbf {p}] \\ \mathrm {J}_{2}[\mathbf {x},\mathbf {u},\mathbf {p}] \\ \vdots \\ \mathrm {J}_{\mathrm{N}}[\mathbf {x},\mathbf {u},\mathbf {p}] \end{bmatrix} \end{aligned}$$
(6)

where, in its general form, each objective functional \(J_i\) in this set (\(i\in [1,N]\)) consists of a Mayer (\(\Phi _M^i\)) and a Lagrange (\(\Phi _L^i\)) term:

$$\begin{aligned} \mathrm {J}_{\mathrm{i}}[\mathbf {x},\mathbf {u},\mathbf {p}]=\Phi _M^i[\mathbf {x}(t_f,\mathbf {p}),\mathbf {p}]+\int _{t_0}^{t_f}\Phi _L^i[\mathbf {x}(t,\mathbf {p}),\mathbf {u}(t),\mathbf {p}]\,\text {d}t \end{aligned}$$
(7)

We assume the same general form for the single-objective case.
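
In practice, the Lagrange term in Eq. (7) is often evaluated by augmenting the state vector with a quadrature state, which turns the integral into a Mayer-type terminal value. Below is a minimal sketch of this trick, assuming a hypothetical one-state model \(\dot{x} = -x + u\) with a constant control and cost \(x(t_f)^2 + \int_{t_0}^{t_f}(x^2+u^2)\,\text{d}t\); this toy model is purely illustrative and not one of the paper's case studies.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical one-state model dx/dt = -x + u with a constant control u.
# The Lagrange term, the integral of (x^2 + u^2) dt, is handled by
# augmenting the state with a quadrature variable q, dq/dt = x^2 + u^2,
# q(t0) = 0, so the integral becomes the Mayer-type terminal value q(tf).
def rhs(t, z, u):
    x, q = z
    return [-x + u, x**2 + u**2]

def cost(u, x0=1.0, tf=2.0):
    sol = solve_ivp(rhs, (0.0, tf), [x0, 0.0], args=(u,), rtol=1e-9, atol=1e-9)
    x_f, q_f = sol.y[:, -1]
    return x_f**2 + q_f  # Mayer term plus integral term, both read off the final state
```

For u = 0 the analytic value is \(e^{-4} + (1-e^{-4})/2 \approx 0.509\), which the quadrature state reproduces.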

Multicriteria optimal control

In multicriteria optimization, the objectives are usually in conflict (improving one damages the others), so the optimal solution is not unique, but a set of optimal trade-offs (i.e. the set of the best compromises, which is also called the Pareto set). Many methods have been developed for multicriteria optimal control, and they are usually classified in three categories [136]: scalarization techniques, continuation methods and set-oriented approaches. Here we use the \(\epsilon\)-constraint scalarization method, which transforms the original multiobjective problem into a finite set of single-objective optimal control problems.

The \(\epsilon\)-constraint method proceeds by solving a single objective problem with respect to one of the objectives \(J_a\) while treating the rest of the objectives \(J_i\) as algebraic constraints:

$$\begin{aligned} \min _{\mathbf {u}(t), t_f}{J_a\mathbf {[x,u,p]}} \end{aligned}$$
(8)

Subject to:

$$\begin{aligned} J_i\mathbf {[x,u,p]} \le \epsilon _i \text { with } i=1,\ldots ,N \text { and } i\ne a \end{aligned}$$
(9)

The above problem will also be subject to the rest of the differential and algebraic constraints considered in the original multicriteria formulation. Therefore the Pareto set is obtained by solving a set of single-objective problems for different values of \(\epsilon _i\).
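
As a concrete illustration of the scalarization, the sketch below applies the \(\epsilon\)-constraint method to a deliberately simple static problem with hypothetical objectives \(J_1=(y-1)^2\) and \(J_2=(y+1)^2\) (whose Pareto set is \(y\in[-1,1]\)); in the optimal control setting each solve would be a full single-objective OCP rather than a one-variable NLP.

```python
import numpy as np
from scipy.optimize import minimize

# Toy two-objective problem in a single decision variable y (hypothetical,
# chosen so the Pareto set is the interval y in [-1, 1]).
J1 = lambda y: (y[0] - 1.0) ** 2
J2 = lambda y: (y[0] + 1.0) ** 2

# epsilon-constraint method: minimize J1 subject to J2 <= eps,
# sweeping eps to trace out the Pareto front point by point.
pareto = []
for eps in np.linspace(0.5, 4.0, 8):
    res = minimize(J1, x0=[0.0], method="SLSQP",
                   constraints=[{"type": "ineq", "fun": lambda y, e=eps: e - J2(y)}])
    pareto.append((J1(res.x), J2(res.x)))

for j1, j2 in pareto:
    print(f"J1={j1:.3f}  J2={j2:.3f}")
```

Each sweep point is one single-objective solve; together they approximate the set of best trade-offs between \(J_1\) and \(J_2\).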

Numerical solution of nonlinear optimal control problems

Methods for the numerical solution of nonlinear optimal control problems can be classified under three categories: dynamic programming, indirect and direct approaches (Fig. 1). Dynamic programming [137, 138] suffers from the so-called curse of dimensionality, so the latter two are the most promising strategies for realistic problems. Indirect approaches were historically the first to be developed and are based on the transformation of the original optimal control problem into a multi-point boundary value problem using Pontryagin's necessary conditions [67, 68]. Indirect methods are not used in practice due to several major difficulties, particularly the need for very good initial guesses and, in the case of problems with path constraints, the need for an a priori estimation of the sequence of constrained/unconstrained singular arcs (more details can be found in e.g. page 129 in [139]). Direct methods, which are presently the preferred way to solve these problems, transform the optimal control problem into a nonlinear programming problem (NLP). They are based on the discretization of either the control alone, known as the sequential strategy, or both the control and the states, known as the simultaneous strategy.

Fig. 1

Classification of solution strategies for nonlinear optimal control problems. Figure adapted from [146]

Sequential strategy (control vector parameterization)

In the sequential strategy [140,141,142,143], also known as control vector parametrization, the controls are approximated by piecewise functions, usually by a low order polynomial, the coefficients of which are the decision variables of the resulting discretized problem. Thus, the problem is transformed into an outer non-linear programming (NLP) problem with an inner initial value problem (IVP) where the dynamic system is integrated for each evaluation of the cost function.

The control vector parameterization (CVP) approach proceeds by dividing the time horizon into a number of elements (\(\rho\)). The control variables (\(j=1 \ldots n_u\)) are then approximated within each interval (\(i=1 \ldots \rho\)) by means of some basis functions, usually low order Lagrange polynomials [144], as follows:

$$\begin{aligned} u_j^{(i)}(t)=\sum _{k=1}^{M_j}u_{ijk}\ell _k^{(M_j)}(\tau ^{(i)})&t\in [t_{i-1},t_i] \end{aligned}$$
(10)

where \(\tau\) is the normalized time in each element i:

$$\begin{aligned} \tau ^{(i)}=\frac{t-t_{i-1}}{t_i-t_{i-1}} \end{aligned}$$
(11)

and \(M_j\) the order of the Lagrange polynomial (\(\ell\)). In this work we will consider \(M_j=1\) or \(M_j=2\), i.e. piecewise constant or piecewise linear approximations of the controls.

Using the above discretization, the controls can be expressed as functions of a new set of time invariant parameters corresponding to the polynomial coefficients (\(\mathbf {w}\)). Therefore the original infinite dimensional problem is transformed into a non-linear programming problem with dynamic (the model) and algebraic constraints, and where the decision variables \(\mathbf {w}\) correspond to the unknown polynomial coefficients. In the simplest case of \(M_j=1\), in every interval, every control \(u_j^{(i)}(t)\) will be approximated by a constant coefficient \(\mathbf {w}_j^{(i)}\) (piecewise constant approximation). In other words, this strategy gives rise to an outer NLP problem with an embedded inner initial value problem, i.e. the dynamics must be integrated for each evaluation of the cost function and the algebraic constraints. Further details about the theory and practice of the CVP approach and its variants can be found in e.g. [140, 141, 143].
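
The outer-NLP/inner-IVP structure of CVP can be sketched as follows, assuming a hypothetical one-state model \(\dot{x}=-x+u\), cost \(x(t_f)^2+\int u^2\,\text{d}t\), and \(\rho = 5\) piecewise-constant elements (\(M_j=1\)); the model and cost are illustrative assumptions, not one of the paper's case studies.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

RHO, TF, X0 = 5, 1.0, 1.0                 # number of elements, horizon, initial state
t_grid = np.linspace(0.0, TF, RHO + 1)

def simulate(w):
    # Inner IVP: integrate element by element, holding u constant in each;
    # the quadrature state q accumulates the Lagrange term integral of u^2 dt.
    x, q = X0, 0.0
    for i in range(RHO):
        sol = solve_ivp(lambda t, z, u=w[i]: [-z[0] + u, u**2],
                        (t_grid[i], t_grid[i + 1]), [x, q], rtol=1e-8)
        x, q = sol.y[:, -1]
    return x, q

def cost(w):
    x_f, q_f = simulate(w)
    return x_f**2 + q_f

# Outer NLP over the piecewise-constant coefficients w,
# with simple bounds u^L <= u <= u^U on each element.
res = minimize(cost, x0=np.zeros(RHO), bounds=[(-2.0, 2.0)] * RHO)
print(res.x)   # optimal piecewise-constant control profile
```

Note that every evaluation of the cost triggers a full integration of the dynamics, which is the defining feature (and main computational expense) of the sequential strategy.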

Simultaneous strategy (complete discretization)

In the simultaneous strategy [139, 145, 146], also known as the complete parameterization approach, both states and controls are discretized by dividing the whole time domain into small intervals. This is done either using multiple shooting, where, similarly to the sequential approach, the problem is integrated separately in each interval and linked with the rest through equality constraints, or by a collocation approach, where the solution of the dynamic system is coupled with the optimization problem.

The direct collocation approach is probably the best-known complete discretization approach. In this method the solution of the infinite dimensional problem is transcribed into a non-linear programming problem by discretizing both the states and the controls [139] by means of low-order polynomial approximations. For example, the states can be approximated by a K-stage Runge-Kutta scheme. Different Runge-Kutta schemes use polynomials of different order (\(K+1\)) to approximate the system's solution in each integration step (i) with stepsize \(h_i\):

$$\begin{aligned} \mathbf {x}_{i+1}=\mathbf {x}_i+h_i \sum _{j=1}^{K}{\beta _{j} \mathbf {f}_{ij}} \end{aligned}$$
(12)

with:

$$\begin{aligned} \mathbf {f}_{ij}=\mathbf {f} \left\{ \left( \mathbf {x}_i + h_i \sum _{l=1}^{K}\alpha _{jl} \mathbf {f}_{il} \right) ,(t_i + h_i \rho _j) \right\} \end{aligned}$$
(13)

where \(\mathbf {f}\) is the right hand side of the ODEs while \(\beta _{j},\alpha _{jl},\rho _j\) are order-dependent parameters. Thus, this method transforms the original infinite dimensional problem into a large NLP problem which, in contrast to the CVP method above, does not require the integration of the dynamic system during the iterative solution of the NLP. An additional advantage of complete discretization methods is a better handling of path inequalities [139]. Readers interested in achieving a deeper understanding of the theory and numerical details of the simultaneous strategy and its variants should check [139, 145, 146] and the references cited therein.
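
The structure of a complete discretization can be sketched with the trapezoidal rule (a simple implicit Runge-Kutta scheme) on the same kind of hypothetical one-state model used above, \(\dot{x}=-x+u\) with cost \(\int (x^2+u^2)\,\text{d}t\); here both states and controls are decision variables and the dynamics appear only as algebraic "defect" equality constraints, so no IVP solver is needed.

```python
import numpy as np
from scipy.optimize import minimize

N, TF, X0 = 20, 1.0, 1.0                  # mesh intervals, horizon, initial state
h = TF / N

def unpack(z):
    return z[:N + 1], z[N + 1:]           # states x_0..x_N, controls u_0..u_N

def f(x, u):
    return -x + u                          # hypothetical ODE right-hand side

def defects(z):
    # trapezoidal collocation: x_{i+1} = x_i + h/2 (f_i + f_{i+1}),
    # imposed as equality constraints instead of integrating the dynamics
    x, u = unpack(z)
    return x[1:] - x[:-1] - 0.5 * h * (f(x[:-1], u[:-1]) + f(x[1:], u[1:]))

def cost(z):
    # trapezoidal quadrature of the Lagrange term (x^2 + u^2)
    x, u = unpack(z)
    integrand = x**2 + u**2
    return h * (0.5 * integrand[0] + integrand[1:-1].sum() + 0.5 * integrand[-1])

cons = [{"type": "eq", "fun": defects},
        {"type": "eq", "fun": lambda z: unpack(z)[0][0] - X0}]  # x(t0) = x0
res = minimize(cost, np.zeros(2 * N + 2), method="SLSQP", constraints=cons)
x_opt, u_opt = unpack(res.x)
print(res.fun)
```

The resulting NLP is larger than the CVP one (all node values are unknowns), but each function evaluation is cheap and the dynamics are only satisfied at convergence, as the text describes.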

Effect of constraints on the optimal solution

Constraints play a key role in mathematical modeling in biology. From the computational optimization point of view, constraints limit the solution space, adding further complexity to already complex mathematical formulations. However, from the biological point of view, constraints add information to the model, making it more realistic.

In general, constraints can have physical and biological meaning [65]. For example, thermodynamic feasibility is often neglected in metabolic models although it can provide information on both directionality of reactions and range of parameters [147]. Additionally, information on the physico-chemical properties and the stoichiometry of the network under consideration can greatly restrict the solution space.

Apart from physical constraints, biological limitations can be of even higher importance and complexity [37]. These limitations are not only complex to express but also hard to get estimates for. Variations in the critical values of certain metabolites or proteins that are necessary for the survival of the cell can have a quite significant effect on the optimal solution. Other relevant constraints can be related to the maximum protein production rate, the total enzyme burden or the total protein investment [48, 148].

In this context, it would be interesting to analyze the interplay between the constraints of biological models and the optimal trade-offs (Pareto set). In some cases, constraints can act as the operating principles we are trying to identify and understand [149]. It would also be interesting to study (i) the impact of constraints on the current behavior of a biological network, and (ii) their role in how the network has evolved [106, 121].

In mathematical optimization, once an optimal solution is found, it is always interesting to analyze the role of the different constraints in that solution. In constrained optimization in economics, especially in linear programming, the concept of shadow price is used to quantify how much the optimal value of the objective function changes when a constraint is relaxed [150, 151]. In optimal control, the equivalent of the shadow price is the adjoint variable [70]. The adjoint variables \(\varvec{\lambda }(t)\) (also called costate variables) can be estimated and provided along with the results as part of the solution of the optimal control problem. However, the interpretation of their significance is not necessarily intuitive. As mentioned previously, when using direct methods in nonlinear optimal control, the problem is formulated as a constrained NLP problem in which both algebraic and differential equations constrain the search space. In such a problem, the Lagrange multipliers represent the change in cost that would result from a relaxation of the corresponding constraints. Additionally, the Lagrange multipliers associated with the state variables are discrete approximations of the adjoint variables, the quality of the approximation depending on the transcription method of choice. Consequently, in nonlinear optimal control the adjoint variables can be used to quantify the variation in the cost function associated with an incremental change in the states [70].

Using direct optimization methods, the general optimal control problem is transcribed into an NLP problem. Let us consider a general NLP problem with cost function \(F(\mathbf {y})\), dependent on n variables \(\mathbf {y}\), where this cost has to be minimized subject to the \(m \le n\) constraints:

$$\begin{aligned} \mathbf {c}(\mathbf {y})=\mathbf {0} \end{aligned}$$
(14)

The Lagrangian is defined as [139]:

$$\begin{aligned} \mathcal {L}(\mathbf {y},\pmb {\alpha })=\mathbf {F}(\mathbf {y})-\pmb {\alpha }^T\mathbf {c}(\mathbf {y})=\mathbf {F}(\mathbf {y})-\sum _{i=1}^m{\alpha _i c_i(\mathbf {y})} \end{aligned}$$
(15)

with \(\pmb {\alpha }\) corresponding to the Lagrange multipliers.
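The shadow-price role of the multipliers can be verified numerically on a toy NLP. In the sketch below (our own illustration, not part of the original framework), the multiplier \(\alpha\) obtained from the stationarity condition \(\nabla F = \alpha \nabla c\) matches a finite-difference estimate of the sensitivity of the optimal cost to the constraint level:

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Toy NLP: minimize F(y) = y1^2 + y2^2  subject to  c(y) = y1 + y2 - b = 0.
# At the optimum, grad F = alpha * grad c, and alpha equals the "shadow
# price" dF*/db: the change in optimal cost per unit relaxation of the
# constraint level b. (Problem chosen for illustration only.)

def solve(b):
    F = lambda y: y[0] ** 2 + y[1] ** 2
    con = NonlinearConstraint(lambda y: y[0] + y[1] - b, 0.0, 0.0)
    return minimize(F, x0=[1.0, 0.0], method="trust-constr", constraints=[con])

res = solve(1.0)
y = res.x                        # optimal point, analytically (0.5, 0.5)
alpha = 2.0 * y[0]               # from grad F = alpha * grad c

# Shadow price by central finite differences on the constraint level b:
eps = 1e-4
shadow = (solve(1.0 + eps).fun - solve(1.0 - eps).fun) / (2.0 * eps)

print(alpha, shadow)             # both close to 1.0
```

For this problem the optimal cost is \(b^2/2\), so the analytic shadow price at \(b=1\) is exactly 1, in agreement with the multiplier.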

Let us now consider an optimal control problem where we seek to find the controls \(\mathbf {u}(t)\) to minimize the cost function:

$$\begin{aligned} J=\int _{t_1}^{t_F}{L[\mathbf {x}(t),\mathbf {u}(t)] dt} \end{aligned}$$
(16)

subject to the differential-algebraic equation (DAE) constraints:

$$\begin{aligned} \dot{\mathbf {x}} =&\mathbf {f}[\mathbf {x},\mathbf {u},t] \end{aligned}$$
(17)
$$\begin{aligned} \mathbf {0} =&\mathbf {g}[\mathbf {x},\mathbf {u},t] \end{aligned}$$
(18)

The augmented performance index is formed as follows:

$$\begin{aligned} \hat{J}=\int _{t_1}^{t_F}{L[\mathbf {x}(t),\mathbf {u}(t)] dt} - \int _{t_1}^{t_F}{\pmb {\lambda }^T(t)\{\dot{\mathbf {x}}-\mathbf {f}[\mathbf {x}(t),\mathbf {u}(t)]\} dt} + \int _{t_1}^{t_F}{\pmb {\mu }^T(t)\mathbf {g}[\mathbf {x}(t),\mathbf {u}(t)] dt} \end{aligned}$$
(19)

where \(\pmb {\lambda }\) and \(\pmb {\mu }\) are the adjoint variables.

Consider now that this infinite-dimensional optimal control problem is transcribed into a finite-dimensional NLP problem using a complete parameterization strategy. The relationship between the Lagrange multipliers (\(\pmb {\alpha }\)) and the adjoint variables (\(\pmb {\lambda ,\mu }\)) depends on the transcription method. In general, it has been shown that the Lagrange multipliers approximate the adjoint variables in the limit \(\rho \rightarrow \infty\), where \(\rho\) is the number of time intervals into which the time horizon has been discretized [139]. More information about the computation and interpretation of the adjoint variables can be found in [70, 139].

Challenges and pitfalls in numerical optimal control

Using direct methods to solve optimal control problems involving nonlinear models frequently results in multimodal problems, i.e. problems with multiple local solutions and at least one global solution. This is a consequence of the presence of nonlinear dynamics and path constraints. As a result, local optimization methods will usually converge to bad local solutions [45, 152]. Often, researchers resort to a multi-start strategy, i.e. initializing local methods from multiple points in the search space. However, this approach becomes inefficient for problems of realistic size [88].
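The multi-start idea can be sketched on a toy multimodal cost (our own illustration, unrelated to the paper's case studies): local solvers are started from many random points and the best local optimum found is kept.

```python
import numpy as np
from scipy.optimize import minimize

# Multi-start local optimization on a multimodal 1-D cost: a local solver
# (BFGS) is launched from random starting points and the best optimum found
# is retained. For realistic optimal control problems this brute-force
# strategy scales poorly, which motivates hybrid global-local methods.

rng = np.random.default_rng(0)
cost = lambda x: np.sin(3.0 * x[0]) + 0.1 * x[0] ** 2   # several local minima

best = None
for x0 in rng.uniform(-5.0, 5.0, size=50):
    res = minimize(cost, x0=[x0], method="BFGS")
    if best is None or res.fun < best.fun:
        best = res

print(best.x[0], best.fun)    # global minimum near x = -0.51
```

With only a handful of starts the global basin is easily missed, which is precisely the inefficiency the text refers to.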

In theory, deterministic global optimization methods can surmount these difficulties and find the global optimum with guarantees. Although several deterministic methods for finding the global solution of optimal control problems have been developed [133, 134, 153,154,155,156], difficulties remain because they do not scale well with problem size. The current state of the art in deterministic global optimal control is that there are no generic complete-search algorithms for solving these problems to global optimality, thus it remains an open field of research [135].

Alternatively, one can consider approximate probabilistic methods. Purely stochastic global optimization methods were the first probabilistic methods used for optimal control [157,158,159,160,161,162], but they can be rather inefficient, requiring many evaluations of the cost function and resulting in large computation times that become prohibitive if refined solutions of large problems are sought. Hybrid global-local optimization strategies combine the advantages of global stochastic and local deterministic methods, showing good performance and robustness for many realistic problems of small and medium size [88, 131, 163, 164]. It should be noted that these non-deterministic methods do not offer guarantees of global optimality, although their empirical performance seems to be adequate for many practical problems of realistic size [131, 164]. However, these hybrid strategies might also become too computationally expensive if we seek refined solutions of large optimal control problems.

In addition to multimodality [131, 133, 134, 158], other numerical and computational issues in nonlinear optimal control include complicated control profiles with multiple and very sensitive switching points [143, 163], discontinuous dynamics [142], path constraints [139, 165] and singular arcs [88, 139, 166], as well as the need for good initial guesses [131]. Even small nonlinear optimal control problems can be non-trivial, exhibiting unusual behavior due to the existence of local solutions and extreme sensitivity to initial guesses [130].

Our combined strategy for numerical optimal control

Although several surveys of numerical methods and software for optimal control have been published during the last decade [131, 167, 168], these reviews are neither very recent nor very exhaustive. More importantly, there is a lack of studies comparing different approaches in a fair way using a set of well defined benchmark problems. Therefore, choosing the current best numerical method and software to solve the class of problems considered here becomes a daunting task, especially for inexperienced users. To make things worse, many currently available software packages may return solutions that look reasonable but are in fact artifacts or bad local solutions.

Therefore, here we present a robust workflow and guidelines to avoid, as much as possible, the many challenges and pitfalls that are common in nonlinear optimal control, as discussed in the previous section. We have designed a strategy with four key ideas in mind, i.e. it should be able to: (i) run without good initial guesses, (ii) avoid convergence to local solutions, (iii) approximate complicated control profiles with good accuracy and reasonable computation cost, and (iv) scale up well in terms of control and state variables. Further, the approach is able to handle multiobjective optimal control problems by transforming them into a set of nonlinear optimal control problems, as depicted in Fig. 2.

Fig. 2
figure2

General workflow of our approach
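The transformation of a multi-objective problem into a set of single-objective problems can be sketched with a weighted-sum scalarization. The two convex toy objectives below are our own illustration, not the actual scalarization scheme or objectives used by our framework:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Weighted-sum scalarization: the bi-objective problem min (f1, f2) is
# turned into a family of single-objective problems min w*f1 + (1-w)*f2,
# one per weight w. Each solve contributes one point of the Pareto front.

f1 = lambda x: x ** 2             # toy objective 1
f2 = lambda x: (x - 2.0) ** 2     # toy objective 2 (conflicting with f1)

pareto = []
for w in np.linspace(0.01, 0.99, 9):
    res = minimize_scalar(lambda x: w * f1(x) + (1.0 - w) * f2(x))
    pareto.append((f1(res.x), f2(res.x)))

# For these objectives the front satisfies sqrt(f1) + sqrt(f2) = 2.
print(pareto)
```

Sweeping the weight traces out the trade-off curve; in the actual framework each scalarized problem is a full nonlinear optimal control problem rather than a scalar minimization.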

In order to meet these requirements, we tested a number of different options and finally arrived at the following two-phase strategy, named AMIGO2_DO+ICLOCS:

  • first phase using AMIGO2_DO, a hybrid stochastic-deterministic method based on control vector parameterization

  • second phase using ICLOCS, a simultaneous (complete discretization) fast local method

The main justification for the first phase is the handling of multimodality issues using a robust approach that has the additional benefit of not requiring good initial guesses. We have used the latest version of the AMIGO2_DO solver, a sequential optimal control solver included in the AMIGO2 toolbox [169]. Although this solver allows the user to select different combinations of global and local methods, we have found that the enhanced scatter search (eSS) metaheuristic [164] provided the best performance. The control vector parametrization strategy implemented in this solver is easier to apply to arbitrary nonlinear dynamics, results in smaller optimization problems and relies on well-tested initial value solvers.

For the second phase we used ICLOCS [170], a very efficient and fast optimal control solver based on complete discretization and deterministic local optimization methods. Since the second phase is initialized at a near-global solution provided by the first phase, its local optimization character is not an issue. Moreover, this second phase allows a very good approximation of the control profiles due to the use of the complete discretization approach. The simultaneous strategy leads to larger optimization problems but has the advantage of avoiding the repeated integration of the system dynamics at each iteration, so it can approximate highly discretized control profiles very efficiently.

Choosing the right settings for these solvers is of key importance, but this task can be a real challenge for non-expert users. Here we provide the best settings that we found for the class of problems considered.

Recommended settings for the AMIGO2_DO phase:

  • control discretization options: an initial piecewise-linear discretization of the control using 5–10 elements proved successful. When AMIGO2_DO is used alone, its mesh-refinement options can be used to better approximate difficult controls. When it is used as part of AMIGO2_DO+ICLOCS, we found it more efficient to refine the controls during the ICLOCS phase

  • optimization solver: best results were obtained using the eSS (enhanced scatter search) global solver [164] with FSQP [171] as local solver

  • initial value problem solver: CVODES [172], with relative and absolute integration tolerances of at least 1.0E–7. It is important that the optimization convergence tolerance is at least two orders of magnitude larger than the IVP integration tolerance. Otherwise, the numerical noise introduced by the integration creates non-smoothness, causing the optimization solver to converge less efficiently or even fail.
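The control vector parameterization principle underlying this phase can be sketched on a toy problem. The system, horizon and SciPy solvers below are our own illustrative choices and are unrelated to the actual AMIGO2_DO implementation:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# Control vector parameterization (CVP) in miniature: the control u(t) is
# reduced to a few piecewise-constant values, the ODE is integrated by an
# IVP solver at every cost evaluation, and an NLP solver adjusts the values.
# Toy problem: steer x, starting at x(0) = 1, with dx/dt = -x + u,
# minimizing the integral of x^2 + 0.1*u^2 over [0, 2].

t_f, n_elem = 2.0, 5
t_grid = np.linspace(0.0, t_f, n_elem + 1)

def cost(u_params):
    def rhs(t, z):
        # z = [x, running cost]; select the control element active at time t
        idx = min(np.searchsorted(t_grid, t, side="right") - 1, n_elem - 1)
        u = u_params[idx]
        return [-z[0] + u, z[0] ** 2 + 0.1 * u ** 2]
    sol = solve_ivp(rhs, (0.0, t_f), [1.0, 0.0], max_step=0.05)
    return sol.y[1, -1]                      # integrated running cost

u0 = np.zeros(n_elem)                        # crude initial guess: no control
res = minimize(cost, u0, method="Powell")    # derivative-free NLP solver
print(cost(u0), res.fun)                     # the optimized cost is lower
```

Note how every cost evaluation requires a full integration of the dynamics; this is the sequential character that the complete-discretization (ICLOCS) phase avoids.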

Recommended settings for the ICLOCS phase:

  • initial guesses: good initial guesses for the unknown parameters and state trajectories are key to improving convergence. When used in the hybrid approach, the solution obtained by AMIGO2_DO was used as the initial guess for ICLOCS.

  • transcription method: trapezoidal

  • derivative generation: analytic. Providing analytic expressions for the gradient of the cost function, the Jacobian of the constraints and the Hessian of the Lagrangian was found to be extremely important. Generating this information can be very cumbersome for non-trivial problems, so we have coded an auxiliary function that automates it using symbolic manipulation

  • NLP solver: IPOPT [173]

Multiplicity of solutions

Some optimal control problems can have non-unique solutions, i.e. different control trajectories corresponding to the same cost function value. This often overlooked issue, although mathematically legitimate, might have unforeseen consequences when trying to explain or predict biological behaviour.

In a strict mathematical sense, such an issue is related to the system’s structure. For example, a dynamic system with highly correlated controls that can compensate for each other’s impact on the cost function might admit multiple solutions with different controls yet identical cost function values. In particular, a very well known issue in optimal control theory is the existence of a singular arc. In a singular arc, the controls appear linearly in the system’s Hamiltonian and therefore cannot be uniquely determined.

However, from a computational point of view, a practical multiplicity of solutions can also be defined. When solving a nonlinear optimal control problem one can obtain solutions with different control trajectories that correspond to very similar or even, within numerical tolerances, identical cost function values. Such a finding implies that the cost function is practically insensitive to at least one of the controls over at least a certain interval of the time horizon.

In this work we investigate the non-uniqueness of solutions from a practical and numerical point of view considering the ensemble of solutions found in the close vicinity of the global solution (typically using a threshold of \(0.5\%\)). We also illustrate how to distill valuable information about the role of constraints using the Lagrange multipliers.
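The ensemble analysis can be sketched as a simple filtering step. The costs and control profiles below are made-up placeholders standing in for the outcomes of repeated optimization runs:

```python
import numpy as np

# Practical multiplicity analysis: from many optimization runs, keep the
# ensemble of solutions whose cost lies within 0.5% of the best one, then
# inspect the spread of the corresponding control profiles over time.
# Costs and controls here are synthetic placeholders, not real results.

rng = np.random.default_rng(1)
costs = 10.0 + rng.exponential(0.5, size=100)      # hypothetical run costs
controls = rng.uniform(0.0, 1.0, size=(100, 50))   # hypothetical u(t) grids

best = costs.min()
near = costs <= best * 1.005                       # 0.5% threshold
spread = controls[near].std(axis=0)                # per-time-point spread

print(near.sum(), spread.max())
```

A large per-time-point spread within the near-optimal ensemble flags the intervals where the cost function is practically insensitive to the control.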

Results

Here we illustrate and evaluate our new two-phase approach, AMIGO2_DO+ICLOCS, by solving three challenging case studies:

  • LPN3B: a three step linear pathway with transition time and accumulation of intermediates as objectives

  • SC: the central carbon metabolism of Saccharomyces cerevisiae during diauxic shift, which was originally formulated as a single-objective problem in [74], and subsequently considered as a multi-objective problem by de Hijas-Liste et al. [88]

  • BSUB: the central carbon metabolism in Bacillus subtilis during a nutrient shift, extended and adapted from a single-objective formulation described in Töpfer [174].

For each case study, we present and discuss the results obtained with our combination strategy, AMIGO2_DO+ICLOCS. In order to illustrate its advantages, we also provide a comparison with the separate solvers, AMIGO2_DO and msICLOCS (that is, a multistart of ICLOCS, which was only evaluated in this way because single runs would likely result in local solutions). Software source code with the implementations of the above case studies for these solvers is available at https://doi.org/10.5281/zenodo.3793620.

Three-step linear pathway (LPN3B)

This case study is a relatively simple yet non-trivial problem presented in [88], extending the formulation of Bartl et al. [83]. The problem represents a simple linear pathway of enzymatic reactions, described by mass action kinetics, in which a substrate \(S_1\) is converted to a final product \(S_4\) in three steps (via two intermediate metabolites \(S_{2-3}\)). Here we consider a multiobjective formulation with two objectives: minimization of the transition time and minimization of the accumulation of intermediates. The transition time is taken as the time necessary for a given amount of product to be reached. Note that the substrate is considered to be in abundance, so its concentration remains constant. The network representation is given in Fig. 3.

Fig. 3
figure3

Network representation for the LPN3B case study

The mathematical formulation of the multiobjective optimal control problem is:

$$\begin{aligned} \min _{\mathbf {e}(t), t_f}{\mathbf {J}[\mathbf {S},\mathbf {e}]} \end{aligned}$$
(20)

Where:

$$\begin{aligned} \mathbf {J}[\mathbf {S},\mathbf {e}]=\begin{bmatrix} t_f , \int _{t_0}^{t_f}(S_2+S_3) dt \end{bmatrix}^T \end{aligned}$$
(21)

Subject to the system dynamics:

$$\begin{aligned} \frac{d{\mathbf {S}}}{dt}=\mathbf {N}\varvec{\upsilon } \end{aligned}$$
(22)

Where the states’ vector is:

$$\begin{aligned} \mathbf {S}=[S_1, S_2, S_3, S_4] \end{aligned}$$
(23)

While the stoichiometric matrix N is:

$$\begin{aligned} \mathbf {N} = \left[ \begin{array}{ccc} 0 &{} 0 &{} 0\\ 1 &{} -1 &{} 0\\ 0 &{} 1 &{} -1\\ 0 &{} 0 &{} 1\\ \end{array} \right] \end{aligned}$$

The kinetics are described by:

$$\begin{aligned} {\upsilon }_{\mathbf {i}}=k_{i}\cdot S_{i}\cdot e_{i} \end{aligned}$$
(24)

With the following end-point constraint:

$$\begin{aligned} S_{4}(t_{f})=P(t_{f}) \end{aligned}$$
(25)

and path constraint:

$$\begin{aligned} \displaystyle \sum _{i=1}^3 e_{i}\le E_{T} \end{aligned}$$
(26)

with: \(E_{T}=1\) M, \(S_{1}(t_{0})=1\) M, \(S_{i}(t_{0})=0\) for \(i=2,3,4\), \(k_{1-3}=1\) and \(P(t_{f})=0.9\) M. The total enzyme concentration is bounded by the path constraint (26), representing the assumption that cells can only allocate a limited amount of enzyme to a given pathway, which is also supported by the concentration limitations arising from molecular crowding [74, 83]. The end-point equality constraint (25) sets the product target value that defines the transition time. The path constraint (26), representing the biological limitation on total enzyme concentration, makes this problem quite difficult to solve; in fact, several existing optimal control software packages tried in [88] failed to converge, or converged to local solutions.
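As a sanity check of the problem formulation, the LPN3B dynamics can be simulated forward under one feasible (but non-optimal) control, e.g. a constant equal enzyme allocation that saturates the path constraint (26). The sketch below (our own baseline, not the optimal policy) evaluates both objectives for that control:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Forward simulation of LPN3B with the naive constant allocation
# e = (1/3, 1/3, 1/3), which satisfies sum(e) = E_T = 1. The kinetics are
# v_i = k_i * S_i * e_i with k_i = 1, and S1 is held constant (substrate
# in abundance). We record the transition time to S4 = 0.9 and the
# accumulated intermediates, i.e. the two objectives of Eq. (21).

e = np.full(3, 1.0 / 3.0)           # constant enzyme levels
S1 = 1.0                             # abundant substrate, constant

def rhs(t, y):
    S2, S3, S4, acc = y
    v1, v2, v3 = S1 * e[0], S2 * e[1], S3 * e[2]
    return [v1 - v2, v2 - v3, v3, S2 + S3]    # acc integrates S2 + S3

def hit(t, y):                       # end-point condition S4(t_f) = 0.9
    return y[2] - 0.9
hit.terminal, hit.direction = True, 1

sol = solve_ivp(rhs, (0.0, 50.0), [0.0, 0.0, 0.0, 0.0],
                events=hit, rtol=1e-8, atol=1e-10)
t_f = sol.t_events[0][0]             # transition time objective
J2 = sol.y[3, -1]                    # accumulated intermediates objective
print(t_f, J2)
```

Any optimal policy must improve on this baseline along at least one objective; the wave-like enzyme activation reported below does so by switching the allocation over time.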

Here we computed the Pareto front of this multi-objective optimal control problem with the three different numerical strategies described above. The results are summarized in the Pareto front presented in Fig. 4. The optimal controls corresponding to the extreme points (A and C) and the knee point (B) of the Pareto front in Fig. 4 are shown in Fig. 5. This comparison allows the visualization of the impact of the different trade-offs between the criteria on the optimal control policies. The path constraint (26) was found to be active throughout the whole time-horizon for all the solutions in the Pareto set.

Fig. 4
figure4

Case study LPN3B: Pareto front computed with three different approaches

Fig. 5
figure5

Case study LPN3B: Optimal controls for different points (A, B and C) on the Pareto front. The solutions presented here were computed with a multistart of ICLOCS (msICLOCS)

Fig. 6
figure6

Network representation of the SC case study

Fig. 7
figure7

Case study SC: Pareto front computed with the three different approaches. Points A and C correspond to the extreme points of the three approaches, while B and K are in the vicinity of the knee point

Fig. 8
figure8

Case study SC: comparison of the solution histograms corresponding to the strategies AMIGO2_DO+ICLOCS (on the left) and msICLOCS (on the right), for point K on the Pareto front shown in Fig. 7

Fig. 9
figure9

Case study SC: an illustration of the impact of the ATP critical value path constraint on the Pareto front previously shown in Fig. 7 is presented in subfigure I. Subfigure II shows the impact of this path constraint on the optimal state trajectories corresponding to the extreme point of maximum \(t_f\)

Fig. 10
figure10

Network representation of case study BSUB

Fig. 11
figure11

Case study BSUB: subfigure I shows the Pareto front obtained with the three different strategies. Subfigure II compares the corresponding optimal state trajectories for point B of the Pareto (subfigure I)

These results are in close agreement with the best Pareto set obtained in [88]. However, it should be noted that, while several of the solvers used in [88] resulted in bad local solutions, here the three strategies considered (AMIGO2_DO+ICLOCS, AMIGO2_DO and msICLOCS) converged to essentially the same Pareto set of near-globally optimal solutions.

We further support this claim by providing additional detailed comparisons of the optimal controls and optimal state trajectories computed by the three approaches for several points in the Pareto front (see Additional file 1: LPN3B). It is worth mentioning that the solution of the problem is very sensitive to the switching times in the optimal controls. The different strategies considered managed to find the optimal switching times either by using a control discretization with elements of varying size (AMIGO2_DO) or by using a very refined control mesh (msICLOCS).

In Additional file 1: LPN3B, we also provide a detailed computational comparison of the three strategies. From these results, we conclude that the fine-tuned msICLOCS was the best numerical strategy for this case study, providing very good results at very modest computational cost (around 20 s on a standard PC). Fine-tuning msICLOCS was critical, providing great benefits with respect to the default settings: it accelerated convergence by at least one order of magnitude and eliminated convergence to local solutions. The AMIGO2_DO solver and the combined strategy AMIGO2_DO+ICLOCS also managed to provide very good final solutions, but at a larger computational cost. The convergence curves of the three approaches are also given in the additional file, showing the superiority of msICLOCS for this problem.

Finally, we checked the possible existence of near-optimal solutions with significantly different control profiles. This was done by performing an analysis of solution multiplicity and sensitivity of the cost function with respect to control profiles from 100 runs of AMIGO2_DO+ICLOCS. Details are given in Additional file 1: LPN3B, showing how the optimal controls for those near-optimal solutions differ significantly from the global one, losing the clear sequential wave-like behavior of the enzyme activation. These results indicate that, in order to obtain a clear-cut characterization of the optimal enzyme activation, one needs to enforce convergence to the close vicinity of the global solution for this type of problem.

Central metabolism of Saccharomyces cerevisiae during diauxic shift (SC)

This problem, formulated in [88], considers a simplified network of the central carbon metabolism of Saccharomyces cerevisiae, accounting for the pathways of upper and lower glycolysis, the TCA cycle and the respiratory chain, as depicted in Fig. 6. It is modeled as a mass-action kinetic metabolic model representing glucose, triose phosphates, pyruvate and ethanol by the states \(X_{1-4}\).

The scenario examined is a diauxic shift under glucose depletion, where the main metabolic route changes dynamically through enzyme activation from glycolysis to aerobic utilization of ethanol deposits. This strategy of the yeast cell can be interpreted as extending its survival time as much as possible using ethanol as an alternative substrate. The survival of the cell is modeled by taking into account critical lower bound values for the NADH and ATP concentrations, implemented as the path constraints, Eqs. (31)–(32), in the optimization formulation.

If we formulate an optimal control problem for this scenario with a single objective, the maximization of the survival time, as done in [88], the corresponding optimal control involves an unrealistically large amount of enzymes. Therefore, it makes more sense to formulate a multicriteria optimal control problem with two objectives: maximization of the survival time and minimization of the protein investment (enzyme synthesis effort). In this way, we obtain a good range of cost/benefit trade-offs which can subsequently be used to explain the observed biological behavior.

The mathematical formulation is given as follows:

$$\begin{aligned} \max _{\mathbf {e}(t), t_f}{\mathbf {J}}[\mathbf {S},\mathbf {e}] \end{aligned}$$
(27)

Where:

$$\begin{aligned} \mathbf {J}[\mathbf {S},\mathbf {e}]=\begin{bmatrix} t_f, -\int _{t_0}^{t_f}\sum _{i=1}^6 e_i dt \end{bmatrix}^T \end{aligned}$$
(28)

Subject to:

$$\begin{aligned} \frac{\text {d}\mathbf {S}}{\text {d}t}&= {} \mathbf {N} \mathbf {u}, \text { with } \mathbf {S}(t_0)=\mathbf {S}_{\mathbf {0}} \end{aligned}$$
(29)
$$\begin{aligned} \sum _{i=1}^6 e_i\le & {} E_T \end{aligned}$$
(30)
$$\begin{aligned} NADH\ge & {} NADH_c \end{aligned}$$
(31)
$$\begin{aligned} ATP\ge & {} ATP_c \end{aligned}$$
(32)

Where the states’ vector is:

$$\begin{aligned} \mathbf {S}=[X_1, X_2, X_3, X_4, NADH, ATP, NAD, ADP] \end{aligned}$$
(33)

The stoichiometric matrix and the reaction kinetics are given by (34) and (35), respectively:

$$\begin{aligned} \mathbf {N}&=\begin{bmatrix} -1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0\\ 2 &{} -1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0\\ 0&{} 1&{} -1&{} 1&{} -1&{} 0&{} 0&{} 0\\ 0&{}0&{}1&{}-1&{}0&{}0&{}0&{}0\\ 0&{}1&{}-1&{}1&{}4&{}-1&{}0&{}-1\\ -2&{}2&{}0&{}0&{}0&{}3&{}-1&{}0\\ 0&{}-1&{}1&{}-1&{}-4&{}1&{}0&{}1\\ 2&{}-2&{}0&{}0&{}0&{}-3&{}1&{}0 \end{bmatrix} \end{aligned}$$
(34)
$$\begin{aligned} u_1&=e_1 \cdot X_1 \cdot ATP\nonumber \\ u_2&=e_2 \cdot X_2 \cdot NAD \cdot ADP\nonumber \\ u_3&=e_3 \cdot X_3 \cdot NADH\nonumber \\ u_4&=e_4 \cdot X_4 \cdot NAD\nonumber \\ u_5&=e_5 \cdot X_3 \cdot NAD\nonumber \\ u_6&=e_6 \cdot NADH \cdot ADP\nonumber \\ u_7&=3 \cdot ATP\nonumber \\ u_8&=0.1 \cdot NADH \end{aligned}$$
(35)

With the following initial values:

$$\begin{aligned} \mathbf {S}_{\mathbf {0}}=\begin{bmatrix} 1&1&1&10&0.7&0.8&0.3&0.2\end{bmatrix} \end{aligned}$$
(36)

All concentrations are in arbitrary units and time is given in hours. A path constraint on the total amount of enzymes at any time is implemented in Eq. (30), where \(E_T\) is fixed at 11.5. Equations (31) and (32) are path constraints representing the critical values of NADH and ATP for cell survival.
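Before optimizing, the SC model can be checked by forward simulation. The sketch below (a sanity check with arbitrary constant enzyme levels, not an optimal policy) integrates Eq. (29) over a short horizon and verifies the moiety conservation relations NADH + NAD and ATP + ADP implied by the stoichiometric matrix (34):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Forward simulation of the SC model (Eqs. 29, 33-36) with constant enzyme
# levels e_i = 1 (sum = 6 <= E_T = 11.5, so path constraint (30) holds).
# Rows 5/7 and 6/8 of N are sign-opposite, so NADH + NAD and ATP + ADP are
# conserved moieties; a Runge-Kutta integrator preserves such linear
# invariants, so both pools should stay at their initial value of 1.0.

N = np.array([
    [-1,  0,  0,  0,  0,  0,  0,  0],
    [ 2, -1,  0,  0,  0,  0,  0,  0],
    [ 0,  1, -1,  1, -1,  0,  0,  0],
    [ 0,  0,  1, -1,  0,  0,  0,  0],
    [ 0,  1, -1,  1,  4, -1,  0, -1],
    [-2,  2,  0,  0,  0,  3, -1,  0],
    [ 0, -1,  1, -1, -4,  1,  0,  1],
    [ 2, -2,  0,  0,  0, -3,  1,  0],
], dtype=float)

e = np.ones(6)                      # arbitrary constant enzyme levels

def rhs(t, S):
    X1, X2, X3, X4, NADH, ATP, NAD, ADP = S
    u = np.array([e[0] * X1 * ATP, e[1] * X2 * NAD * ADP, e[2] * X3 * NADH,
                  e[3] * X4 * NAD, e[4] * X3 * NAD, e[5] * NADH * ADP,
                  3.0 * ATP, 0.1 * NADH])      # kinetics of Eq. (35)
    return N @ u

S0 = np.array([1, 1, 1, 10, 0.7, 0.8, 0.3, 0.2], dtype=float)
sol = solve_ivp(rhs, (0.0, 0.1), S0, rtol=1e-8, atol=1e-10)

nadh_pool = sol.y[4] + sol.y[6]     # NADH + NAD
atp_pool = sol.y[5] + sol.y[7]      # ATP + ADP
print(nadh_pool[-1], atp_pool[-1])  # both remain at 1.0
```

Note that for this arbitrary control the NADH/NAD pool quickly drifts toward its bounds, which is why a short horizon is used here; the optimal controls keep the trajectories within the survival constraints (31)–(32).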

We solved the above multicriteria optimal control problem with the three approaches, obtaining essentially the same Pareto front with all of them, as depicted in Fig. 7. More detailed results are given in Additional file 2: SC. These results are also in close agreement with those reported in [88], not only in terms of the Pareto front but also in terms of a qualitative comparison with available transcriptomics data (see Additional file 2: SC), illustrating how optimal control can explain the observed biological behavior.

In contrast to the previous case study, we found that AMIGO2_DO+ICLOCS was the best numerical strategy for this problem, arriving at the best solution in only 5 min of computation on a standard PC, and avoiding convergence to local solutions. Interestingly, msICLOCS converged to local solutions or crashed rather frequently, even after fine-tuning. In Fig. 8 we present a comparison of the frequency histograms of the solutions reached by these two strategies for one of the points on the Pareto front. Further computational details are given in Additional file 2: SC.

We also performed an analysis of solution multiplicity and sensitivity of the cost function with respect to the controls by analyzing 100 runs of the hybrid approach. Considering the solutions within \(0.5\%\) of the best one, most of the controls (time-dependent enzyme concentrations) present a considerable spread (details can be found in Additional file 2: SC). In the case of the states, the spread is also significant, although in most cases the qualitative behavior is retained. In any case, these results indicate the need for rather tight convergence criteria to ensure a consistent quantitative computation of the optimal control and state dynamics involved.

This case study includes two path constraints representing the critical values \(ATP_c\) and \(NADH_c\), which are needed to ensure the survival of the cell. We performed a numerical analysis to evaluate the impact of these path constraints on the optimal solution. In Fig. 9 we present a brief overview of the results corresponding to the analysis of the critical ATP value. Subfigure I shows the different Pareto fronts obtained for several values of this constraint, illustrating its very significant impact. In subfigure II, the corresponding state dynamics can be found. It can be seen that although the survival times changed considerably, most of the state dynamics retained the same qualitative shape, including the initial ethanol production phase. Interestingly, although the ATP path constraint is almost always active at the critical value, confirming its high sensitivity, this is not always the case for the NADH dynamics. This was further studied by performing a similar analysis for the \(NADH_c\) values, obtaining results which confirmed its less significant impact on the optimal solutions (details in Additional file 2: SC).

Next we performed an analysis of the adjoint variables (in practice, the Lagrange multipliers) to gain further insight into the local sensitivity of the performance index with respect to the constraints. That is, the adjoint variables for specific constraints or bounds indicate the local sensitivity of the solution with respect to their incremental change. Detailed results for the Lagrange multipliers are presented in Additional file 2: SC, confirming large values for \(ATP_c\), and relatively important ones for \(NADH_c\), in agreement with the results shown in Fig. 9 and discussed above. It should be noted that this concept is also interesting for evaluating the role of uncertainty and/or variability in the constraints (both prominent in biological modelling). For example, are the dynamics predicted by optimal control robust with respect to variability in critical constraints? Or, which are the most important constraints in terms of impact on the optimal cost?

Central carbon metabolism in Bacillus subtilis during a change of substrates in the environment (BSUB)

This case study is based on a simplified kinetic model of the central carbon metabolism of B. subtilis taken from [174]. The model considers important pathways such as upper and lower glycolysis, the TCA cycle, gluconeogenesis, overflow metabolism and biomass production. The network representation is given in Fig. 10.

Here, the objectives considered are the maximization of the overall ATP production and the minimization of the protein investment (enzyme synthesis). We aim to find the cost/benefit trade-offs, where the benefit is computed as the integral of ATP levels along the time horizon, and the cost is given by the integral of the total enzyme concentrations for the whole time horizon.

The corresponding multicriteria optimal control formulation is as follows:

$$\begin{aligned} \min _{\mathbf {a}(t)}{\mathbf {J[\mathbf {X},\mathbf {a}]}} \end{aligned}$$
(37)

Where:

$$\begin{aligned} \mathbf {J}[\mathbf {X},\mathbf {a}]=\begin{bmatrix}-\int _{t_0}^{t_f}X_6 dt, \int _{t_0}^{t_f}\sum _{i=8}^{20} X_i dt \end{bmatrix}^T \end{aligned}$$
(38)

Subject to:

$$\begin{aligned} \frac{\text {d}\mathbf {X}}{\text {d}t}&=\mathbf {f}[\mathbf {X}(t),\mathbf {a}(t),t], \mathbf {X}(t_0)=\mathbf {X}_{\mathbf {0}} \end{aligned}$$
(39)
$$\begin{aligned} \sum _{i=8}^{20} X_i&\le E_T \end{aligned}$$
(40)
$$\begin{aligned} 0.0025&\le \mathbf {a}(t)\le 0.125 \end{aligned}$$
(41)

Where the right hand side of the differential equations corresponds to:

$$\begin{aligned} f_{1}&= X_{21} X_{8} X_{2} X_{6} - X_{9} X_{1} - X_{10} X_{1} X_{7} + X_{11} X_{2}\nonumber \\ f_{2}&= - X_{21} X_{8} X_{2} X_{6} + 2 X_{10} X_{1} X_{7} - 2 X_{11} X_{2} - X_{12} X_{2} X_{7} + X_{19} X_{6} X_{5}\nonumber \\ f_{3}&= X_{21} X_{8} X_{2} X_{6} + X_{12} X_{2} X_{7} - X_{13} X_{3} - X_{14} X_{3} - X_{15} X_{3} X_{5} - X_{18} X_{3}\nonumber \\ f_{4}&= X_{15} X_{3} X_{5} - X_{16} X_{4} - X_{17} X_{4} X_{7}\nonumber \\ f_{5}&= 3 X_{22} X_{20} + X_{17} X_{4} X_{7} - X_{15} X_{3} X_{5} + X_{18} X_{3} - X_{19} X_{6} X_{5}\nonumber \\ f_{6}&= - X_{21} X_{8} X_{2} X_{6} + 2 X_{10} X_{1} X_{7} + X_{12} X_{2} X_{7} + 5 X_{17} X_{4} X_{7} - X_{19} X_{6} X_{5} - 8 X_{6}\nonumber \\ f_{7}&= - ( - X_{21} X_{8} X_{2} X_{6} + 2 X_{10} X_{1} X_{7} + X_{12} X_{2} X_{7} + 5 X_{17} X_{4} X_{7} - X_{19} X_{6} X_{5} - 8 X_{6})\nonumber \\ f_{8}&= a_{1} - \beta X_{8}\nonumber \\ f_{9}&= a_{2} - \beta X_{9}\nonumber \\ f_{10}&= a_{3} - \beta X_{10}\nonumber \\ f_{11}&= a_{4} - \beta X_{11}\nonumber \\ f_{12}&= a_{5} - \beta X_{12}\nonumber \\ f_{13}&= a_{6} - \beta X_{13}\nonumber \\ f_{14}&= a_{7} - \beta X_{14}\nonumber \\ f_{15}&= a_{8} - \beta X_{15}\nonumber \\ f_{16}&= a_{9} - \beta X_{16}\nonumber \\ f_{17}&= a_{10} - \beta X_{17}\nonumber \\ f_{18}&= a_{11} - \beta X_{18}\nonumber \\ f_{19}&= a_{12} - \beta X_{19}\nonumber \\ f_{20}&= a_{13} - \beta X_{20}\nonumber \\ f_{21}&= - 0.01 X_{21} X_{8} X_{2} X_{6}\nonumber \\ f_{22}&= - 0.03 X_{22} X_{20} \end{aligned}$$
(42)

where the vector of states corresponds to the following metabolites, enzymes and substrates, as presented in the network representation:

$$\begin{aligned} \mathbf {X}= \begin{bmatrix} FBP&PEP&PYR&CIT&MAL&ATP&ADP&E_{1\ldots 13}&G&M\end{bmatrix} \end{aligned}$$
(43)

The terms \(f_{i}\), \(i=1,\ldots,22\), are the right-hand sides built from the kinetics of the different reactions. In this simplified model of the central carbon metabolism, several consecutive reactions have been merged for simplicity. This is the case of reaction 1, from GluEx (external glucose) to FBP (fructose-bis-phosphate), where the glucose uptake has been merged with the consecutive conversions \(\hbox {Glu} \rightarrow \hbox {G6P} \rightarrow \hbox {F6P} \rightarrow \hbox {FBP}\). The import of glucose is coupled to the conversion of phospho-enol-pyruvate (PEP) to pyruvate (Pyr), while the subsequent conversions consume ATP. Reactions 3, 4 and 5 represent glycolysis, yielding 4 ATP per FBP converted to 2 Pyr. Reactions 2, 6 and 7 correspond to gluconeogenesis, excretion of pyruvate through fermentation, and biomass production, respectively. Reactions 8 and 10 describe the respiratory pathway of the TCA cycle, yielding 5 ATP per Pyr. Reaction 9 describes glutamate (Glu) production for use in amino acid synthesis. Malate uptake from the environment is described by reaction 13. Malate can also be produced from Pyr (reaction 11) and then converted to PEP (reaction 12). Enzyme dynamics are taken into account using a simple linear dynamic expression (\(\mathbf {f}_{8-20}\) in (42)) with a rate constant of degradation (\(\beta =0.25\)) and production rates (\(\mathbf {a}\)) as the decision variables (controls) of the optimal control formulation. The total enzyme capacity constraint is implemented in Eq. (40) with a bound of 6.5 (arbitrary units).
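To make the structure of Eq. (42) concrete, its right-hand side can be transcribed directly into code. The sketch below (Python/NumPy; an illustrative transcription for inspection, not the implementation used in the study) groups the states as in Eq. (43):

```python
import numpy as np

def rhs(t, X, a, beta=0.25):
    """Right-hand side of Eq. (42). State ordering follows Eq. (43):
    X[0:7]  -> FBP, PEP, PYR, CIT, MAL, ATP, ADP  (X_1..X_7)
    X[7:20] -> enzyme activities E_1..E_13         (X_8..X_20)
    X[20:]  -> substrates G (glucose), M (malate)  (X_21, X_22)
    a holds the 13 enzyme production rates (the controls)."""
    X1, X2, X3, X4, X5, X6, X7 = X[:7]
    E = X[7:20]
    X21, X22 = X[20], X[21]
    f = np.empty(22)
    f[0] = X21*E[0]*X2*X6 - E[1]*X1 - E[2]*X1*X7 + E[3]*X2
    f[1] = -X21*E[0]*X2*X6 + 2*E[2]*X1*X7 - 2*E[3]*X2 - E[4]*X2*X7 + E[11]*X6*X5
    f[2] = X21*E[0]*X2*X6 + E[4]*X2*X7 - E[5]*X3 - E[6]*X3 - E[7]*X3*X5 - E[10]*X3
    f[3] = E[7]*X3*X5 - E[8]*X4 - E[9]*X4*X7
    f[4] = 3*X22*E[12] + E[9]*X4*X7 - E[7]*X3*X5 + E[10]*X3 - E[11]*X6*X5
    f[5] = -X21*E[0]*X2*X6 + 2*E[2]*X1*X7 + E[4]*X2*X7 + 5*E[9]*X4*X7 - E[11]*X6*X5 - 8*X6
    f[6] = -f[5]                      # ADP balance mirrors ATP (f_7 = -f_6)
    f[7:20] = a - beta * E            # linear enzyme dynamics (f_8..f_20)
    f[20] = -0.01 * X21*E[0]*X2*X6    # glucose depletion
    f[21] = -0.03 * X22*E[12]         # malate depletion
    return f
```

Such a function can then be passed to a standard ODE integrator once a control profile \(\mathbf {a}(t)\) has been fixed or discretized.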

In this case study we consider two dynamic scenarios. In the first scenario (named G-M), a culture grows in an environment where glucose is the only substrate; at a certain time-point we add malate and observe the dynamic behavior. The second scenario (M-G) is the reverse: the culture initially grows on malate, glucose is then added instantaneously, and the dynamics are observed. The first phase of both scenarios is computed starting from the same initial values and reaching a steady state for the consumption of the available substrate.

The initial conditions for scenario G-M are:

$$\begin{aligned} \mathbf {X}_{\mathbf {0}}= [1, 0.02, 1, 2, 2, 1, 1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 12, 0] \end{aligned}$$
(44)

The initial conditions for scenario M-G are:

$$\begin{aligned} \mathbf {X}_{\mathbf {0}}= [1, 0.02, 1, 2, 2, 1, 1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0, 12] \end{aligned}$$
(45)
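In the first phase of each scenario, with constant production rates, the enzyme balances \(\mathbf {f}_{8-20}\) in (42) are linear, so each enzyme relaxes exponentially to the steady state \(a_i/\beta\). A minimal sanity check of this closed form and of the capacity bound (40) can be sketched as follows (the production rates below are arbitrary values within the bounds (41), not the optimal controls):

```python
import numpy as np

beta = 0.25                  # enzyme degradation rate constant
E_T = 6.5                    # total enzyme capacity, Eq. (40)
a = np.full(13, 0.05)        # constant production rates, within bounds (41)
E0 = np.full(13, 0.1)        # initial enzyme levels, Eqs. (44)-(45)

def enzymes(t):
    # closed-form solution of dE_i/dt = a_i - beta * E_i
    return a / beta + (E0 - a / beta) * np.exp(-beta * t)

# at steady state the capacity constraint (40) reduces to sum(a)/beta <= E_T
steady_state = a / beta
```

This also shows why the control bounds (41) and the capacity bound (40) interact: sustained production rates summing to more than \(\beta E_T\) would eventually violate the constraint.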

Considering both scenarios, we solved the corresponding multicriteria optimal control problems using three strategies: the hybrid AMIGO2_DO+ICLOCS and the standalone solvers AMIGO2_DO and msICLOCS. All three produced almost identical Pareto fronts for each scenario. Figure 11(I) shows the Pareto front corresponding to scenario G-M. The corresponding optimal state trajectories for point B of the Pareto front are presented in Fig. 11(II), illustrating how the three strategies converged to essentially the same near-global optimal solution. Similar results were found for scenario M-G. Detailed results can be found in Additional file 3: Bsub.
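The mechanics of tracing such a Pareto front can be illustrated independently of the solvers used here. The sketch below (illustrative only; the toy objectives are ours and stand in for the pair in Eq. (38)) applies a weighted-sum scalarization with SciPy, solving one single-objective problem per weight, which is conceptually how the points of a front like the one in Fig. 11(I) are generated; for non-convex fronts an epsilon-constraint formulation is preferable, since the weighted sum can miss non-convex regions:

```python
import numpy as np
from scipy.optimize import minimize

# Two competing convex objectives standing in for the pair in Eq. (38).
def J1(x):
    return (x[0] - 1.0) ** 2 + x[1] ** 2

def J2(x):
    return (x[0] + 1.0) ** 2 + x[1] ** 2

def pareto_front(n_points=11):
    """Weighted-sum scalarization: each weight w yields one Pareto point."""
    front = []
    for w in np.linspace(0.0, 1.0, n_points):
        res = minimize(lambda x: w * J1(x) + (1.0 - w) * J2(x), x0=np.zeros(2))
        front.append((J1(res.x), J2(res.x)))
    return front
```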

Even though the three methods arrived at essentially the same multicriteria optimal control Pareto sets, msICLOCS was the fastest, converging to very good results in modest computation times (typically less than 150 s per ICLOCS run on a standard PC when fine-tuning was used). As in the previous case studies, fine-tuning ICLOCS was found to be critical to improve its convergence speed and to avoid convergence to local solutions. In Additional file 3: Bsub we provide detailed results, including the effect of fine-tuning on the performance and robustness of msICLOCS, and comparisons with the other strategies.
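The multi-start idea behind msICLOCS, i.e. launching many local solves from randomized initial guesses and keeping the best solution, can be sketched generically (illustrative only, on a standard multimodal test function rather than the actual optimal control problem):

```python
import numpy as np
from scipy.optimize import minimize

def cost(x):
    # Rastrigin test function: many local minima, global minimum 0 at x = 0
    return float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def multistart(n_starts=50, dim=2, seed=0):
    """Run a local solver from many random initial guesses, keep the best."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(-5.12, 5.12, dim)   # random initial guess
        res = minimize(cost, x0)             # gradient-based local solve
        if best is None or res.fun < best.fun:
            best = res
    return best
```

As the SC results above suggest, such a scheme is only as reliable as the probability that at least one start falls in the basin of a good solution, which motivates the hybrid global-then-local strategy.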

Finally, we analyzed possible solution multiplicity and the sensitivity of the cost function with respect to the controls. We found no evidence of solution multiplicity, as shown in the respective figures in Additional file 3: Bsub.

Discussion

The above results show that the novel combined strategy AMIGO2_DO+ICLOCS was the best in terms of robustness, solving these three challenging case studies without any issues while requiring a very reasonable computational cost for all of them. Although the multi-start strategy msICLOCS was faster in two of the problems, it often failed when attempting to solve the SC case study.

The performance of the AMIGO2_DO+ICLOCS strategy also compares very favorably with the previous results of the different optimal control methods considered by de Hijas-Liste et al. [88] for the LNP3B and SC cases. Our new strategy allows a significant reduction of computation times (more than \(50\%\) for the SC case). Further, AMIGO2_DO+ICLOCS has also shown good scalability, solving without issues problems such as BSUB, which has many controls requiring fine discretizations and which could not be handled by the previous methods considered in [88]. Finally, our approach provides adjoint information that is particularly useful for assessing the role of the different constraints in the optimal solutions.

Conclusions

Here we have considered the exploitation of optimality principles to explain dynamics in biochemical networks in the absence of complete prior knowledge about the regulatory mechanisms. After carefully reviewing the previous literature, we have presented a multicriteria optimal control framework to generate biologically meaningful predictions for rather complex metabolic pathways. This strategy requires, as a starting point, a calibrated dynamic model of the pathway and the set of cost functions to be considered in the multicriteria formulation. The approach then uses a Pareto optimality hypothesis to compute the dynamic effects of regulation in a metabolic pathway, without making any assumptions about the pathway's regulation.

We have also discussed the many possible challenges and pitfalls involved in the solution of these nonlinear multicriteria optimal control problems. To surmount these difficulties, we have considered several numerical strategies with the aim of finding a method capable of handling realistic networks of arbitrary topology. We have carefully evaluated their performance with several challenging case studies considering the central carbon metabolism of S. cerevisiae and B. subtilis. Our results indicate that the two-phase strategy AMIGO2_DO+ICLOCS presented excellent scalability and a good compromise between efficiency and robustness. We also show how this framework allows us to explore the interplay and trade-offs between different biological cost functions and constraints in biological systems.

Our vision is that multicriteria optimal control can play a major role in fully exploiting the explanatory and predictive power of kinetic models in computational systems biology. In future work, we will illustrate its usefulness for the analysis of cell signaling pathways, and for the solution of metabolic engineering problems considering kinetic models. Another interesting area for future research is that of optimal control using Boolean networks, a topic attracting increasing attention [175,176,177]. This class of problems is especially interesting for the design and analysis of intervention strategies in complex genetic or signalling networks.

Availability of data and materials

Scripts corresponding to the case studies presented in this paper and illustrating how the reported solutions have been computed are available at https://doi.org/10.5281/zenodo.3793620. Supplementary material is also provided in Additional files 1-3 corresponding to the three case studies presented here.

Abbreviations

ADP: Adenosine diphosphate
ATP: Adenosine triphosphate
Cit: Citrate
CVP: Control vector parameterization
DAE: Differential-algebraic equation
eSS: Enhanced scatter search
FBP: Fructose-bis-phosphate
Glu: Glutamate
IVP: Initial value problem
MAL: Malate
NAD: Nicotinamide adenine dinucleotide
NADH: Nicotinamide adenine dinucleotide, reduced form
NLP: Nonlinear programming
OCP: Optimal control problem
ODE: Ordinary differential equation
PEP: Phospho-enol-pyruvate
Pyr: Pyruvate
R5P: Ribose-5-phosphate
TCA cycle: Tricarboxylic acid cycle

References

  1. 1.

    Doyle FJ, Stelling J. Systems interface biology. J R Soc Interface. 2006;3(10):603–16.

    PubMed  PubMed Central  Article  Google Scholar 

  2. 2.

    DiStefano J III. Dynamic systems biology modeling and simulation. London: Academic Press; 2015.

    Google Scholar 

  3. 3.

    Wolkenhauer O. Why model? Front Physiol. 2014;5:21.

    PubMed  PubMed Central  Article  Google Scholar 

  4. 4.

    Wolkenhauer O, Mesarović M. Feedback dynamics and cell function: why systems biology is called systems biology. Mol BioSyst. 2005;1(1):14–6.

    CAS  PubMed  Article  Google Scholar 

  5. 5.

    Aldridge BB, Burke JM, Lauffenburger DA, Sorger PK. Physicochemical modelling of cell signalling pathways. Nat Cell Biol. 2006;8(11):1195.

    CAS  PubMed  Article  Google Scholar 

  6. 6.

    Chen WW, Niepel M, Sorger PK. Classic and contemporary approaches to modeling biochemical reactions. Genes Dev. 2010;24(17):1861–75.

    CAS  PubMed  PubMed Central  Article  Google Scholar 

  7. 7.

    Sherman A. Dynamical systems theory in physiology. J General Physiol. 2011;138(1):13–9.

    CAS  Article  Google Scholar 

  8. 8.

    Crampin EJ, Halstead M, Hunter P, Nielsen P, Noble D, Smith N, et al. Computational physiology and the physiome project. Exp Physiol. 2004;89(1):1–26.

    PubMed  Article  Google Scholar 

  9. 9.

    Almquist J, Cvijovic M, Hatzimanikatis V, Nielsen J, Jirstrand M. Kinetic models in industrial biotechnology-improving cell factory performance. Metab Eng. 2014;24:38–60.

    CAS  PubMed  Article  Google Scholar 

  10. 10.

    Le Novere N. Quantitative and logic modelling of molecular and gene networks. Nat Rev Genet. 2015;16(3):146.

    PubMed  PubMed Central  Article  CAS  Google Scholar 

  11. 11.

    Srinivasan S, Cluett WR, Mahadevan R. Constructing kinetic models of metabolism at genome-scales: A review. Biotechnol J. 2015;10(9):1345–59.

    CAS  PubMed  Article  Google Scholar 

  12. 12.

    Heinemann T, Raue A. Model calibration and uncertainty analysis in signaling networks. Curr Opin Biotechnol. 2016;39:143–9.

    CAS  PubMed  Article  Google Scholar 

  13. 13.

    Saa PA, Nielsen LK. Formulation, construction and analysis of kinetic models of metabolism: A review of modelling frameworks. Biotechnol Adv. 2017;35(8):981–1003.

    CAS  PubMed  Article  Google Scholar 

  14. 14.

    Widmer LA, Stelling J. Bridging intracellular scales by mechanistic computational models. Curr Opin Biotechnol. 2018;52:17–24.

    CAS  PubMed  Article  Google Scholar 

  15. 15.

    Tummler K, Klipp E. The discrepancy between data for and expectations on metabolic models: How to match experiments and computational efforts to arrive at quantitative predictions? Curr Opin Syst Biol. 2018;8:1–6.

    Article  Google Scholar 

  16. 16.

    Fröhlich F, Loos C, Hasenauer J. Scalable Inference of Ordinary Differential Equation Models of Biochemical Processes. In: Methods in molecular biology. vol. 1883. New York, NY: Springer New York; 2019. pp. 385–422.

  17. 17.

    Strutz J, Martin J, Greene J, Broadbelt L, Tyo K. Metabolic kinetic modeling provides insight into complex biological questions, but hurdles remain. Curr Opin Biotechnol. 2019;59:24–30.

    CAS  PubMed  PubMed Central  Article  Google Scholar 

  18. 18.

    Wolkenhauer O, Ullah M, Wellstead P, Cho KH. The dynamic systems approach to control and regulation of intracellular networks. FEBS Lett. 2005;579(8):1846–53.

    CAS  PubMed  Article  Google Scholar 

  19. 19.

    Kremling A, Saez-Rodriguez J. Systems biology: an engineering perspective. J Biotechnol. 2007;129(2):329–51.

    CAS  PubMed  Article  Google Scholar 

  20. 20.

    Wellstead P, Bullinger E, Kalamatianos D, Mason O, Verwoerd M. The role of control and system theory in systems biology. Ann Rev Control. 2008;32(1):33–47.

    Article  Google Scholar 

  21. 21.

    Iglesias PA, Ingalls BP. Control theory and systems biology. New York: MIT Press; 2010.

    Google Scholar 

  22. 22.

    Blanchini F, Hana ES, Giordano G, Sontag ED. Control-theoretic methods for biological networks. In: 2018 IEEE Conference on Decision and Control (CDC). IEEE; 2018. pp. 466–483.

  23. 23.

    Thomas PJ, Olufsen M, Sepulchre R, Iglesias PA, Ijspeert A, Srinivasan M. Control theory in biology and medicine. Biol Cybern. 2019;113(1):1–6.

    CAS  PubMed  Article  Google Scholar 

  24. 24.

    Arcak M, Blanchini F, Vidyasagar M. Editorial to the special issue of L-CSS on control and network theory for biological systems. IEEE Control Syst Lett. 2019;3(2):228–9.

    Article  Google Scholar 

  25. 25.

    Menolascina F, Siciliano V, Di Bernardo D. Engineering and control of biological systems: a new way to tackle complex diseases. FEBS Lett. 2012;586(15):2122–8.

    CAS  PubMed  Article  Google Scholar 

  26. 26.

    He F, Murabito E, Westerhoff HV. Synthetic biology and regulatory networks: where metabolic systems biology meets control engineering. J R Soc Interface. 2016;13(117):20151046.

    PubMed  PubMed Central  Article  CAS  Google Scholar 

  27. 27.

    Prescott TP, Harris AWK, Scott-Brown J, Papachristodoulou A. Designing feedback control in biology for robustness and scalability. In: IET/SynbiCITE Engineering Biology Conference. Institution of Engineering and Technology; 2016. pp. 2–2(1).

  28. 28.

    Del Vecchio D, Dy AJ, Qian Y. Control theory meets synthetic biology. J R Soc Interface. 2016;13(120):20160380.

    PubMed  PubMed Central  Article  CAS  Google Scholar 

  29. 29.

    Hsiao V, Swaminathan A, Murray RM. Control theory for synthetic biology: recent advances in system characterization, control design, and controller implementation for synthetic biology. IEEE Control Syst Mag. 2018;38(3):32–62.

    Article  Google Scholar 

  30. 30.

    Liu ET. Systems biology, integrative biology, predictive biology. Cell. 2005;121(4):505–6.

    CAS  PubMed  Article  Google Scholar 

  31. 31.

    Hood L, Heath JR, Phelps ME, Lin B. Systems biology and new technologies enable predictive and preventative medicine. Science. 2004;306(5696):640–3.

    CAS  PubMed  Article  Google Scholar 

  32. 32.

    Kell DB. Metabolomics and systems biology: making sense of the soup. Curr Opin Microbiol. 2004;7(3):296–307.

    CAS  PubMed  Article  Google Scholar 

  33. 33.

    Sutherland WJ. The best solution. Nature. 2005;435(June):569.

    CAS  PubMed  Article  Google Scholar 

  34. 34.

    Hess W. Das Prinzip des kleinsten Kraftverbrauchs im Dienste hämodynamischer Forschung Archiv Anat. Archiv Anat Physiol. 1914;p. 1–62.

  35. 35.

    Murray CD. The physiological principle of minimum work: I. The vascular system and the cost of blood volume. Proc Nat Acad Sci USA. 1926;12(3):207.

    CAS  PubMed  Article  Google Scholar 

  36. 36.

    Rosen R. Optimality principles in biology. Berlin: Springer; 1967.

    Google Scholar 

  37. 37.

    Rosen R. Optimality in biology and medicine. J Math Anal Appl. 1986;119(1):203–22.

    Article  Google Scholar 

  38. 38.

    Makela A, Givnish TJ, Berninger F, Buckley TN, Farquhar GD, Hari P. Challenges and opportunities of the optimality approach in plant ecology. Silva Fennica. 2002;36(3):605–14.

    Article  Google Scholar 

  39. 39.

    Smith JM. Optimization theory in evolution. Annu Rev Ecol Syst. 1978;9(1):31–56.

    Article  Google Scholar 

  40. 40.

    Parker GA, Smith JM, et al. Optimality theory in evolutionary biology. Nature. 1990;348(6296):27–33.

    Article  Google Scholar 

  41. 41.

    McFarland D. Decision making in animals. Nature. 1977;269(5623):15–21.

    Article  Google Scholar 

  42. 42.

    Pardalos PM, Romeijn HE. Handbook of optimization in medicine, vol. 26. Berlin: Springer; 2009.

    Google Scholar 

  43. 43.

    Mendes P, Kell D. Non-linear optimization of biochemical pathways: applications to metabolic engineering and parameter estimation. Bioinformatics. 1998;14(10):869–83.

    CAS  PubMed  Article  Google Scholar 

  44. 44.

    Torres NV, Voit EO. Pathway analysis and optimization in metabolic engineering. Cambridge: Cambridge University Press; 2002.

    Google Scholar 

  45. 45.

    Banga JR. Optimization in computational systems biology. BMC Syst Biol. 2008;2(1):47.

    PubMed  PubMed Central  Article  CAS  Google Scholar 

  46. 46.

    de Vos MG, Poelwijk FJ, Tans SJ. Optimality in evolution: new insights from synthetic biology. Curr Opin Biotechnol. 2013;24(4):797–802.

    PubMed  Article  CAS  Google Scholar 

  47. 47.

    Savageau MA. Optimal design of feedback control by inhibition. J Mol Evol. 1974;4(2):139–56.

    CAS  PubMed  Article  Google Scholar 

  48. 48.

    Heinrich R, Hermann-Georg H, Stefan S. A theoretical approach to the evolution and structural design of enzymatic networks; linear enzymatic chains, branched pathways and glycolysis of erythrocytes. Bull Math Biol. 1987;49(5):539–95.

    CAS  PubMed  Article  Google Scholar 

  49. 49.

    Schuster S, Heinrich R. Minimization of intermediate concentrations as a suggested optimality principle for biochemical networks. J Math Biol. 1991;29(5):425–42.

    Article  Google Scholar 

  50. 50.

    Heinrich R, Schuster S, Holzhütter HG. Mathematical analysis of enzymic reaction systems using optimization principles. FEBS J. 1991;201(1):1–21.

    CAS  Google Scholar 

  51. 51.

    Heinrich R, Montero F, Klipp E, Waddell TG, Meléndez-Hevia E. Theoretical approaches to the evolutionary optimization of glycolysis. Eur J Biochem. 1997;243(1–2):191–201.

    CAS  PubMed  Google Scholar 

  52. 52.

    Meléndez-Hevia E, Torres NV. Economy of design in metabolic pathways: further remarks on the game of the pentose phosphate cycle. J Theor Biol. 1988;132(1):97–111.

    PubMed  Article  Google Scholar 

  53. 53.

    Varma A, Palsson BO. Metabolic flux balancing: basic concepts, scientific and practical use. Biotechnology. 1994;12(10):994.

    CAS  Article  Google Scholar 

  54. 54.

    Meléndez-Hevia E, Waddell TG, Montero F. Optimization of metabolism: the evolution of metabolic pathways toward simplicity through the game of the pentose phosphate cycle. J Theor Biol. 1994;166(2):201–20.

    Article  Google Scholar 

  55. 55.

    Hatzimanikatis V, Floudas CA, Bailey JE. Analysis and design of metabolic reaction networks via mixed-integer linear optimization. AIChE J. 1996;42(5):1277–92.

    CAS  Article  Google Scholar 

  56. 56.

    Stephanopoulos G, Aristidou AA, Nielsen J. Metabolic engineering: principles and methodologies. London: Elsevier; 1998.

    Google Scholar 

  57. 57.

    Segre D, Vitkup D, Church GM. Analysis of optimality in natural and perturbed metabolic networks. Proc Nat Acad Sci. 2002;99(23):15112–7.

    CAS  PubMed  Article  Google Scholar 

  58. 58.

    Dekel E, Alon U. Optimality and evolutionary tuning of the expression level of a protein. Nature. 2005;436(7050):588.

    CAS  PubMed  Article  Google Scholar 

  59. 59.

    Nielsen J. Principles of optimal metabolic network operation. Mol Syst Biol. 2007;3(1):126.

    PubMed  PubMed Central  Article  Google Scholar 

  60. 60.

    Orth JD, Thiele I, Palsson BØ. What is flux balance analysis? Nat Biotechnol. 2010;28(3):245.

    CAS  PubMed  PubMed Central  Article  Google Scholar 

  61. 61.

    Goryanin I. Computational optimization and biological evolution. London: Portland Press Limited; 2010.

    Google Scholar 

  62. 62.

    Berkhout J, Bruggeman FJ, Teusink B. Optimality principles in the regulation of metabolic networks. Metabolites. 2012;2(3):529–52.

    CAS  PubMed  PubMed Central  Article  Google Scholar 

  63. 63.

    Heirendt L, Arreckx S, Pfau T, Mendoza SN, Richelle A, Heinken A, et al. Creation and analysis of biochemical constraint-based models using the COBRA Toolbox v. 3.0. Nat Protoc. 2019;14(3):639–702.

  64. 64.

    Norsigian CJ, Pusarla N, McConn JL, Yurkovich JT, Dräger A, Palsson BO, et al. BiGG Models 2020: multi-strain genome-scale models and expansion across the phylogenetic tree. Nucleic Acids Res. 2020;48(D1):D402–6.

    CAS  PubMed  Google Scholar 

  65. 65.

    Heinrich R, Schuster S. The regulation of cellular systems. Berlin: Springer; 1996.

    Google Scholar 

  66. 66.

    Klipp E, Liebermeister W, Wierling C, Kowald A, Herwig R. Systems biology: a textbook. London: Wiley-VCH; 2016.

    Google Scholar 

  67. 67.

    Bryson A, Ho Y. Applied optimal control: optimization, estimation and control. Abingdon-on-Thames: Taylor and Francis; 1975.

    Google Scholar 

  68. 68.

    Liberzon D. Calculus of variations and optimal control theory: a concise introduction. Princeton: Princeton University Press; 2012.

    Google Scholar 

  69. 69.

    Swan GW. Optimal control applications in biomedical engineering–a survey. Optimal Control Appl Methods. 1981;2(4):311–34.

    Article  Google Scholar 

  70. 70.

    Lenhart S, Workman JT. Optimal control applied to biological models. Baco Raton: CRC Press; 2007.

    Google Scholar 

  71. 71.

    Itzkovitz S, Blat IC, Jacks T, Clevers H, van Oudenaarden A. Optimality in the development of intestinal crypts. Cell. 2012;148(3):608–19.

    CAS  PubMed  PubMed Central  Article  Google Scholar 

  72. 72.

    Pavlov MY, Ehrenberg M. Optimal control of gene expression for fast proteome adaptation to environmental change. Proc Nat Acad Sci USA. 2013;110(51):20527–32.

    CAS  PubMed  Article  Google Scholar 

  73. 73.

    Petkova MD, Tkačik G, Bialek W, Wieschaus EF, Gregor T. Optimal decoding of cellular identities in a genetic network. Cell. 2019;176(4):844–55.

    CAS  PubMed  PubMed Central  Article  Google Scholar 

  74. 74.

    Klipp E, Heinrich R, Holzhütter HG. Prediction of temporal gene expression. Metabolic optimization by re-distribution of enzyme activities. Eur J Biochem. 2002;269(22):5406–13.

    CAS  PubMed  Article  Google Scholar 

  75. 75.

    DeRisi JL, Iyer VR, Brown PO. Exploring the metabolic and genetic control of gene expression on a genomic scale. Science. 1997;278(5338):680–6.

    CAS  PubMed  Article  Google Scholar 

  76. 76.

    Laub MT, McAdams HH, Feldblyum T, Fraser CM, Shapiro L. Global analysis of the genetic network controlling a bacterial cell cycle. Science. 2000;290(5499):2144–8.

    CAS  PubMed  Article  Google Scholar 

  77. 77.

    Grünenfelder B, Rummel G, Vohradsky J, Röder D, Langen H, Jenal U. Proteomic analysis of the bacterial cell cycle. Proc Nat Acad Sci. 2001;98(8):4681–6.

    PubMed  Article  Google Scholar 

  78. 78.

    Zaslaver A, Mayo AE, Rosenberg R, Bashkin P, Sberro H, Tsalyuk M, et al. Just-in-time transcription program in metabolic pathways. Nat Genet. 2004;36(5):486–91.

    CAS  PubMed  Article  Google Scholar 

  79. 79.

    Ewald J, Bartl M, Kaleta C. Deciphering the regulation of metabolism with dynamic optimization: an overview of recent advances. Biochem Soc Trans. 2017;45(4):1035–43.

    CAS  PubMed  Article  Google Scholar 

  80. 80.

    Kalir S, McClure J, Pabbaraju K, Southward C, Ronen M, Leibler S, et al. Ordering genes in a flagella pathway by analysis of expression kinetics from living bacteria. Science. 2001;292(5524):2080–3.

    CAS  PubMed  Article  Google Scholar 

  81. 81.

    Chechik G, Oh E, Rando O, Weissman J, Regev A, Koller D. Activity motifs reveal principles of timing in transcriptional control of the yeast metabolic network. Nat Biotechnol. 2008;26(11):1251.

    CAS  PubMed  PubMed Central  Article  Google Scholar 

  82. 82.

    Oyarzún DA, Ingalls BP, Middleton RH, Kalamatianos D. Sequential activation of metabolic pathways: a dynamic optimization approach. Bull Math Biol. 2009;71(8):1851–72.

    PubMed  Article  Google Scholar 

  83. 83.

    Bartl M, Li P, Schuster S. Modelling the optimal timing in metabolic pathway activation–Use of Pontryagin’s maximum principle and role of the Golden section. Biosystems. 2010;101(1):67–77.

    CAS  PubMed  Article  Google Scholar 

  84. 84.

    Oyarzún D. Optimal control of metabolic networks with saturable enzyme kinetics. IET Syst Biol. 2011;5(2):110–9.

    Article  Google Scholar 

  85. 85.

    de Hijas-Liste GM, Balsa-Canto E, Banga JR. Prediction of activation of metabolic pathways via dynamic optimization. Comput Chem Eng. 2011;29:1386–90.

    Google Scholar 

  86. 86.

    Wessely F, Bartl M, Guthke R, Li P, Schuster S, Kaleta C. Optimal regulatory strategies for metabolic pathways in Escherichia coli depending on protein costs. Mol Syst Biol. 2011;7(1):515.

    PubMed  PubMed Central  Article  CAS  Google Scholar 

  87. 87.

    Bartl M, Kötzing M, Schuster S, Li P, Kaleta C. Dynamic optimization identifies optimal programmes for pathway regulation in prokaryotes. Nat Commun. 2013;4(1):1–9.

    Article  Google Scholar 

  88. 88.

    de Hijas-Liste GM, Klipp E, Balsa-Canto E, Banga JR. Global dynamic optimization approach to predict activation in metabolic pathways. BMC Syst Biol. 2014;8:1.

    PubMed  PubMed Central  Article  CAS  Google Scholar 

  89. 89.

    Bayon L, Grau J, Ruiz M, Suarez P. Optimal control of a linear unbrached chemical process with N steps: the quasi-aalytical solution. J Math Chem. 2014;52(4):1036–49.

    CAS  Article  Google Scholar 

  90. 90.

    Waldherr S, Oyarzún DA, Bockmayr A. Dynamic optimization of metabolic networks coupled with gene expression. J Theor Biol. 2015;365:469–85.

    PubMed  Article  Google Scholar 

  91. 91.

    Ewald J, Kötzing M, Bartl M, Kaleta C. Footprints of optimal protein assembly strategies in the operonic structure of prokaryotes. Metabolites. 2015;5(2):252–69.

    CAS  PubMed  PubMed Central  Article  Google Scholar 

  92. 92.

    de Hijas-Liste GM, Balsa-Canto E, Ewald J, Bartl M, Li P, Banga JR, et al. Optimal programs of pathway control: dissecting the influence of pathway topology and feedback inhibition on pathway regulation. BMC Bioinf. 2015;16(1):163.

    Article  Google Scholar 

  93. 93.

    Giordano N, Mairet F, Gouzé JL, Geiselmann J, De Jong H. Dynamical allocation of cellular resources as an optimal control problem: novel insights into microbial growth strategies. PLoS Comput Biol. 2016;12(3):e1004802.

    PubMed  PubMed Central  Article  CAS  Google Scholar 

  94. 94.

    Nimmegeers P, Telen D, Logist F, Van Impe J. Dynamic optimization of biological networks under parametric uncertainty. BMC Syst Biol. 2016;10(1):86.

    PubMed  PubMed Central  Article  Google Scholar 

  95. 95.

    Ewald J, Bartl M, Dandekar T, Kaleta C. Optimality principles reveal a complex interplay of intermediate toxicity and kinetic efficiency in the regulation of prokaryotic metabolism. PLoS Comput Biol. 2017;13(2):e1005371.

    PubMed  PubMed Central  Article  CAS  Google Scholar 

  96. 96.

    Yegorov I, Mairet F, De Jong H, Gouzé JL. Optimal control of bacterial growth for the maximization of metabolite production. J Math Biol. 2019;78(4):985–1032.

    PubMed  Article  Google Scholar 

  97. 97.

    Bayón L, Ayuso PF, Otero J, Suárez P, Tasis C. Influence of enzyme production dynamics on the optimal control of a linear unbranched chemical process. J Math Chem. 2019;57(5):1330–43.

    Article  CAS  Google Scholar 

  98. 98.

    Cinquemani E, Mairet F, Yegorov I, de Jong H, Gouzé JL, Optimal control of bacterial growth for metabolite production: The role of timing and costs of control. In, 18th European Control Conference (ECC). IEEE. 2019;2019:2657–62.

  99. 99.

    Ewald J, Sieber P, Garde R, Lang SN, Schuster S, Ibrahim B. Trends in mathematical modeling of host-pathogen interactions. Cell Mol Life Sci. 2020;77:468–80.

    Article  CAS  Google Scholar 

  100. 100.

    Garland T. Trade-offs. Curr Biol. 2014;24(2):R60–1.

    CAS  PubMed  Article  Google Scholar 

  101. 101.

    Frank SA. The trade-off between rate and yield in the design of microbial metabolism. J Evol Biol. 2010;23(3):609–13.

    CAS  PubMed  Article  Google Scholar 

  102. 102.

    Byrne D, Dumitriu A, Segrè D. Comparative multi-goal tradeoffs in systems engineering of microbial metabolism. BMC Syst Biol. 2012;6(1):127.

    CAS  PubMed  PubMed Central  Article  Google Scholar 

  103. 103.

    Molenaar D, Van Berlo R, De Ridder D, Teusink B. Shifts in growth strategies reflect tradeoffs in cellular economics. Mol Syst Biol. 2009;5(1):323.

    PubMed  PubMed Central  Article  Google Scholar 

  104. 104.

    Weisse AY, Oyarzun DA, Danos V, Swain PS. Mechanistic links between cellular trade-offs, gene expression, and growth. Proc Nat Acad Sci USA. 2015;112(9):E1038–47.

    CAS  PubMed  Article  Google Scholar 

  105. 105.

    Reimers AM, Knoop H, Bockmayr A, Steuer R. Cellular trade-offs and optimal resource allocation during cyanobacterial diurnal growth. Proc Nat Acad Sci USA. 2017;114(31):E6457–65.

    CAS  PubMed  Article  Google Scholar 

  106. 106.

    Terradot G, Beica A, Weiße A, Danos V. Survival of the fattest: evolutionary trade-offs in cellular resource storage. Electron Notes Theor Comput Sci. 2018;335:91–112.

    Article  Google Scholar 

  107. 107.

    Szekely P, Sheftel H, Mayo A, Alon U. Evolutionary tradeoffs between economy and effectiveness in biological homeostasis systems. PLoS Comput Biol. 2013;9(8):e1003163.

    CAS  PubMed  PubMed Central  Article  Google Scholar 

  108. 108.

    Adiwijaya BS, Barton PI, Tidor B. Biological network design strategies: discovery through dynamic optimization. Mol BioSyst. 2006;2(12):650–9.

    CAS  PubMed  Article  Google Scholar 

  109. 109.

    Radivojevic A, Chachuat B, Bonvin D, Hatzimanikatis V. Exploration of trade-offs between steady-state and dynamic properties in signaling cycles. Phys Biol. 2012;9(4):045010.

    CAS  PubMed  Article  Google Scholar 

  110. 110.

    Mancini F, Marsili M, Walczak AM. Trade-offs in delayed information transmission in biochemical networks. J Stat Phys. 2016;162(5):1088–129.

    Article  Google Scholar 

  111. 111.

    Tullock G. Biological externalities. J Theor Biol. 1971;33(3):565–76.

    CAS  PubMed  Article  Google Scholar 

  112. 112.

    Heinrich R, Schuster S. The modelling of metabolic systems. Structure, control and optimality. Biosystems. 1998;47(1–2):61–77.

    CAS  PubMed  Article  Google Scholar 

  113. 113.

    Samad HE, Khammash M, Homescu C, Petzold L. Optimal performance of the heat-shock gene regulatory network. IFAC Proc Vol. 2005;38(1):19–24.

    Article  Google Scholar 

  114. 114.

    Vera J, De Atauri P, Cascante M, Torres NV. Multicriteria optimization of biochemical systems by linear programming: application to production of ethanol by Saccharomyces cerevisiae. Biotechnol Bioeng. 2003;83(3):335–43.

    CAS  PubMed  Article  Google Scholar 

  115. 115.

    Sendín OH, Vera J, Torres NV, Banga JR. Model based optimization of biochemical systems using multiple objectives: a comparison of several solution strategies. Math Comput Model Dyn Syst. 2006;12(5):469–87.

    Article  Google Scholar 

  116. 116.

    Vo TD, Greenberg HJ, Palsson BO. Reconstruction and functional characterization of the human mitochondrial metabolic network based on proteomic and biochemical data. J Biol Chem. 2004;279(38):39532–40.

  117.

    Sendín JOH, Alonso AA, Banga JR. Multi-objective optimization of biological networks for prediction of intracellular fluxes. Adv Soft Comput. 2009;49:197–205.

  118.

    Schuetz R, Zamboni N, Zampieri M, Heinemann M, Sauer U. Multidimensional optimality of microbial metabolism. Science. 2012;336(6081):601–4.

  119.

    Shoval O, Sheftel H, Shinar G, Hart Y, Ramote O, Mayo A, et al. Evolutionary trade-offs, Pareto optimality, and the geometry of phenotype space. Science. 2012;336(6077):1157–60.

  120.

    Higuera C, Villaverde AF, Banga JR, Ross J, Morán F. Multi-criteria optimization of regulation in metabolic networks. PLoS ONE. 2012;7(7):e41122.

  121.

    Poelwijk FJ, De Vos MGJ, Tans SJ. Tradeoffs and optimality in the evolution of gene regulation. Cell. 2011;146(3):462–70.

  122.

    Oyarzun DA, Stan GBV. Synthetic gene circuits for metabolic control: design trade-offs and constraints. J R Soc Interface. 2013;10(78):20120671.

  123.

    Otero-Muras I, Banga JR. Multicriteria global optimization for biocircuit design. BMC Syst Biol. 2014;8(1):1–12.

  124.

    Bhatnagar R, El-Samad H. Tradeoffs in adapting biological systems. Eur J Control. 2016;30:68–75.

  125.

    Otero-Muras I, Banga JR. Automated design framework for synthetic biology exploiting pareto optimality. ACS Synth Biol. 2017;6(7):1180–93.

  126.

    Handl J, Kell DB, Knowles J. Multiobjective optimization in bioinformatics and computational biology. IEEE/ACM Trans Comput Biol Bioinf. 2007;4(2):279–92.

  127.

    Seoane LF. Multiobjective optimization in models of synthetic and natural living systems. PhD thesis, Universitat Pompeu Fabra; 2016.

  128.

    Vijayakumar S, Conway M, Lió P, Angione C. Seeing the wood for the trees: a forest of methods for optimization and omic-network integration in metabolic modelling. Briefings Bioinf. 2017;19(6):1218–35.

  129.

    Tsiantis N, Balsa-Canto E, Banga JR. Optimality and identification of dynamic models in systems biology: an inverse optimal control framework. Bioinformatics. 2018;34(14):2433–40.

  130.

    Betts JT, Campbell SL, Digirolamo C. Initial guess sensitivity in computational optimal control problems. Numer Algebra Control Optim. 2020;10(1):39.

  131.

    Conway BA. A survey of methods available for the numerical optimization of continuous dynamic systems. J Optim Theory Appl. 2012;152(2):271–306.

  132.

    Trélat E. Optimal control and applications to aerospace: some results and challenges. J Optim Theory Appl. 2012;154(3):713–58.

  133.

    Esposito WR, Floudas CA. Deterministic global optimization in nonlinear optimal control problems. J Global Optim. 2000;17(1–4):97–126.

  134.

    Houska B, Chachuat B. Branch-and-lift algorithm for deterministic global optimization in nonlinear optimal control. J Optim Theory Appl. 2014;162(1):208–48.

  135.

    Houska B, Chachuat B. Global optimization in Hilbert space. Math Program. 2019;173(1–2):221–49.

  136.

    Peitz S, Dellnitz M. A survey of recent trends in multiobjective optimal control–Surrogate models, feedback control and objective reduction. Math Comput Appl. 2018;23(2):30.

  137.

    Bellman R. Dynamic programming and Lagrange multipliers. Proc Nat Acad Sci. 1956;42(10):767–9.

  138.

    Bertsekas DP. Dynamic programming and optimal control. Belmont: Athena Scientific; 1995.

  139.

    Betts JT. Practical methods for optimal control and estimation using nonlinear programming. 2nd ed. Philadelphia: SIAM; 2010.

  140.

    Teo KL, Goh CJ, Wong KH. A unified computational approach to optimal control problems. Longman Scientific and Technical; 1991.

  141.

    Vassiliadis V, Sargent R, Pantelides C. Solution of a class of multistage dynamic optimization problems. 2. Problems with path constraints. Ind Eng Chem Res. 1994;33(9):2123–33.

  142.

    Barton PI, Allgor RJ, Feehery WF, Galán S. Dynamic optimization in a discontinuous world. Ind Eng Chem Res. 1998;37(3):966–81.

  143.

    Lin Q, Loxton R, Teo KL. The control parameterization method for nonlinear optimal control: a survey. J Ind Manag Optim. 2014;10(1):275–309.

  144.

    Vassiliadis VS, Pantelides CC, Sargent RWH. Solution of a class of multistage dynamic optimization problems. 1. Problems without path constraints. Ind Eng Chem Res. 1994;33(9):2111–22.

  145.

    Biegler LT, Cervantes AM, Wächter A. Advances in simultaneous strategies for dynamic process optimization. Chem Eng Sci. 2002;57(4):575–93.

  146.

    Biegler LT. Advanced optimization strategies for integrated dynamic process operations. Comput Chem Eng. 2018;114:3–13.

  147.

    Henry CS, Broadbelt LJ, Hatzimanikatis V. Thermodynamics-based metabolic flux analysis. Biophys J. 2007;92(5):1792–805.

  148.

    Stucki JW. The optimal efficiency and the economic degrees of coupling of oxidative phosphorylation. Eur J Biochem. 1980;109(1):269–83.

  149.

    Kremling A, Geiselmann J, Ropers D, de Jong H. Understanding carbon catabolite repression in Escherichia coli using quantitative models. Trends Microbiol. 2015;23(2):99–109.

  150.

    Gal T. Shadow prices and sensitivity analysis in linear programming under degeneracy. Oper Res Spektrum. 1986;8(2):59–71.

  151.

    Müller-Merbach H. Operations Research: Methoden und Modelle der Optimalplanung. 1971.

  152.

    Reali F, Priami C, Marchetti L. Optimization algorithms for computational systems biology. Front Appl Math Stat. 2017;3:6.

  153.

    Papamichail I, Adjiman CS. A rigorous global optimization algorithm for problems with ordinary differential equations. J Global Optim. 2002;24(1):1–33.

  154.

    Singer AB, Barton PI. Global optimization with nonlinear ordinary differential equations. J Global Optim. 2006;34(2):159–90.

  155.

    Chachuat B, Singer AB, Barton PI. Global methods for dynamic optimization and mixed-integer dynamic optimization. Ind Eng Chem Res. 2006;45(25):8373–92.

  156.

    Diedam H, Sager S. Global optimal control with the direct multiple shooting method. Optimal Control Appl Methods. 2018;39(2):449–70.

  157.

    Michalewicz Z, Janikow CZ, Krawczyk JB. A modified genetic algorithm for optimal control problems. Comp Math Appl. 1992;23(12):83–94.

  158.

    Banga JR, Seider WD. Global optimization of chemical processes using stochastic algorithms. In: State of the art in global optimization. Springer; 1996. pp. 563–583.

  159.

    Banga JR, Alonso AA, Singh RP. Stochastic dynamic optimization of batch and semicontinuous bioprocesses. Biotechnol Prog. 1997;13(3):326–35.

  160.

    Ali M, Storey C, Törn A. Application of stochastic global optimization algorithms to practical problems. J Optim Theory Appl. 1997;95(3):545–63.

  161.

    Cruz IL, Van Willigenburg L, Van Straten G. Efficient differential evolution algorithms for multimodal optimal control problems. Appl Soft Comput. 2003;3(2):97–122.

  162.

    Wall BJ, Conway BA. Genetic algorithms applied to the solution of hybrid optimal control problems in astrodynamics. J Global Optim. 2009;44(4):493.

  163.

    Banga JR, Balsa-Canto E, Moles CG, Alonso AA. Dynamic optimization of bioprocesses: efficient and robust numerical strategies. J Biotechnol. 2005;117(4):407–19.

  164.

    Egea JA, Balsa-Canto E, Garcia MSG, Banga JR. Dynamic optimization of nonlinear processes with an enhanced scatter search method. Ind Eng Chem Res. 2009;48(9):4388–401.

  165.

    Feehery WF, Barton PI. Dynamic optimization with state variable path constraints. Comput Chem Eng. 1998;22(9):1241–56.

  166.

    Chen W, Ren Y, Zhang G, Biegler LT. A simultaneous approach for singular optimal control based on partial moving grid. AIChE J. 2019;65(6):e16584.

  167.

    Rao AV. A survey of numerical methods for optimal control. Adv Astron Sci. 2009;135(1):497–528.

  168.

    Rodrigues HS, Monteiro MTT, Torres DF. Optimal control and numerical software: an overview. arXiv preprint arXiv:1401.7279. 2014.

  169.

    Balsa-Canto E, Henriques D, Gábor A, Banga JR. AMIGO2, a toolbox for dynamic modeling, optimization and control in systems biology. Bioinformatics. 2016;32(21):3357–9.

  170.

    Falugi P, Kerrigan E, Wyk EV. Imperial College London Optimal Control Software (ICLOCS) User Guide. 2010. p. 1–86.

  171.

    Zhou JL, Tits AL. User’s Guide for FSQP Version 3.0 c: A FORTRAN Code for Solving Constrained Nonlinear (Minimax) Optimization Problems, Generating Iterates Satisfying All Inequality and Linear Constraints; 1992.

  172.

    Serban R, Hindmarsh AC. CVODES: the sensitivity-enabled ODE solver in SUNDIALS. In: ASME 2005 international design engineering technical conferences and computers and information in engineering conference. American Society of Mechanical Engineers Digital Collection; 2005. pp. 257–269.

  173.

    Wächter A, Biegler LT. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Math Program. 2006;106(1):25–57.

  174.

    Töpfer N. Optimisation of Enzyme Profiles in Metabolic Pathways. Diploma Thesis, Humboldt-Universität zu Berlin; 2010.

  175.

    Pal R, Datta A, Dougherty ER. Optimal infinite-horizon control for probabilistic Boolean networks. IEEE Trans Signal Process. 2006;54(6):2375–87.

  176.

    Imani M, Braga-Neto UM. Control of gene regulatory networks using Bayesian inverse reinforcement learning. IEEE/ACM Trans Comput Biol Bioinf. 2018;16(4):1250–61.

  177.

    Wu Y, Sun XM, Zhao X, Shen T. Optimal control of Boolean control networks with average cost: A policy iteration approach. Automatica. 2019;100:378–87.


Acknowledgements

Preliminary results from this study were presented at the 6th IFAC Conference on Foundations of Systems Biology in Engineering (FOSBE 2016) in Magdeburg, Germany.

Funding

This research has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No 675585 (MSCA ITN “SyMBioSys”) and from the Spanish Ministry of Science, Innovation and Universities and the European Union FEDER under project grant SYNBIOCONTROL (DPI2017-82896-C2-2-R). Nikolaos Tsiantis is a Marie Skłodowska-Curie Early Stage Researcher at IIM-CSIC (Vigo, Spain) under the supervision of Prof Julio R. Banga. The funding bodies played no role in the design of the study, the collection and analysis of the data or in the writing of the manuscript.

Author information


Contributions

JRB conceived, planned and coordinated the study. NT implemented the methods and carried out all the computations. NT and JRB analysed the results. JRB drafted the manuscript. NT contributed to the drafting of the methods and results. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Julio R. Banga.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

Julio R. Banga is an Associate Editor of this journal.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1. Solution details for case study LPN3B.

Additional file 2. Solution details for case study SC.

Additional file 3. Solution details for case study BSUB.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Tsiantis, N., Banga, J.R. Using optimal control to understand complex metabolic pathways. BMC Bioinformatics 21, 472 (2020). https://doi.org/10.1186/s12859-020-03808-8

Keywords

  • Optimal control
  • Dynamic modeling
  • Multi-criteria optimization
  • Pareto optimality