The introduced approach is more general than the methods developed in [22], as it does not require the network to be uniterminal. For simplicity, and without loss of generality, we use the well-known Edelstein network, which is uniterminal; note that the steps of the approach are exactly the same regardless of the terminality (uniterminal, biterminal, etc.) of the network. The Edelstein network was originally introduced in [33]; we use the reduced form presented in Lecture 3 of [26]. The Edelstein network is shown in Fig. 8.

In the following subsections we develop our framework using aspects of CRNT. Here we have \(\mathbf{k } = (k_1, k_2, k_3, k_4, k_5, k_6)^T\) and \(\mathbf{c } = (c_1, c_2, c_3)^T\) with \(c_1 = [A], c_2 = [B], c_3 = [C]\).

### Step 1: *Constructing the full ODE system*

The first step of the approach is to construct the ODEs describing species’ concentration dynamics for the given reaction network. Assuming mass action kinetics, we obtain the following representation for our autonomous ODEs:

$$\dot{\mathbf{c }} = YA \uppsi (\mathbf{c }) \Longleftrightarrow \begin{array}{*{20}l} {\dot{c}_{1} = k_{1} c_{1} - k_{2} c_{1}^{2} - k_{3} c_{1} c_{2} + k_{4} c_{3} } \\ {\dot{c}_{2} = (k_{4} + k_{5} )c_{3} - k_{3} c_{1} c_{2} - k_{6} c_{2} } \\ {\dot{c}_{3} = k_{3} c_{1} c_{2} - (k_{4} + k_{5} )c_{3} + k_{6} c_{2} } \\ \end{array}$$

(2)

The molecularity matrix *Y* is an \(N \times M\) matrix, where \(Y_{ij}\) is the molecularity of species *i* in complex *j*. For our example the complexes are \({\mathcal {C}}_1 = A, {\mathcal {C}}_2 = 2A, {\mathcal {C}}_3 = A+B, {\mathcal {C}}_4 = C,\) and \({\mathcal {C}}_5 = B\), which give the molecularity matrix:

$$\begin{aligned} Y = \begin{pmatrix} 1 &{} 2 &{} 1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 1 &{} 0 &{} 1 \\ 0 &{} 0 &{} 0 &{} 1 &{} 0 \end{pmatrix}. \end{aligned}$$

*A* is the \(M \times M\) kinetic constant matrix. Its diagonal elements contain the negative of the sum of the kinetic rate constants of the reactions going out from \({\mathcal {C}}_i\), while an off-diagonal element \(A_{ij}\) contains the kinetic rate constant of the reaction going from \({\mathcal {C}}_j\) to \({\mathcal {C}}_i\). Here *i* and *j* correspond to the *i*th row and *j*th column of *A*, respectively.

The vector \(\uppsi (\mathbf{c }) = (\uppsi _1, \dots , \uppsi _M)^T\) defines the mass action monomials (a product of species’ concentrations) associated with each complex

$$\begin{aligned} \uppsi _j(\mathbf{c }) = \prod _{i=1}^N c_i^{Y_{ij}} , \quad j = 1, \dots , M. \end{aligned}$$

For our example we obtain:

$$\begin{aligned} \uppsi (\mathbf{c }) = (c_1, c_1^2, c_1 c_2, c_3, c_2)^T . \end{aligned}$$
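As a quick check, the monomial vector can be computed mechanically from *Y*. The following NumPy sketch (assuming the complex ordering \(A, 2A, A+B, C, B\) from Step 1; the function name `psi` is ours) builds \(\uppsi (\mathbf{c })\):

```python
import numpy as np

# Molecularity matrix Y for the complexes (A, 2A, A+B, C, B);
# rows are the species A, B, C.
Y = np.array([[1, 2, 1, 0, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]])

def psi(c, Y):
    """Mass action monomials: psi_j(c) = prod_i c_i ** Y[i, j]."""
    return np.prod(c[:, None] ** Y, axis=0)

c = np.array([2.0, 3.0, 5.0])   # illustrative values for c1, c2, c3
print(psi(c, Y))                # -> [2. 4. 6. 5. 3.], i.e. (c1, c1^2, c1*c2, c3, c2)
```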

### Step 2: *Computing the mass conservation laws*

The next step is to compute the mass conservation laws that govern the network. To do this, we first construct the species stoichiometric matrix *S* with dimension \(N \times R\). Its entries can be constructed from the *Y* matrix by noticing that every reaction from \({\mathcal {C}}_i\) to \({\mathcal {C}}_j\) has an associated vector \(Y_j - Y_i\), where \(Y_j\) and \(Y_i\) are the *j*th and *i*th columns of *Y*, respectively. In our example, with columns ordered by the reactions associated with \(k_1, \dots , k_6\), this produces the stoichiometric matrix *S*:

$$\begin{aligned} S = \begin{pmatrix} 1 &{} -1 &{} -1 &{} 1 &{} 0 &{} 0 \\ 0 &{} 0 &{} -1 &{} 1 &{} 1 &{} -1 \\ 0 &{} 0 &{} 1 &{} -1 &{} -1 &{} 1 \end{pmatrix}. \end{aligned}$$

For notational convenience, we denote the rank of *S* (the maximal number of linearly independent columns) as *s*. For our example, \(s = 2\).

Note that the number of mass conservation relations is \(\lambda = N-s\). The matrix *B* (of dimension \(N \times \lambda\)) that defines these relations spans the nullspace of \(S^T\), so that \(S^T B = 0\). The matrix spanning the nullspace of \(S^T\) is not uniquely determined, and we choose *B* such that all of its entries are nonnegative. This choice of *B* is always possible provided that each conservation law represents the conservation of a chemical species or moiety: because an individual species’ concentration cannot be negative, each basis vector representing a conservation law can be expressed using only nonnegative values.

For most reaction networks, constructing a *B* matrix with nonnegative entries can be done by using linear programming. The construction of the linear programming problem requires a firm understanding of polyhedra. Thus, we strongly recommend that the reader consider Sections 2.2 (in particular Subsection 2.2.3) and 2.4 of [34], which describe polyhedral sets and cones, and how the nullspace of the stoichiometric matrix can be described as a polyhedral cone, respectively. Additionally, it should be noted that the mathematical definitions and theorems in the suggested sections of [34] were obtained from [35], see for example Sections 1, 2, and 19 of [35].

We now briefly describe the construction of the linear programming problem. The optimization problem is formed by first considering the intersection of \(Null(S^T)\) with the nonnegative vectors of \({\mathbb {R}}^N\), which is an unbounded cone. To obtain a bounded set, one can intersect this cone with the set of vectors whose coordinates sum to one. The result is a bounded convex polytope; this can be seen by considering Definitions 2.29 and 2.33 of [34]. Note that [34] presents enumeration problems in general (Section 2.3) as a way to search for the vertices of this polytope. We instead search for these vertices starting from random directions via the optimization problem (3), which is solved with the Simplex method [36].

$$\begin{aligned} \begin{array}{ll} \text {Minimize}&{} \mathbf{w }^T \mathbf{x } \\ \text {subject to: }&{} \\ &{} -{\tilde{B}} \mathbf{x } \le \mathbf{0 }, \\ &{} \Bigg (\sum _{i=1}^N {\tilde{B}}_{i, 1}, \dots , \sum _{i=1}^N {\tilde{B}}_{i, \lambda } \Bigg )^T \mathbf{x } = 1, \\ &{} -\infty \le \mathbf{x } \le \infty \quad \mathbf{x } \in {\mathbb {R}}^{\lambda }, \\ &{} \mathbf{w }^T \in {\mathbb {R}}^{\lambda }. \end{array} \end{aligned}$$

(3)

In optimization problem (3), \(\mathbf{w }\) is a vector of random search directions and \({\tilde{B}}\) is an initial *B* matrix with at least one negative entry. For our purposes, \({\tilde{B}}\) is initially set to the basis vectors that span \(Null(S^T)\). Each solution \(\mathbf{x }\) of (3) identifies a vertex of the polytope: the product \({\tilde{B}} \mathbf{x }\) is a vector in the nullspace with nonnegative entries whose values lie between zero and one. To obtain integer entries, one can divide this vector by its smallest nonzero entry. By repeating this process and collecting \(\lambda\) unique basis elements, one can form a *B* matrix with nonnegative entries, as outlined in Algorithm 1. Note that \(\lambda\) equals the number of conserved chemical moieties; thus, by definition, \(\lambda > 0\) if the system contains conservation laws. Algorithm 1 does not guarantee that all \(\lambda\) basis vectors are found within a fixed number of iterations; if the number of basis vectors found is less than the pre-computed \(\lambda = N - s\), the number of iterations should be increased. For further details of Algorithm 1, we refer the reader to the Supplementary, Additional File 1. There also exist alternative ways to obtain a *B* matrix with nonnegative entries, see [37,38,39].
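A minimal sketch of this search in the spirit of Algorithm 1, using `scipy.optimize.linprog` to solve problem (3); the function name `nonnegative_basis` and the trial count are our own illustrative choices, not part of the published algorithm:

```python
import numpy as np
from scipy.linalg import null_space
from scipy.optimize import linprog

def nonnegative_basis(S, n_trials=100, seed=0):
    """Search for a nonnegative basis of Null(S^T) via random LP directions."""
    rng = np.random.default_rng(seed)
    B_tilde = null_space(S.T)                  # initial basis, may contain negatives
    lam = B_tilde.shape[1]                     # number of conservation laws
    A_ub = -B_tilde                            # -B~ x <= 0, i.e. B~ x >= 0
    A_eq = B_tilde.sum(axis=0).reshape(1, -1)  # coordinates of B~ x must sum to 1
    found = []
    for _ in range(n_trials):
        w = rng.standard_normal(lam)           # random search direction
        res = linprog(w, A_ub=A_ub, b_ub=np.zeros(S.shape[0]),
                      A_eq=A_eq, b_eq=[1.0], bounds=[(None, None)] * lam)
        if not res.success:
            continue
        v = B_tilde @ res.x                    # nonnegative vector in Null(S^T)
        v = np.round(v / v[np.abs(v) > 1e-9].min(), 9)  # scale to integer entries
        if not any(np.allclose(v, u) for u in found):
            found.append(v)
        if len(found) == lam:
            break
    return np.column_stack(found) if found else None
```

For the Edelstein stoichiometric matrix this recovers the single basis vector \((0, 1, 1)^T\), i.e. the conservation of \(c_2 + c_3\).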

Using \(S^T\) and the procedure outlined in Algorithm 1, we obtain the *B* matrix below:

$$\begin{aligned} B = \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}, \end{aligned}$$

where \(C_1\) again stands for the first conservation law. To obtain an explicit statement for the conservation laws, we simply take \(B^T \mathbf{c }\), giving the following conservation law:

$$\begin{aligned} C_1 = c_2 + c_3. \end{aligned}$$

### Step 3: *Determining the independent ODE system*

For any given reaction network, the number of independent ODEs describing the system’s dynamics is equal to the rank(*S*) \(= s \le R\) [9]. Since we will ultimately consider the Jacobian of the ODE system with respect to the concentrations \(\mathbf{c }\), it is necessary to remove those ODEs that can be represented as linear combinations of other ODEs in the system. We will let the independent system of ODEs be represented as follows:

$$\begin{aligned} \dot{\hat{\mathbf{c}}} = \hat{f}(\mathbf{c }, \mathbf{k }), \end{aligned}$$

where \(\hat{\mathbf{c}} \in {\mathbb {R}}^s\) represents those species’ concentrations that form an independent ODE system. To obtain the independent system, we first check whether \(s = N\); if so, there are no conservation laws, a scenario that is out of the scope of this manuscript. If \(s < N\), then conservation laws exist and we must find the independent system. It is at this point that knowledge of the response in the bifurcation analysis is used to determine which ODEs should remain in the independent ODE system.

For our particular example, we take \(C_1\) (the sum of \(c_2\) and \(c_3\)) as the input signal and \(c_1\) as the output, or readout, of the system’s response. Since \(c_1\) (the concentration of species A) is taken as the system’s response, \({\dot{c}}_1\) must be included in \(\dot{\hat{\mathbf{c}}}\). Using this fact and conservation law \(C_1\), we see that \(c_2\) depends on \(c_3\) through \(c_2 = C_1 - c_3\). For this simple example, this information is sufficient to eliminate \({\dot{c}}_2\) from \(\dot{\mathbf{c }}\); indeed, from the full ODE system (2) we see that \({\dot{c}}_2 = -{\dot{c}}_3\). Thus, for our particular example, \(\dot{\hat{\mathbf{c}}}\) is given by:

$$\begin{aligned} &{\dot{c}}_1 = k_1 c_1 - k_2 c_1^2 - k_3 c_1 c_2 + k_4 c_3 \\&{\dot{c}}_3 = k_3 c_1 c_2 - (k_4 + k_5) c_3 + k_6 c_2 . \end{aligned}$$

For larger systems the above methodology may not be straightforward. We therefore suggest a procedure that exhaustively looks through combinations of species (excluding the response species), one from each conservation law, removes the corresponding ODEs, and checks that the rank of the remaining ODE system stays the same. The pseudocode for this procedure is provided in Algorithm 2.
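As a hedged illustration of this exhaustive search (a simplification of Algorithm 2, not the published pseudocode; the function name and the rank test on the rows of *S* are our own choices):

```python
import itertools
import numpy as np

def choose_independent_odes(S, response_idx, law_species, s):
    """Try dropping one species per conservation law (never the response) and
    keep the first choice for which the rows of S belonging to the remaining
    species still have rank s."""
    N = S.shape[0]
    candidates = [[i for i in law if i != response_idx] for law in law_species]
    for drop in itertools.product(*candidates):
        keep = [i for i in range(N) if i not in drop]
        if np.linalg.matrix_rank(S[keep, :]) == s:
            return keep        # species indices kept in the independent system
    return None

# Edelstein network: species (c1, c2, c3), response c1, one law over {c2, c3}
S = np.array([[1, -1, -1, 1, 0, 0],
              [0, 0, -1, 1, 1, -1],
              [0, 0, 1, -1, -1, 1]], dtype=float)
print(choose_independent_odes(S, response_idx=0, law_species=[[1, 2]], s=2))  # -> [0, 2]
```

The result \([0, 2]\) corresponds to keeping \({\dot{c}}_1\) and \({\dot{c}}_3\), as derived by hand above.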

Referring to Definition 5, we see that *u* and \(\epsilon _0\) correspond to \(\hat{\mathbf{c}}\) and the input signal in our example, respectively, with \(u = (c_1, c_3)^T\) and \(\epsilon _0 = C_1\). From this definition we see that some modifications to \(\dot{\hat{\mathbf{c}}}\) must be made; in particular, we need to include the conservation law \(C_1\). To eliminate the dependent species (*DS*), we express them in terms of \(\hat{\mathbf{c}}\) and the corresponding conservation law. In our example, this is done by replacing \(c_2\) with \(C_1 - c_3\), giving \(\dot{\hat{\mathbf{c}}}\) as follows:

$$\begin{aligned} &{\dot{c}}_1 = k_1 c_1 - k_2 c_1^2 - k_3 c_1(C_1 - c_3) + k_4 c_3 \\&{\dot{c}}_3 = k_3 c_1 (C_1 - c_3) - (k_4 + k_5) c_3 + k_6 (C_1 - c_3) . \end{aligned}$$

The system above is then equivalent to \(F\left( (c_1, c_3)^T, C_1 \right)\).

### Step 4 : *Preserving the steady state*

Now that we have our ODE system in terms of the input signal (or bifurcation parameter), we can construct the expressions needed to check the requirements of Definition 5. The first requirement is that \(F(u_0, \epsilon _0) = 0\), i.e. the ODE system is at a steady state; thus, we must ensure that \(\dot{\hat{\mathbf{c}}} = \mathbf{0 }\). One straightforward way of doing this is to add the term \(\sum _{i=1}^{s} [\dot{\hat{c}}_i(\mathbf{k }, \hat{\mathbf{c}})]^2\), the sum of the squared derivatives, to the objective function (equation (4) in Section 4.6). Although this formulation is robust, the trade-off is increased computational time.

To avoid this issue, we suggest an alternative formulation of the optimization problem, based on symbolic constraints. We split the kinetic constants into two sets: a free set (\(\mathbf{k } \in {\mathbb {R}}^{R-s}\)) and a set that is fixed using symbolic expressions (\(\tilde{\mathbf{k }} \in {\mathbb {R}}^s\)). We analytically solve for the specific kinetic constants \(\tilde{\mathbf{k }}\) that enforce a steady state. By solving for these expressions analytically, the domain of feasible points is reduced, providing a simpler optimization problem. This can be done systematically because we use mass action kinetics to form the ODEs, and we are always guaranteed a unique analytical expression for a steady state (in terms of a particular choice of kinetic constants). To see this, consider the stoichiometric matrix *S*, whose columns correspond to the reactions. By the law of mass action, the rate of a reaction is the product of the reactant concentrations, each raised to a power equal to its stoichiometric coefficient, multiplied by a kinetic constant; thus, there is a one-to-one correspondence between reactions and kinetic constants. As stated in Step 3 (Subsection 4.3), rank(*S*) \(= s \le R\) for any given reaction network. By definition, *s* is the number of linearly independent columns of *S*, and since the columns of *S* correspond to the kinetic constants, we obtain a set of kinetic constants that forms a linearly independent system of equations.

In practice, it is simple to determine which constants \(\tilde{\mathbf{k }}\) should be chosen to create a linear system. One way to choose these kinetic rate constants is to put *S* into reduced row echelon form (RREF) and choose the columns that contain pivots. In our example:

$$\begin{aligned} \text {RREF}(S) = \begin{pmatrix} 1 &{} -1 &{} 0 &{} 0 &{} -1 &{} 1 \\ 0 &{} 0 &{} 1 &{} -1 &{} -1 &{} 1 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \end{pmatrix}, \end{aligned}$$

the pivots are in columns 1 and 3, resulting in \(\tilde{\mathbf{k }} = (k_1, k_3)^T\).
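With SymPy, the pivot columns fall out of `Matrix.rref()` directly; here we assume the column ordering \(k_1, \dots , k_6\) used above:

```python
import sympy as sp

# Stoichiometric matrix of the Edelstein network, columns ordered k1..k6
S = sp.Matrix([[1, -1, -1, 1, 0, 0],
               [0, 0, -1, 1, 1, -1],
               [0, 0, 1, -1, -1, 1]])
S_rref, pivots = S.rref()
print(pivots)   # (0, 2): zero-based indices of columns 1 and 3, so k_tilde = (k1, k3)
```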

Since we are looking for a steady state, we must consider the following linear system:

$$\begin{aligned} U \tilde{\mathbf{k }} = \mathbf{b }, \end{aligned}$$

Here \(U \in {\mathbb {R}}^{s \times s}\) is the coefficient matrix produced from \(\dot{\hat{\mathbf{c}}}\) for a particular choice of \(\tilde{\mathbf{k }}\), and the vector \(\mathbf{b } \in {\mathbb {R}}^{s}\) collects those terms of \(\dot{\hat{\mathbf{c}}}\) that do not contain kinetic constants from \(\tilde{\mathbf{k }}\). For our example we have the following system for the steady state:

$$\begin{aligned} \begin{pmatrix} c_1 &{} -c_1(C_1 - c_3) \\ 0 &{} c_1(C_1 - c_3) \end{pmatrix} \begin{pmatrix} k_1 \\ k_3 \end{pmatrix} = \begin{pmatrix} k_2 c_1^2 - k_4 c_3 \\ (k_4 + k_5) c_3 - k_6 (C_1 - c_3) \end{pmatrix}. \end{aligned}$$

Solving for \(k_1\) and \(k_3\) provides necessary conditions for a steady state

$$\begin{aligned} &k_1 = \frac{k_2 c_1^2 + k_5 c_3 - k_6 (C_1 - c_3)}{c_1} \\&k_3 = \frac{(k_4 + k_5) c_3 - k_6 (C_1 - c_3)}{c_1(C_1 - c_3)}. \end{aligned}$$

Note that for some systems, this may force a kinetic rate constant to be zero, while other choices of \(\tilde{\mathbf{k }}\) yield nonzero values. To account for such cases, if zero values are found, we exhaustively search through all distinct combinations of linearly independent columns for a choice of \(\tilde{\mathbf{k }}\) that yields no zero entries. If no combination provides nonzero values, then no steady state exists with \(\mathbf{k } > 0\) and \(\mathbf{c } > 0\); in this scenario we suggest checking the Deficiency Zero and Deficiency One theorems [26]. To obtain a network that can attain a steady state with \(\mathbf{k } > 0\) and \(\mathbf{c } > 0\), one can consider removing the reactions whose kinetic constants equal zero in the aforementioned computation.
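The closed-form expressions for \(k_1\) and \(k_3\) can be reproduced symbolically; the following SymPy sketch solves the steady-state system and verifies it against the formulas derived above:

```python
import sympy as sp

c1, c3, C1 = sp.symbols('c1 c3 C1', positive=True)
k1, k2, k3, k4, k5, k6 = sp.symbols('k1:7', positive=True)
c2 = C1 - c3   # dependent species eliminated via the conservation law

# independent ODE right-hand sides from Step 3
dc1 = k1*c1 - k2*c1**2 - k3*c1*c2 + k4*c3
dc3 = k3*c1*c2 - (k4 + k5)*c3 + k6*c2

# solve the steady-state system, which is linear in the pivot constants (k1, k3)
sol = sp.solve([dc1, dc3], [k1, k3], dict=True)[0]
assert sol[k1].equals((k2*c1**2 + k5*c3 - k6*(C1 - c3)) / c1)
assert sol[k3].equals(((k4 + k5)*c3 - k6*(C1 - c3)) / (c1*(C1 - c3)))
```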

### Step 5: *Ensuring a zero eigenvalue*

From Step 4, we have necessary conditions for a steady state. The next requirement of Definition 5 is that \(D F(u_0, \epsilon _0)\) has a zero eigenvalue with multiplicity one while all other eigenvalues have nonzero real parts. Here \(D F(u_0, \epsilon _0)\) is the Jacobian of the right-hand side of the independent ODE system with respect to \(\hat{\mathbf{c}}\), which we denote \(J_{\hat{\mathbf{c}}}\). Since we would like to formulate this as an optimization problem, directly enforcing that the zero eigenvalue has multiplicity one and that all other eigenvalues are nonzero would be expensive, because the eigenvalues would have to be computed at every evaluation of the objective function. For this reason, we simply search for a zero eigenvalue and afterwards check whether the eigenvalue criteria are satisfied.

To find a zero eigenvalue, consider the eigenvalue problem with eigenvalue \(z \in {\mathbb {C}}\) and corresponding eigenvector \(\mathbf{v } \in {\mathbb {C}}^s \setminus \{0\}\):

$$\begin{aligned} J_{\hat{\mathbf{c}}} \mathbf{v } = z \mathbf{v }. \end{aligned}$$

The characteristic polynomial can then be formed by taking the determinant and setting it equal to zero:

$$\begin{aligned} \det (J_{\hat{\mathbf{c}}} - z I ) = 0 . \end{aligned}$$

Since we seek \(z = 0\), this reduces to the following condition, which guarantees that at least one eigenvalue equals zero:

$$\begin{aligned} \det (J_{\hat{\mathbf{c}}}) = 0 . \end{aligned}$$

It is this criterion that we use to formulate our optimization problem.
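For the Edelstein example, the determinant of the Jacobian of the independent system can be formed symbolically; a sketch (variable names are ours):

```python
import sympy as sp

c1, c3, C1 = sp.symbols('c1 c3 C1', positive=True)
k1, k2, k3, k4, k5, k6 = sp.symbols('k1:7', positive=True)
c2 = C1 - c3   # dependent species eliminated via the conservation law

f = sp.Matrix([k1*c1 - k2*c1**2 - k3*c1*c2 + k4*c3,
               k3*c1*c2 - (k4 + k5)*c3 + k6*c2])
J = f.jacobian([c1, c3])      # Jacobian w.r.t. the independent species
det_J = sp.expand(J.det())    # the quantity driven to zero in Step 6
print(J[0, 1])                # equals k3*c1 + k4
```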

### Step 6: *Searching for a Saddle-node using optimization*

Using the definitions from the previous steps, we can construct the optimization problem in two alternative ways. In both formulations, the optimization searches for species’ concentrations \(\mathbf{c }\) and kinetic rate constants \(\mathbf{k }\) that put the system at a steady state (\(\dot{\hat{\mathbf{c}}} = 0\)) with at least one zero eigenvalue of the associated Jacobian \(J_{\hat{\mathbf{c}}}\). One formulation enforces the steady state explicitly, by requiring that the derivatives of the concentrations be zero through an additional term in the objective function:

$$\begin{aligned} \begin{array}{ll} \text {Minimize}&{} [\det (J_{\hat{\mathbf{c}}})(\mathbf{k },\hat{\mathbf{c}})]^2 + \sum _{i=1}^{s} [\dot{\hat{c}}_i(\mathbf{k },\hat{\mathbf{c}})]^2 \\ \text {subject to: } &{}\\ &{} \mathbf{c }_L \le \mathbf{c } \le \mathbf{c }_U \quad \mathbf{c } \in {\mathbb {R}}^N, \\ &{} \mathbf{k }_L \le \mathbf{k } \le \mathbf{k }_U \quad \mathbf{k } \in {\mathbb {R}}^{R}. \end{array} \end{aligned}$$

(4)

Here, \(\mathbf{c }_L\) and \(\mathbf{c }_U\) are the lower and upper bounds for the species’ concentrations, and \(\mathbf{k }_L\) and \(\mathbf{k }_U\) are the lower and upper bounds for the kinetic rate constants. The additional nonlinear terms and the higher-dimensional decision vector make the objective function more complex; this approach is more robust because it samples from a broader space of solutions, but it is computationally intensive. The other approach enforces a steady state implicitly, by solving for kinetic constants as in Step 4:

$$\begin{aligned} \begin{array}{ll} \text {Minimize} &{} [\det (J_{\hat{\mathbf{c}}})(\mathbf{x })]^2 \\ \text {subject to: }&{} \\ &{} \mathbf{c }_L \le \mathbf{c } \le \mathbf{c }_U \quad \mathbf{c } \in {\mathbb {R}}^N, \\ &{} \mathbf{k }_L \le \mathbf{k } \le \mathbf{k }_U \quad \mathbf{k } \in {\mathbb {R}}^{R}. \end{array} \end{aligned}$$

(5)

In this version of the optimization problem, the decision vector is \(\mathbf{x } = (\{ \mathbf{k } \} - \{ \tilde{\mathbf{k }} \}, \{\mathbf{c } \} )^T\), where \(\{\mathbf{k } \} - \{ \tilde{\mathbf{k }} \}\) is the set of kinetic rate constants in \(\mathbf{k }\) that are not in \(\tilde{\mathbf{k }}\), and \(\{\mathbf{c } \}\) is the set of species’ concentrations. The kinetic rate constants \(\tilde{\mathbf{k }}\) are then obtained from the solutions of the linear system in Step 4, which ensure a steady state, and the conservation constants \(C_1, \dots , C_\lambda\) are obtained from the conservation laws in Step 2.

Since the approach is aimed at the analysis of biological pathways, we can apply bounds to the optimization problem based on the BioNumbers database [40]. To bound the species’ concentrations, one can choose the typical range of protein concentrations in a cell, \(5\times 10^{-13}\) to \(5\times 10^{-7}\) M. The complex formation kinetic constants can be bounded between \(10^4\) and \(10^8\) M\(^{-1}\)s\(^{-1}\). A common range for complex dissociation constants is \(10^{-5}\) to \(10^{-3}\) s\(^{-1}\). Finally, the enzyme catalysis kinetic constants range from \(10^{-3}\) to 1 s\(^{-1}\). These ranges are the defaults in the CRNT4SBML Python package; however, the user can override them with narrower ranges if more accurate prior knowledge exists. For further details we refer the reader to the corresponding CRNT4SBML documentation page.

Like the optimization problem in [22], problems (4) and (5) are non-convex and multi-modal; a global optimization procedure should therefore be used to search for their optimal solutions. For the results obtained throughout the manuscript, we use the global optimization algorithm Dual Annealing, started from different random initial points bounded by the species’ concentration and kinetic constant ranges. Dual Annealing combines Simulated Annealing [41] with a local search applied at accepted locations [42]; we use the Nelder-Mead simplex algorithm for all local searches. Since we are looking specifically for a zero eigenvalue, the objective function should attain a minimum of zero, and all points with a zero value (within machine precision) are considered for further evaluation. The next step is to check i) for a saddle-node using Definition 5, discarding points that produce more than one zero eigenvalue (which might indicate a codimension-2 bifurcation such as the Bogdanov-Takens bifurcation [43]), and ii) for a saddle-node bifurcation point using Theorem 1. This allows one to avoid unnecessary runs of numerical continuation. Although explicitly checking the saddle-node criteria can reduce the runtime of the approach, conducting numerical continuation alone is sufficient.
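A hedged sketch of formulation (5) for the Edelstein example using SciPy's `dual_annealing`; the bounds, iteration count, and function names are illustrative only (not the CRNT4SBML implementation), with the free constants \((k_2, k_4, k_5, k_6)\) and \(k_1, k_3\) recovered from the Step 4 expressions:

```python
import numpy as np
from scipy.optimize import dual_annealing

def det_J_squared(x):
    """Objective of problem (5): squared determinant of the Jacobian of the
    independent system, with the steady state enforced implicitly."""
    k2, k4, k5, k6, c1, c2, c3 = x
    C1 = c2 + c3                     # conservation constant from Step 2
    # pivot constants from the Step 4 steady-state expressions
    # (note: not constrained positive here; a full implementation rejects
    #  combinations that force k1 or k3 nonpositive)
    k1 = (k2*c1**2 + k5*c3 - k6*(C1 - c3)) / c1
    k3 = ((k4 + k5)*c3 - k6*(C1 - c3)) / (c1*(C1 - c3))
    # Jacobian of the independent system w.r.t. (c1, c3), with c2 = C1 - c3
    J = np.array([
        [k1 - 2*k2*c1 - k3*(C1 - c3),  k3*c1 + k4],
        [k3*(C1 - c3),                 -k3*c1 - (k4 + k5) - k6],
    ])
    return np.linalg.det(J) ** 2

bounds = [(1e-2, 1e2)] * 7           # illustrative bounds for (k2,k4,k5,k6,c1,c2,c3)
result = dual_annealing(det_J_squared, bounds=bounds, seed=0, maxiter=50)
```

At the parameter values reported in Step 7 below, `det_J_squared` evaluates to a numerical zero, as expected at the saddle-node.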

#### Bayesian stopping rule

If the optimization satisfies the condition for a saddle-node, one can state that a saddle-node exists. The converse is not true: failing to find conditions that result in a zero eigenvalue does not guarantee that a saddle-node does not exist. This is a consequence of using stochastic optimization rather than deterministic methods (based, for example, on interval analysis), which do provide this guarantee [44]. The issue with such methods is that they become computationally intractable for mass action reaction systems with more than two or three parameters, whereas stochastic optimization generally provides good overall efficiency [45]. One of the key uncertainties in stochastic optimization is how exhaustively the procedure has searched the space of possible solutions. To address this uncertainty, we compute the probability that the achieved minimum value is the true global minimum, giving a quantitative estimate of the thoroughness of the optimization procedure. The probability calculation is based on the unified Bayesian stopping rule in [46] and Theorem 4.1 of [47], where the rule was first established.

For *n* starting optimization points, let \(f_k\) be the local minimum achieved for the *k*-th decision vector, \(f^*\) the true global minimum, and \({\tilde{f}} = \min \{f_1, \dots , f_n \}\). Our objective is to estimate the probability that \({\tilde{f}}\) is \(f^*\). Let \(\alpha _k\) and \(\alpha ^*\) denote the probabilities that a single run of the optimization routine converges to \(f_k\) and \(f^*\), respectively. Assuming that \(\alpha ^* \ge \alpha _k\) for all local minimum values \(f_k\), we may estimate a lower bound on the probability that \({\tilde{f}} = f^*\) as follows:

$$\begin{aligned} Pr[{\tilde{f}} = f^*] \ge q(n, r) = 1 - \dfrac{(n + a + b - 1)! (2n + b - r - 1)!}{(2n + a + b - 1)! (n + b-r-1)!}, \end{aligned}$$

Here *a* and *b* are the parameters of the Beta distribution \(\beta (a, b)\), where we use \(a=1\) and \(b=5\) as suggested in [46]. The term *q*(*n*, *r*) is the confidence level, where *r* is the number of \(f_k\) for \(k = 1, \dots , n\) that are in the neighborhood of \({\tilde{f}}\).

We say that \(f_k\) is in the neighborhood of \({\tilde{f}}\) if the relative difference of \(f_k\) and \({\tilde{f}}\) is less than a predefined tolerance \(\epsilon\), which is by default set to 10% in the CRNT4SBML tool:

$$\begin{aligned} \dfrac{| {\tilde{f}} - f_k |}{{\tilde{f}}} \le \epsilon . \end{aligned}$$

Given the formulation of the optimization, the lowest possible minimum value is zero. Thus, if \({\tilde{f}}\) reaches a numerical zero, we set \(q(n,r) = 1.0\), skipping the computation of *q*(*n*, *r*). Conventionally, \(q(n,r) \ge 0.95\) is considered an acceptable confidence level for concluding that \({\tilde{f}}\) is the global minimum of the objective function. If no zero value is found after a high confidence level (\(q(n,r) \ge 0.95\)) is achieved, we suggest stopping the optimization routine, as it is unlikely that a solution exists. Note that the accuracy of this Bayesian stopping rule has not been thoroughly evaluated for our optimization problem and method.
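The stopping rule is cheap to evaluate; a sketch using log-gamma to avoid factorial overflow, with the default \(a = 1\), \(b = 5\) (function names are ours):

```python
from math import exp, lgamma

def q(n, r, a=1, b=5):
    """Lower-bound confidence that the best of n local minima is the global
    minimum, given r runs in the neighborhood of the best value.
    Uses m! = Gamma(m + 1) = exp(lgamma(m + 1))."""
    log_ratio = (lgamma(n + a + b) + lgamma(2*n + b - r)
                 - lgamma(2*n + a + b) - lgamma(n + b - r))
    return 1.0 - exp(log_ratio)

def in_neighborhood(f_best, f_k, eps=0.1):
    """Relative-difference neighborhood test (default tolerance 10%)."""
    return abs(f_best - f_k) / abs(f_best) <= eps

print(q(1, 1))     # modest confidence after a single run
print(q(20, 20))   # > 0.95 when all 20 runs agree on the minimum
```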

### Step 7: *Numerical continuation and direct simulation*

Once the optimization problem is solved, we have an independent ODE system with a steady state and at least one zero eigenvalue of the Jacobian. To confirm that this is a saddle-node bifurcation, we need to check that this is the only zero eigenvalue and that Theorem 1 is satisfied. In practice, we check directly for a bifurcation diagram with switch-like behavior; the existence of such a diagram rules out undesired findings such as a Jacobian with additional zero eigenvalues or eigenvalues with zero real parts. To generate the bifurcation diagram, we either use numerical continuation or directly simulate the ODEs. For numerical continuation, we use the values provided by the optimization routine and leverage the established tool AUTO 2000 [48], made accessible through the libroadrunner Python library and its extension rrplugins [49]. To demonstrate numerical continuation, we use the following values provided by the optimization routine, which produce Fig. 9:

$$\begin{aligned} k_1 &= 0.05024595, k_2 = 0.01029913, k_3 = 0.03592955, k_4 = 0.01027423,\\ k_5 &= 0.01272131, k_6 = 0.01006076, c_1 = 1.48685156, c_2 = 2.07275022,\\ c_3 &= 5.72214036, \quad \text {and} \quad C_1 = 7.79489058. \end{aligned}$$

It should be noted that the optimization routine may find a combination of kinetic rate constant values that makes the Jacobian of the system ill-conditioned or even singular, regardless of the species’ concentrations. If the Jacobian is always singular, numerical continuation cannot be applied. To overcome such situations, we offer the option of direct simulation, which varies the user-defined signal and the initial conditions of the ODE system and then integrates the ODEs until a steady state is reached. A steady state is declared when the species’ concentrations do not change between ODE integration steps, within a predefined tolerance (here \(10^{-6}\)). Because direct simulation numerically integrates the system of ODEs, it will often take longer than the numerical continuation routine; however, it is more robust and may provide a bifurcation diagram when numerical continuation cannot.
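A minimal direct-simulation sketch for the Edelstein system, using `scipy.integrate.solve_ivp` and the parameter values reported above; the initial condition and time horizon are illustrative, not the CRNT4SBML defaults:

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, k3, k4, k5, k6 = (0.05024595, 0.01029913, 0.03592955,
                          0.01027423, 0.01272131, 0.01006076)

def rhs(t, c):
    """Full Edelstein ODE system (2) under mass action kinetics."""
    c1, c2, c3 = c
    return [k1*c1 - k2*c1**2 - k3*c1*c2 + k4*c3,
            (k4 + k5)*c3 - k3*c1*c2 - k6*c2,
            k3*c1*c2 - (k4 + k5)*c3 + k6*c2]

# start from a high concentration of A with B_tot = c2 + c3 = C1
B_tot = 7.79489058
sol = solve_ivp(rhs, (0.0, 65000.0), [10.0, 0.0, B_tot], rtol=1e-8, atol=1e-10)
steady = sol.y[:, -1]
print(np.max(np.abs(rhs(0.0, steady))))   # near zero: a steady state was reached
```

Sweeping `B_tot` and the initial concentration of *A* in this manner, and recording the steady-state value of \(c_1\) for each run, reproduces the dose-response behavior discussed below.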

Considering high and low concentrations of species *A* (response) for the initial value, we obtain the ODE simulations in Fig. 10 by direct simulation, where \(B_{tot} = C_1\).

If we then take the left and right plots at time 65,000 as the equilibrium, we can plot the values of \(B_{tot}\) versus the concentration of species *A*. The dose-response diagram obtained by direct simulation, depicted in Fig. 11, mirrors the bifurcation diagram obtained by numerical continuation in Fig. 9, cross-validating the two approaches.