The brute-force method of determining the parameter region that satisfies a certain behavioral specification
usually proceeds by Monte Carlo sampling of parameter sets, generating the corresponding trajectories according to (1), checking whether those satisfy *S*, and finally retaining only those parameter sets for which the specification *S* is satisfied. There are two immediate downsides to this approach. First, most draws will be unsuccessful for high-dimensional parameter spaces, for tight specifications, or for both. Approaches based on optimized sampling [11, 17] have been developed to mitigate this problem, but they do not solve it, as they still require convergence of the sampling. Second, drawing individual parameter points provides no guarantee that those points belong to a connected domain of consistent parameter sets. Here we provide first attempts to tackle both problems.
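The rejection scheme above can be sketched in a few lines. In this minimal sketch the one-dimensional decay model and the band specification are hypothetical placeholders for the system (1) and the specification *S*:

```python
import numpy as np

rng = np.random.default_rng(0)

def trajectory(k, t):
    # hypothetical stand-in for integrating system (1): x' = -k1*x, x(0) = k2
    return k[1] * np.exp(-k[0] * t)

def satisfies_spec(x):
    # hypothetical specification S: trajectory endpoint must lie inside a band
    return 0.1 <= x[-1] <= 0.2

t = np.linspace(0.0, 5.0, 50)
# Monte Carlo draws of parameter sets k = (k1, k2) from a uniform box
draws = rng.uniform(low=[0.0, 0.5], high=[2.0, 1.5], size=(10_000, 2))
# retain only parameter sets whose trajectory satisfies S (rejection sampling)
accepted = np.array([k for k in draws if satisfies_spec(trajectory(k, t))])

print(f"acceptance rate: {len(accepted) / len(draws):.3f}")
```

Even in this two-dimensional toy setting most draws are rejected; for tighter bands or higher-dimensional parameter spaces the acceptance rate deteriorates further.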

The main idea is to locally linearize the forward map *F* around some point and then locally invert it. Hence, a small enough local patch in feature space can be mapped backward to a small patch in parameter space. By successively sampling expansion points in their neighborhoods (e.g. by the ball-walk algorithm [18]) we can systematically cover the entire specification *S* and obtain the corresponding parameter region. A series expansion of *F* around some initial parameter set *k*^{0} reads

*F*(*k*^{0} + d*k*) = *F*(*k*^{0}) + *L* d*k* + 𝒪(‖d*k*‖^{2}).

Defining d*f* ≡ *F*(*k*^{0} + d*k*) − *F*(*k*^{0}) we see that a neighborhood d*f* in feature space can, to first order, be mapped backward using the Moore-Penrose pseudo-inverse that we define with care as

*L*^{†} = lim_{*ε*→0} (*L*^{T}*L* + *ε I*)^{−1} *L*^{T} = lim_{*ε*→0} *L*^{T} (*LL*^{T} + *ε I*)^{−1}, (2)

where *L* denotes the linearized forward map and hence is just the *m × p* matrix

*L* = ∂*F*(*k*)/∂*k* |_{*k* = *k*^{0}}. (3)

Note that the limit in (2) exists even if the inverses of *L*^{T}*L* and *LL*^{T} do not exist. Such situations are encountered as soon as the number of specification features *m* is smaller than the number of parameters, i.e. the dimension *p* of the parameter space. Importantly, we can compute (3) efficiently using the variational equation for the system (1). Observe that

*L*(*T*) = ∂*F*/∂*k* = ∫_{0}^{*T*} (∂*g*/∂*x*)(*x*(*t*)) (∂*x*/∂*k*)(*t*) d*t*,

where the last factor in the integrand is just the sensitivity of the solution of (1) to perturbations in *k* around *k*^{0}. According to the variational equation the sensitivity *S*(*t*) ≡ (∂*x*/∂*k*)(*t*) obeys the following ordinary *n × p* matrix differential equation

d*S*/d*t* = *J*_{*x*} *S* + *J*_{*k*}, *S*(0) = 0, (4)

where *J*_{*x*} and *J*_{*k*} denote the Jacobians of the right-hand side of (1) with respect to the state *x* and the parameters *k*, and where we skipped the explicit dependency on *k*^{0} for brevity. Note that (4) is equivalent to the transient sensitivity analysis of metabolic networks [9, 10], proposed as an extension of classical metabolic control analysis, which only deals with steady-state sensitivities. For a certain *k*^{0} the sensitivity ∂*g*/∂*x* of the kernel *g* is a constant *m × n* matrix that can be computed explicitly. Thus, by jointly solving (1) and (4) for some *k*^{0} together with

d*L*/d*t* = (∂*g*/∂*x*) *S*(*t*), *L*(0) = 0,

up to time *T* we obtain the linearized map *L* = *L*(*T*). Hence, for every sampled *k*^{0} and associated feature point *f*^{0} we propose to design a feature ball

*B*_{*δ*}(*f*^{0}) = {*f* : ‖*f* − *f*^{0}‖ ≤ *δ*}

and map it backward using *L*^{†}. According to the singular value decomposition *L*^{†} = *U*Σ*V*^{T}, with *U* and *V* orthogonal and Σ a diagonal matrix with non-negative entries [16], the backward transformation is a sequence of a rotation, a scaling and another rotation, and hence the image of *B*_{*δ*}(*f*^{0}) under *L*^{†} can only be an ellipsoid in the parameter space.

Clearly, sampling a multivariate region with balls of the same dimension allows for a complete coverage of the region, something that can only be extrapolated when using pointwise sampling [11]. The question of how to efficiently sample a region with balls has been addressed in computational geometry, and efficient randomized algorithms are available [18].
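The backward mapping of a feature ball can be sketched numerically. In the toy example below the two-parameter forward map `F` is a hypothetical stand-in for (1) composed with the feature kernel, and its Jacobian `L` is obtained by finite differences rather than by integrating the variational equation (4):

```python
import numpy as np

def F(k):
    # hypothetical nonlinear forward map: p = 2 parameters -> m = 2 features
    return np.array([k[0] * k[1], k[0] + np.sin(k[1])])

def jacobian(F, k0, h=1e-6):
    # finite-difference stand-in for the linearized map L of eq. (3)
    p = len(k0)
    cols = [(F(k0 + h * e) - F(k0)) / h for e in np.eye(p)]
    return np.column_stack(cols)

k0 = np.array([1.0, 0.5])
L = jacobian(F, k0)          # m x p matrix, eq. (3)
L_pinv = np.linalg.pinv(L)   # Moore-Penrose pseudo-inverse, eq. (2)

# Image of the feature ball B_delta(f0) under L_pinv is an ellipsoid:
# semi-axes delta * sigma_i, directed along the columns of U.
delta = 0.1
U, sigma, Vt = np.linalg.svd(L_pinv, full_matrices=False)
print("ellipsoid semi-axes:", delta * sigma)
```

The singular values of *L*^{†} scaled by *δ* give the semi-axes of the parameter ellipsoid, so a strongly anisotropic spectrum immediately signals directions in parameter space that are poorly constrained by the features.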

We remark that the map *L* is not the best local approximation to *F*(*k*) in some norm sense. More specifically, we can improve on *L* if we are given additional samples in the neighborhood of *k*^{0}. Suppose we draw another point *k*^{i} in this neighborhood; then we can construct a rank-one update to *L*,

*L̃* = *L* + (Δ*F* − *L* Δ*k*) Δ*k*^{T} / ‖Δ*k*‖^{2}, (5)

where Δ*F* ≡ *F*(*k*^{i}) − *F*(*k*^{0}) and Δ*k* ≡ *k*^{i} − *k*^{0}. In particular, the rank-one term in (5) captures the nonlinear part of *F*. From (5) it follows that the matrix *L̃* satisfies the consistency property

*L̃* Δ*k* = Δ*F*. (6)

Thus, knowing how to construct rank-one updates over the domain of interest is equivalent to knowing *F*(*k*) locally. In fact, *L̃* is the matrix closest to *L*, with respect to the Frobenius norm, that satisfies (6). Subsequently we will use this improved linear approximation to *F* to bound the error that one incurs if one uses the pseudo-inverse *L*^{†} for the backward map. This will also provide the means to determine the maximal ball size *δ* that stays below a certain error bound. We quantify the error in feature space by the backward map followed by a forward map. That is, we want to find a *δ* such that the discrepancy between *f*^{0} + d*f* and its inverse-forward image stays below a prescribed tolerance for all *f*^{0} + d*f* ∈ *B*_{*δ*}(*f*^{0}). Now suppose we know a bound *ρ*(*δ*) for the Frobenius norm of the rank-one perturbation, i.e.

‖*L̃* − *L*‖_{F} ≤ *ρ*(*δ*)

in the local domain of interest. Note that *ρ*(*δ*) can and needs to be estimated by sampling. Given a d*f* with ‖d*f*‖ ≤ *δ*, the maximal error of the inverse-forward map is

max_{‖Δ*L*‖_{F} ≤ *ρ*(*δ*)} ‖(*L* + Δ*L*) *L*^{†} d*f* − d*f*‖,

which is known from robust least squares [16] to be equivalent to the error

‖*LL*^{†} d*f* − d*f*‖ + *ρ*(*δ*) ‖*L*^{†} d*f*‖.

Assuming that *L* has linearly independent rows, *LL*^{†} is the identity matrix and thereby the error simplifies to

*ρ*(*δ*) ‖*L*^{†} d*f*‖.

This result provides one way to determine the radius *δ* of the feature ball when relying on the pseudo-inverse *L*^{†}.