Exact reconstruction of gene regulatory networks using compressive sensing
 Young Hwan Chang^{1},
 Joe W Gray^{2} and
 Claire J Tomlin^{1, 3}
DOI: 10.1186/s12859-014-0400-4
© Chang et al.; licensee BioMed Central. 2014
Received: 18 April 2014
Accepted: 27 November 2014
Published: 14 December 2014
Abstract
Background
We consider the problem of reconstructing a gene regulatory network structure from limited time series gene expression data, without any a priori knowledge of connectivity. We assume that the network is sparse, meaning the connectivity among genes is much less than full connectivity. We develop a method for network reconstruction based on compressive sensing, which takes advantage of the network’s sparseness.
Results
For the case in which all genes are accessible for measurement, and there is no measurement noise, we show that our method can be used to exactly reconstruct the network. For the more general problem, in which hidden genes exist and all measurements are contaminated by noise, we show that our method leads to reliable reconstruction. In both cases, coherence of the model is used to assess the ability to reconstruct the network and to design new experiments. We demonstrate that it is possible to use the coherence distribution to guide biological experiment design effectively. By collecting a more informative dataset, the proposed method helps reduce the cost of experiments. For each problem, a set of numerical examples is presented.
Conclusions
The method provides a guarantee on how well the inferred graph structure represents the underlying system, reveals deficiencies in the data and model, and suggests experimental directions to remedy the deficiencies.
Keywords
Gene regulatory networks, Inference, Compressive sensing
Introduction
Mathematical modeling of biological signaling pathways can provide an intuitive understanding of their behavior [1]–[3]. However, since knowledge of the network structure is typically incomplete and the system dynamics are complex, the challenge is to show that the identified networks and corresponding mathematical models adequately represent the underlying system. In recent years, many data-driven mathematical tools have been developed and applied to reconstruct graph representations of gene regulatory networks (GRNs) from data. These include Bayesian networks, regression, correlation, mutual information and system-based approaches [4]–[10]. These approaches focus on either static or time series data; the latter has the advantage of being able to identify dynamic relationships between genes.
However, data-driven reconstruction of the network structure itself remains in general a difficult problem; nonlinearities in the system dynamics and measurement noise make it even more challenging. For linear time-invariant (LTI) systems, there exist necessary and sufficient conditions for network reconstruction [11]. However, for time-varying or nonlinear systems, there has not yet been any statistical guarantee on how well the inferred model represents the underlying system [12]–[15]. The recent work [16] addresses the problem of data-driven network reconstruction in the presence of measurement noise and unmodelled nonlinear dynamics, yet it points out that these complications impose a limit on the reconstruction, and that with strong nonlinear terms the method fails. Additionally, identifying whether important nodes in the graph structure are missing, how many are missing, and where these nodes sit in the interconnection structure remains a challenging problem.
Moreover, in order to continue to have an impact in systems biology, identification of the graph topology from data should be able to reveal deficiencies in the model and suggest new experimental directions [17]. For example, Steinke et al. [18] presented a Bayesian method for identifying a gene regulatory network from microarray measurements in perturbation experiments and showed how to use optimal design to minimize the number of measurements.
Since biological regulatory networks are known to be sparse [19]–[23], meaning that most genes interact with only a small number of genes compared with the total number in the network, many methods [12]–[15],[24]–[27] take advantage of this sparsity. These methods typically use l _{1}-norm optimization, which leads to a sparse representation of the network and improves the ability to find the actual network structure. Even though many methods [14],[15] show that the reconstruction results are fairly good, they cannot guarantee exact recovery. This stems from the fact that the so-called incoherence condition is typically not satisfied for the matrix Ω in a linear measurement model Y=Ω q, where q is the signal to be reconstructed, Y is the measurement, and Ω is known as the sensing matrix. In this paper, we construct Y and Ω from time series protein or gene expression data with candidate basis functions or kinetic features, and q reflects the (unknown) underlying GRN structure to be reconstructed. Roughly, coherence is a measure of the correlation between columns of the sensing matrix. Since the incoherence condition of the sensing matrix provides a metric of performance, this is one of the motivating factors for the use of compressive sensing (CS) [28] in GRN reconstruction. CS is a signal processing technique for efficiently acquiring and reconstructing a signal by taking advantage of the signal’s sparsity, allowing the entire signal to be determined from relatively few measurements provided that the incoherence condition is satisfied. In the Human Epidermal Growth Factor Receptor 2 (HER2) positive breast cancer signaling pathway that we studied in [29],[30], time series datasets consist of only 8 time point measurements of 20 protein signals, and we would like to use this limited data to identify a graph structure which could have 20×20=400 edges.
In this paper, we consider reconstruction of biochemical reaction networks (protein-protein interactions) or GRNs (including mRNA and transcription factors). Biological processes in cells are carried out through gene regulation, signal transduction and interactions between proteins; thus, the network structure we consider is an abstraction of the system’s biochemical reaction dynamics, describing the manifold ways in which one substance affects all the others to which it is connected. We are therefore interested in directed graph representations of signaling pathways, and we model biochemical reaction networks or GRNs as a set of basis functions or features that best explains a set of time series observations such as protein or gene expression data. We develop a new algorithm for GRN reconstruction based on CS. First, we focus on sparse graph structures using limited time series data with all nodes accessible and no measurement noise. We test the network reconstruction algorithm on a simulated biological pathway in which the structure is known a priori. We explain the limitation of the proposed method’s performance when the dataset naturally has high coherence and propose a way to overcome this limitation by designing additional, more effective experiments. The method for proposing new experiments is thus a large part of this study: as systems biology matures, experimental study design becomes more important than simply building computational models of experimental data. By revealing deficiencies in the data and model, the method can suggest experimental directions to remedy those deficiencies, thereby minimizing the number of experiments and reducing their cost.
Next, the proposed algorithm is extended to a more general problem: we consider partially corrupted data, with data inconsistencies due to model mismatch and measurement noise affecting all the data. Typically, data inconsistencies result from missing nodes in the model; in some cases, arbitrary data corruption may result from human error such as mislabeling or the improper use of markers or antibodies. The question is whether one can still recover the graph structure reliably under these conditions. Inspired by a robust error correction method [31], exact recovery of the graph structure can be guaranteed under suitable conditions on the node dynamics, provided that hidden nodes affect relatively few nodes in the graph structure. A set of numerical examples is provided to demonstrate the method, including some from an RPPA (Reverse Phase Protein Array) dataset [32] collected from HER2 positive breast cancer cell lines.

The CS framework uses the coherence of the sensing matrix as a performance index, which allows us to mathematically assess and optimize network reconstruction.

Coherence also provides a guideline for optimizing experiment design for network reconstruction.

By utilizing an error correction method in conjunction with the CS framework, network reconstruction may be performed even when there are hidden nodes and measurement noise.
Background
Overview: compressive sensing
Consider the linear measurement model $\mathbf{Y}=\mathit{\Omega}\mathbf{q}$, where $\mathit{\Omega}\in {\mathbb{R}}^{m\times n}$ is called the sensing matrix, $\mathbf{q}\in {\mathbb{R}}^{n}$ is the signal to be recovered, and $\mathbf{Y}\in {\mathbb{R}}^{m}$ is the measurement.

If m>n and Ω is a full rank matrix, then the problem is overdetermined. If m=n and Ω is a full rank matrix, the problem is determined and may be solved uniquely for q.

If m<n, the problem is underdetermined even if Ω has full rank. We can restrict $\mathbf{q}\in {\mathbb{R}}^{n}$ to the subspace which satisfies Y=Ω q. However, q cannot be determined uniquely.
For the underdetermined case, the least squares solution ${\mathbf{q}}^{\ast}=\arg\underset{\mathbf{q}}{\min}{\Vert \mathbf{q}\Vert}_{{l}_{2}}={\mathit{\Omega}}^{\ast}{\left(\mathit{\Omega}{\mathit{\Omega}}^{\ast}\right)}^{-1}\mathbf{Y}$ is typically used as the “best guess” in many applications. However, if q is known to be sparse, meaning that many of its components are zero, one might expect that fewer than n measurements are needed to recover it. It is thus of interest to obtain a good estimator for underdetermined problems in which q is assumed to be s-sparse, meaning that s of the elements of q are nonzero. In principle, the theory of compressive sensing (CS) asserts that the number of measurements needed to recover $\mathbf{q}\in {\mathbb{R}}^{n}$ is proportional to the compressed size of the signal (s), rather than the uncompressed size (n) [34]. To recover q, CS relies on two properties: sparsity, which pertains to the signals of interest, and incoherence, which pertains to the sensing matrix.
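To make the contrast concrete, the following is a minimal numerical sketch (not from the paper; sizes, seed, and solver are illustrative choices, and numpy/scipy are assumed) comparing the minimum-l2 “best guess” with l1-minimization (basis pursuit, posed as a linear program) for an underdetermined system whose true q is s-sparse:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, s = 12, 30, 2                      # m < n: underdetermined system
Omega = rng.standard_normal((m, n))      # random sensing matrix (low coherence)
q_true = np.zeros(n)
q_true[[3, 17]] = [1.5, -2.0]            # s-sparse signal
Y = Omega @ q_true

# Minimum-l2 solution q* = Omega^T (Omega Omega^T)^{-1} Y -- generally dense.
q_l2 = Omega.T @ np.linalg.solve(Omega @ Omega.T, Y)

# l1-minimization (basis pursuit) as an LP with the split q = q+ - q-:
#   min 1^T(q+ + q-)  s.t.  Omega (q+ - q-) = Y,  q+, q- >= 0
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([Omega, -Omega]), b_eq=Y,
              bounds=[(0, None)] * (2 * n))
q_l1 = res.x[:n] - res.x[n:]

print(np.count_nonzero(np.abs(q_l2) > 1e-6))  # many nonzeros: dense guess
print(np.linalg.norm(q_l1 - q_true))          # near zero: sparse q recovered
```

Note that m=12 satisfies the practical rule of thumb m≥4s quoted later in the text, which is why the l1 solution recovers q exactly while the l2 solution does not.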
Proposition 1.
[33]. Suppose that any 2s columns of an m×n matrix Ω are linearly independent (this is a reasonable assumption if m≥2s). Then, any ssparse signal $\mathbf{q}\in {\mathbb{R}}^{n}$ can be reconstructed uniquely from Ω q.
The l _{1}-minimization in Equation (3) requires mild oversampling; more specifically, m≥c·μ(Φ,Ψ)^{2}·s log n for some positive constant c, where Ω can be decomposed as a product of a sparsity basis Ψ and an orthogonal measurement system Φ [28]. We have m measurements in the Φ domain (sensing modality) under basis Ψ (signal model), and μ represents the coherence, defined as follows:
Definition 1.
$\mu(\mathit{\Omega})=\underset{j\ne k}{\max}\frac{|\langle {\mathit{\Omega}}_{j},{\mathit{\Omega}}_{k}\rangle |}{{\Vert {\mathit{\Omega}}_{j}\Vert}_{{l}_{2}}{\Vert {\mathit{\Omega}}_{k}\Vert}_{{l}_{2}}},$ where Ω _{ j } and Ω _{ k } denote columns of Ω. In plain English, the coherence measures the largest correlation between any two columns of Ω.
Several theoretical results [35] ensure that l _{1}-minimization guarantees exact recovery whenever the sensing matrix Ω is sufficiently incoherent; for example, we say that Ω is incoherent if μ is small. Coherence is a key property in the compressed sensing framework because if two columns are closely correlated, it is very difficult to distinguish whether the influence on the measurement comes from one column or the other (recall that the measurement Y(=Ω q) is a linear combination of the columns of the sensing matrix Ω with the components of q as coefficients). Also, numerical experiments suggest that in practice, most s-sparse signals are in fact recovered exactly once m≥4s [28]. Here, “exact recovery” means that we find the sparsest solution q such that Y=Ω q.
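The coherence definition above translates directly into code. The sketch below (illustrative, with a hand-picked toy matrix) normalizes each column to unit l2 norm so that μ is simply the largest off-diagonal entry of the Gram matrix of correlations:

```python
import numpy as np

def coherence(Omega):
    """Mutual coherence: max over j != k of
    |<Omega_j, Omega_k>| / (||Omega_j|| * ||Omega_k||)."""
    U = Omega / np.linalg.norm(Omega, axis=0)   # unit-norm columns
    G = np.abs(U.T @ U)                         # pairwise |correlations|
    np.fill_diagonal(G, 0.0)                    # ignore self-correlation
    return G.max()

# Toy example: the third column is the (normalized) sum of the first two,
# so its correlation with each of them is 1/sqrt(2) ~ 0.707.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
print(coherence(A))   # 0.7071...
```

A μ near 1 signals two nearly parallel columns, i.e., two candidate basis functions whose contributions the data cannot disentangle.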
Therefore, if the sensing matrix Ω satisfies the incoherence condition (m≥c·μ(Φ,Ψ)^{2}·s log n, or in practice, m≥4s), a sufficiently sparse signal q can be exactly recovered from the limited dataset without any prior knowledge of the number of nonzero elements, their locations, or their values. On the other hand, if the condition is not satisfied, exact recovery cannot be guaranteed [17],[34]. However, it is possible to use coherence to guide biological experiment design, essentially to collect a more informative dataset. As we discuss in this paper, this can be done by inhibiting or stimulating certain genes to manipulate the gene expression.
CS can help reconstruct GRNs
In graph theory, a digraph can be represented by G=(V,E) where V and E represent nodes and edges respectively. For GRNs, each node represents a gene and each edge represents an influence map which models how genes affect each other. For example, the interactions could be how genes inhibit or stimulate each other. Since the connectivities of GRNs are typically unknown, often the best we can do is to select a set of possible candidate functions encoding possible unknown connectivities between genes.
In this section, we formulate the data-driven network identification problem in the CS framework: first, we define a dynamical model of a gene regulatory network. Then, we encode the system dynamics into the sensing matrix (Ω) and denote the unknown connectivities between genes by q, the signal to be recovered.
A dynamical model of gene regulatory networks (GRNs)
We consider network dynamics of the form $\dot{\mathbf{x}}=\mathbf{f}\left(\mathbf{x}\right)+\mathbf{u}+\mathbf{g}\left({\mathbf{x}}_{h}\right)+\mathbf{w}$, where $\mathbf{x}\in {\mathbb{R}}^{n}$ denotes the concentrations of the rate-limiting species, which can be measured in experiments; $\dot{\mathbf{x}}={\left[{\dot{x}}_{1}\phantom{\rule{2.77626pt}{0ex}}{\dot{x}}_{2}\phantom{\rule{2.77626pt}{0ex}}\dots \phantom{\rule{2.77626pt}{0ex}}{\dot{x}}_{n}\right]}^{\top}\in {\mathbb{R}}^{n}$ is the vector of changes in the concentrations of the n species over time; and $\mathbf{f}(\cdot):{\mathbb{R}}^{n}\to {\mathbb{R}}^{n}$ represents biochemical reactions, such as those governed by mass action kinetics, Michaelis-Menten, or Hill kinetics. Thus, f(·) can include functions of known form such as products of monomials, monotonically increasing or decreasing Hill functions, simple linear terms, and constant terms [24]. $\mathbf{u}\in {\mathbb{R}}^{n}$ denotes the control input, which can represent inhibitions and stimulations: for protein-protein interactions, u represents drug-induced perturbations, and for a GRN we consider the use of siRNA as a control input for gene knockdown. $\mathbf{g}(\cdot):{\mathbb{R}}^{{n}_{h}}\to {\mathbb{R}}^{n}$ represents influence from hidden nodes ${\mathbf{x}}_{h}\in {\mathbb{R}}^{{n}_{h}}$, which cannot be measured in experiments; n _{ h }, the number of hidden nodes, is unknown; and $\mathbf{w}\in {\mathbb{R}}^{n}$ represents energy-bounded process or measurement noise. Here x, $\dot{\mathbf{x}}$ and u are assumed to be known: x can be measured in experiments, and although $\dot{\mathbf{x}}$ may not be measured directly, it can be calculated by interpolating x and using numerical derivatives.
In [36], the authors pointed out that although time-derivative data can be difficult to obtain, especially in the presence of noise, it is possible to estimate gene expression relatively accurately by repeating measurements with careful instrumentation and statistics [37].
Formulating a dynamical system as a GRN
Any biochemical reaction of this form can be written as $\mathbf{f}\left(\mathbf{x}\right)={\mathbf{S}}_{q}{\mathbf{F}}_{b}\left(\mathbf{x}\right)$, where ${\mathbf{S}}_{q}=\left[\begin{array}{ccc}{q}_{11}& \cdots & {q}_{N1}\\ \vdots & \ddots & \vdots \\ {q}_{1n}& \cdots & {q}_{Nn}\end{array}\right]=\left[\begin{array}{c}{\mathbf{q}}_{1}^{\top}\\ \vdots \\ {\mathbf{q}}_{n}^{\top}\end{array}\right]\in {\mathbb{R}}^{n\times N}$ reflects the (unknown) underlying GRN structure, ${\mathbf{q}}_{i}={\left[{q}_{1i}\phantom{\rule{2.77626pt}{0ex}}{q}_{2i}\phantom{\rule{2.77626pt}{0ex}}\dots \phantom{\rule{2.77626pt}{0ex}}{q}_{Ni}\right]}^{\top}\in {\mathbb{R}}^{N}$, and ${\mathbf{F}}_{b}\left(\mathbf{x}\right)={\left[{f}_{b,1}\left(\mathbf{x}\right),\phantom{\rule{2.77626pt}{0ex}}\dots ,\phantom{\rule{2.77626pt}{0ex}}{f}_{b,N}\left(\mathbf{x}\right)\right]}^{\top}\in {\mathbb{R}}^{N}$ is the vector of possible candidate basis functions. In this way, any biochemical reaction can be represented by a linear map S _{ q } and a function F _{ b }(x), where S _{ q } encodes the underlying graph structure and F _{ b }(x) includes all candidate functions that could appear in the biochemical reactions.
In practice, we can construct F _{ b }(x) by selecting the candidate basis functions most commonly used to model GRNs or protein-protein interactions, for example, all monomials, binomials, other combinations, and Hill functions.
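A hypothetical sketch of such a library construction is shown below; the particular basis choice (monomials up to degree 2 plus increasing Hill terms, with Hill constant K and exponent h) is an illustrative assumption, not the paper's exact dictionary:

```python
import numpy as np
from itertools import combinations_with_replacement

def candidate_library(X, K=1.0, h=2):
    """X: (M, n) time series, rows = time points, columns = species.
    Returns an (M, N) matrix whose columns are candidate basis functions
    f_{b,j}(x) evaluated along the trajectory."""
    M, n = X.shape
    cols = [np.ones(M)]                              # constant term
    for deg in (1, 2):                               # monomials x_i, x_i*x_j
        for idx in combinations_with_replacement(range(n), deg):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    for i in range(n):                               # increasing Hill terms
        cols.append(X[:, i]**h / (K**h + X[:, i]**h))
    return np.column_stack(cols)

# 8 time points, 3 genes (nonnegative concentrations), as in a small assay
X = np.abs(np.random.default_rng(1).standard_normal((8, 3)))
Phi = candidate_library(X)
print(Phi.shape)   # (8, 13): 1 constant + 3 linear + 6 quadratic + 3 Hill
```

With only M=8 time points and N=13 candidates, this already puts us in the N≫M regime discussed below, which is exactly where the sparsity assumption does the work.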
Example 1.
where v= [ v _{1}, 0, 0]^{⊤} and w= [ w _{1}, w _{2}, w _{3}]^{⊤}.
Formulating GRN into the CS framework
For each species i, stacking the M time points gives ${\mathbf{y}}_{i}=\mathit{\Phi}{\mathbf{q}}_{i}$, where ${\mathbf{y}}_{i}={\left[{y}_{i}\left({t}_{1}\right),\phantom{\rule{2.77626pt}{0ex}}{y}_{i}\left({t}_{2}\right),\phantom{\rule{2.77626pt}{0ex}}\dots ,\phantom{\rule{2.77626pt}{0ex}}{y}_{i}\left({t}_{M}\right)\right]}^{\top}\in {\mathbb{R}}^{M}$ represents the M successive data points, $\mathit{\Phi}\in {\mathbb{R}}^{M\times N}$ consists of the N possible candidate bases, which are functions of the given time series data x, and ${\mathbf{q}}_{i}\in {\mathbb{R}}^{N}$ represents the unknown influence map corresponding to the ith species. Since a biochemical reaction network is typically sparse, q _{ i } is sparse; moreover, N≫M for Φ because the time series data are limited and we may choose a larger set of basis functions than necessary.
Now, Equation (12) is in the form of the CS formulation in Equation (1) where $\mathbf{Y}\in {\mathbb{R}}^{n\xb7M}$ is the measurement, $\mathit{\Omega}\in {\mathbb{R}}^{n\xb7M\times n\xb7N}$ is the sensing matrix which consists of basis functions for the given time series data, $\mathbf{v}\triangleq \left[{\mathbf{v}}_{1};\phantom{\rule{2.77626pt}{0ex}}{\mathbf{v}}_{2};\phantom{\rule{2.77626pt}{0ex}}\dots ;\phantom{\rule{2.77626pt}{0ex}}{\mathbf{v}}_{n}\right]\in {\mathbb{R}}^{n\xb7M}$ represents possibly large corruption from hidden nodes and $\mathbf{w}\triangleq \left[{\mathbf{w}}_{1};\phantom{\rule{2.77626pt}{0ex}}{\mathbf{w}}_{2};\phantom{\rule{2.77626pt}{0ex}}\dots ;\phantom{\rule{2.77626pt}{0ex}}{\mathbf{w}}_{n}\right]\in {\mathbb{R}}^{n\xb7M}$ represents process or measurement noise.
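Under the assumption that every gene shares the same candidate basis matrix Φ (an illustrative simplification; the paper later transforms this system), the stacked sensing matrix has Kronecker block structure, which the following sketch makes explicit:

```python
import numpy as np

M, N, n = 4, 6, 3                                 # illustrative sizes
Phi = np.random.default_rng(2).standard_normal((M, N))
Omega = np.kron(np.eye(n), Phi)                   # (n*M, n*N) stacked sensing matrix
q = np.random.default_rng(3).standard_normal(n * N)
Y = Omega @ q                                     # stacked measurement

# The single system Y = Omega q is equivalent to the n per-gene
# regressions y_i = Phi q_i:
for i in range(n):
    assert np.allclose(Y[i*M:(i+1)*M], Phi @ q[i*N:(i+1)*N])
print(Omega.shape)   # (12, 18)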
In this paper, we make the following assumptions:
Assumption 1.

(ideal case) We assume that we can measure all states x (i.e., there is no hidden node (x _{ h }), thus v=0) and there is no measurement noise (w=0). Also, there are enough columns in Φ in Equation (11) to represent the underlying system in Equation (6).

(extension) We consider hidden nodes (x _{ h }), where v(=g(x _{ h })) is assumed to be sparse, and process noise (w≠0). Here, we can also consider the case where the columns (also known as dictionaries in the CS literature [28]) of Φ may not be able to represent the underlying system; in this case, we treat the influence from the missing dictionaries as v.
We first consider the ideal case for simplicity in explaining the main results, and then extend the proposed method to the more general case.
Results and discussion
Formulating the GRN identification problem in the CS framework
Many existing algorithms [16],[24] consider the n independent linear regression problems in Equation (11) separately. Since the columns of the matrix Φ are composed of time series data as in Equation (10), it is difficult to guarantee low correlation a priori, and Φ sometimes even suffers from rank deficiency. Also, Pan et al. [24] pointed out that correlation between the columns of Φ is usually high (μ(Φ) is close to 1).
Intuitively, if two columns of the sensing matrix are highly correlated, it is hard to distinguish the corresponding components in the sparse signal q (note that the measurement Y is a linear combination of the columns of the sensing matrix with the components of q as coefficients). In order to deal with high coherence in Equation (11), many methods combine CS with other techniques such as a Bayesian formulation [24], Kalman filtering [25], and Granger causality [26]. This is also a well-known problem in the lasso formulation of network inference: if there is high coherence in the sensing matrix, one can use an elastic net, which combines the l _{1} and l _{2} norms. Although each reconstruction result [24]–[26] might be optimal in the sense of its own formulation (e.g., maximum likelihood), the identified graph may not represent the underlying GRN. In other words, if the dataset is not informative enough to fully explore the underlying system, the identified graph structure may be an optimal solution of the particular optimization problem and still fail to represent the true system.
In this paper, our goal is to obtain the smallest dataset that is informative enough to recover the underlying graph structure exactly. Since the proposed method stays within the CS framework by reducing the coherence of the sensing matrix, it is fundamentally different from methods that use other techniques in conjunction with l _{1} optimization [24]–[26] to obtain a sparse representation of the network. Hence, we can use all the properties of CS to assess the ability to exactly reconstruct the underlying graph structure, reveal deficiencies in the data and model, and design new experiments to remedy those deficiencies if necessary.
While maintaining the CS framework, in order to deal with high coherence we formulate Equation (12) instead of Equation (11), since Ω has many more independent columns and thus more degrees of freedom with which to reduce its coherence. We will show that, by using a transformation, the components of the sensing matrix can be made more uniformly distributed so that the coherence is reduced.
Moreover, since each q _{ i } in general has a different degree of sparsity, if we consider the n independent equations in Equation (11), Φ should satisfy the incoherence condition $M\ge 2\underset{i}{\max}\left({\Vert {\mathbf{q}}_{i}\Vert}_{{l}_{0}}\right)$ stated in Proposition 1. On the other hand, Ω only needs to satisfy the condition $M\cdot n\ge 2\cdot \sum _{i=1}^{n}{\Vert {\mathbf{q}}_{i}\Vert}_{{l}_{0}}$. Since the averaged sparsity $\frac{1}{n}\sum _{i=1}^{n}{\Vert {\mathbf{q}}_{i}\Vert}_{{l}_{0}}$ is no larger than $\underset{i}{\max}\left({\Vert {\mathbf{q}}_{i}\Vert}_{{l}_{0}}\right)$, we can reduce the required number of samples M. Also, in case of rank deficiency, we can simply remove the corresponding rows (note that rank deficiency is more likely caused by the rows, since N≫M).
Optimal design of sensing matrix
Reducing coherence by transformation
where ${\psi}_{ij}^{s}\in {\mathbb{R}}^{n}$ represents the jth column of ${\mathit{\Psi}}_{i}^{s}$. In this paper, we propose a heuristic approach and a novel way to find ${\mathit{\Psi}}_{i}^{s}$ by solving the optimization iteratively to reduce coherence (see Method: Section ‘Randomly chosen matrix Ψ’ for details).
Example 2.
Example 3.
Designing effective experiments

Since we solve the relaxed problem in Equation (24) iteratively, P might be suboptimal.

If the time series data of two different gene expressions, x _{ i } and x _{ j } are highly correlated, it might be difficult to reduce coherence. In this case, we need to design a new experiment to remedy deficiencies in the data.
As mentioned, the incoherence of the sensing matrix can be used not only as a metric to guarantee exact recovery but also as a guideline for designing new experiments. For example, from the coherence distribution, we can identify which columns of the sensing matrix have high coherence, i.e., which f _{b,i}(x) and f _{b,j}(x) in Equation (6). Intuitively, in order to reduce the ambiguity arising from highly correlated columns of the sensing matrix, we should perturb either f _{b,i}(x) or f _{b,j}(x). Thus, this property of coherence can be used to guide biological experiment design toward collecting a more informative dataset, minimizing the number of experiments and reducing their cost.
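Identifying which column pairs dominate the coherence distribution is a simple computation; the sketch below (illustrative sizes and seed, numpy assumed) ranks pairs by absolute correlation, planting a nearly duplicated column so the culprit pair stands out:

```python
import numpy as np

def most_coherent_pairs(Omega, top=3):
    """Return the `top` column pairs (j, k, |correlation|) with the
    highest mutual coherence, in decreasing order."""
    U = Omega / np.linalg.norm(Omega, axis=0)    # unit-norm columns
    G = np.abs(U.T @ U)                          # pairwise |correlations|
    iu = np.triu_indices_from(G, k=1)            # distinct pairs j < k only
    order = np.argsort(G[iu])[::-1][:top]
    return [(iu[0][o], iu[1][o], G[iu][o]) for o in order]

rng = np.random.default_rng(4)
Omega = rng.standard_normal((8, 5))
Omega[:, 4] = Omega[:, 1] + 0.01 * rng.standard_normal(8)  # nearly parallel pair
pairs = most_coherent_pairs(Omega)
print(pairs[0][:2])   # (1, 4): the nearly duplicated columns dominate
```

In the paper's setting, the returned indices point to the basis functions f _{b,i}(x) and f _{b,j}(x) whose associated genes the next perturbation experiment should decouple.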
Example 4.
To show the effectiveness of the new experiment, we design two different experiment sets and compare the reconstruction results: for Exp#2, we perturb x _{3}, and for Exp#3, we perturb x _{2}. As mentioned earlier, we intuitively expect Exp#3 to be the more informative experiment for identifying the graph structure, since we would like to reduce the coherence between x _{2} and x _{4}. As expected, Exp#3 reduces the coherence more than Exp#2, as shown in Figure 4, and recovers the exact graph structure, as shown in Figure 5. On the other hand, the coherence of Exp#2 remains almost the same as that of Exp#1, and we fail to recover the exact graph structure. This numerical example illustrates that the property of coherence can guide biological experiment design effectively.
Example 5.
where the subscript i in ${\mathbf{Z}}_{i},{\bar{\mathit{\Omega}}}_{i},{\bar{\mathbf{v}}}_{i},{\bar{\mathbf{w}}}_{i}$ denotes the ith experiment. As the number of measurements M increases, one may be able to reduce the coherence; however, this works only if the additional measurements provide more useful information. As a trivial example, one could stack an exact copy of the data on top of the first, increasing M to 2M, yet the coherence would be exactly the same as that of the original dataset.
Recovery of gene regulatory networks
In this section, we present reconstruction of the exact graph structure and show how the condition for exact recovery will be used. First, we consider the ideal case where there are no hidden nodes and no measurement noise. Second, we extend the ideal case to the more general case.
Reconstructing gene regulatory networks (ideal case)
where $\stackrel{~}{\mathbf{Z}}\triangleq \mathcal{P}\mathbf{Z}$, $\stackrel{~}{\mathit{\Omega}}\triangleq \mathcal{P}\bar{\mathit{\Omega}}$, and $\mathcal{P}$ is the optimal transformation in Equation (24).
Proposition 2.
If every set of 2s columns of the sensing matrix $\stackrel{~}{\mathit{\Omega}}$, constructed from time series data and multiplied by the optimal transformation $\mathcal{P}$, is linearly independent, then any s-sparse network structure q can be reconstructed uniquely from $\stackrel{~}{\mathbf{Z}}=\stackrel{~}{\mathit{\Omega}}\mathbf{q}$.
Proof.
Suppose not; then there are two s-sparse graph structures q _{1}, q _{2} with $\stackrel{~}{\mathit{\Omega}}{\mathbf{q}}_{1}=\stackrel{~}{\mathit{\Omega}}{\mathbf{q}}_{2}$ (or $\stackrel{~}{\mathit{\Omega}}({\mathbf{q}}_{1}-{\mathbf{q}}_{2})=0$). However, q _{1}−q _{2} is 2s-sparse, so there is a linear dependence among 2s columns of $\stackrel{~}{\mathit{\Omega}}$ (contradiction).
The requirement on 2s linearly independent columns in Proposition 2 may be translated into an incoherence condition on the sensing matrix. That is, if the unknown s-sparse signal q is reasonably sparse, it is possible to recover q under the incoherence condition on the sensing matrix. Although the sensing matrix consists of redundant dictionaries, its coherence can be reduced. In a heuristic way, we multiply the redundant dictionaries Φ _{k,:} by a randomly chosen matrix ${\mathit{\Psi}}_{k}^{s}$ at each time step k and iterate until the coherence decreases. Alternatively, we can find the optimal transformation $\mathcal{P}$ in Equation (24) to reduce the coherence. In the previous numerical examples, we illustrated that the coherence of the sensing matrix is decreased by the transformation and showed exact reconstruction of the graph structure.
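The heuristic just described can be sketched as a simple accept-if-better loop. This is a simplified illustration, not the paper's exact construction of the per-time-step matrices ${\mathit{\Psi}}_{k}^{s}$ or of the optimized $\mathcal{P}$; the transform distribution and iteration count are arbitrary choices:

```python
import numpy as np

def coherence(Omega):
    U = Omega / np.linalg.norm(Omega, axis=0)
    G = np.abs(U.T @ U)
    np.fill_diagonal(G, 0.0)
    return G.max()

def reduce_coherence(Omega, iters=200, seed=0):
    """Keep the random left-transform P whenever coherence(P @ Omega)
    improves; never returns a transform worse than the identity."""
    rng = np.random.default_rng(seed)
    m = Omega.shape[0]
    P_best, mu_best = np.eye(m), coherence(Omega)
    for _ in range(iters):
        P = rng.standard_normal((m, m))     # random square transform
        mu = coherence(P @ Omega)
        if mu < mu_best:
            P_best, mu_best = P, mu
    return P_best, mu_best

Omega = np.random.default_rng(5).standard_normal((6, 12))
Omega[:, 0] = Omega[:, 1] + 0.05 * np.random.default_rng(6).standard_normal(6)
P, mu_new = reduce_coherence(Omega)
print(coherence(Omega), mu_new)   # mu_new is never worse, typically lower
```

Since the transform is applied on the left, Z = Ω q becomes PZ = PΩ q with the same q, so any coherence gained this way comes free of modeling assumptions.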
Example 6.
Graph reconstruction with hidden nodes
The main contribution of the method proposed in the previous section is the conversion of the problem of inferring graph structure into the CS framework, with which we demonstrated that one can recover sparse graph structures from only a few measurements. However, for practical use, the proposed method needs to be able to deal with both sparsely corrupted signals (v=g(x _{ h })) and measurement noise (w) in Equation (5).
In general, the assumption of accessibility or observability of all nodes [17] is not satisfied. Thus, we focus on the case in which the hidden node affects observable nodes directly, as shown in Figure 1(B); without loss of generality, the hidden node dynamics can be an arbitrary dynamic model. Even if there is no hidden node, a small portion of a biological experimental dataset can in practice be contaminated by large errors resulting from, for example, mislabeling or improper use of markers or antibodies. Moreover, all biological datasets are contaminated by at least a small amount of noise from measurement devices. Therefore, the proposed method should be robust. We note that this goes beyond the results in [11] because of the consideration of hidden node dynamics together with measurement noise.
Here, the question is whether it is still possible to reconstruct the graph structure reliably when measurements are corrupted. Since hidden nodes and measurement noise are considered, the number of time points is assumed to be greater than in the previous case [17] (no corruption and no measurement noise); thus, the number of rows of the sensing matrix is assumed to be greater than the number of columns. If the number of time points M(<N) is limited, then we can stack $\bar{\mathbf{z}}\left(j\right)$ with different ${\mathit{\Psi}}_{k}^{s}$, or include different datasets as described in Equation (16).
In the CS literature [31], two decoding strategies are introduced for recovering a signal from a corrupted measurement, where the corruption includes both a possibly sparse vector of large errors and a vector of small errors affecting all the entries. It is shown that the two decoding schemes allow recovery of the signal with nearly the same accuracy as if no sparse large errors had occurred. Our contribution is to convert the problem of inferring the graph structure with hidden nodes into this highly robust error correction framework [31] (see Method: Section ‘Two-step refinements’) and to show how this improves the reliability of reconstruction.
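One of the decoding strategies of [31] can be loosely sketched as follows (a simplified illustration with arbitrary sizes and seed, not the paper's two-step refinement): a matrix F whose rows span the left null space of Ω annihilates Ω q, so F Z = F v depends only on the sparse corruption v, which can then be recovered by l1-minimization and subtracted before solving for q:

```python
import numpy as np
from scipy.linalg import null_space
from scipy.optimize import linprog

rng = np.random.default_rng(7)
m, n = 24, 12
Omega = rng.standard_normal((m, n))
q_true = np.zeros(n); q_true[[2, 9]] = [1.0, -1.0]   # sparse structure
v_true = np.zeros(m); v_true[5] = 3.0                # one grossly corrupted row
Z = Omega @ q_true + v_true

F = null_space(Omega.T).T              # (m - n, m) annihilator: F @ Omega == 0
c_meas = F @ Z                         # equals F @ v_true, independent of q

# min ||v||_1  s.t.  F v = c_meas, as an LP with the split v = v+ - v-
cost = np.ones(2 * m)
res = linprog(cost, A_eq=np.hstack([F, -F]), b_eq=c_meas,
              bounds=[(0, None)] * (2 * m))
v_hat = res.x[:m] - res.x[m:]

# subtract the estimated corruption, then solve for q by least squares
q_hat = np.linalg.lstsq(Omega, Z - v_hat, rcond=None)[0]
print(np.linalg.norm(q_hat - q_true))  # near zero despite the corruption
```

The key point is that the gross error in row 5 does not contaminate the estimate of q, because it is identified and removed before the final regression.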
Example 7.
In practice, a specific node may be corrupted by a hidden node, and a small portion of the dataset may be heavily corrupted by human error. Also, since we choose the set of possible candidate basis functions of the sensing matrix in Equation (6), the columns of the sensing matrix may not be able to represent the underlying system (i.e., there may be missing dictionaries). We can then treat the influence of these missing dictionaries as v.
Example 8.
Geometric view
In Equation (17), since we assume all nodes are accessible and the measurement is perfect (meaning there is no hidden node, v=0, and no measurement noise, w=0), we can solve Equation (27) directly without filtering out the unmodelled dynamics in Equation (26) (i.e., $\mathbf{Z}=\bar{\mathit{\Omega}}\mathbf{q}$ in Equation (27)). If there exist hidden nodes or measurement noise, we can still obtain an unambiguous indication of the existence of this corruption (e≠0).

$\stackrel{\u0304}{\mathbf{Z}}={\mathbf{Q}}^{\ast}\mathbf{Z}=\mathbf{0}$: there is no hidden node $\phantom{\rule{2.77626pt}{0ex}}\Rightarrow \phantom{\rule{2.77626pt}{0ex}}\mathbf{Z}\in \mathcal{R}\left(\stackrel{\u0304}{\mathit{\Omega}}\right)$

$\stackrel{\u0304}{\mathbf{Z}}\ne \mathbf{0}$: Z cannot be represented by $\stackrel{\u0304}{\mathit{\Omega}}\mathbf{q}$, so there may be hidden nodes, or the dictionaries in the sensing matrix $\stackrel{\u0304}{\mathit{\Omega}}$ may be insufficient to represent Z (revealing deficiencies in our model or dictionaries).
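As a small numeric sketch of this residual test (with made-up matrices standing in for $\stackrel{\u0304}{\mathit{\Omega}}$ and Z, not the paper's dataset), projecting Z onto the orthogonal complement of the range of the sensing matrix yields a near-zero residual exactly when the data are explained by the model:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
Omega = rng.standard_normal((12, 4))   # stands in for the tall sensing matrix Omega_bar
Qstar = null_space(Omega.T).T          # rows span the orthogonal complement of R(Omega)

z_clean = Omega @ rng.standard_normal(4)             # data representable by the model
z_corrupt = z_clean + 2.0 * rng.standard_normal(12)  # data with an unmodelled component

print(np.linalg.norm(Qstar @ z_clean) < 1e-10)   # True: Z lies in R(Omega)
print(np.linalg.norm(Qstar @ z_corrupt) > 1e-3)  # True: deficiency revealed
```

The projection `Qstar` plays the role of Q* above: a nonzero residual flags either hidden-node influence or missing dictionaries, without distinguishing between the two.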
HER2-overexpressed breast cancer
We also consider an abstract model of the breast cancer signaling pathway proposed by M. Moasser [38], shown in Additional file 1: Figure S3(B), where the PHLPP isoforms are a pair of protein phosphatases, PHLPP1 and PHLPP2, which are important regulators of the Akt serine/threonine kinases (Akt1, Akt2, Akt3) and of conventional protein kinase C (PKC) isoforms. PHLPP may act as a tumor suppressor in several types of cancer due to its ability to block growth factor-induced signaling in cancer cells [39]. PHLPP dephosphorylates Ser473 (the hydrophobic motif) in Akt, thus partially inactivating the kinase [40]. Unfortunately, our RPPA dataset does not include PHLPP, so we simply consider three nodes (Akt_{pT308}, Akt_{pS473} and mTOR). Figure 11 (right) shows the result of the proposed method on the RPPA dataset. The reconstructed graph structure matches the known structure (Additional file 1: Figure S3(B)). In particular, our result captures the partially inactivating role of PHLPP (i.e., mTOR(→PHLPP)⊣Akt_{pS473}).
Discussion
Many network inference algorithms use l_{1}-norm optimization to find the actual network structure, but they cannot guarantee exact recovery. In this paper, we propose a novel approach to reconstructing GRNs based on compressive sensing with a dynamic system model. We demonstrate that the incoherence condition is essential for exact recovery; this condition is not properly considered in inference methods based on l_{1}-norm optimization. We also illustrate how the incoherence condition can be used to design new experiments effectively. Finally, we consider a more general setting, in which hidden genes exist and all measurements are contaminated by noise, and we show that the proposed method still leads to reliable reconstruction.
We compare all results with l_{1}- and l_{2}-norm optimization, since l_{1}-norm optimization is widely used in network inference and many inference methods combine the l_{1} and l_{2} norms (for example, the elastic net). We highlight the improvement of the proposed method over these commonly used methods. Our use of the method to propose new experiments, and its extension to a more general setting, go beyond existing network inference methods.
From the lessons learned in the numerical study, using both synthetic datasets and real RPPA datasets of HER2-overexpressed breast cancer signaling pathways, the proposed method can be applied to subnetworks, in order to satisfy our assumption about the number of hidden nodes relative to the network size, and has great potential for reconstructing GRNs through effective experiment design. Note that the method is limited not by the network size itself but by this assumption and the incoherence condition. In the ideal case, with no unobservable species (genes) and no noise, the method can be applied to entire networks provided the incoherence condition is satisfied (i.e., n·M≥2s, practically n·M≥4s), where n is the number of nodes, M is the number of time points and s is the sparsity of the network structure. For example, with only one time point (M=1), the method is not applicable to any network with n>2 nodes, since the incoherence condition cannot be satisfied: for n simply connected nodes (i.e., s=n−1), n·M<2s. With a reasonable number of time points M, however, the incoherence condition can be satisfied. Similarly, in the practical case, with unobservable species (genes) and measurement noise, exact recovery of the graph structure requires that Assumption 1 hold for the given network (each hidden node affects relatively few nodes) and that the incoherence condition be satisfied. If we choose a small subnetwork in which a hidden or unobservable node affects all nodes, it is impossible to infer the exact structure of that subnetwork. However, if a hidden node affects only relatively few nodes in the given network, we can still reconstruct the exact structure of the subnetwork by first inferring the hidden node's influence.
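The counting argument above can be checked directly; a minimal sketch (the function name and example sizes are ours, for illustration only):

```python
# Necessary counting condition n*M >= factor*s for recovering an
# s-sparse network from M time points over n nodes (factor=2 in
# theory, factor=4 in practice, per the discussion above).
def enough_measurements(n, M, s, factor=2):
    return n * M >= factor * s

n = 10
# Simply connected chain: s = n - 1 nonzero interactions.
print(enough_measurements(n, M=1, s=n - 1))  # False: counting condition violated
print(enough_measurements(n, M=4, s=n - 1))  # True: counting condition met
```

This reproduces the text's example: with M=1 and s=n−1, n·M<2s for any n>2, whereas a few time points restore the condition. Note this is only a necessary condition; incoherence of the sensing matrix must still hold.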
Therefore, the proposed method is limited not by the network size itself but by Assumption 1 and the incoherence condition. In practice, we can work on one subnetwork at a time and integrate the identified subnetworks.
Finally, we evaluate performance with respect to problem size under typical parameters (the number of genes, the number of kinetic features and the connectivity) in Additional file 1: Table S1, which provides a brief guideline for application, e.g., for an n-gene network with certain kinetic features, how many time points, and at what resolution, are appropriate for reconstruction.
In this paper, we do not assume any a priori knowledge of connectivity. However, when partial information is available, prior knowledge of connectivity can be used in the form of known nonzero elements of q, allowing fewer experiments to be performed. Also, while we only consider protein or gene expression levels here, many other types of useful information can be incorporated into the network reconstruction process, such as sequence motifs and direct binding measurements (e.g., ChIP-chip and ChIP-seq). Since ChIP-seq signals of histone modification have been shown to correlate more strongly with transcription factor motifs at promoter sites than RNA levels do, time series histone modification ChIP-seq could provide a more reliable inference of GRNs than methods based on expression levels. We are interested in continuing this research direction for large-scale networks.
Conclusion
We proposed a method for reconstructing sparse graph structures from time series gene expression data without any a priori information, and demonstrated that the proposed method reconstructs graph structure reliably. We also illustrated that the coherence of the sensing matrix can be used as a guideline for designing effective experiments.
We then extended the proposed method to cases in which the dynamics is corrupted by hidden nodes and the measurements are corrupted by human error in addition to measurement noise. Using a two-step refinement procedure, we demonstrated good reconstruction performance. A set of numerical examples illustrates the method and its performance, and a biological example of HER2-overexpressed breast cancer using an RPPA dataset is studied. We are currently applying our method to recover the HER2 signaling pathway, a significant part of which is currently unknown.
Method
Reducing coherence by transformation
Rearranging the sensing matrix
Randomly chosen matrix Ψ
The optimization problem in Equation (14) is nontrivial because of the constraint. One simple, heuristic approach is to select ${\mathit{\Psi}}_{i}^{s}$ as a (normalized) randomly chosen matrix with independent identically distributed (i.i.d.) entries such that $\stackrel{\u0304}{\mathit{\Omega}}$ has full row rank, calculate $\mu \left(\stackrel{\u0304}{\mathit{\Omega}}\right)$, and repeat several times to reduce coherence. Since the randomly chosen matrix ${\mathit{\Psi}}_{i}^{s}$ spreads out the components of Φ _{i,:} uniformly, the coherence can be reduced.
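A toy sketch of this random-trial heuristic, with a made-up sensing matrix `Phi` (mutual coherence here is the largest |cosine| between distinct columns; we keep the best of several random transforms, and for simplicity we do not enforce the full-row-rank condition on the transformed matrix beyond the fact that a Gaussian matrix is almost surely full rank):

```python
import numpy as np

def coherence(A):
    """Mutual coherence: largest |cosine| between distinct columns of A."""
    G = A / np.linalg.norm(A, axis=0)   # column-normalize
    C = np.abs(G.T @ G)
    np.fill_diagonal(C, 0.0)            # ignore self-correlations
    return C.max()

rng = np.random.default_rng(2)
Phi = rng.standard_normal((20, 10))
Phi[:, 1] = Phi[:, 0] + 0.05 * rng.standard_normal(20)  # two highly correlated columns

# Keep the random transform Psi that gives the lowest coherence of Psi @ Phi,
# starting from the identity (no transform) as the baseline.
best_mu, best_Psi = coherence(Phi), np.eye(20)
for _ in range(50):
    Psi = rng.standard_normal((20, 20))  # i.i.d. Gaussian trial, full rank a.s.
    mu = coherence(Psi @ Phi)
    if mu < best_mu:
        best_mu, best_Psi = mu, Psi

print(f"identity: {coherence(Phi):.3f}, best random trial: {best_mu:.3f}")
```

By construction the retained transform never increases coherence relative to the identity; how much it helps depends on how correlated the original bases are.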
Finding the optimal transformation
We can also combine ‖·‖_{ ∞ } and ‖·‖_{1} to reduce the coherence. Note that with ‖·‖_{ ∞ } we minimize the maximum coherence, while with ‖·‖_{1} we minimize the sum over all pairs of columns of $\stackrel{\u0304}{\mathit{\Omega}}$, i.e., over ${\stackrel{~}{\mu}}_{\mathit{\text{ij}}}$. Thus, if certain bases are highly correlated, P or $\mathcal{P}$ spreads out the components of the sensing matrix enough to differentiate the influences of those bases. Since we ignore the denominator, the solution of Equation (24) may be suboptimal; in practice, we can therefore iterate between the heuristic approach and Equation (24) to reduce the coherence of the sensing matrix.
Two-step refinements
Sparse large corruption with no measurement noise
where $\stackrel{\u0304}{\mathit{\Psi}}\triangleq \text{diag}\phantom{\rule{0.3em}{0ex}}\left\{{\mathit{\Psi}}_{1}^{s},{\mathit{\Psi}}_{2}^{s},\dots ,{\mathit{\Psi}}_{M}^{s}\right\}$ and v(j) represents a sparse large corruption at the jth time point, which could result from the existence of hidden nodes. We assume that the influence of hidden nodes is sparse and unknown (i.e., v(j) is assumed to be sparse); in other words, hidden nodes affect only a few nodes' dynamics (intuitively, if hidden nodes affected all nodes, there would be no way to reconstruct the graph structure). We then consider the reconstruction of the graph structure q from the corrupted signal Z.
Then, the following twostep optimization problem enables us to compute q:
If we can obtain an accurate estimate $\widehat{\mathbf{v}}$ from Equation (26), then Equation (27) reduces to the problem of reconstructing the graph structure q. The intuition is that $\mathbf{Z}(=\stackrel{\u0304}{\mathit{\Omega}}\mathbf{q}+\stackrel{\u0304}{\mathit{\Psi}}\mathbf{v})$ can be decomposed as the superposition of the modelled dynamics and the anomalies caused by hidden nodes or unmodelled dynamics.
where Ξ can be used as a tuning matrix for satisfying the incoherence condition. The geometric view in Results and discussion: Section ‘Geometric view’ shows how deficiencies in our model can be revealed.
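The two-step scheme of Equations (26)-(27) can be sketched numerically as follows. This is a minimal instance with made-up dimensions, $\stackrel{\u0304}{\mathit{\Psi}}$ taken as the identity, and the l_{1} minimization posed as a linear program; it is an illustration of the decomposition, not the paper's implementation:

```python
import numpy as np
from scipy.optimize import linprog
from scipy.linalg import null_space

rng = np.random.default_rng(0)
m, n = 30, 8
Omega = rng.standard_normal((m, n))   # tall sensing matrix Omega_bar
q_true = rng.standard_normal(n)       # connectivity parameters to recover
v_true = np.zeros(m)
v_true[[3, 17]] = [5.0, -4.0]         # sparse large corruption (hidden-node influence)
Z = Omega @ q_true + v_true

# Step 1 (cf. Equation (26)): annihilate Omega_bar, so F @ Z = F @ v
# depends only on the sparse corruption; recover v by l1 minimization,
# written as a linear program with v = v_pos - v_neg, v_pos, v_neg >= 0.
F = null_space(Omega.T).T             # rows of F span the left null space of Omega
res = linprog(np.ones(2 * m), A_eq=np.hstack([F, -F]), b_eq=F @ Z,
              bounds=[(0, None)] * (2 * m))
v_hat = res.x[:m] - res.x[m:]

# Step 2 (cf. Equation (27)): subtract the estimated corruption and
# solve for q on the cleaned data.
q_hat, *_ = np.linalg.lstsq(Omega, Z - v_hat, rcond=None)
print("max error in q:", np.abs(q_hat - q_true).max())
```

Because the corruption is sparse relative to the number of annihilated measurements, the l_{1} step recovers v essentially exactly, and the second step then recovers q, mirroring the "nearly as accurate as the uncorrupted case" behavior discussed above.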
Sparse large corruption with measurement noise
where the parameters ε _{1},ε _{2} above depend on the magnitude of the small errors ε, which can be determined as in [31].
Endnotes
Additional file
Declarations
Acknowledgements
This research was supported by the NIH NCI under the ICBP and PSOC programs (5U54CA11297008), and by the Stand Up To Cancer-American Association for Cancer Research Dream Team Translational Cancer Research Grant SU2C-AACR-DT0409 to JWG.
Authors’ Affiliations
References
 Ideker T, Ozier O, Schwikowski B, Siegel AF: Discovering regulatory and signaling circuits in molecular interaction networks. Bioinformatics. 2002, 18 (Suppl 1): S233-S240. 10.1093/bioinformatics/18.suppl_1.S233.
 Wiley HS, Shvartsman SY, Lauffenburger DA: Computational modeling of the EGF-receptor system: a paradigm for systems biology. Trends Cell Biol. 2003, 13 (1): 43-50. 10.1016/S0962-8924(02)00009-0.
 Ma L, Iglesias PA: Quantifying robustness of biochemical network models. BMC Bioinformatics. 2002, 3 (38): 1-13.
 Sontag ED: Network reconstruction based on steady-state data. Essays Biochem. 2008, 45: 161-176. 10.1042/BSE0450161.
 Zechner C, Ruess J, Krenn P, Pelet S, Peter M, Lygeros J, Koeppl H: Moment-based inference predicts bimodality in transient gene expression. Proc Natl Acad Sci U S A. 2012, 109 (21): 8340-8345. 10.1073/pnas.1200161109.
 Zavlanos MM, Julius AA, Boyd SP, Pappas GJ: Identification of stable genetic networks using convex programming. Proceedings of the American Control Conference (ACC). 2008, IEEE, Seattle, WA, 2755-2760.
 Cooper NG, Belta CA, Julius AA: Genetic regulatory network identification using multivariate monotone functions. Proceedings of the IEEE Conference on Decision and Control and European Control Conference (CDC-ECC). 2011, IEEE, Orlando, FL, 2208-2213.
 Porreca R, Drulhe S, de Jong H, Ferrari-Trecate G: Structural identification of piecewise-linear models of genetic regulatory networks. J Comput Biol. 2008, 15 (10): 1365-1380. 10.1089/cmb.2008.0109.
 Bernardo DD, Gardner TS, Collins JJ: Robust identification of large genetic networks. Pac Symp Biocomput. 2004, 9: 486-497.
 Richard G, Julius AA, Belta C: Optimizing regulation functions in gene network identification. IEEE Conference on Decision and Control (CDC). 2013, IEEE, Firenze, Italy, 745-750.
 Gonçalves JM, Warnick SC: Necessary and sufficient conditions for dynamical structure reconstruction of LTI networks. IEEE Trans Automatic Control. 2008, 53 (7): 1670-1674. 10.1109/TAC.2008.928114.
 Sontag E, Kiyatkin A, Kholodenko BN: Inferring dynamic architecture of cellular networks using time series of gene expression, protein and metabolite data. Bioinformatics. 2004, 20 (12): 1877-1886. 10.1093/bioinformatics/bth173.
 Han S, Yoon Y, Cho KH: Inferring biomolecular interaction networks based on convex optimization. Comput Biol Chem. 2007, 31 (5-6): 347-354. 10.1016/j.compbiolchem.2007.08.003.
 Napoletani D, Sauer TD: Reconstructing the topology of sparsely connected dynamical networks. Phys Rev E. 2008, 77 (2): 026103. 10.1103/PhysRevE.77.026103.
 Napoletani D, Sauer T, Struppa DC, Petricoin E, Liotta L: Augmented sparse reconstruction of protein signaling networks. J Theor Biol. 2008, 255 (1): 40-52. 10.1016/j.jtbi.2008.07.026.
 Yuan Y, Stan GB, Warnick S, Gonçalves J: Robust dynamical network structure reconstruction. Automatica. 2011, 47: 1230-1235. 10.1016/j.automatica.2011.03.008.
 Chang YH, Tomlin CJ: Data-driven graph reconstruction using compressive sensing. IEEE Conference on Decision and Control (CDC). 2012, IEEE, Maui, HI, 1035-1040.
 Steinke F, Seeger M, Tsuda K: Experimental design for efficient identification of gene regulatory networks using sparse Bayesian models. BMC Syst Biol. 2007, 1: 51. 10.1186/1752-0509-1-51.
 Gardner TS, di Bernardo D, Lorenz D, Collins JJ: Inferring genetic networks and identifying compound mode of action via expression profiling. Science. 2003, 301 (5629): 102-105. 10.1126/science.1081900.
 Tegner J, Yeung MKS, Hasty J, Collins JJ: Reverse engineering gene networks: integrating genetic perturbations with dynamical modeling. Proc Natl Acad Sci. 2003, 100: 5944-5949. 10.1073/pnas.0933416100.
 Jeong H, Mason SP, Barabasi AL, Oltvai ZN: Lethality and centrality in protein networks. Nature. 2001, 411 (6833): 41-42. 10.1038/35075138.
 Milo R, Shen-Orr S, Itzkovitz S, Kashtan N, Chklovskii D, Alon U: Network motifs: simple building blocks of complex networks. Science. 2002, 298 (5594): 824-827. 10.1126/science.298.5594.824.
 Burda Z, Krzywicki A, Martin OC, Zagorski M: Motifs emerge from function in model gene regulatory networks. Proc Natl Acad Sci. 2011, 108 (42): 17263-17268. 10.1073/pnas.1109435108.
 Pan W, Yuan Y, Gonçalves J, Stan GB: Reconstruction of arbitrary biochemical reaction networks: a compressive sensing approach. IEEE Conference on Decision and Control (CDC). 2012, IEEE, Maui, HI, 2334-2339.
 Noor A, Serpedin E, Nounou M, Nounou H: Reverse engineering sparse gene regulatory networks using cubature Kalman filter and compressed sensing. Adv Bioinformatics. 2013, 2013: 205763. 10.1155/2013/205763.
 Emad A, Milenkovic O: CaSPIAN: a causal compressive sensing algorithm for discovering directed interactions in gene networks. PLoS ONE. 2014, 9 (3): e90781. 10.1371/journal.pone.0090781.
 Wang WX, Lai YC, Grebogi C, Ye J: Network reconstruction based on evolutionary-game data via compressive sensing. Phys Rev X. 2011, 1: 021021.
 Candès E, Romberg J: Sparsity and incoherence in compressive sampling. Inverse Probl. 2006, 23: 969-985. 10.1088/0266-5611/23/3/008.
 Chang YH, Gray J, Tomlin CJ: Optimization-based inference for temporally evolving Boolean networks with applications in biology. Proceedings of the American Control Conference (ACC). 2011, IEEE, San Francisco, CA, 4129-4134.
 Chang YH, Gray J, Tomlin CJ: Optimization-based inference for temporally evolving networks with applications in biology. J Comput Biol. 2012, 19 (12): 1307-1323. 10.1089/cmb.2012.0190.
 Candès EJ, Randall PA: Highly robust error correction by convex programming. IEEE Trans Inf Theory. 2008, 54 (7): 2829-2840. 10.1109/TIT.2008.924688.
 Hennessy BT, Lu Y, Gonzalez-Angulo AM, Carey MS, Myhre S, Ju Z, Davies MA, Liu W, Coombes K, Meric-Bernstam F, Bedrosian I, McGahren M, Agarwal R, Zhang F, Overgaard J, Alsner J, Neve RM, Kuo WL, Gray JW, Borresen-Dale AL, Mills GB: A technical assessment of the utility of reverse phase protein arrays for the study of the functional proteome in non-microdissected human breast cancers. Clin Proteomics. 6 (4): 129-151.
 Tao T: Compressed sensing (Or: the equation Ax = b, revisited), Mahler Lecture Series. 2009, University of California, Los Angeles.
 Candès E, Wakin M: An introduction to compressive sampling. IEEE Signal Process Mag. 2008, 25 (2): 21-30. 10.1109/MSP.2007.914731.
 Candès EJ, Romberg JK, Tao T: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans Inf Theory. 2006, 52: 489-509. 10.1109/TIT.2005.862083.
 Yeung MKS, Tegner J, Collins JJ: Reverse engineering gene networks using singular value decomposition and robust regression. Proc Natl Acad Sci. 2002, 99 (9): 6163-6168. 10.1073/pnas.092576199.
 Ideker T, Thorsson V, Siegel AF, Hood LE: Testing for differentially-expressed genes by maximum-likelihood analysis of microarray data. J Comput Biol. 2000, 7 (6): 805-817. 10.1089/10665270050514945.
 Moasser M: Understanding the network topology underlying addiction to HER. Integrative Cancer Biology Program (ICBP) Retreat (Days of Science, the Sequel). 2012, Waverly Club, Southeast Waverly Drive, Portland.
 Brognard J, Newton AC: PHLPPing the switch on Akt and protein kinase C signaling. Trends Endocrinol Metab. 2008, 19 (6): 223-230. 10.1016/j.tem.2008.04.001.
 Gao T, Furnari F, Newton AC: PHLPP: a phosphatase that directly dephosphorylates Akt, promotes apoptosis, and suppresses tumor growth. Mol Cell. 2005, 18 (1): 13-24. 10.1016/j.molcel.2005.03.008.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.