A Scalable Algorithm for Structure Identification of Complex Gene Regulatory Network from Temporal Expression Data

Motivation: Gene regulatory interactions are of fundamental importance to various biological functions and processes. However, only a few previous computational studies have claimed success in revealing genome-wide regulatory landscapes from temporal gene expression data, especially for complex eukaryotes like human. Moreover, recent work suggests that these methods still suffer from the curse of dimensionality if the network size increases to 100 or higher. Results: We present a novel scalable algorithm for identifying genome-wide regulatory network structures. The highlight of our method is that its superior performance does not deteriorate even for a network size on the order of $10^4$, and it is thus readily applicable to large-scale complex networks. Such a breakthrough is achieved by considering both prior biological knowledge and multiple topological properties (i.e., sparsity and hub gene structure) of complex networks in the regularized formulation. We also illustrate the application of our algorithm in practice using time-course expression data from an influenza infection study in respiratory epithelial cells. Availability and Implementation: The algorithm described in this article is implemented in MATLAB$^\circledR$. The source code is freely available from https://github.com/Hongyu-Miao/DMI.git. Contact: jliu@cs.rochester.edu; hongyu.miao@uth.tmc.edu Supplementary information: Supplementary data are available online.


Introduction
A gene regulatory network (GRN), consisting of multiple regulators and their target molecules, plays critical roles in numerous biological processes by modulating the expression levels of RNAs and proteins [26]. While remarkable successes in dissecting single genes responsible for certain biological functions, behaviors or diseases have been achieved over the past few decades, it has been increasingly recognized that elucidating gene functions and interactions in the context of networks is essential for gaining novel insight into the mechanisms, effects and interventions of molecular, cellular or organ-level biological processes [4,5,27]. Clearly, one of the prerequisites for investigators to harvest the benefits of such systematic network approaches is whether the structures of gene regulatory networks can be accurately revealed from experimental data.
Modern high-throughput experimental technologies such as next-generation sequencing [40] can generate time-course data at a much more affordable cost [10], thus providing unprecedented opportunities for researchers to systematically investigate the temporal patterns of gene expression and infer gene regulatory relationships. However, two well-known obstacles have significantly hampered our ability to interrogate such data for novel scientific findings. First, limited by resources or by technical and ethical issues, the sampling frequency of time-course gene expression profiling data is low (e.g., most of the time-course GEO datasets [14] have fewer than six time points), which renders the sample size far smaller than the number of unknown parameters in the context of GRN structure identification. In such scenarios, it is of significant importance to borrow information from additional sources (e.g., prior biological knowledge). Second, for complex eukaryotes like human, the number of protein-coding genes is approximately 19,000 [15], so the problem dimension is ultra-high (i.e., tens of thousands or even millions of unknown parameters are involved). The development of novel and more efficient algorithms that can scale to such high-dimensional networks is thus still necessary.
A number of modeling and computational approaches have been developed for gene network structure identification [35], including information theory methods [e.g., 45], clustering methods [e.g., 30], Boolean networks [49], Bayesian networks [e.g., 19], state-space models [21], regression models [52], and differential equation models [e.g., 13]. The well-known methods for static expression data include the Pearson correlation coefficient (PCC) and mutual information (MI) methods [2,36], the ARACNE algorithm [38] based on the data processing inequality concept [12], the context likelihood of relatedness (CLR) approach [16] that extends the relevance networks method [8], the GENIE3 algorithm [23] that uses tree-based gene ranking and ensembles, and the TIGRESS algorithm [20] that combines least angle regression with stability selection. However, only a few algorithms were specifically developed for time-course data. [53] proposed to calculate the time-delayed dependencies between gene expressions using mutual information, and developed and tested the TimeDelay-ARACNE algorithm on networks with fewer than 30 genes. Our group introduced the concept of a background network to confine the parameter space using prior biological knowledge and built the SITPR pipeline based on a dynamic Bayesian network (DBN) model regularized by the $\ell_1$ norm [33]. More recently, [24] developed the Jump3 method by combining the on/off stochastic differential equation model with decision trees, where the marginal likelihood of each node was used as the decision function.
The DREAM Challenge [48] makes it feasible to generate authoritative benchmark data for algorithm performance evaluation. To the best of our knowledge, none of the algorithms above have been evaluated on DREAM networks of a size greater than 100; in fact, their performance already becomes unsatisfactory at a network size of 100, as verified in several previous studies [2,33,36]. Therefore, a breakthrough in algorithm development is needed to obtain accurate and reliable results for large-scale GRN inference (e.g., a network size of $O(10^4)$). The work of [33] is one of the few studies that systematically borrow information from prior biological knowledge to deal with the low sampling frequency and high dimensionality issues in GRN inference; however, even after the incorporation of prior knowledge, the number of unknown parameters is still on the order of $10^5$, so the high dimensionality issue remains. Since GRNs are typical complex networks with structural sparsity [31], it is common to impose sparsity on the inference problem formulation, as in previous studies [20,33,52]; however, it has turned out that sparsity alone cannot satisfactorily address the high dimensionality problem [33]. This observation leads us to the hypothesis that considering additional structural or topological properties of large-scale complex networks may yield novel scalable algorithms for GRN inference that are not subject to the curse of dimensionality.
In this study, we adopt the background network approach developed in our previous study [33] for parameter space confinement, and we modify other selected state-of-the-art algorithms to take advantage of the same background network for fairness of algorithm performance comparison. More importantly, a breakthrough in algorithm performance is achieved by considering both structural sparsity and the existence of hub genes, a prominent topological feature of GRNs due to preferential gene attachment [5]. We describe the mathematical formulation and computational steps of the novel algorithm in Section 2. We evaluate and compare the performance of our algorithm with other state-of-the-art approaches using DREAM4 benchmark data generated from networks of a size up to $2 \times 10^4$, and we apply the proposed method to real temporal gene expression data from an influenza H1N1 virus infection study to illustrate its usage in practice. The related computational settings and results are presented in Section 3. Finally, we discuss the advantages and limitations of this work in Section 4.

Background regulatory network
For the ultra-high dimensional problems considered in this study, it is important to constrain the parameter space based on prior biological information to improve computing efficiency and avoid overfitting, especially for datasets with limited sample sizes. For this purpose, the concept of a background regulatory network was introduced in our previous work [33], and we also recently developed a curated database called RegNetwork for public use [32]. Here we present the key features of the background network and describe how it is employed in this study for GRN inference. First, the background regulatory network is comprehensive: it contains not only experimentally observed regulatory interactions but also physically, chemically or biologically feasible interactions. For this purpose, information from over 20 selected databases or knowledgebases (e.g., FANTOM [46], TRANSFAC [39], JASPAR [7], KEGG [25], HPRD [42], and ENCODE [18]) has been collected and integrated. In addition, potential interactions between transcription factors and target genes are predicted based on sequence motifs obtained from Ensembl [17]. In this way, the background network also allows the discovery of novel regulatory relationships that have not been officially reported or documented. Second, the background network is not cell type- or condition-specific, but it allows the detection of cell type- or condition-specific regulatory relationships. The reason is that under different conditions (e.g., different diseases), only certain regulatory interactions in certain types of cells will be activated in the background network, and an appropriately designed algorithm should be able to detect a reasonable number of activated interactions in the presence of the inactive ones.
Third, the background network differs from random networks [3] in certain properties such as characteristic path length and node degree distribution [1]; moreover, its node degrees are found to follow a power-law distribution, so the background network is scale-free.
Here we use the background regulatory network concept in both the simulation studies and the real data example. That is, our primary task is to identify the activated interactions in the confined parameter space specified by the background network. We recognize that some of the existing algorithms under comparison start with the full network structure; therefore, for fairness, we make the necessary modifications to their computer code so that they also take the background network as the parameter space. It should also be mentioned that, for simplicity, the same weight is assigned to all the edges in the background network in this study. That is, our algorithm automatically determines the sparse structure and the hub gene structures without being informed of the potential locations of such structures beforehand. The algorithm performance could be further improved if edge weights were assigned based on the strength of evidence in, e.g., the literature; however, even with a simple equal-weight scheme, our algorithm already achieves a satisfying performance (see the Results Section). Finally, for the simulation studies, the background network is constructed by adding additional random edges to the DREAM4 network structures; for the real data example, the background network for human fetched from the RegNetwork database is used, which contains 23,079 nodes and 372,774 edges.

Model formulation of topological features of GRN
While sparsity of GRNs has been extensively explored in previous studies, the hub gene structure has rarely been addressed in existing formulations of GRN inference. This study suggests that in addition to sparsity, the hub gene structure is also of significant importance to the development of scalable algorithms that are not subject to the curse of dimensionality.
Throughout this paper, the following notations are used:
• $n$ denotes the total number of nodes in a network;
• $x_t \in \mathbb{R}^n$ denotes the expression levels of the $n$ genes (and miRNAs) at time $t$;
• $\circ$ denotes the Hadamard product (i.e., the entry-wise product of two matrices); for example, $[1, 3] \circ [0, 2] = [0, 6]$;
• $\|M\|_F$ is the Frobenius norm of a matrix $M$, i.e., $\|M\|_F = (\sum_{i,j} M_{ij}^2)^{1/2}$;
• $\|M\|$ is the spectral norm (i.e., the maximal singular value of a matrix);
• $C_{\cdot j}$ and $C_{i\cdot}$ denote the $j$th column and the $i$th row of $C$, respectively.

The dynamic Bayesian network (DBN) model [54] is considered in this study for GRN inference. It has been shown by previous studies [28,54] that under the normality and Markov assumptions, the DBN model can be reduced to a vector autoregression (VAR) model as follows:

$$x_{t+1} = P x_t, \quad t = 1, \dots, T-1, \qquad (1)$$

where $T$ is the total number of time points, and $P \in \mathbb{R}^{n \times n}$ is the coefficient matrix characterizing the change from time $t$ to the next time point $t+1$. $P_{ij} \neq 0$ indicates the existence of a regulatory relationship between regulator $j$ and target $i$. Also, for simplicity and robustness, uneven time intervals are treated as of equal length. We consider an equivalent form of Eq. (1):

$$x_{t+1} - x_t = C x_t, \qquad (2)$$

where $C = P - I$. Our goal is to estimate the coefficient matrix $C$ (or equivalently $P$) given the time-course observations $\{x_t\}_{t=1}^{T}$. The objective function can be formulated as follows:

$$\min_C \; f(C) := \frac{1}{2} \|Y - CX\|_F^2, \qquad (3)$$

where $X = (x_1, \dots, x_{T-1})$ and $Y = (x_2 - x_1, \dots, x_T - x_{T-1})$, both in $\mathbb{R}^{n \times (T-1)}$. Since the total number of distinct time points is usually small, the solution obtained by directly solving (3) suffers from overfitting. To overcome this problem, the commonly used strategy is to confine the model space using regularizations or constraints, for example, the $\ell_1$ norm regularization for imposing model sparsity. However, in addition to sparsity, complex networks like GRNs exhibit other prominent topological characteristics [11]. To illustrate this, we use the Yeast gene network as an example. The top left graph in Fig. 1 shows a typical substructure of the Yeast gene network; the bottom left figure is the corresponding coefficient matrix $C$, where the blue dots indicate nonzeros (3.6%) and the white areas indicate zeros (96.4%) in $C$. Sparsity is thus a clear feature in Fig. 1, and we also consider the following network characteristics in our model formulation.
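As a concrete illustration of the VAR formulation, the design matrices $X$ and $Y$ and the least-squares objective in Eq. (3) can be assembled from a time-course expression matrix as follows. This is a minimal Python sketch of our own (the released implementation is in MATLAB); the function name `var_objective` is ours:

```python
import numpy as np

def var_objective(C, x):
    """Least-squares objective 0.5 * ||Y - C X||_F^2 for the VAR model.

    x : (n, T) array of expression levels, one column per time point.
    C : (n, n) coefficient matrix (C = P - I in the paper's notation).
    """
    X = x[:, :-1]             # columns x_1, ..., x_{T-1}
    Y = x[:, 1:] - x[:, :-1]  # columns x_{t+1} - x_t
    R = Y - C @ X             # residual matrix
    return 0.5 * np.sum(R ** 2)

# Tiny example: 2 genes, 3 time points.
x = np.array([[1.0, 2.0, 4.0],
              [0.0, 1.0, 3.0]])
C = np.zeros((2, 2))
print(var_objective(C, x))  # with C = 0 the objective is 0.5 * ||Y||_F^2 = 5.0
```

Note that with $T$ time points only $T-1$ transition columns are available, which is why the sample size is so small relative to the $n^2$ unknowns in $C$.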
Hub gene structure separation: As suggested by Fig. 1, there exist a small number of gene nodes with a high degree, called the "hub" genes. If we define the hub gene structure as the set of outgoing edges from a hub gene node to its immediate neighbors, we can separate this structure from the original GRN such that matrix $C$ is decomposed into

$$C = A + B, \qquad (4)$$

where matrix $A$ corresponds to the hub gene structure (the upper middle graph in Fig. 1) and matrix $B$ corresponds to the remaining edges (the upper right graph in Fig. 1). As can be seen from the lower middle plot of Fig. 1, the matrix $A$ for the hub gene structure has a column-wise group sparse structure (that is, it contains only a few nonzero columns), while the matrix $B$ for the remaining edges shows a more uniform sparsity pattern. Therefore, we impose the convex $\ell_{2,1}$ norm regularization $\|A\|_{2,1} = \sum_j \|A_{\cdot j}\|_2$ on $A$ to enforce the group sparsity structure [22], and the commonly used $\ell_1$ regularization $\|B\|_1 = \sum_{i,j} |B_{ij}|$ [9] on $B$ to enforce the uniform sparsity.

Small number of parent nodes: Based on the real GRN structures previously reported (e.g., Yeast), we find that while one gene may directly modulate the expression of many other genes, a single gene is usually co-regulated by only a few regulators. That is, in a GRN represented by a directed acyclic graph, the number of parent nodes of a node is often small. This observation suggests that the number of nonzero elements in any row of matrix $C$ should be small (see the bottom left of Fig. 1). We thus impose the following constraint on the rows of matrix $C$:

$$\|C_{i\cdot}\|_0 \le \xi_i, \quad i = 1, \dots, n,$$

where $\xi_i$ is a predefined upper bound on the number of incoming edges of gene $i$ (e.g., 0.7 times the in-degree of the node in the background network).
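The two regularizers can be written down directly. The following Python sketch (our own helper names, not part of the released MATLAB code) computes the column-wise $\ell_{2,1}$ norm and the elementwise $\ell_1$ norm, and shows why the former favors whole-column (hub) sparsity:

```python
import numpy as np

def l21_norm(A):
    """Sum of Euclidean norms of the columns of A: sum_j ||A_{.j}||_2.

    Penalizing this drives entire columns to zero, matching the
    column-group sparsity of the hub-gene matrix A."""
    return np.sum(np.sqrt(np.sum(A ** 2, axis=0)))

def l1_norm(B):
    """Elementwise l1 norm sum_ij |B_ij|, encouraging uniform sparsity."""
    return np.sum(np.abs(B))

A = np.array([[3.0, 0.0],
              [4.0, 0.0]])        # one nonzero column of norm 5
print(l21_norm(A), l1_norm(A))   # 5.0 7.0
```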
Background network: The mask of the background network is denoted by $\Omega \in \{0, 1\}^{n \times n}$, where $\Omega_{ij} = 1$ corresponds to the non-existence of any regulatory interaction between genes $i$ and $j$, and $\Omega_{ij} = 0$ corresponds to the existence of a possible interaction between genes $i$ and $j$ that needs to be determined from the temporal expression data. The background network constraint imposed on matrix $C$ can therefore be written as

$$P_\Omega(C) := C \circ \Omega = 0.$$
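In code, the constraint $C \circ \Omega = 0$ amounts to zeroing every coefficient whose edge is absent from the background network. A minimal sketch (our own function name):

```python
import numpy as np

def project_background(C, Omega):
    """Enforce P_Omega(C) = C ∘ Omega = 0 by zeroing entries where
    Omega_ij = 1 (interactions absent from the background network).
    Omega is a 0/1 mask of the same shape as C."""
    return C * (1 - Omega)

C = np.array([[0.5, -1.0],
              [2.0,  0.3]])
Omega = np.array([[0, 1],
                  [0, 0]])    # edge (0, 1) is not in the background network
Cp = project_background(C, Omega)
print(Cp)                       # entry (0, 1) is forced to zero
assert np.all(Cp * Omega == 0)  # constraint satisfied
```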
Combining all the constraints above, the overall GRN structure identification problem is formulated as

$$\min_{A, B, C} \; \frac{1}{2} \|Y - CX\|_F^2 + \alpha \|A\|_{2,1} + \beta \|B\|_1$$
$$\text{s.t.} \quad C = A + B, \quad \|C_{i\cdot}\|_0 \le \xi_i \; (i = 1, \dots, n), \quad P_\Omega(C) = 0,$$

where $\alpha$ and $\beta$ are two penalty coefficients that balance the weights on group sparsity and uniform sparsity.

Computing algorithm
Existing solvers cannot deal with the proposed formulation above, so we introduce an efficient optimization algorithm, called Decomposable Multi-structure Identification (DMI) (Algorithm 1), that can handle a network size greater than 20k on an average computer [6]. We first define the augmented Lagrangian function:

$$L(A, B, C, U) = \frac{1}{2} \|Y - CX\|_F^2 + \alpha \|A\|_{2,1} + \beta \|B\|_1 + I_{\|C_{i\cdot}\|_0 \le \xi_i}(C) + I_{P_\Omega(C) = 0}(C) + \langle U, A + B - C \rangle + \frac{\rho}{2} \|A + B - C\|_F^2, \qquad (5)$$

where $\rho$ is a positive real number, $I_{\text{condition}}(\cdot)$ denotes the indicator function that gives 0 if the argument satisfies the condition and positive infinity otherwise, and $U$ is a dual matrix of the same size as $C$, introduced to handle the coupling constraint $C = A + B$. More specifically, we break the optimization problem into minimization w.r.t. $A$ and $B$, minimization w.r.t. $C$, and the update of $U$, as follows.
Minimization w.r.t. A and B. Since some terms in Eq. (5), such as $I_{\|C_{i\cdot}\|_0 \le \xi_i}(C)$, contain no $A$ or $B$, the optimization problem can be simplified to

$$\min_{A, B} \; \alpha \|A\|_{2,1} + \beta \|B\|_1 + \langle U, A + B - C \rangle + \frac{\rho}{2} \|A + B - C\|_F^2, \qquad (6)$$

which can be solved using the coordinate block descent method [50]. Taking partial derivatives of the smooth part of the objective function yields

$$\nabla_A = \nabla_B = U + \rho (A + B - C),$$

and by applying the proximal gradient descent method [44], the following update rules for $A$ and $B$ are obtained at each iteration:

$$A_{\cdot j} \leftarrow \max\left(0, \; 1 - \frac{\gamma \alpha}{\|\tilde{A}_{\cdot j}\|_2}\right) \tilde{A}_{\cdot j}, \quad \tilde{A} = A - \gamma \left[ U + \rho (A + B - C) \right],$$
$$B \leftarrow \mathrm{sign}(\tilde{B}) \circ \max(|\tilde{B}| - \gamma \beta, \; 0), \quad \tilde{B} = B - \gamma \left[ U + \rho (A + B - C) \right],$$

where $\mathrm{sign}(a) = 1$ if $a > 0$, $\mathrm{sign}(a) = 0$ if $a = 0$, and $\mathrm{sign}(a) = -1$ otherwise. Usually we choose $\gamma = \frac{1}{\|X\|^2 + \rho}$ as a safe step length for each update to guarantee the monotonic decrease of the objective function (6). More details about the accelerated proximal gradient descent method are given in Algorithm 2.
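The two proximal operators appearing in these updates are standard: column-wise group soft-thresholding for the $\ell_{2,1}$ penalty and elementwise soft-thresholding for the $\ell_1$ penalty. A hedged Python sketch (our own helper names, separate from the authors' MATLAB implementation):

```python
import numpy as np

def prox_l21(Z, tau):
    """Column-wise group soft-thresholding: shrink each column's
    Euclidean norm by tau, zeroing columns whose norm is <= tau."""
    norms = np.sqrt(np.sum(Z ** 2, axis=0))
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return Z * scale

def prox_l1(Z, tau):
    """Elementwise soft-thresholding: sign(Z) * max(|Z| - tau, 0)."""
    return np.sign(Z) * np.maximum(np.abs(Z) - tau, 0.0)

Z = np.array([[3.0, 0.1],
              [4.0, 0.1]])
print(prox_l21(Z, 1.0))  # column 0 shrunk from norm 5 to norm 4; column 1 zeroed
print(prox_l1(Z, 0.5))   # small entries zeroed, large ones shrunk by 0.5
```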
Minimization w.r.t. C. Collecting the terms of Eq. (5) that involve $C$ gives the subproblem

$$\min_C \; \frac{1}{2} \|Y - CX\|_F^2 + I_{\|C_{i\cdot}\|_0 \le \xi_i}(C) + I_{P_\Omega(C) = 0}(C) - \langle U, C \rangle + \frac{\rho}{2} \|A + B - C\|_F^2, \qquad (7)$$

whose smooth part has gradient $(CX - Y)X^\top - U - \rho (A + B - C)$ with respect to $C$. We thus update $C$ by the following rule:

$$C \leftarrow P_\xi\!\left( C - \gamma \left[ (CX - Y)X^\top - U - \rho (A + B - C) \right] \right),$$

where $P_\xi$ is the projection onto the constraint set $\{C : \|C_{i\cdot}\|_0 \le \xi_i\}$, obtained by retaining the $\xi_i$ largest-magnitude elements in the $i$-th row of $C$; the entries masked by $\Omega$ are kept at zero so that $P_\Omega(C) = 0$ remains satisfied.
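The row-wise projection $P_\xi$ is a simple hard-thresholding step. A minimal Python sketch of our own (illustrative only, not the authors' code):

```python
import numpy as np

def project_rows(C, xi):
    """Keep only the xi[i] largest-magnitude entries in row i of C,
    i.e., the projection P_xi enforcing ||C_{i.}||_0 <= xi_i."""
    C = C.copy()
    for i, k in enumerate(xi):
        row = np.abs(C[i])
        if k < row.size:
            drop = np.argsort(row)[: row.size - k]  # indices of the smallest entries
            C[i, drop] = 0.0
    return C

C = np.array([[0.2, -3.0, 1.0],
              [0.5,  0.1, 0.4]])
print(project_rows(C, xi=[1, 2]))
# row 0 keeps only -3.0; row 1 keeps 0.5 and 0.4
```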
Update of U. As in standard augmented Lagrangian methods, the dual matrix $U$ is updated by a gradient ascent step on Eq. (5): at each iteration, $U \leftarrow U + \rho (A + B - C)$.
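Putting the three steps together, one sweep of this ADMM-style scheme can be sketched as follows. This is a simplified, unaccelerated illustration under our own naming (the actual Algorithms 1 and 2, including the acceleration, are given in the paper's MATLAB code):

```python
import numpy as np

def dmi_iteration(A, B, C, U, X, Y, Omega, xi, alpha, beta, rho):
    """One simplified ADMM-style sweep: proximal updates for A and B,
    a projected gradient step for C, and the dual ascent step for U.
    Illustrative only; omits the acceleration of Algorithm 2."""
    gamma = 1.0 / (np.linalg.norm(X, 2) ** 2 + rho)  # safe step length

    # A update: column-wise group soft-thresholding of a gradient step.
    Z = A - gamma * (U + rho * (A + B - C))
    norms = np.maximum(np.sqrt(np.sum(Z ** 2, axis=0)), 1e-12)
    A = Z * np.maximum(0.0, 1.0 - gamma * alpha / norms)

    # B update: elementwise soft-thresholding of a gradient step.
    Z = B - gamma * (U + rho * (A + B - C))
    B = np.sign(Z) * np.maximum(np.abs(Z) - gamma * beta, 0.0)

    # C update: gradient step on the smooth terms, then projections.
    grad = (C @ X - Y) @ X.T - U - rho * (A + B - C)
    C = C - gamma * grad
    C = C * (1 - Omega)                    # background-network mask
    for i, k in enumerate(xi):             # keep k largest entries per row
        if k < C.shape[1]:
            drop = np.argsort(np.abs(C[i]))[: C.shape[1] - k]
            C[i, drop] = 0.0

    # Dual ascent step.
    U = U + rho * (A + B - C)
    return A, B, C, U
```

In practice one iterates until the residual $\|A + B - C\|_F$ and the objective change fall below a tolerance.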

Algorithmic complexity
Let $|\Omega|$ denote the total number of zero entries in the background network mask $\Omega \in \{0, 1\}^{n \times n}$ used in Algorithms 1 and 2 (i.e., the number of candidate edges). Suppose Algorithm 1 runs $J$ iterations before convergence, and each iteration of Algorithm 1 runs Algorithm 2 for $K$ iterations. The complexity of Algorithm 2 is then $O(|\Omega| \times T \times K)$, noting that $X, Y \in \mathbb{R}^{n \times (T-1)}$, $T \ll n$ and $|\Omega| \ll n^2$. Therefore, the overall complexity of DMI is $O(|\Omega| \times T \times K \times J)$.

Network specification and simulated data
To evaluate and compare the proposed method with other state-of-the-art algorithms, we employ GeneNetWeaver [37,48], the official DREAM Challenge tool for time-course expression data generation. More specifically, GeneNetWeaver uses the GRNs of Yeast (4,441 nodes, 12,873 edges) or E. coli (1,565 nodes, 3,785 edges) as templates to generate network structures with typical complex network properties, up to a network size of ∼4,400; ordinary differential equation (ODE) models are then built upon the generated network structures to produce time-course gene expression data at pre-specified time points.
In our simulation studies, we use GeneNetWeaver to generate the network structures for a given network size $n$ = 10, 100, 1,000, or 20,000, and we treat such networks as the ground truth. Letting $m$ denote the number of edges in a ground truth network, we then randomly add $m$ additional edges to the ground truth network to generate a background network with $2m$ edges. Consequently, the goal of the GRN inference algorithms in the simulation studies is to identify as many ground truth edges as possible out of all the background network edges, with a controlled false positive rate. It should be stressed that networks of a size of 20,000 cannot be generated from the Yeast or E. coli templates, so we supply the large-scale GRNs from the RegNetwork database [32] to GeneNetWeaver as templates (e.g., the human GRN in RegNetwork has ∼23,000 nodes and ∼370,000 edges). For each simulated dataset, one data point is generated at each of 6 distinct time points for a pre-specified noise level, to match the observation scheme of the real data example in Section 3.2.
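The construction of a simulated background network from a ground-truth edge set can be sketched as follows (our own illustration; GeneNetWeaver itself handles the network and data generation):

```python
import random

def make_background(n, true_edges, seed=0):
    """Given a ground-truth directed edge set on n nodes, add an equal
    number m of random decoy edges so the background network has 2m edges."""
    rng = random.Random(seed)
    m = len(true_edges)
    background = set(true_edges)
    while len(background) < 2 * m:
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j and (i, j) not in background:
            background.add((i, j))
    return background

truth = {(0, 1), (1, 2), (2, 3)}
bg = make_background(5, truth)
print(len(bg))      # 6 edges: 3 true + 3 random decoys
assert truth <= bg  # every ground-truth edge is in the background network
```

An inference algorithm is then scored on how well it separates the true edges from the decoys within this candidate set.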

Fairness of comparison and evaluation metrics
Since the state-of-the-art algorithms under comparison, such as ARACNE [38], start with a full matrix $C$, for fairness of comparison we modify these existing algorithms to also take advantage of the background network by confining the calculation of the performance evaluation criteria to the background network. The computing parameters used by the existing algorithms are tuned as suggested in their original manuscripts (see Table S1 in Supplementary Data I). For our DMI algorithm, we set $\alpha = 0.08$, $\beta = 0.16$ and $\rho = 10$ for all the simulated datasets, where $\alpha$ and $\beta$ are determined using cross-validation.
Five commonly used criteria are considered to measure algorithm performance: sensitivity ($SN$), specificity ($SP$), accuracy ($ACC$), Matthews correlation coefficient ($MCC$), and the area under the ROC curve ($AUC$):

$$SN = \frac{TP}{TP + FN}, \quad SP = \frac{TN}{TN + FP}, \quad ACC = \frac{TP + TN}{TP + TN + FP + FN},$$
$$MCC = \frac{TP \times TN - FP \times FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}},$$

where $TP$ and $TN$ denote the numbers of true positives and true negatives, and $FP$ and $FN$ denote the numbers of false positives and false negatives, respectively.
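These standard confusion-matrix criteria are straightforward to compute; a small self-contained Python sketch (our own function name):

```python
import math

def metrics(tp, tn, fp, fn):
    """Sensitivity, specificity, accuracy and Matthews correlation
    coefficient from confusion-matrix counts."""
    sn = tp / (tp + fn)
    sp = tn / (tn + fp)
    acc = (tp + tn) / (tp + tn + fp + fn)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return sn, sp, acc, mcc

sn, sp, acc, mcc = metrics(tp=40, tn=50, fp=10, fn=0)
print(round(sn, 2), round(sp, 2), round(acc, 2), round(mcc, 2))  # 1.0 0.83 0.9 0.82
```

The $AUC$ additionally requires a ranking of the candidate edges (e.g., by coefficient magnitude) rather than a single hard threshold.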

Performance evaluation
The first set of experiments compares our DMI algorithm with nine representative algorithms: PCC, ARACNE [38], CLR [16], MINET (the maximum relevance minimum redundancy method) [41], GENIE3 [23], TimeDelay-ARACNE [53], TIGRESS [20], SITPR [33], and Jump3 [24]. Specifically, SITPR [33] is a multi-step pipeline with the $\ell_1$ norm penalty incorporated; to avoid the manual intervention needed by certain SITPR steps, we simply compare the constrained LASSO step in SITPR with DMI. In the simulated data, the noise level is fixed at 10%, and the network size ranges from 10 to 1,000 because the other competing algorithms cannot handle a network size of 20,000. In the second set of experiments, the noise level is increased from 10% to 30% to evaluate the robustness of our DMI algorithm against noise; the network size in the corresponding simulated data is 10, 100, 1,000 or 20,000.
The two sets of experiment results are summarized in Tables 1 and 2, respectively. From Table 1, it is clear that for network sizes ranging from 10 to 1,000, our method significantly outperforms the other state-of-the-art methods; e.g., the AUCs of our method remain 0.8 or higher in all cases, while the AUCs of all other methods are 0.6 or less and drop to 0.36 ∼ 0.50 for n = 1,000. From Table 2, one can tell that our DMI algorithm is robust against noise: the performance decrease (measured by AUC) is less than 0.04 as the noise level increases from 10% to 30%, even for a network size of 20,000. Additional experiment results can be found in Supplementary Data I.

Regulatory network identification in human respiratory epithelial cells during IAV infection
The real data example illustrating the use of DMI in practice is from the recent study of [34], in which human A549 cells were infected with influenza H1N1 virus (A/Mexico/InDRE4487/2009). Illumina HumanHT-12 v3 BeadChips and Febit miRBase 14 Geniom miRNA Biochips were employed to measure the mRNA and miRNA samples, respectively. Six replicates of the expression levels were collected at each of six time points (0, 4, 8, 24, 48, and 72 hours post infection), but one sample at hour 4 was excluded due to degradation. The within-sample normalized dataset is available from NCBI GEO (GSE36553 and GSE36461). We further conducted between-sample normalization on the data across the six time points. The original analyses in [34] did not use all the time points; instead, a small number of regulatory relationships between functional miRNAs and gene targets were inferred based on the data at hours 0 and 8 only. The genome-wide regulatory landscape has not been revealed based on the entire dataset, although [33] made an attempt to investigate subnetwork structures of a size from 2 to 300. In this study, 1,572 genes and 14 non-coding RNAs were identified as differentially expressed (Figure 2(a)) using the FPCA method [51] with the false discovery rate controlled at 0.05. Starting from the human background network in RegNetwork, the DMI algorithm was applied to identify 1,926 regulatory interactions between the differentially expressed genes or miRNAs, as shown in Figure 2(b).
We conducted comprehensive network analyses of the topological structure and found that the A549 GRN in response to influenza H1N1 infection is a typical complex network, as evidenced by, e.g., its power-law degree distribution and large clustering coefficient (see Supplementary Data I for details). A close examination of the GRN inferred by the DMI algorithm suggests consistency between the findings in this study and those in the literature. For instance, 'CEBPB' (CCAAT/Enhancer-Binding Protein Beta) is a transcription factor (TF) that plays an important role in immune and inflammatory responses [47], and the subnetwork structure centered at 'CEBPB' has been previously reported in [33]. Our study revealed a similar subnetwork structure around 'CEBPB' (e.g., interactions between 'CEBPB' and 'NCS1', 'ISG20', or 'ABCG1'; see Figure S2 in Supplementary Data I); moreover, some experimentally verified interactions (e.g., between 'CEBPB' and 'NOLC1' in HPRD) were successfully identified in this study but were missing in the work of [33]. More interestingly, we found a number of hub gene structures, e.g., centered at the Activating Transcription Factor genes ('ATF2' and 'ATF4') or the E2F Transcription Factor genes ('E2F6' and 'E2F7'). More specifically, 'ATF2' encodes a DNA-binding protein of the leucine zipper family that can perform distinct functions via different mechanisms (e.g., formation of a homo/heterodimer with 'c-Jun'). Our results showed that 'ATF2' interacts with more than 130 other genes, including 'CEBPB' (Figure 2(c)). Although it has been previously reported that influenza A viral RNA molecules can indirectly activate transcription factors like ATF2 [29], the importance of ATF2 in the context of influenza infection has not been fully appreciated by previous experimental studies, considering the large number of target genes associated with ATF2 in the hub gene structure.
The proposed DMI algorithm thus provides not only an opportunity to investigate genome-wide regulatory landscapes but also a way to identify interesting subnetworks such as hub gene structures. Further experiments such as ChIP-PCR for measuring gene-TF interactions can be conducted in the future to validate the results produced by DMI.

Conclusions
We proposed a novel scalable algorithm called DMI for large-scale GRN inference from temporal gene expression data. Extensive simulation studies and a real data application suggest the superiority of the DMI algorithm over other state-of-the-art approaches, and this success mainly relies on the incorporation of multiple topological characteristics of GRNs, namely sparsity and hub gene structures, into our model formulation.
We also recognize several limitations of the current study: 1) DMI's performance could be further improved if the background network edges were appropriately weighted; 2) the prediction of previously unseen interactions heavily depends on the background network preparation, so further efforts need to be invested in the development of public knowledgebases like RegNetwork; 3) this work mainly focuses on solving the ultra-high dimensional inference problem, and data heterogeneity and integration issues have not been tackled. We believe that it is necessary to address the aforementioned limitations in separate articles, and this study provides a solid basis for such future investigations.