Inferring regulatory interactions between genes from transcriptomics time-resolved data, yielding reverse engineered gene regulatory networks, is of paramount importance to systems biology and bioinformatics studies. Accurate methods to address this problem can ultimately provide a deeper insight into the complexity, behavior, and functions of the underlying biological systems. However, the large number of interacting genes coupled with short and often noisy time-resolved read-outs of the system renders the reverse engineering a challenging task. Therefore, the development and assessment of methods which are computationally efficient, robust against noise, applicable to short time series data, and preferably capable of reconstructing the directionality of the regulatory interactions remains a pressing research problem with valuable applications.

Results

Here we perform the largest systematic analysis of a set of similarity measures and scoring schemes within the scope of the relevance network approach, which are commonly used for gene regulatory network reconstruction from time series data. In addition, we define and analyze several novel measures and schemes which are particularly suitable for short transcriptomics time series. We compare the 21 considered measures and 6 scoring schemes according to their ability to correctly reconstruct such networks from short time series data by calculating summary statistics based on the corresponding specificity and sensitivity. Our results demonstrate that rank and symbol based measures have the highest performance in inferring regulatory interactions. In addition, the proposed scoring scheme by asymmetric weighting has proven valuable in reducing the number of false positive interactions. On the other hand, Granger causality as well as information-theoretic measures, frequently used in inference of regulatory networks, show low performance on the short time series analyzed in this study.

Conclusions

Our study is intended to serve as a guide for choosing a particular combination of similarity measures and scoring schemes suitable for the reconstruction of gene regulatory networks from short time series data. We show that further improvement of algorithms for reverse engineering can be obtained by considering measures rooted in the study of symbolic dynamics or ranks, in contrast to the application of common similarity measures which do not consider the temporal character of the employed data. Moreover, we establish that the asymmetric weighting scoring scheme together with symbol based measures (at low noise levels) and rank based measures (at high noise levels) are the most suitable choices.

Background

Recent evidence from fully-sequenced genomes suggests that organismal complexity arises more from the elaborate regulation of gene expression than from the genome size itself [1]. It is not surprising that determining the interactions between genes, which give rise to a particular system's function and behavior, represents the grand challenge of systems biology [2]. In addition to structural information about the regulatory interactions, a comprehensive understanding of the dynamic behavior of these interactions requires specification of: (1) the type of regulation (i.e., activation or inhibition) [3], (2) the kinetics of interactions [4], and (3) the specificity of the interactions with respect to the investigated tissue and/or stress condition [5]. The elucidation of a complete network of regulatory interactions parameterized with kinetic information leading to a particular gene expression is, at present, still a challenging task even for well-studied model organisms whose networks have been partially assembled, either for a few selected processes and conditions or at the genome-wide level [6–9].

The ever-increasing throughput in experimental manipulation of gene activity coupled with the methods for quantitative assessment of transcriptome, proteome, and metabolome have begun to identify the effects of individual transcription factors, binding ligands, and post-translational modifications on regulated genes [10]. Moreover, such high-throughput transcriptomics data sets can be used to identify gene regulatory modules and entire networks. Understanding the complex network of gene regulatory interactions from a given transcriptome read-out necessitates the design, analysis, and testing of network-inference methods (so-called reverse engineering methods). These methods operate on two types of data sets from: (1) static perturbation experiments whose read-out is a pseudo steady-state expression level, and (2) time-resolved experiments yielding time series of gene expression.

Transcriptomics time series data hold the promise of identifying the dynamics of the key genes mapped into putative interactions and kinetic laws; consequently, the temporal information must not be neglected by the applied method for reverse engineering of gene regulatory networks. However, despite the decreasing costs of experiments relying on high-throughput technologies, systems biology studies still produce relatively short time series [11], largely due to the problems with gathering a big enough sample material and designing more complex experiments. In addition, time-resolved biological experiments usually involve sampling at irregular rates in order to capture processes spanning different time scales. These two challenges require a careful assessment of the existing methods for network inference from transcriptomics time series data. Moreover, most of the developed methods have been applied directly on real time series data without a prior assessment of their discerning capacity on a difficult synthetic benchmark [12].

The analysis of short time series is affected by the type of employed data representation. For instance, some approaches transform the discrete time series into continuous representations by different fitting methods; in addition, few studies have already considered transforming real valued time series into data-adaptive representations, including: symbols, strings, and trees (for a review, see [13]). Therefore, the extent to which a chosen data representation may affect the accuracy of the inferred networks should also be examined when assessing the strengths and weaknesses of different reverse engineering methods.

The simplest approach for network inference from time series data relies on applying similarity measures [12, 14–27]. Methods borrowed from Bayesian inference [28–32], regression analysis [33], and econometrics models (e.g., Granger causality [34–37]) have also been applied in this context. Although there are already two valuable reviews of methods for gene regulatory network (GRN) reconstruction from transcriptomics time series data [11, 12], we believe that there is a need for a careful assessment of the existing reverse engineering methods based on similarity measures operating on short time series data.

For this purpose, we first divide the existing similarity measures (Table 1) into four classes based on the representation on which they operate, namely: vectors, random variables, models (e.g., Granger causality), and symbols. We term the basic pairwise measures as simple, in comparison to their conditional and partial variants. The outcome of applying a similarity measure can further be refined via six scoring schemes: IDentity (ID), Algorithm for the Reconstruction of Accurate Cellular NEtworks (ARACNE), Context Likelihood of Relatedness (CLR), Maximum Relevance/minimum redundancy NETwork (MRNET), Time Shift (TS), and Asymmetric WEighting (AWE). The similarity measures and scoring schemes are schematically presented in Figure 1.

This table summarizes all measures of interaction included in this study. Measures which, to our knowledge, have not been applied to gene expression data before are marked by (*).

We study the performance of the relevance network algorithm for GRN reconstruction, applied to synthetic gene expression data sets, and compare the capability of different combinations of 21 measures and 6 scoring schemes to detect/predict true and eliminate false links. A description of the data sets and the general definitions of the methods used in this study are given in detail in the Methods section.

Our contributions include: (1) an extensive systematic review and a comparison study which could serve as a basis for selecting a reverse engineering method based on a combination of a similarity measure and a scoring scheme suitable for a given expression data; in this context, we investigate not only the pairwise similarity measures, but also, where applicable, their respective conditional and partial variants; (2) introduction of approaches that are novel or borrowed from other fields, but have not yet been encountered in the field of network reconstruction; and (3) definition of a novel information-theoretic measure, the residual mutual information, and evaluation of its performance in unraveling gene regulatory interactions.

Results

We investigate the performance of the relevance network algorithm applied to gene expression time series from a network of 100 genes in E. coli under optimal sampling conditions (noise-free, with uniform sampling in time). Interpolation has not been applied to the time series at this point. We show and discuss the receiver operating characteristic (ROC) curves deduced for the basic algorithm with the identity (ID) scoring scheme in combination with all measures included in our investigation, and for the five additional scoring schemes - CLR, ARACNE, MRNET, TS and AWE - with selected measures. Subsequently, since optimal sampling conditions are never achieved, the performance of the similarity measures and scoring schemes is additionally investigated on noisy data. The role of sampling and interpolation in the performance is discussed as well. Moreover, the influence of network properties (e.g., size and degree) is shown using two additional networks, a yeast network composed of 100 genes from S. cerevisiae and a network of 200 genes from E. coli.

Identity scoring scheme

The basic relevance network algorithm (scoring with the identity matrix) is used to compare the performance of all classes of measures.

Measures operating on vectors

Additional file 1, Figure S1 shows the efficiency of the reconstruction of links based on classical distance measures and the dynamic time warping. In general, none of these measures is able to avoid false positives on a larger scale without losing most of the true interactions. On the other hand, the ROC curves are rather flat for high false positive rates, which implies that these measures could be useful initially to determine connections which are not present in the network. All of the curves shown in Additional file 1, Figure S1 are smooth, meaning that the prediction of links is not very sensitive to the explicit choice of the threshold. From this analysis, we find that the L^{s} norm (with s = 10, equal to the length of the time series) performs best in reconstructing the network. It outperforms the Euclidean (L^{2} norm) and the Manhattan (L^{1} norm) distance, which can be explained by the fact that the L^{s} norm weights large distances more heavily. The dynamic time warping fails for the investigated data, which is most likely a result of the coarse sampling and the complexity of the network.
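The L^{s} distance behind these curves is straightforward to compute; the following is a minimal sketch (the function name and toy series are ours, not the paper's), illustrating how a large exponent such as s = 10 lets the single largest pointwise deviation dominate the distance:

```python
import numpy as np

def minkowski_distance(x, y, s):
    """L^s distance between two expression time series."""
    return float(np.sum(np.abs(np.asarray(x) - np.asarray(y)) ** s) ** (1.0 / s))

x = [0.1, 0.5, 0.9, 0.4, 0.2]
y = [0.2, 0.4, 1.0, 0.3, 0.2]
d1 = minkowski_distance(x, y, 1)    # Manhattan: sums all deviations equally
d2 = minkowski_distance(x, y, 2)    # Euclidean
d10 = minkowski_distance(x, y, 10)  # dominated by the largest single deviation
```

For fixed series, the value decreases as s grows, while the relative weight of the largest deviation increases, which is the effect exploited here.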

Measures operating on random variables

Furthermore, the ID scoring scheme is evaluated using several measures which employ time series represented via random variables. In particular, we examine in detail the performance of correlation and information-theoretic measures.

In the case of the linear Pearson correlation (PC) coefficient, as shown in Figure 2(a), we obtain almost identical results from the simple and the conditional (CPC) measure, although the CPC is expected to eliminate indirect interactions. However, this does not mean that there are no indirect links wrongly deduced by the linear PC. The problem here is rooted in the estimation of the conditional probabilities, which is barely reliable for 10 time points.

Even if a basic significance test is included – for example, the data is reshuffled 100 times, the measures are calculated for the randomized series, and the results are compared to those obtained from the original time series – the results do not change significantly (Additional file 1, Figure S2). The partial Pearson correlation, on the other hand, shows better results for low false positive rates, but loses its accuracy when high true positive rates are reached. Additionally, the results obtained from the PPC are less significant (in terms of the reshuffled time series). Removing links which have no significant values of the correlation leads to an almost random prediction from the partial Pearson correlation.

As we cannot infer self-regulation by analyzing the similarity of expression series, the diagonal of the correlation matrix was set to zero in our computations above (by definition it is one). Comparing the reconstruction efficiency of the linear PC with that of the rank correlations (diagonal equal to one), we observe that the ROC curve shown in Figure 2(b) is smoother for the Pearson correlation than the curves obtained from the rank correlations. Hence, Pearson's correlation measure is less sensitive to the choice of the threshold, whereas the rank correlations can achieve a slightly better overall performance.
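A Spearman-type rank correlation can be obtained by rank-transforming both series and applying the Pearson formula; a minimal sketch (names are ours; ties are ignored, which is usually safe for continuous expression values):

```python
import numpy as np

def ranks(x):
    """Rank-transform a series (no tie handling)."""
    order = np.argsort(x)
    r = np.empty(len(x))
    r[order] = np.arange(1, len(x) + 1)
    return r

def spearman(x, y):
    """Pearson correlation of the rank-transformed series."""
    rx, ry = ranks(np.asarray(x)), ranks(np.asarray(y))
    rx = (rx - rx.mean()) / rx.std()
    ry = (ry - ry.mean()) / ry.std()
    return float(np.mean(rx * ry))

x = np.array([0.2, 0.5, 0.1, 0.9, 0.7])
y = x ** 3  # monotone but nonlinear relation
```

Because only the ordering enters, any strictly monotone relation yields a coefficient of exactly one, which is what makes rank measures attractive for noisy, nonlinearly coupled expression profiles.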

Next, we investigate the efficiency of the ID scoring scheme considering information-theoretic measures. In general, we observe that the resulting reconstruction strongly depends on the method chosen for the estimation of entropies. Here we present the results obtained using the R-package "infotheo" (in particular the Miller-Madow asymptotic bias corrected empirical estimator) since, for short time series, it yields better estimates of the entropy than the R-package "entropy". Besides the basic pairwise mutual information (MI), we also investigated the conditional mutual information (CMI) and the residual mutual information (RMI) in order to reduce the number of false positive links. All these measures result in ROC curves which are more or less discontinuous. This is a finite size effect, as the time series are very short, and thus the estimation of the MI (entropies) becomes problematic.
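The Miller-Madow correction adds (m - 1)/(2N) to the plug-in entropy estimate, where m is the number of occupied bins and N the sample size; below is a hedged sketch of a binned MI estimator in this spirit (this is not the "infotheo" implementation; the function names and the equal-width binning are our assumptions):

```python
import numpy as np

def entropy_mm(labels):
    """Plug-in entropy (nats) with the Miller-Madow bias correction."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    h_plugin = -float(np.sum(p * np.log(p)))
    return h_plugin + (len(counts) - 1) / (2.0 * counts.sum())

def mutual_information_mm(x, y, nbins=3):
    """MI of two short series after equal-width binning: H(X) + H(Y) - H(X,Y)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    bx = np.digitize(x, np.linspace(x.min(), x.max(), nbins + 1)[1:-1])
    by = np.digitize(y, np.linspace(y.min(), y.max(), nbins + 1)[1:-1])
    joint = bx * nbins + by  # encode the pair of bins as one joint symbol
    return entropy_mm(bx) + entropy_mm(by) - entropy_mm(joint)

x = np.arange(12.0)                                            # regulator-like series
y = np.array([0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11], float)   # unrelated ordering
```

With only a handful of points per bin, the correction terms become comparable to the entropies themselves, which is exactly the finite-size effect discussed above.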

We find a quite different behavior of the ROC curves, as shown in Figure 2(c), in specific regions of the ROC space. The simple mutual information results in a flat and comparatively smooth ROC curve for high false positive rates. This means that the measure allows removing about 60% of the false positives while losing approximately 10% of the true links. An even better performance in the same region of the ROC space can be achieved using the residual mutual information, which we propose as a partial mutual information measure to distinguish indirect from direct (linear) relationships between triplets of genes. In contrast to this, the conditional MI results in a more discontinuous curve for high fpr: here, the ratio of the true and false positive rate is nearly the same as observed for a random prediction. In principle, the CMI is stricter in removing indirect links, as it can also detect nonlinear interactions. However, the conditional probabilities cannot be estimated sufficiently well from 10 time points. Hence, the conditional MI fails for (the investigated) short data sets in the region of high false positive rates.
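The precise definition of the residual mutual information is given in the Methods section. Purely to illustrate the idea of removing the linear influence of a third gene before measuring dependence, one plausible construction (entirely our assumption, including the Gaussian MI surrogate and all names) is:

```python
import numpy as np

def gauss_mi(a, b):
    """Gaussian MI surrogate: -0.5 * ln(1 - r^2) for the Pearson r of a and b."""
    r = np.corrcoef(a, b)[0, 1]
    return float(-0.5 * np.log(1.0 - r ** 2))

def residuals(a, z):
    """Residuals of a least-squares linear fit of a on z."""
    slope, intercept = np.polyfit(z, a, 1)
    return a - (slope * z + intercept)

def residual_mi(x, y, z, mi=gauss_mi):
    """Dependence between the parts of x and y not linearly explained by z."""
    return mi(residuals(x, z), residuals(y, z))

# toy triplet: a common regulator z drives both x and y
z = np.arange(10.0)
x = 2 * z + np.array([1, -1] * 5, float)
y = -z + np.array([1, 1, -1, -1] * 2 + [1, 1], float)
```

In this toy case the raw dependence between x and y is large because of the shared driver z, while the residual measure is close to zero, which is the behavior a partial measure should show for an indirect link.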

Additionally, when looking at the region of low fpr, we observe that the ROC curve of the simple MI becomes more discontinuous than for high fpr. The true positive rate decreases significantly for slightly reduced threshold values, in the region around 30% and 15% of the false positives. This is manifested as jumps in the curve, due to which this measure is rather sensitive to the choice of the threshold if low false positive rates are to be achieved. In contrast to this, the residual mutual information results in a smoother curve for low false positive rates than the simple MI, indicating that the measure is less sensitive to the choice of threshold, although the curve exhibits smaller jumps as well. In the region of fpr < 10% the performance of the RMI decreases slightly compared to the simple measure. The conditional mutual information, on the other hand, achieves only very low false positive rates, which also lead to low true positive rates (up to about 5%). Tuning the threshold to allow for slightly higher values of the fpr, the ROC curve of the CMI immediately jumps to 50% of false positives. Hence, the region between about 3% and 50% of false positive links is not achievable using the considered conditional measure.

We also implemented a basic significance test for the mutual information measures by reshuffling the time series 100 times, calculating the measure for the randomized series, and comparing the results to those obtained for the original time series. The associated ROC curves are shown in Additional file 1, Figure S3. With respect to the significance, the reconstruction efficiency of the simple and, in particular, the residual mutual information decreases, since the inferred degree of interaction for most of the gene pairs is not significant in the specified sense. In contrast to this, with the significance test, the quality of the prediction obtained from the CMI increases slightly, but its overall performance is still deficient.

This evaluation leads to the conclusion that, of the MI measures, only the simple and the residual mutual information can provide a sufficient reconstruction efficiency using the IDentity scoring scheme, and only if we do not rely on the simple significance test.

Investigating the performance of the coarse-grained measures on the short gene expression time series, we obtained ROC curves which look almost the same as expected for a complete random linking in the network, as illustrated in Additional file 1, Figure S4. Even though the coarse-grained measures are in principle promising for the inference of interdependency from time series of intermediate length, they are not applicable in our case. The reason for this is the limited number of available time points which makes not only the estimation of the MI, but also the identification of a proper time lag a very challenging task. Interpreting the CCIR as a distance, and not as a similarity measure, leads to an increase of the inferred true positives. However, the predictive power of the measure remains very low.

Model-based measures

The evaluation of the ID scoring scheme using model-based measures (Granger causality in this case) leads to an almost random prediction of links (the associated ROC curves are shown in Additional file 1, Figure S5). Hence, the Granger causality (GC) measure is not suitable for the reconstruction of GRN when only very short gene expression time series are available. This is due to the fact that the results of the GC index depend strongly on the model estimation. An AR model has to be estimated for the given data set, whose order is determined based on the Akaike information criterion (AIC). However, this seems to be insufficient, since the AIC usually requires a higher order model (due to the high variability of the data), whereas the expression time series are in general very short.
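The logic of the GC index can be sketched as follows: fit a restricted AR model on the target's own past and a full model that additionally includes the putative regulator's past, then compare the residual variances. This sketch fixes the model order instead of selecting it by the AIC, and all names and the toy system are ours:

```python
import numpy as np

def ar_residual_var(target, sources, order):
    """Residual variance of a least-squares AR fit of `target`
    on the `order` past values of every series in `sources`."""
    rows = []
    for t in range(order, len(target)):
        row = [1.0]
        for s in sources:
            row.extend(s[t - order:t])
        rows.append(row)
    X = np.array(rows)
    resp = target[order:]
    coef, *_ = np.linalg.lstsq(X, resp, rcond=None)
    return float(np.mean((resp - X @ coef) ** 2))

def granger_index(x, y, order=1):
    """ln(restricted / full residual variance); clearly positive values suggest y -> x."""
    v_restricted = ar_residual_var(x, [x], order)
    v_full = ar_residual_var(x, [x, y], order)
    return float(np.log(max(v_restricted, 1e-12) / max(v_full, 1e-12)))

# toy system in which y drives x with a one-step delay
rng = np.random.default_rng(1)
y = rng.normal(size=50)
x = np.zeros(50)
for t in range(1, 50):
    x[t] = 0.8 * y[t - 1] + 0.1 * rng.normal()
```

With 50 points the driven direction stands out clearly; with 10 points, as in the data analyzed here, the regression has almost no observations per parameter, which is why the measure degrades to a random prediction.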

Measures operating on symbolic dynamics

Next, we use the principle of order patterns to derive symbol sequences from the time series [38]. As already shown in general nonlinear time series analysis, the symbol based measures show a good overall performance in reverse engineering.

The ROC curves (Figure 2(d)) obtained for these measures are rather smooth and flat for false positive rates larger than 30%, which means that only a small portion of links is lost when reducing the false positive rates down to this value. Consequently, the results are robust to the choice of threshold in this particular region of the ROC space. However, the ROC curves become less smooth for lower values of the false positive rates. This implies that false positive rates smaller than 20% are barely possible to achieve. The best overall performance has been found here for the combination of symbol sequence similarity and mutual information of the symbol sequences (SySimMI), as well as for the mutual information of the symbol sequences (SyMI). The latter outperforms the simple MI of the time series themselves, as the length of the series used to estimate the measure is much longer in the case of the symbolic dynamics. Additionally, the conditional entropy of the symbol vectors obtained from pairs of time points shows results similar to the SySimMI and the SyMI in a wide range of the ROC space.
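Order patterns map each short window of a series to the permutation that sorts it, so two series with the same up/down structure share a symbol sequence regardless of amplitude. A minimal sketch of the idea (function names and the similarity normalization are our assumptions; the paper follows [38]):

```python
import numpy as np

def order_patterns(x, dim=3):
    """Symbol sequence of a series: each window of `dim` points is mapped
    to the ordinal pattern (permutation) that sorts it."""
    return [tuple(np.argsort(x[t:t + dim])) for t in range(len(x) - dim + 1)]

def symbol_sequence_similarity(x, y, dim=3):
    """Fraction of windows in which both series show the same ordinal pattern."""
    px, py = order_patterns(x, dim), order_patterns(y, dim)
    return sum(a == b for a, b in zip(px, py)) / len(px)

# two series with identical up/down structure but different amplitudes
x = np.array([0.1, 0.4, 0.2, 0.8, 0.6, 0.9])
y = np.array([1.0, 2.5, 1.5, 4.0, 3.0, 4.2])
```

Because each time point contributes to several overlapping windows, the symbol sequence is effectively longer than the raw series, which is the reason given above for the improved MI estimates.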

Symmetric scoring schemes - CLR, ARACNE and MRNET

Next, we evaluate the possibility of reconstructing the underlying E. coli (sub)network based on the three modifications of the relevance network algorithm (Algorithm 1, given in the Methods section) as implemented in the "minet" package, namely the CLR, the ARACNE and the MRNET. All three algorithms represent extensions of the basic relevance network approach, to the effect that they introduce additional scoring rules for the pairwise weighting of the interactions in order to reduce the amount of links that are falsely detected.

In Additional file 1, Figure S6, we present the results, in terms of the ROC curves, which we obtain using the different scoring schemes, in all cases with the default weights of the pairwise interactions, namely the squared Spearman's correlation for every gene pair. As the algorithms implemented in "minet" are designed to reduce the number of false positives, high false positive rates (of more than about 50%) do not occur here, unless all interactions are set as links. Moreover, the MRNET and the CLR result in ROC curves which are not smooth, meaning that their capability to reconstruct particular links is limited and strongly dependent on a proper choice of the threshold τ. The ARACNE, on the other hand, is restricted to an almost fixed fpr-tpr value.
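The CLR rescoring can be sketched as a background correction of the similarity matrix: each pairwise score is converted to a z-score within the score distributions of the two genes involved, with negative z-values clipped. A minimal sketch of this commonly used clipped variant (not the "minet" implementation itself; names and the toy matrix are ours):

```python
import numpy as np

def clr_scores(m):
    """Background-corrected CLR scores for a symmetric similarity matrix m:
    each entry is judged against the score distributions of both genes involved."""
    mu = m.mean(axis=1, keepdims=True)
    sd = m.std(axis=1, keepdims=True)
    z = np.maximum(0.0, (m - mu) / sd)   # per-row z-score, negatives clipped
    return np.sqrt(z ** 2 + z.T ** 2)    # combine the two genes' contexts

# toy matrix: genes 0 and 1 share one clearly elevated similarity
m = np.full((4, 4), 0.1)
np.fill_diagonal(m, 0.0)
m[0, 1] = m[1, 0] = 0.9
s = clr_scores(m)
```

A score that is unremarkable within both genes' backgrounds is pushed towards zero, which is how the scheme suppresses false positives that arise from globally "promiscuous" genes.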

Asymmetric scoring schemes - TS and AWE

As none of the previously described scoring schemes is able to indicate directionality from symmetric measures, we include in this study, and to our knowledge for the first time in GRN reconstruction, an evaluation of the performance of the Time Shift (TS) as a symmetry-breaking scoring scheme. We show the results of this modification of the relevance network algorithm removing the links which are falsely detected by the CLR (measure: μ_{ρ}) or the AWE (measure: ). However, unraveling the directionality of interaction (between pairs of genes) using the correlation of the delayed time series has been shown to decrease the maximal achievable true positive rates.

The slope of the ROC curves (shown in Figure 2(e)) indeed does not change much in comparison to the results of the CLR and the AWE scoring scheme. Moreover, if combined with the Pearson correlation of the delayed time series, the ROC curve obtained from the CLR becomes considerably smoother (compared to the curve shown in Additional file 1, Figure S6) and hence, the prediction is less sensitive to the choice of a threshold. The same does not hold true for the ROC curve obtained from the application of the TS scoring scheme in addition to the AWE. Instead, this curve becomes flatter and is slightly shifted towards lower false positive rates in comparison to the corresponding curve in Figure 2(f). This implies that, while for low fpr the curve looks basically the same, in the intermediate range of the ROC space (fpr about 0.15 to 0.45) similar tpr values can be obtained for lower fpr. However, in the range of high fpr, the maximal achievable tpr value is lower. Hence, true positive rates of approximately 80% can be achieved at lower cost, as the corresponding number of false positives is in general smaller when the TS scoring scheme is used. On the other hand, as already mentioned, the quality of the link detection becomes worse for higher false positive rates (more than about 40%) compared to the corresponding results of the AWE itself. The true positive rate in the ROC curve in Figure 2(e) is almost constant in this region of the ROC space.
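The symmetry breaking underlying the TS scheme amounts to comparing lagged correlations in both directions; a minimal sketch simplified to a single lag (the decision rule and all names are our assumptions):

```python
import numpy as np

def lagged_corr(x, y, lag):
    """Correlation between x[t] and y[t + lag] (lag >= 1)."""
    return float(np.corrcoef(x[:-lag], y[lag:])[0, 1])

def time_shift_direction(x, y, lag=1):
    """Pick the direction whose lagged correlation is stronger:
    'x->y' if x's past fits y's future better, else 'y->x'."""
    fwd = abs(lagged_corr(x, y, lag))  # x leads y
    bwd = abs(lagged_corr(y, x, lag))  # y leads x
    return "x->y" if fwd >= bwd else "y->x"

# toy pair: y is a one-step delayed copy of x
rng = np.random.default_rng(0)
x = rng.normal(size=30)
y = np.concatenate(([0.0], x[:-1]))
```

Shifting also shortens the overlapping part of the series, which is consistent with the reduced maximal true positive rates reported above.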

Similar to the TS, the AWE scoring scheme aims at breaking symmetries and thus allows extraction of information about the directionality of interaction from symmetric measures. However, a detailed comparison of the reconstruction efficiency of the AWE using different symbolic dynamics measures shows that, in contrast to the TS scoring scheme, the AWE does not decrease the maximal achievable true positive rates. Instead, the ROC curves, as shown in Figure 2(f), become flatter for high false positive rates compared to the curves obtained for the basic algorithm with the ID scoring scheme using symbolic dynamics (Figure 2(d)). Hence, true positive rates of more than 80% are achievable by the AWE algorithm at much lower cost than with the ID scoring scheme. On the other hand, the ROC curves obtained from the AWE are steeper for low false positive rates. This implies that here true positive rates of up to approximately 45% can be achieved with false positive rates of less than 10%. Furthermore, the curves shown in Figure 2(f) are much smoother in comparison to those in Figure 2(d), indicating that the reconstruction is less sensitive to the choice of a particular threshold.

Influence of noise

In general, noise-free expression measurements cannot be achieved in real experiments; in fact, intermediate and high noise levels are not rare. Thus, in order to account for stochasticity in the time series, and particularly to establish the robustness of the ranking of the investigated similarity measures, we additionally evaluate the ROC curves for a noise intensity of 0.3.

As expected, the measures which failed in the noise-free case (e.g., DTW, CMI, the coarse-grained information rate, and the Granger causality measures) did not improve their performance, as shown in Additional file 1, Figure S7. On the other hand, the measures based on vectors yield very robust results with respect to noise (Additional file 1, Figure S7). However, since the performance of these measures was already insufficient in the noise-free case, their overall ranking does not improve significantly. We additionally noticed that the measures which performed best in the case of optimal sampling conditions, such as MI, RMI, correlation, and symbol based measures, differ in their robustness against noise, as illustrated in Figure 3. For example, the reconstruction efficiency of the simple and the conditional Pearson's correlation slightly decreases, while that of the partial Pearson's correlation slightly increases. Hence, all three measures result in basically the same ROC curves, meaning one can abandon the more computationally intensive calculation of the partial and conditional Pearson's correlation under these circumstances. Furthermore, MI and RMI both lose their accuracy as noise increases, and the corresponding ROC curves resemble those of the Pearson's correlation. However, the relation between both measures stays the same (RMI performs slightly better than MI).

The reconstruction efficiency for the symbol based measures decreases significantly as well, which holds true in particular for the mutual information of symbol sequences (as noise affects the inference of a symbol sequence using order patterns, as well as the binning process for the MI calculation). However, apart from that, the ROC curves obtained for the symbol based measures are more continuous for noisy data than those in the noise-free case, which implies that the reconstruction process in this case is less influenced by the choice of a particular threshold.

A similar behavior is observed for the rank correlation coefficients. However, the shape of the curves appears more robust under the influence of noise than is the case for the symbol based measures. Hence, the rank based measures represent the most suitable similarity measures for studying the interrelation among short time series at high noise levels.

Finally, we observe that the CLR and the AWE are the most robust scoring schemes with respect to noise, whereas ARACNE fails for short and noisy time series.

A detailed analysis on the reconstruction efficiency of the top-ranking measures and scoring schemes under various stochastic conditions is considered in the Discussion section. Additionally, the performance as a function of the length of the time series and the noise intensity can be found in the Additional files.

The role of interpolation and sampling

Because time-resolved gene expression data are usually quite coarsely sampled, general assumptions about what happens between two time points cannot be made. This problem becomes obvious when unequally sampled data are used (Additional file 1, Figure S8).

Although the interpolation at the beginning of the time series (where the time points are rather close) seems to be sufficient, it does not accurately capture the dynamics of the expression time series when the distance between the time points becomes larger. Hence, by interpolating the gene expression data sets, artifacts are introduced, which are further reflected in the results of the particular measures of interdependency. In order to avoid these artifacts, we forgo interpolation in this comparison study, even though this leads to less significant results for almost all measures, as they operate far below the limit of their theoretically defined preconditions. However, we have observed that the overall results (ROC analysis) are typically equal or even better when interpolation is not included, especially when non-uniformly sampled time series are considered. Additional file 1, Figure S9 illustrates this effect for the simple mutual information as an example.

However, some measures, such as the Granger causality used in this study, as well as several scoring schemes (e.g., the Time Shift), are explicitly time dependent. Hence, they require uniformly sampled data, meaning that an interpolation is needed if only non-uniformly sampled data are available. This is, in general, the case in GRN reconstruction.

However, most of the well performing reconstruction tools in our study are not explicitly time-dependent, which means they do not require a specific time sampling and are thus not very sensitive to the spacing on the time axis. Our results, as shown in Additional file 1, Figure S10, illustrate that a non-uniform sampling can even improve the quality of the reconstruction with these tools, since a larger period of the dynamics is captured.

The role of the network topology

In general, the underlying network and its properties are not known prior to the reconstruction process. However, the available experimental and theoretical research suggests that gene regulatory networks are most likely characterized by scale-free properties [39]. Therefore, we compare the reconstruction efficiency of the relevance network approach for various subnetworks of the E. coli and S. cerevisiae regulatory networks, as described in detail in the Methods section (subsection on Synthetic data sets). These networks (Additional file 1, Figure S14) differ in size, average degree, and clustering coefficient. Nevertheless, we observe that the performance of the top-ranking measures, such as the symbol based measures, rank correlations, MI, and RMI, does not depend on the network topology: very similar ROC curves are obtained for all of the network types analyzed, as shown in Figure 4 (this also applies to several other measures, as shown in Additional file 1, Figure S11). The performance in the range of low fpr improved for most of the measures with increased average degree of the nodes. However, at the same time, the performance in the range of high fpr usually decreased. In general, the largest differences in the reconstruction efficiency occur for the conditional Granger causality and the partial Pearson correlation, where the quality of the reconstruction decreases significantly for an increased number of nodes (e.g., the E. coli network with 200 nodes) and an increased clustering coefficient, as in the S. cerevisiae network.

Discussion

Inspection of the ROC curves alone does not always allow conclusions about the overall performance of a measure or scoring scheme. Therefore, we further compare the described modifications of the relevance network algorithm in terms of ROC statistics (we focus on the algorithm as illustrated in Algorithm 1, without considering additional statistical significance, due to the lack of a suitable null model). As an example, we evaluate the ROC statistics from time series with uniform sampling (without performing an interpolation) for the network of 100 genes of E. coli.

ROC statistics for noise-free data

To evaluate and rank the overall performance of all approaches under study, we calculate three common summary statistics from ROC analysis: the area under the ROC curve (AUC(ROC)), the Youden index and the area under the precision/recall curve (AUC(PvsR)), as explained in the Methods section. Furthermore, as the modifications of the algorithm implemented in the "minet" package are widely used approaches for GRN reconstruction, we use their best-performing results to establish a benchmark for the comparison of the different measures and scoring schemes. In Table 2 we provide an overview of the summary statistics for the different measures (mutual information and correlation estimation) and scoring schemes implemented in the R-package "minet". Based on these results, we define a measure combined with a particular scoring scheme to be

Table 2 The minet algorithm

PARAMETER (MINET)                  AUC(ROC)   YOUDEN   AUC(PvsR)
clr, mi.empirical, equalfreq         0.80      0.54      0.05
clr, mi.empirical, equalwidth        0.76      0.45      0.04
clr, mi.mm, equalfreq                0.80      0.54      0.05
clr, mi.mm, equalwidth               0.76      0.48      0.04
clr, mi.shrink, equalfreq            0.80      0.53      0.05
clr, mi.shrink, equalwidth           0.74      0.41      0.04
clr, mi.sg, equalfreq                0.80      0.54      0.05
clr, mi.sg, equalwidth               0.74      0.42      0.04
clr, pearson, none                   0.78      0.49      0.05
clr, spearman, none                  0.80      0.53      0.05
clr, kendall, none                   0.80      0.53      0.05
mrnet, mi.empirical, equalfreq       0.82      0.59      0.04
mrnet, mi.empirical, equalwidth      0.76      0.47      0.05
mrnet, mi.mm, equalfreq              0.81      0.57      0.04
mrnet, mi.mm, equalwidth             0.77      0.46      0.05
mrnet, mi.shrink, equalfreq          0.81      0.57      0.04
mrnet, mi.shrink, equalwidth         0.73      0.39      0.04
mrnet, mi.sg, equalfreq              0.81      0.57      0.04
mrnet, mi.sg, equalwidth             0.77      0.47      0.06
mrnet, pearson, none                 0.78      0.49      0.04
mrnet, spearman, none                0.82      0.58      0.03
mrnet, kendall, none                 0.81      0.56      0.03
aracne, mi.empirical, equalfreq      0.76      0.52      0.01
aracne, mi.empirical, equalwidth     0.54      0.12      0.02
aracne, mi.mm, equalfreq             0.76      0.52      0.01
aracne, mi.mm, equalwidth            0.54      0.12      0.02
aracne, mi.shrink, equalfreq         0.76      0.52      0.01
aracne, mi.shrink, equalwidth        0.55      0.14      0.02
aracne, mi.sg, equalfreq             0.76      0.52      0.01
aracne, mi.sg, equalwidth            0.54      0.12      0.02
aracne, pearson, none                0.54      0.07      0.03
aracne, spearman, none               0.76      0.52      0.01
aracne, kendall, none                0.76      0.52      0.01

Overview of the summary statistics from the ROC analysis for the different parameters (implemented scoring schemes and measures, noise level 0.0) in the minet package.

well performing for short expression data sets (evaluated on the synthetic data in this case) if

AUC(ROC) > 0.8, YOUDEN > 0.5 and AUC(PvsR) > 0.05,

sufficiently performing if

0.8 > AUC(ROC) > 0.7, 0.5 > YOUDEN > 0.4 and 0.05 > AUC(PvsR) > 0.03,

and deficient otherwise.
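A literal transcription of these criteria into code (a hypothetical helper, not part of the study's implementation; boundary values are handled strictly, as the inequalities are written) makes the three-way categorization explicit:

```python
def classify_performance(auc_roc, youden, auc_pvr):
    """Categorize a measure/scoring-scheme combination by the three
    summary statistics, using the thresholds stated in the text."""
    if auc_roc > 0.8 and youden > 0.5 and auc_pvr > 0.05:
        return "well performing"
    if 0.7 < auc_roc <= 0.8 and 0.4 < youden <= 0.5 and 0.03 < auc_pvr <= 0.05:
        return "sufficiently performing"
    return "deficient"
```

Note that the criteria are conjunctive: a combination must pass all three thresholds of a category to receive that label.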

By calculating the summary statistics in the noise-free case, as shown in Figure 5, we conclude that several information-theoretic measures (simple and residual MI), correlations (simple and conditional Pearson's, as well as Spearman's) and measures based on symbolic dynamics (SyMI and SySimMI) perform sufficiently well in combination with the basic relevance network algorithm (IDentity scoring scheme). It stands out that the simple Spearman correlation performs better than the simple Pearson correlation, and SyMI better than the simple MI. This is because symbol and rank based measures are less sensitive to finite-size effects and to the distribution of the data.

The modifications of the relevance network algorithm in the "minet" package with the best performance in the reconstruction of GRNs from short data sets are CLR and MRNET ("minet" is based on Spearman's correlation in this case). Here, the AUC(ROC) indicates almost no change compared to the basic algorithm with identity scoring (measure: Spearman's correlation), while the YOUDEN index decreases for CLR and increases for MRNET. However, the opposite is true for the AUC(PvsR). The overall performance of CLR (in terms of the considered summary statistics) is slightly better than that of MRNET (the CLR scoring scheme was used to set the benchmarks). Moreover, the measures combined with the TS scoring scheme perform sufficiently well, although the summary statistics do not change much compared to the results obtained for the same measures using the ID. In contrast, the asymmetric weighting yields a significant increase in all the summary statistics compared to the performance of the same measures using only the identity scoring scheme.

Hence, in the noise-free case, we obtain the following ranking of measures with the highest capability to detect true and eliminate false positive links:

1. AWE + TS (scoring by μ_S),

2. AWE + TS (scoring by μ_P),

3. AWE,

4. AWE,

5. AWE and

6. CLR (scoring by μ_S).

The asymmetric weighting (AWE) significantly improves the prediction at this point, since it breaks the symmetry of a particular measure based on topological considerations and, therefore, reduces the number of false positive links. Hence the AWE clearly shows the best performance when short time series are considered (the results become slightly better if Time Shift is applied in addition).

ROC statistics for noisy data

In order to account for stochasticity in the time series, as well as to establish the robustness of the investigated (top-ranked) similarity measures against noise, we additionally evaluate their performance for two different noise intensities, namely 0.3 (Figure 6) and 0.5 (Additional file 1, Figure S12). Only those measures which perform sufficiently well in the noise-free case (measures operating on random variables and symbolic dynamics) are tested. In particular, we examine the Pearson (μ_P(x, y)), Spearman (μ_S(x, y)) and Kendall (μ_K(x, y)) correlation coefficients as well as the symbol based measures using the ID scoring scheme. Additionally, we investigate the performance of the CLR, MRNET, ARACNE, AWE and TS scoring schemes based on the same measures as in the noise-free case.

Under the influence of noise, the quality of the results of the symbol based measures decreases. As noise strongly influences the process of symbol assignment, it can in principle either enhance or distort the information content. The direction of this influence is not predictable a priori, but in the presence of strong noise, symbols are no longer reliable (if no additional information on the influence of the noise is provided). On the other hand, measures operating on random variables are rather robust against noise (the best results in these cases were achieved using rank correlations).

ARACNE has proven to be very sensitive to noise. In contrast, the asymmetric weighting (compared to the results of the ID using the same measure) still performs well within the given limits, as it is based solely on topological considerations and is therefore not influenced by the presence of noise. Furthermore, to investigate how noise influences the reconstruction efficiency, we calculated the area under the ROC curve and the YOUDEN index as a function of the noise intensity for the 5 combinations of similarity measures and scoring schemes which performed best in the noise-free case, namely the symbolic measures and the asymmetric scoring schemes mentioned in the previous section. Additionally, we compared the results to those obtained for time series of different lengths (i.e., 8 and 20 time points). We conclude that for short time series, the capability of the measures and scoring schemes to detect true and at the same time eliminate false positive links depends both on the number of time points and on the noise intensity (Additional file 1, Figure S13). However, this dependence is small compared to the differences in reconstruction efficiency between the various measures. Moreover, the sensitivity to noise is reduced with increased length of the time series (which corresponds to the usage of order patterns of higher dimension). In general, we observe a decrease in the reconstruction efficiency if the noise level increases or the length of the time series decreases. For the short time series used in this study, however, these dependencies are not monotonic.

Conclusions

By performing an extensive comparison of the reconstruction efficiency of the relevance network algorithm using 6 scoring schemes and 21 different measures, we showed that, with a suitable choice of measure and scoring scheme, this approach is applicable to short time series to gain knowledge about underlying gene regulatory networks which differ in various properties. However, most of the currently used measures have highly limited capabilities, as the number of time points of the gene expression data is usually not sufficient to infer the underlying structure of the network. This in turn makes the distinction between direct and indirect interactions an even more challenging task.

This study could serve as a basis for the selection of a reverse engineering method for network reconstruction, based on the combination of a similarity measure and a scoring scheme suitable for given data. Our results showed that rank and symbol based measures (which we applied for the first time for GRN reconstruction) have a significantly better performance in inferring interdependencies, whereas most of the standard measures (such as Granger causality and several information-theoretic measures) fail when short time series are considered. The residual mutual information, which we proposed in this work as a partial mutual information measure, increased the reconstruction efficiency of the relevance network algorithm compared to simple and, in particular, conditional mutual information.

Nevertheless, from the analysis presented here, we conclude that it is necessary to move beyond the standard similarity measures based directly on the time series, towards measures rooted in the study of symbolic dynamics or of ranks deduced from the time series, in order to increase the efficiency of the relevance network algorithm for GRN reconstruction. Although measures based on symbolic dynamics performed notably well in the noise-free case, their performance decreased as the noise level in the system increased, and for high noise intensities it became comparable to that of mutual information. This implies that in the presence of strong noise, rank correlations (in particular Spearman's rank correlation) are the most efficient tools for GRN reconstruction, since their performance was not significantly affected as the noise level increased. Additionally, we note that the results obtained for RMI, rank correlations and symbol based measures are robust with respect to the network topology. We also showed that unequal sampling of the data in general does not pose additional problems for measures where interpolation is not essential (such as the top-ranked measures in this study).

We point out once again that all rank and symbol based measures described here are symmetric. This means that the directionality of the interactions cannot be inferred unless a symmetry-breaking scoring scheme is considered in addition. In this direction, we showed that the novel scoring scheme proposed in this work, the asymmetric weighting (AWE), stands as a valuable approach to the problem of introducing directionality in the reconstruction of regulatory networks.

It would be interesting to compare, in the future, the observed reconstruction efficiency of the relevance network approach to that of other reverse engineering methods, such as the Bayesian network approach.

Methods

Our work focuses on methods for reverse engineering which operate on time-resolved gene expression experiments (in terms of mRNA concentrations). We define a time series profile for a gene measured over n time points as a sequence of expression values x = <x_1, ..., x_n>, where each x_i, 1 ≤ i ≤ n, corresponds to a distinct time point t_i. In addition, let each of the m genes be represented by r time-resolved replicates over n time points. Here, we use the mean of r = 6 replicates, resulting in an m × n data matrix M. Let M_i, 1 ≤ i ≤ m, denote the i-th row of M, which corresponds to the time-resolved expression profile of the i-th gene.

The general reverse engineering method based on a particular similarity measure μ and a scoring scheme F operating on the data matrix M is given in Algorithm 1.

Input:

M, matrix with m rows (genes) and n columns (time points),

μ, similarity measure,

F, scoring scheme

Output:

m × m adjacency matrix A of the reconstructed network G

1  foreach gene i, i ∈ {1, ..., m} do
2    foreach gene j, j ∈ {1, ..., m}, j ≠ i do
3      w_ij ← |μ(M_i, M_j)|
4    end
5  end
6  C: c_ij ← w_ij · f_ij ;
7  choose a threshold τ ;
8  a_ij ← 1 if c_ij > τ ;
9  a_ij ← 0 if c_ij ≤ τ ;

Algorithm 1: General reverse engineering method based on a similarity measure μ and a scoring scheme F. f_ij ∈ F, c_ij ∈ C, a_ij ∈ A, and w_ij ∈ W, where W is the matrix obtained by applying μ on all pairs of rows of the given data matrix M.
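Algorithm 1 can be sketched in a few lines of code. The following Python fragment is an illustrative re-implementation (not the R/C code used in this study), using the Pearson correlation as the measure μ and the identity scoring scheme (f_ij = 1):

```python
from math import sqrt

def pearson(x, y):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / sqrt(vx * vy)

def reconstruct_network(M, mu, F, tau):
    """Relevance network reconstruction following Algorithm 1.
    M: list of m expression time series (rows = genes),
    mu: similarity measure, F: m x m scoring-scheme weights f_ij,
    tau: threshold on the combined scores."""
    m = len(M)
    # w_ij <- |mu(M_i, M_j)| for all ordered pairs, zero on the diagonal
    W = [[abs(mu(M[i], M[j])) if i != j else 0.0 for j in range(m)]
         for i in range(m)]
    # c_ij <- w_ij * f_ij
    C = [[W[i][j] * F[i][j] for j in range(m)] for i in range(m)]
    # a_ij <- 1 if c_ij > tau else 0
    return [[1 if C[i][j] > tau else 0 for j in range(m)] for i in range(m)]
```

For the identity scoring scheme, F is simply a matrix of ones; other schemes (CLR, MRNET, AWE, TS) would supply different weights f_ij.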

The evaluation of the scoring schemes and measures is generally performed in R [40] using available packages, as noted in the manuscript. Additionally, several C routines were developed in order to improve computational speed.

In what follows, we describe the procedure for generating the synthetic time series data, and present the definitions of the similarity measures and scoring schemes used in the comparative analysis. Furthermore, the details of the ROC analysis are briefly reviewed.

Synthetic data sets

The evaluation of the existing methods for reverse engineering gene regulatory networks often employs real time-resolved expression data. However, these data include the convoluted effects of regulons (genes under regulation by the same regulatory protein) and stimulons (genes under regulation by the same external influence), which renders it challenging to realistically assess the performance of investigated methods. Moreover, not every regulatory subnetwork leads to expression of the participating genes over the measured time period and particular condition of interest. These facts lead to a lack of control when using transcriptomics time series data sets for network inference.

Following the example of the DREAM challenge [2, 14], in this comparison study we use synthetically generated data sets to overcome the described disadvantages. The usage of synthetic data, in contrast to real measurements, enables us to directly compare the performance of different reconstruction tools, since the topology and dynamics of the underlying network are known a priori. In particular, we use the synthetic network generator SynTReN [41, 42], which creates synthetic transcriptional regulatory networks by providing the edges of the network, as well as information about the type of interaction (activating, repressing, or dual). Additionally, the generator produces simulated gene expression data of the associated mRNA concentrations for each gene, based on Michaelis-Menten and Hill kinetics, which approximate experimental expression measurements. These expression data sets are uniformly sampled in time.

In SynTReN, the levels for three types of noise are user definable: (1) biological noise, corresponding to biological variability given by the stochastic variations in gene expression, (2) experimental noise, corresponding to the technical variability, and (3) noise on correlated inputs, which accounts for the influence of several activated genes on a regulated gene. Note that different noise levels are included in this study, but we make no distinction between the strength of the three noise types.

In particular, we generate regulatory networks from the GRNs of E. coli and S. cerevisiae, using the cluster addition strategy to select a connected subgraph: In each iteration, a node is randomly chosen and added to the graph together with all its neighbors. This strategy is chosen since it is an efficient method for extracting a subnetwork that approximates well the topology of the source network. The results presented in this work are obtained for subnetworks of distinct sizes which differ in degree and clustering coefficient. In particular, we investigate an E. coli subnetwork of 100 genes including 121 links, where 10 of the genes code for transcription factors. It is characterized by an average degree of 2.42 and a clustering coefficient of 0.016. Additionally, we examine two more networks: (1) an E. coli subnetwork of 200 genes (34 coding for transcription factors) that includes 303 links and is characterized by an average degree of 3.03 and a clustering coefficient of 0.019, and (2) a S. cerevisiae subnetwork of 100 genes (14 coding for transcription factors) that includes 123 links and is characterized by an average degree of 2.46 and a clustering coefficient of 0.026. The degree distributions are shown in Additional file 1, Figure S14. Synthetic gene expression data with 10 biological (n = 10 time points) and 6 technical (r = 6) replicates have been generated from these networks. We use the means of the technical replicates for the interaction analysis. An example of a generated network (the E. coli network of 100 genes used in our investigations) and the simulated gene expression data sets are visualized in Figure 7.
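The cluster addition strategy can be sketched as follows. This is our reading of the description above, not SynTReN's actual code; the helper name and the adjacency-dict representation are illustrative assumptions:

```python
import random

def cluster_addition(adj, target_size, seed=0):
    """Grow a connected subnetwork: start from a random node plus its
    neighbours, then repeatedly pick a random already-selected node and
    add all of its neighbours, until the target size is reached.
    adj: dict mapping node -> set of neighbour nodes (undirected view)."""
    rng = random.Random(seed)
    start = rng.choice(sorted(adj))
    selected = {start} | set(adj[start])
    while len(selected) < target_size:
        grew = False
        for v in rng.sample(sorted(selected), len(selected)):
            new = set(adj[v]) - selected
            if new:
                selected |= new
                grew = True
                break
        if not grew:  # the whole connected component is exhausted
            break
    return selected
```

Because each step only adds neighbours of already-selected nodes, the extracted subgraph stays connected by construction.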

Interpolation

Since transcriptomics time series usually consist of expression measurements at a few, possibly non-uniformly sampled time points, data interpolation is often the first pre-processing step. Different techniques, including linear [43] and nonlinear interpolation methods, such as cubic splines [33] and b-splines [25], have been applied [11]. Although these methods extend the available data, they also introduce artefacts due to over-fitting. For this reason, we refrain from interpolating the synthetic time series data (whenever possible) for the ranking of the used similarity measures. Nevertheless, for reasons of comprehensiveness, we consider the effect of interpolation using cubic splines on the investigated similarity measures and scoring schemes of highest rank in the absence of noise.

Similarity measures

Reverse engineering of regulatory networks relies on the inference of interrelationships among genes, based on similarity measures. In general, given two time series x and y over n time points, a similarity measure is given by the mapping μ : R^n × R^n → I, where I ⊆ R. The so-defined pairwise similarity measure detects (non)linear relationships between two variables, in the considered case, between two gene expression time series. This definition allows the measure to be symmetric, which is not commonly the case for gene regulatory interactions. Moreover, if two genes are linked indirectly via a third gene, the pairwise measure cannot distinguish direct from indirect relationships, and hence additional false positive links will be introduced in the network reconstruction.

Nevertheless, the definition of the similarity measure can be extended to conditional and partial measures, incorporating the possibility to exclude the influence of a third gene. Conditional similarity measures are more general, since they do not rely on specific assumptions about the probability distribution (deduced from the time series associated with a discrete random variable); however, estimating this distribution impedes the computation of the measure from short time series. On the other hand, partial measures can indicate conditional independence reliably only for multivariate Gaussian variables.

To be able to discern the direction of a putative interaction and hopefully eliminate any spurious effects, we consider the conditional and partial variants for several of the measures detailed below. We term the basic pairwise measures as simple, in comparison to their conditional and partial variants. An overview on the measures included in this study is given in Table 1.

Measures operating on vectors

Some of the standard measures used for determining gene regulatory interactions are based on the calculation of the distance between expression time series regarded as vectors. In the following, x and y will denote the vectors <x_1, ..., x_n> and <y_1, ..., y_n>, respectively. Our study includes:

L^s norm:

This distance measure for vectors x and y is defined as follows:

μ_L(x, y) = (∑_{i=1}^{n} |x_i − y_i|^s)^{1/s}.   (1)

In our study, s = 10, which corresponds to the number of available time points.

Euclidean distance

Furthermore, we consider the well-known Euclidean distance, which is a special case of the L^s norm, with s = 2. Therefore, it is defined as

μ_EC(x, y) = (∑_{i=1}^{n} (x_i − y_i)^2)^{1/2}.   (2)

Manhattan distance

We also study the performance of the Manhattan distance, which represents the shortest path between two points placed on a rectangular grid, and is analogous to the L^1 norm:

μ_MA(x, y) = ∑_{i=1}^{n} |x_i − y_i|.   (3)
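The three distance measures above differ only in the exponent s; a minimal sketch (μ_L for general s, with s = 2 and s = 1 recovering the Euclidean and Manhattan distances, respectively):

```python
def lp_distance(x, y, s=2):
    """L^s norm of the difference vector (Eq. 1); s = 2 gives the
    Euclidean distance (Eq. 2) and s = 1 the Manhattan distance (Eq. 3)."""
    return sum(abs(a - b) ** s for a, b in zip(x, y)) ** (1.0 / s)
```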

Dynamic time warping (DTW)

In addition, we investigate the performance of DTW, which, to our knowledge, has not been applied to the problem of gene regulatory network inference, but rather to clustering gene expression data [43, 44]. The DTW-based measure relies on finding the optimal (least cumulative) distance mapping of a given time series onto a reference time series, where both sequences may vary in time and/or speed. It was originally developed for speech recognition [45, 46], but has recently been used for different data mining tasks in medicine and bioinformatics [43, 47]. The concept of DTW is sketched in Figure 8 for two short time series with 4 time points each. In the first step of the DTW algorithm, local distances (e.g., Euclidean or Manhattan distances) for all pairs of time points are calculated. Then, the time series are mapped onto each other by linking various time points, such that each point is included at least once and the sum over the lengths of all links is minimal (optimal alignment path). Here, we use the DTW as implemented in the R-package "dtw" [48–50], with the Euclidean distance as the point-wise local distance, and different step patterns which define the local constraints of the alignment paths. We include three different step patterns, namely

symmetric1

μ_W(i, j) = μ_EC(i, j) + min{μ_W(i − 1, j − 1), μ_W(i − 1, j), μ_W(i, j − 1)},   (4)

symmetric2

μ_W(i, j) = min{μ_W(i − 1, j − 1) + 2μ_EC(i, j), μ_W(i − 1, j) + μ_EC(i, j), μ_W(i, j − 1) + μ_EC(i, j)}   (5)

and asymmetric

μ_W(i, j) = min{μ_W(i − 1, j − 1), μ_W(i − 1, j), μ_W(i − 1, j − 2)} + μ_EC(i, j)   (6)

to find an optimal alignment. Here μ_EC denotes the local (Euclidean) distance, and the measure μ_W the cumulative distance (representing the minimum sum of local distances along the alignment paths).
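The cumulative distance μ_W for the symmetric1 step pattern can be computed by standard dynamic programming. The following sketch mirrors that recursion (an illustration, not the "dtw" R package implementation):

```python
def dtw_distance(x, y, local=lambda a, b: abs(a - b)):
    """Cumulative DTW distance with the symmetric1 step pattern:
    each cell extends the cheapest of the three predecessor cells
    (diagonal match, insertion, deletion)."""
    INF = float("inf")
    n, m = len(x), len(y)
    # D[i][j]: cumulative distance aligning x[:i] with y[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = local(x[i - 1], y[j - 1])
            D[i][j] = d + min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    return D[n][m]
```

For example, a series with a repeated point, such as [1, 2, 2, 3] against [1, 2, 3], aligns at zero cost because the warping absorbs the repetition.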

The resulting matrix of cross-distances D contains the pairwise calculated distance measures (μ_EC, μ_L, μ_MA, or μ_W, as defined above) and is, in all cases, normalized by the largest value occurring in the matrix as follows:

d̃_ij = d_ij / max_{k,l} d_kl.   (7)

The similarity measure is then defined by:

μ(M_i, M_j) = 1 − d̃_ij.   (8)

Measures operating on random variables

Besides the representation of the expression time series as vectors, a time series x = <x_1, ..., x_n> can be associated with a discrete random variable X with probability distribution p(x), x ∈ X, which is approximated by frequencies via standard binning arguments. This representation allows the calculation of several widely used similarity measures, such as correlation and information-theoretic measures. Note, however, that the temporal information is lost by this representation of the time series data. We first review the most commonly used measures in detail:

Pearson correlation

This similarity measure quantifies the linear relationship between the random variables X and Y, corresponding to two time series x and y:

μ_P(X, Y) = (E(XY) − E(X)E(Y)) / (σ_X σ_Y),   (9)

where σ_X and σ_Y are the standard deviations and E denotes the expectation

E(X) = ∑_{x∈X} x p(x).   (10)

If the variables are independent, the correlation coefficient is μ_P = 0, but the opposite is not true, as this coefficient is sensitive mainly to linear dependencies. Note that μ_P takes values in the interval [-1, 1] and is symmetric.

Conditional Pearson correlation (CPC)

By using the conditional expectation value

E(X|Y = y) = ∑_{x∈X} x p(x|y),   (11)

where p(x|y), x ∈ X, y ∈ Y, is the conditional probability distribution, one can provide the following definition for the CPC:

μ_P(X, Y|Z) = (E(XY|Z) − E(X|Z)E(Y|Z)) / (σ_{X|Z} σ_{Y|Z}).   (12)

Thus, the conditional correlation among the time series x and y of the corresponding genes, eliminating the influence of all other genes, is defined as

(13)

Partial Pearson correlation

Analogously, one could also consider

(14)

where

(15)

The residuals are calculated following Eq. (16) by performing a linear regression of x (respectively y) on z:

res_x = x − (â + b̂ z),   (16)

where â and b̂ are the fitted regression coefficients.

Rank correlations can be used as a more general measure of interdependence, not restricted to linear relationships. Even though they measure a different type of relationship than the product-moment correlation coefficient, like the previous correlation measures they are also defined on the interval [-1, 1].

Spearman's rank correlation

This type of correlation is based on the rank distribution of the expression values:

μ_S(x, y) = μ_P(R(x), R(y)),   (17)

where R(x) is the rank of x.
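Because Spearman's coefficient is Pearson's coefficient applied to the rank-transformed series, it can be computed with a small self-contained sketch (tie handling by averaged ranks is the standard convention, assumed here):

```python
from math import sqrt

def pearson(x, y):
    """Sample Pearson correlation (Eq. 9) of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x)
                      * sum((b - my) ** 2 for b in y))

def ranks(x):
    """Rank transform R(x); tied values receive averaged ranks."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average of positions i..j, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rank correlation (Eq. 17): Pearson on the ranks."""
    return pearson(ranks(x), ranks(y))
```

On a monotone but non-linear pair (e.g., x versus x cubed), Spearman's coefficient is exactly 1 while Pearson's is below 1, which illustrates why rank correlations capture general monotone dependence.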

Kendall's rank correlation

Another rank correlation is

μ_K(x, y) = (n_c − n_d) / (n(n − 1)/2),   (18)

with n_c being the number of concordant pairs, and n_d the number of discordant pairs of the rank sets.

It is common to regard the rank correlation coefficients (especially Spearman's rank correlation) as alternatives to Pearson's coefficient, since they can either reduce the amount of computation or make the coefficient less sensitive to non-normality of the distributions. Nevertheless, they quantify different types of association.

Unlike most of the measures discussed here, the correlation measures provide not only information about whether two genes interact, but also about whether the relationship is activating or repressing. As the latter information is outside the scope of the current study, only the absolute value (respectively the square) of the correlation coefficient is taken into account.

In addition to correlations, information-theoretic measures are also defined on random variables as the relevant representation of expression time series.

Mutual information

One of the most commonly used measures for inferring interdependencies between two subunits of a system is the mutual information (MI) [16]. Intuitively, MI measures the information content that two random variables X and Y share. The simple mutual information can be expressed in terms of the marginal entropies H(X) and H(Y) and the joint entropy H(X, Y), using the definition of the Shannon entropy

H(X) = −∑_{x∈X} p(x) log p(x),   (19)

which quantifies the uncertainty associated with a random variable. Hence the simple MI is defined as

μ_I(X, Y) = H(X) + H(Y) − H(X, Y).   (20)

It also captures non-linear interrelations but, like the other simple measures, cannot be used to distinguish between direct and indirect relations.
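A minimal sketch of the simple MI estimated via binning (the bin count and the equal-width scheme are illustrative choices; the "minet" runs in this study also used equal-frequency binning):

```python
from math import log
from collections import Counter

def entropy(symbols):
    """Shannon entropy H (Eq. 19) of a discretized series, in nats."""
    n = len(symbols)
    return -sum((c / n) * log(c / n) for c in Counter(symbols).values())

def discretize(x, bins=3):
    """Equal-width binning of a real-valued series."""
    lo, hi = min(x), max(x)
    w = (hi - lo) / bins or 1.0          # guard against a constant series
    return [min(int((v - lo) / w), bins - 1) for v in x]

def mutual_information(x, y, bins=3):
    """Simple MI, mu_I = H(X) + H(Y) - H(X, Y)  (Eq. 20)."""
    dx, dy = discretize(x, bins), discretize(y, bins)
    return entropy(dx) + entropy(dy) - entropy(list(zip(dx, dy)))
```

For a series compared with itself, the joint entropy equals the marginal entropy, so the MI reduces to H(X); for an unrelated pairing it approaches zero.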

Conditional mutual information

However, if we replace the marginal and joint entropies by their conditional analogs, H(X|Z), H(Y|Z) and H(X, Y|Z), we can eliminate the influence of a third variable Z. Hence, we calculate the minimal information shared by the time series x and y of two genes conditioned on each z_k, k = 1, ..., m. Thus the conditional (sometimes also referred to as partial [51]) mutual information (CMI) can be written as:

μ_I(x, y|Z) = min_{k} [H(x|z_k) + H(y|z_k) − H(x, y|z_k)],   (21)

where

H(X|Z) = H(X, Z) − H(Z).   (22)

The degree of interaction is indicated by the values of μ_I, normalized by the largest value occurring among all pairs of genes.

Mutual/conditional coarse-grained information rate

Another approach based on information-theoretic aspects are the coarse-grained measures. Here, instead of approximating the exact entropies of time series, relative measures of "information creation" are used to study the interrelationship of two (sub)systems. For this purpose, the calculation of coarse-grained entropy rates [52] is used to replace the approximation of the Kolmogorov-Sinai entropy (the metric entropy of a dynamical system): First, a time lag l_max is determined such that

(23)

among all analyzed data sets. Then, the coarse-grained information rate (CIR) is given by the norm of the mutual information

(24)

Usually the parameters l_min and Δl (the difference between consecutive time lags) can be set to one, and thus the CIR becomes

(25)

Hence, the mutual coarse-grained information rate (MCIR) is defined as

(26)

whereas the conditional coarse-grained information rate (CCIR) as

(27)

with

(28)

Finally, a normalization by the largest value occurring among all pairs of genes is performed. These (normalized) coarse-grained information rates are then used to indicate the degree of interaction.

Model-based measures

Granger causality

A rather new approach for inferring gene regulatory networks is the Granger causality (GC). Given the time series x and y, two linear autoregressive (AR) models are estimated, both including the past of x, and additionally, one of them including the past of y. In order to determine the optimal order q of the AR model, which denotes the number of past time points to be included, we use the function "VARselect" from the R-package "vars" [53, 54] based on the Akaike information criterion (AIC) [55]. The AIC is a measure of the goodness of fit of an estimated statistical model, derived as a tool for model selection. In the general case, the AIC is defined as:

AIC = 2u − 2 ln(L),   (29)

where u is the number of parameters in the statistical model, and L is the maximized value of the likelihood function for the estimated model.

With properly selected AR models, the part of the variance in the data explained by one model in comparison to the other provides information about the causal relationship. This comparison can be formulated in terms of an index.

Thus the Granger causality index, denoted by μ_G for the simple linear measure, as defined in [34, 35] via the covariance σ of the noise terms, is:

μ_G = ln( σ²(u_1) / σ²(u_2) )   (30)

and can be inferred from the AR models:

x_t = ∑_{i=1}^{q} a_11i x_{t−i} + u_1t   (31)

x_t = ∑_{i=1}^{q} a_21i x_{t−i} + ∑_{i=1}^{q} a_22i y_{t−i} + u_2t   (32)

where a_11i, a_21i and a_22i are the parameters of the models and u_1t, respectively u_2t, represent white noise.
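The simple GC index can be illustrated by fitting both AR models with ordinary least squares and comparing residual variances. This is a sketch for illustration, not the estimator of [34, 35]; the tiny normal-equation solver is our own helper:

```python
from math import log

def ols_residual_variance(targets, regressors):
    """Least-squares fit of targets on regressors (with intercept) via
    normal equations; returns the residual variance."""
    n, k = len(targets), len(regressors[0]) + 1
    X = [[1.0] + list(r) for r in regressors]
    A = [[sum(X[t][i] * X[t][j] for t in range(n)) for j in range(k)]
         for i in range(k)]
    b = [sum(X[t][i] * targets[t] for t in range(n)) for i in range(k)]
    for col in range(k):                 # Gaussian elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):       # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c]
                              for c in range(r + 1, k))) / A[r][r]
    res = [targets[t] - sum(beta[i] * X[t][i] for i in range(k))
           for t in range(n)]
    return sum(e * e for e in res) / n

def granger_index(x, y, q=1):
    """Simple GC index of y -> x: ln of the residual-variance ratio of
    the AR model using only x's past (Eq. 31) versus the AR model that
    also includes y's past (Eq. 32)."""
    targ = x[q:]
    own = [[x[t - i] for i in range(1, q + 1)] for t in range(q, len(x))]
    both = [r + [y[t - i] for i in range(1, q + 1)]
            for r, t in zip(own, range(q, len(x)))]
    s1 = ols_residual_variance(targ, own)
    s2 = ols_residual_variance(targ, both)
    return log(s1 / s2)
```

If knowing the past of y sharply reduces the residual variance of the model for x, the index is large, indicating a putative directed influence of y on x.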

Conditional/partial Granger causality

As for the previous measures, we use the conditional and partial (linear) Granger causality measures (CGC and PGC) as defined in [34, 35], in order to identify existing indirect relationships. Hence, the AR models are formulated as

(33)

(34)

for the conditional, and, in addition,

(35)

(36)

for the partial Granger causality, with a_11i, a_12i, a_21i, a_22i, a_23i, a_31i, a_32i, a_41i, a_42i and a_43i being the parameters of the models and u_1t, u_2t, u_3t and u_4t representing noise terms.

Using Eqs. (33) and (34), the conditional Granger causality index is then defined as:

(37)

where

(38)

and, using Eqs. (33) to (36), the partial Granger causality index is

(39)

where

(40)

The degree of interaction is indicated by the Granger causality index normalized by the largest value occurring among all pairs of genes.

Measures operating on symbolic dynamics

Despite the promising applications of interaction measures based on symbolic dynamics in various fields, they have not yet been employed for reverse engineering gene regulatory networks. In standard nonlinear time series analysis, for instance, the use of symbolic dynamics to uncover patterns of interactions, especially from short data sets [56], has proven to be a valuable tool. Therefore, we explore the potential of symbolic dynamics for the problem at hand by using the principle of order patterns. Following this principle, as described in [38], the time series are transformed into symbol sequences. An order pattern π of dimension δ is defined by the discrete order sequence of the time series x and has length δ. Hence, the time series can be symbolized using order patterns following:

(41)

where l is the time lag. In terms of gene regulatory network reconstruction, we need to choose a certain number of time points and rank them according to their expression value in order to obtain the order pattern. Then, each possible ordering corresponds to a predefined symbol.

This concept is illustrated in Figure 9 for time series composed of n = 4 time points. As we are dealing with very short time series here, we consider all possible combinations of the chosen number of time points. For instance, for a time series of length n = 4 and an order pattern of dimension δ = 3, we define symbols (order patterns π_{k}) for the following groups of time points: (1, 2, 3), (1, 2, 4), (1, 3, 4) and (2, 3, 4), shown in the left panels of Figure 9. Next, we define a symbol sequence

S^{(i)} = ( π_{1}^{(i)}, π_{2}^{(i)}, ..., π_{T}^{(i)} )   (42)

where π_{k}^{(i)} denotes the order pattern obtained for gene i from the k-th group of time points and

T = n! / ( δ! (n − δ)! )   (43)

is the length of the symbol sequence.

In this work, we usually choose the dimension δ such that the length T becomes maximal for a given time series of length n (i.e., δ = 5 for n = 10). For longer time series, however, this approach is no longer applicable, as the length of the symbol vectors grows too quickly to handle the calculations in R. Hence, for time series of 20 time points we use order patterns of dimension 6 instead of dimension 10, which would lead to the symbol vector of maximal length.
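The symbolization step can be sketched in a few lines. The following Python fragment (an illustration under our own naming; the study itself used R) encodes each δ-subset of time points by the permutation that sorts the corresponding expression values, yielding T = C(n, δ) symbols per gene, e.g. T = 4 for n = 4, δ = 3 as in Figure 9:

```python
from itertools import combinations

def order_pattern(values):
    """Rank-based symbol: the permutation of indices that sorts the
    chosen expression values in ascending order."""
    return tuple(sorted(range(len(values)), key=lambda k: values[k]))

def symbol_sequence(series, delta):
    """Order patterns for all delta-subsets of the time points.

    For a series of length n this yields T = C(n, delta) symbols,
    matching Eq. (43)."""
    return [order_pattern([series[i] for i in group])
            for group in combinations(range(len(series)), delta)]
```

For n = 10 and δ = 5 this produces the maximal T = 252 symbols per gene mentioned above.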

Symbol sequence similarity

Using the above described approach, we infer the interdependency of two genes as follows: Given a certain number δ of time points, we define a vector P containing all possible permutations of the ranking, and assign a symbol (order pattern π_{k}) to each of them. Next, we define a second vector, using the same symbols as for P, but assigned to the reversed ranking. Now, we count the pattern overlap of two symbol sequences S^{(i)} and S^{(j)} to evaluate the symbol sequence similarity: p_{1} assuming both time series are interrelated (Eq. (44)), and p_{2} assuming anti-interrelation (Eq. (45)).

p_{1} = (1/T) Σ_{k=1}^{T} 1[ π_{k}^{(i)} = π_{k}^{(j)} ]   (44)

p_{2} = (1/T) Σ_{k=1}^{T} 1[ π_{k}^{(i)} = π̃_{k}^{(j)} ]   (45)

where 1[·] denotes the indicator function and π̃_{k}^{(j)} is the symbol assigned to the reversed ranking of π_{k}^{(j)}.

We choose the maximal value of the two frequencies p_{1} and p_{2}

μ_{SySim} = max( p_{1}, p_{2} )   (46)

to define the symbol sequence similarity (SySim).
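A minimal sketch of this comparison follows (Python; the matching convention, that anti-related series produce mutually reversed permutations, is our reading of the reversed-ranking construction above):

```python
def symbol_sequence_similarity(s_i, s_j):
    """SySim: the larger of the matching frequency p1 (identical
    patterns) and p2 (pattern of one gene matches the reversed
    pattern of the other), evaluated position-wise."""
    T = len(s_i)
    p1 = sum(a == b for a, b in zip(s_i, s_j)) / T
    p2 = sum(a == tuple(reversed(b)) for a, b in zip(s_i, s_j)) / T
    return max(p1, p2)
```

Two genes with identical order patterns, as well as two genes with consistently reversed patterns, both receive the maximal similarity of one.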

Mutual information of symbol vectors

We calculate the mutual information of the symbol vectors (SymMI) of maximal length by:

μ_{SymMI} = Σ_{π, π'} p(π, π') log₂( p(π, π') / ( p(π) p(π') ) )   (47)

where p(π, π') denotes the joint relative frequency of the order patterns π in S^{(i)} and π' in S^{(j)}.

In addition, we consider the mean of the symbol sequence similarity and the mutual information (of the symbol vectors) (SymSimMI):

μ_{SymSimMI} = ( μ_{SySim} + μ_{SymMI} ) / 2   (48)

This is further extended to include symbolic dynamics based on a slope comparison (order patterns for pairs of time points), where we consider the symbol sequence similarity for pairs ("pairs") as a similarity measure and the conditional entropies for pairs as a distance measure.
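The mutual information of two symbol sequences can be estimated with a simple plug-in estimator over the observed symbol frequencies, sketched here in Python (the study used R; SymSimMI is then simply the mean of this value and SySim):

```python
from collections import Counter
from math import log2

def symbol_mutual_information(s_i, s_j):
    """Plug-in estimate of the mutual information of two symbol
    sequences: I = sum over symbol pairs of
    p(a, b) * log2( p(a, b) / (p(a) * p(b)) )."""
    T = len(s_i)
    joint = Counter(zip(s_i, s_j))          # joint symbol frequencies
    pi, pj = Counter(s_i), Counter(s_j)     # marginal frequencies
    return sum(c / T * log2((c / T) / ((pi[a] / T) * (pj[b] / T)))
               for (a, b), c in joint.items())
```

Identical sequences over two equally frequent symbols yield one bit of mutual information, while a constant partner sequence yields zero.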

A novel measure – the residual mutual information

Estimating entropies from short time series is imprecise; hence the estimation of the mutual information and, in particular, its conditional counterpart suffers from the same disadvantage. On the other hand, the simple mutual information is not able to distinguish between direct and indirect links. Therefore, in order to overcome this problem, we propose a novel partial measure – the residual mutual information (RMI), defined as:

RMI(X; Y|Z) = I( r_{X}; r_{Y} )   (49)

where

r_{X} = X − ( α_{X} + β_{X} Z ),  r_{Y} = Y − ( α_{Y} + β_{Y} Z )   (50)

analogously to the idea of partial correlation (the residuals are calculated in the same way as for the partial correlation in Eq. (16)). The degree of interaction in the complex network is then indicated by the values of the RMI, normalized by the largest value occurring among all pairs of genes.

Applied to short data sets, we expect the residual mutual information to perform much better in discriminating indirect links than the conditional MI, as we can abandon the estimation of additional conditional probabilities. Hence, the measure is more robust to effects of small sample size. Furthermore, we expect the RMI's performance to range between those of the simple and the conditional mutual information for long time series, since, in contrast to the CMI, we eliminate here only the linear influence of the variable Z on X and Y. We leave the confirmation of this claim to further theoretical analysis.
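The two-step construction of the RMI can be sketched as follows (Python with NumPy; the coarse equal-width binning stands in for whatever entropy estimator is used in practice, and the function name is ours):

```python
import numpy as np
from collections import Counter
from math import log2

def residual_mi(x, y, z, bins=4):
    """Residual mutual information RMI(X;Y|Z): remove the linear
    influence of Z from X and Y (as for partial correlation), then
    estimate the mutual information of the binned residuals."""
    x, y, z = (np.asarray(v, float) for v in (x, y, z))

    def residual(v):
        # Least-squares fit v = slope * z + intercept, then subtract.
        slope, intercept = np.polyfit(z, v, 1)
        return v - (slope * z + intercept)

    def discretize(v):
        edges = np.linspace(v.min(), v.max(), bins + 1)
        return np.digitize(v, edges[1:-1])

    rx, ry = discretize(residual(x)), discretize(residual(y))
    n = len(rx)
    joint = Counter(zip(rx, ry))
    px, py = Counter(rx), Counter(ry)
    return sum(c / n * log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in joint.items())
```

For two genes linked only through a common regulator Z, the residuals are nearly independent and the RMI stays small, whereas a direct dependence beyond Z survives the regression step.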

Scoring schemes

Once a chosen similarity measure has been applied to a given data matrix, there are several possibilities to score the resulting "weights" of putative interactions. In this sense, a scoring scheme F is a matrix of dimensions m × m. Let W denote the matrix obtained by applying μ on all pairs of rows of a given data matrix M, with w_{ij} ≥ 0, ∀i, j. The scores from a given scoring scheme and similarity measure can then be represented by a matrix C calculated from the Hadamard element-wise product of W and F, such that c_{ij} = w_{ij} · f_{ij}.

To unravel the linkage of genes we apply the relevance network algorithm (Algorithm 1) using different scoring schemes and measures. In principle, every measure can be combined with any scoring scheme, but we restrict our investigations to the most commonly used combinations.
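The common scoring step shared by all schemes below can be summarized in two lines (a Python/NumPy sketch of the C = W ∘ F construction; names are ours):

```python
import numpy as np

def score_and_threshold(W, F, tau):
    """Relevance-network scoring step: Hadamard product C = W * F,
    then keep the edges with c_ij >= tau (self-links excluded)."""
    C = W * F
    A = (C >= tau).astype(int)   # adjacency matrix of inferred links
    np.fill_diagonal(A, 0)
    return C, A
```

Each scheme below then amounts to a particular choice of the matrix F.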

IDentity (ID)

The identity scoring scheme corresponds to the basic relevance network [26] approach: Given a specific measure, a particular threshold τ is set in order to account for "true" links between elements in the network. In this case, F is the matrix of ones (f_{ij} = 1 for all i, j). Therefore, for a symmetric similarity measure, the identity scoring scheme cannot infer the directionality of interactions. We test the performance of all measures mentioned above in combination with this scoring scheme.

Context Likelihood of Relatedness (CLR)

As a second type of scoring scheme, often used for the reconstruction of GRN, we consider the CLR, which is an extension of the basic relevance network approach. Once weights w_{ij} have been assigned to each pair of genes according to the strength of interaction inferred from a particular measure, a score is derived, related to the empirical distribution of the values in W. Thus, the matrix F obtains the form

z_{i} = max( 0, (w_{ij} − w̄_{i}) / σ_{i} )   (51)

z_{j} = max( 0, (w_{ij} − w̄_{j}) / σ_{j} )   (52)

f_{ij} = sqrt( z_{i}² + z_{j}² )   (53)

where w̄_{i} (w̄_{j}) and σ_{i} (σ_{j}) are the mean and standard deviation of the empirical distribution of w_{ik} (w_{jk}), k = 1, ..., m. The links having c_{ij} < τ (with c_{ij} = w_{ij} · f_{ij} and τ a predefined threshold) are removed for the network reconstruction.

Here, we use the CLR as implemented in the R-package "minet" [57, 58], which uses either the simple mutual information or a squared correlation matrix (Pearson's, Spearman's or Kendall's) to measure the strength of interaction among genes. We note that the CLR algorithm cannot infer directionality from symmetric measures.
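The CLR background correction can be sketched as follows (Python/NumPy; a compact stand-in for the "minet" implementation used in this study, with our own function name):

```python
import numpy as np

def clr_scores(W):
    """CLR scoring: z-score each weight against the empirical
    distributions of its row and its column, clip negative z-scores
    to zero, and combine them as f_ij = sqrt(z_i^2 + z_j^2)."""
    W = np.asarray(W, float)
    mu_i, sd_i = W.mean(axis=1, keepdims=True), W.std(axis=1, keepdims=True)
    mu_j, sd_j = W.mean(axis=0, keepdims=True), W.std(axis=0, keepdims=True)
    z_i = np.clip((W - mu_i) / sd_i, 0, None)
    z_j = np.clip((W - mu_j) / sd_j, 0, None)
    return np.sqrt(z_i ** 2 + z_j ** 2)
```

A weight that stands out against both its row and its column background receives a high score, while unremarkable weights are suppressed.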

Algorithm for the Reconstruction of Accurate Cellular Networks (ARACNE)

Furthermore, we investigate the Algorithm for the Reconstruction of Accurate Cellular NEtworks, referred to as ARACNE, and include its performance in the current comparison study. ARACNE is based on the data processing inequality, which states that post-processing cannot increase the amount of information. Hence it follows that:

I( x_{i}; x_{k} ) ≤ min( I( x_{i}; x_{j} ), I( x_{j}; x_{k} ) )   (54)

when genes i and k are not directly linked but interact through gene j, where x_{i}, x_{j} and x_{k} are the expression time series of these genes. In this manner, the algorithm discriminates indirect links. ARACNE is a relevance network algorithm as illustrated in Algorithm 1 as well. First, weights w_{ij} (normalized to the interval [0, 1]) are assigned to each pair of nodes. Then the scoring scheme operates as follows: For each triplet of nodes, the edge having the lowest weight is removed (its weight is set to zero) if the difference between the two lowest weights is above a threshold τ_{d}, as that interaction is interpreted as indirect. In this manner, the matrix F obtains the form:

f_{ij} = 0 if, for some node k, min( w_{ik}, w_{jk} ) − w_{ij} > τ_{d}; f_{ij} = 1 otherwise   (55)

Moreover, ARACNE removes all edges satisfying c_{ij} < τ, where τ is a predefined threshold.

As for the CLR algorithm, we rely on the "minet" package, using the simple mutual information or a squared correlation matrix to determine the weights. By default, both thresholds are set to zero. Like CLR, ARACNE cannot distinguish the direction of a link obtained from a symmetric measure.
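The triplet-wise pruning can be sketched as follows (Python/NumPy; a simplified stand-in for the "minet" implementation, which applies all removals against the original weight matrix rather than sequentially):

```python
import numpy as np

def aracne_prune(W, tau_d=0.0):
    """ARACNE data-processing-inequality pruning: in every triplet of
    nodes the weakest of the three edges is assumed indirect and
    removed, provided its gap to the second-weakest exceeds tau_d."""
    W = np.asarray(W, float)
    F = np.ones_like(W)
    m = len(W)
    for i in range(m):
        for j in range(i + 1, m):
            for k in range(j + 1, m):
                edges = [(W[i, j], (i, j)), (W[i, k], (i, k)),
                         (W[j, k], (j, k))]
                edges.sort()  # ascending by weight
                if edges[1][0] - edges[0][0] > tau_d:
                    a, b = edges[0][1]
                    F[a, b] = F[b, a] = 0.0
    return W * F
```

For a chain of regulation 1 → 2 → 3, the weak "shortcut" edge between genes 1 and 3 is removed while the two direct edges survive.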

Maximum Relevance/minimum redundancy NETwork (MRNET)

As another example of a relevance network algorithm, we consider the Maximum Relevance/minimum redundancy NETwork (MRNET) [59]. This scoring scheme performs a series of supervised maximum relevance/minimum redundancy (MRMR) gene selection procedures, where the expression of each gene in turn plays the role of the target output y = x_{i}, with V = x\x_{i} being the set of the expression data of the input variables, and x the set of the expression levels of all genes. Given the set M of selected variables and pairwise weights w_{ij|M}, the criterion updates M by choosing the variable

x_{j}^{MRMR} = arg max_{x_{j} ∈ V\M} s_{j}   (56)

that maximizes the score

s_{j} = u_{j} − r_{j}   (57)

where r_{j} = (1/|M|) Σ_{x_{k} ∈ M} w_{jk} is a redundancy term, and u_{j} = w_{ji} is a relevance term. This scheme therefore assigns a higher rank to direct interactions, whereas indirect interactions (carrying redundant information with the direct ones) should receive a lower rank. Thus, the matrix F is defined as:

(58)

(59)

(60)

Finally, all edges whose score c_{ij} lies below a predefined threshold τ will be removed.

The implementation of the MRNET in the "minet" package, which we use in our study, assigns the weights based on the pairwise simple mutual information or a squared correlation between the time series of two genes (normalized to the largest value occurring among the pairs). This algorithm, too, is not able to infer directionality from symmetric measures.
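The greedy selection loop of Eqs. (56) and (57) can be sketched as follows (Python; a simplified directed-score illustration under our own naming, not the "minet" implementation, which additionally symmetrizes the two directed scores of each edge):

```python
def mrnet_scores(W):
    """Greedy MRMR sketch: for each target gene i, repeatedly select
    the candidate j maximizing relevance w_ji minus the mean
    redundancy to the already selected set M; the score at selection
    time (clipped at zero) becomes the edge score."""
    m = len(W)
    S = [[0.0] * m for _ in range(m)]
    for i in range(m):
        selected = []
        candidates = [j for j in range(m) if j != i]
        while candidates:
            def mrmr(j):
                red = (sum(W[j][k] for k in selected) / len(selected)
                       if selected else 0.0)
                return W[j][i] - red   # relevance minus redundancy
            best = max(candidates, key=mrmr)
            S[best][i] = max(mrmr(best), 0.0)
            selected.append(best)
            candidates.remove(best)
    return S
```

For a regulatory chain 1 → 2 → 3, the indirect pair (1, 3) is penalized by its redundancy with gene 2 and receives a zero score, while the direct edges keep their full relevance.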

Time Shift (TS)

In nonlinear time series analysis, shifting the time series is a common way to infer the directionality of causal relationships. As the driving system by definition has to act first, shifting its time series forward in time (relative to the time series of the response system) should increase the similarity of both time series. Comparing the values of a particular measure for different time shifts then gives a hint on the direction of the interaction. Thus, the time shift scoring scheme starts with a cubic spline interpolation of each pair of gene expression time series. Then the series of the second gene is shifted against that of the first gene. If x and y are two expression time series stored in the i^{th} and j^{th} rows of the data matrix M, and x̃ and ỹ are the related interpolated time series, we can then define the shifted time series as

(61)

and

(62)

where N is the length of the interpolated time series and N_{shift} is the assumed shift of ỹ versus x̃, with N_{shift} ∈ Z and N_{shift} ∈ [−0.1 · N, 0.1 · N]. Next, the measure μ is evaluated for all possible values of N_{shift}, resulting in a vector of values (entries corresponding to non-significant values of μ are set equal to 0). The scoring is now given by

(63)

(64)

(65)

If the largest significant value of the measure is obtained for a negative shift, the regulatory direction from the first to the second gene is kept, while the opposite direction is kept if the largest significant value is obtained for a positive shift. Both regulatory directions are kept if the maximum arises for a shift of zero, for multiple opposed shift values, or if no significant value exists. The scoring scheme aims at providing only a hint on the directionality, because the absolute values of the correlations calculated on the delayed time series are rather biased, as the data sets are quite short.

In the next step of Algorithm 1, the information regarding the directionality is combined with the weight of interaction inferred from a particular measure (c_{ij} = w_{ij} · f_{ij}). Hence, the weights for the unlikely direction are set to zero in order to break symmetries, and thus reduce the number of false positives. Finally, all edges with c_{ij} < τ are removed, where τ is a particular threshold.

We test this scoring scheme using the absolute values of the correlation coefficients μ_{P} (Pearson) and μ_{S} (Spearman) for pairs of the shifted expression series, where the significance level was set to α = 0.01 and only absolute correlation values larger than 0.9 have been taken into account. The choice of the measure to infer the weights in the first step of Algorithm 1 is independent of that and includes here the mean of sequence similarity and mutual information of symbols, as well as Spearman's and Pearson's correlation. Furthermore, this scoring scheme is applied in addition to (or after) another scoring scheme (e.g., ID, CLR, or AWE). It is important to note that, in contrast to the previously described modifications of the algorithm, the scoring scheme we propose here allows investigating the directionality even when symmetric measures are considered.

A novel scoring scheme - Asymmetric WEighting (AWE)

Most of the measures used to infer the degree of interaction between pairs of genes, such as correlations or the mutual information, are symmetric. Hence, when applied in symmetric algorithms, they are not able to unravel the regulatory dependences, since these measures do not distinguish the direction of the interaction. Thus, we introduce an asymmetric weighting based on topological aspects for the complete set of pairwise weights obtained from a particular measure, and implement it according to Algorithm 1. In particular, we compute a matrix of weights, where the columns represent the genes which are regulated, and the rows stand for the genes which regulate other genes. The scoring value is then calculated by dividing each row entry by the sum of the corresponding column values. The scoring scheme (and the corresponding matrix F) is defined by:

f_{ij} = 1 / Σ_{k=1}^{m} w_{kj}   (66)

Hence, the scores of all potential regulators of the j^{th} gene sum up to one: Σ_{i} c_{ij} = Σ_{i} w_{ij} · f_{ij} = 1. Here, how likely a gene is scored as regulating another one depends not only on the strength of its interactions, but also on their number.

Eventually, the edge is introduced if c_{ij} ≥ τ; otherwise it is omitted.

We test the asymmetric weighting on the matrix W inferred from the symbolic dynamics measures.
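The column normalization underlying AWE is a one-liner in practice; the following Python sketch (our own naming) computes the final scores c_ij = w_ij / Σ_k w_kj directly:

```python
def awe_scores(W):
    """Asymmetric weighting: divide each weight by its column sum, so
    that the scores of all potential regulators of gene j sum to one.
    Returns the combined scores c_ij = w_ij * f_ij of Eq. (66)."""
    m = len(W)
    col_sums = [sum(W[i][j] for i in range(m)) for j in range(m)]
    return [[W[i][j] / col_sums[j] if col_sums[j] else 0.0
             for j in range(m)] for i in range(m)]
```

Because each column is normalized separately, a candidate regulator competing with many other strong regulators of the same target receives a lower score, which is the symmetry-breaking effect exploited here.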

ROC analysis

In order to rank the performance of the different similarity measures and scoring schemes, we evaluate the extent to which each of them accurately reconstructs the underlying network of regulatory interactions. To this end, we use the receiver operating characteristic (ROC) analysis [60], as it provides indices to assess the reconstruction efficiency of all the measures and scoring schemes under study. The ROC analysis is a tool for visualizing, organizing, and selecting classifiers based on their performance in terms of a cost/benefit analysis. For this purpose, the ROC space is defined by the false positive rate, fpr, and the true positive rate, tpr, which depict the relative trade-offs between true positives tp (benefits) and false positives fp (costs). An overview of important quantities in ROC analysis is given in Table 3.

Table 3

true positives (tp): correctly identified true edges
false positives (fp): spurious edges
true negatives (tn): correctly identified zero edges
false negatives (fn): unrecognized true edges
positives: all true edges, p = tp + fn
negatives: all zero edges, n = tn + fp
false positive rate: part of the negatives set positive, fpr = fp/n
true positive rate: part of the positives set positive, tpr = tp/p
false negative rate: part of the positives set negative, fnr = fn/p
true negative rate: part of the negatives set negative, tnr = tn/n
recall (sensitivity): true positive rate, tpr
specificity: true negative rate, tnr
precision: positive predictive value, tp/(tp + fp)

Summary of important quantities in ROC analysis.

While discrete classifiers lead to just a single point in the ROC space, classifiers such as the similarity measures studied in this work produce probability values of how likely an instance belongs to a certain class. Here, the classification depends on a predefined threshold. The ROC curve is then produced by continuously tuning this threshold, and its shape can already be suggestive of the performance of the measures. However, a well-defined rating is not always possible "by eye". Therefore, different summary statistics are common, for example the area under the ROC curve (AUC(ROC)) or the Youden index (YOUDEN = max(tpr − fpr)) [61]. Another standard evaluation plot in the field of ROC analysis is the precision/recall graph (PvsR), which is based on the comparison between the true edges and the inferred ones. Hence, it highlights the precision of the reconstruction and does not suffer from the typically large number of false positives in a gene regulatory network reconstruction. We thus give the summary statistic using the area under the precision-recall curve (AUC(PvsR)) as well as under the ROC curve. An efficient implementation of the ROC analysis is provided by the R-package "ROCR" [62].
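The threshold sweep that produces the ROC curve and the Youden index can be sketched in a few lines (Python; a simplified stand-in for the "ROCR" implementation used in this study, without special handling of tied scores):

```python
def roc_points(scores, truth):
    """ROC curve by sweeping the threshold over all observed scores:
    each score value yields one (fpr, tpr) point, and the Youden
    index is max(tpr - fpr) over the curve."""
    pairs = sorted(zip(scores, truth), reverse=True)
    p = sum(truth)              # number of true edges
    n = len(truth) - p          # number of zero edges
    tp = fp = 0
    points = [(0.0, 0.0)]
    for s, t in pairs:
        if t:
            tp += 1
        else:
            fp += 1
        points.append((fp / n, tp / p))
    youden = max(tpr - fpr for fpr, tpr in points)
    return points, youden
```

A perfect ranking of the candidate edges traces the left and upper borders of the ROC space and reaches a Youden index of one.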

All ROC curves are evaluated with respect to the underlying GRN, which is a directed graph. As several of the scoring schemes/measures do not distinguish whether the regulation is directed from gene i to gene j or vice versa, some of the false positives follow from the missing information on the directionality. However, since the network under study is sparse, these additional false positives barely carry weight.

Declarations

Acknowledgements

The authors acknowledge the GoFORSYS project funded by the Federal Ministry of Education and Research, Grant No. 0313924.

Authors’ Affiliations

(1)

Interdisciplinary Center for Dynamics of Complex Systems, University of Potsdam

(2)

Potsdam Institute for Climate Impact Research (PIK)

(3)

Department of Physics, Humboldt University of Berlin

(4)

Systems Biology and Mathematical Modeling Group, Max Planck Institute for Molecular Plant Physiology

(5)

Institute of Biochemistry and Biology, University of Potsdam

(6)

Institute for Complex Systems and Mathematical Biology, University of Aberdeen

References

Levine M, Tjian R: Transcription regulation and animal diversity.Nature 2003,424(6945):147–151.PubMedView Article

Stolovitzky G, Monroe D, Califano A: Dialogue on Reverse-Engineering Assessment and Methods: The DREAM of High-throughput pathway inference.Annals of the New York Academy of Sciences 2007, 1115:1–22.PubMedView Article

Albert R, Othmer HG: The topology of the regulatory interactions predicts the expression pattern of the segment polarity genes in Drosophila melanogaster.Journal of Theoretical Biology 2003, 223:1–18.PubMedView Article

Ronen M, Rosenberg R, Shraiman BI, Alon U: Assigning numbers to the arrows: parameterizing a gene regulation network by using accurate expression kinetics.PNAS 2002,99(16):10555–10560.PubMedView Article

Luscombe NM, Babu MM, Yu H, Snyder M, Teichmann SA, Gerstein M: Genomic analysis of regulatory network dynamics reveals large topological changes.Nature 2004,431(7006):308–312.PubMedView Article

Davidson EH, Rast JP, Oliveri P, Ransick A, Calestani C, Yuh CH, Minokawa T, Amore G, Hinman V, Arenas-Mena C, Otim O, Brown CT, Livi CB, Lee PY, Revilla R, Rust AG, Pan Z, Schilstra MJ, Clarke PJ, Arnone MI, Rowen L, Cameron RA, McClay DR, Hood L, Bolouri H: A genomic regulatory network for development.Science 2002,295(5560):1669–1678.PubMedView Article

Lee TI, Rinaldi NJ, Robert F, Odom DT, Bar-Joseph Z, Gerber GK, Hannett NM, Harbison CT, Thompson CM, Simon I, Zeitlinger J, Jennings EG, Murray HL, Gordon DB, Ren B, Wyrick JJ, Tagne JB, Volkert TL, Fraenkel E, Gifford DK, Young RA: Transcriptional Regulatory Networks in Saccharomyces cerevisiae.Science 2002,298(5594):799–804.PubMedView Article

Shen-Orr SS, Milo R, Mangan S, Alon U: Network motifs in the transcriptional regulation network of Escherichia coli.Nature Genetics 2002, 31:64–68.PubMedView Article

Zhang Z, Liu C, Skogerbø G, Zhu X, Lu H, Chen L, Shi B, Zhang Y, Wang J, Wu T, Chen R: Dynamic Changes in Subgraph Preference Profiles of Crucial Transcription Factors.PLoS Computational Biology 2006,2(5):e47+.PubMedView Article

Papp B, Oliver S: Genome-wide analysis of context-dependence of regulatory networks.Genome Biology 2005,6(2):206.PubMedView Article

Bar-Joseph Z: Analyzing time series gene expression data.Bioinformatics 2004, 20:2493–2503.PubMedView Article

Wang X, Wu M, Li Z, Chan C: Short time-series microarray analysis: Methods and challenges.BMC Systems Biology 2008.,2(58):

Lin J, Keogh E, Lonardi S, Chiu B: A symbolic representation of time series, with implications for streaming algorithms.Proceedings of the ACM DMKD 2003, 20:2–11.

Marbach D, Prill RJ, Schaffter T, Mattiussi C, Floreano D, Stolovitzky G: Revealing strengths and weaknesses of methods for gene network inference.Proceedings of the National Academy of Sciences 2010,107(14):6286–6291.View Article

D'haeseleer P: How does gene expression clustering work?Nat Biotech 2005,23(12):1499–1501.View Article

Soranzo N, Bianconi G, Altafini C: Comparing association network algorithms for reverse engineering of large-scale gene regulatory networks: synthetic versus real data.Bioinformatics 2007,23(13):1640–1647.PubMedView Article

Toh H, Horimoto K: Inference of a genetic network by a combined approach of cluster analysis and graphical Gaussian modeling.Bioinformatics 2002,18(2):287–297.PubMedView Article

Reverter A, Chan EKF: Combining partial correlation and an information theory approach to the reversed engineering of gene co-expression networks.Bioinformatics 2008,24(21):2491–2497.PubMedView Article

Zampieri M, Soranzo N, Altafini C: Discerning static and causal interactions in genome-wide reverse engineering problems.Bioinformatics 2008,24(13):1510–1515.PubMedView Article

Opgen-Rhein R, Strimmer K: From correlation to causation networks: a simple approximate learning algorithm and its application to high-dimensional plant gene expression data.BMC Systems Biology 2007, 1:37.PubMedView Article

Li W: Mutual information functions versus correlation functions.Journal of Statistical Physics 1990,60(5):823–837.View Article

Baba K, Shibata R, Sibuya M: Partial correlation and conditional correlation as measures of conditional independence.Australian & New Zealand Journal of Statistics 2004,46(4):657–664.View Article

Wille A, Zimmermann P, Vranova E, Furholz A, Laule O, Bleuler S, Hennig L, Prelic A, von Rohr P, Thiele L, Zitzler E, Gruissem W, Buhlmann P: Sparse graphical Gaussian modeling of the isoprenoid gene network in Arabidopsis thaliana.Genome Biology 2004,5(11):R92.PubMedView Article

Steuer R, Kurths J, Daub CO, Weise J, Selbig J: The mutual information: Detecting and evaluating dependencies between variables.Bioinformatics 2002,18(suppl 2):S231–240.PubMedView Article

Liang KC, Wang X: Gene Regulatory Network Reconstruction Using Conditional Mutual Information.EURASIP Journal on Bioinformatics and Systems Biology 2008, 2008:253894.View Article

Butte AJ, Kohane IS: Mutual Information Relevance Networks: Functional Genomic Clustering Using Pairwise Entropy Measurements.Pacific Symposium on Biocomputing 2000, 5:415–426.

Margolin AA, Nemenman I, Wiggins C, Stolovitzky G, Califano A: On The Reconstruction of Interaction Networks with Applications to Transcriptional Regulation.To appear in Proc. NIPS Comp. Bio. Workshop, 2004, special issue of Bioinformatics 2004.

Smith VA, Jarvis ED, Hartemink AJ: Evaluating functional network inference using simulations of complex biological systems.Bioinformatics 2002,18(suppl 1):S216–224.PubMedView Article

Geier F, Timmer J, Fleck C: Reconstructing gene-regulatory networks from time series, knock-out data, and prior knowledge.BMC Systems Biology 2007, 1:11.PubMedView Article

Werhli AV, Grzegorczyk M, Husmeier D: Comparative evaluation of reverse engineering gene regulatory networks with relevance networks, graphical gaussian models and bayesian networks.Bioinformatics 2006,22(20):2523–2531.PubMedView Article

D'haeseleer P, Wen X, Fuhrman S, Somogyi R: Linear modeling of mRNA expression levels during CNS development and injury.PSB 99 On-Line Proceedings 1999, 41–52.

Guo S, Seth AK, Kendrick KM, Zhou C, Feng J: Partial Granger causality - Eliminating exogenous inputs and latent variables.Journal of Neuroscience Methods 2008, 172:79–93.PubMedView Article

Ding M, Chen Y, Bressler SL: Granger Causality: Basic Theory and Application to Neuroscience. In Handbook of Time Series Analysis. Edited by: Björn Schelter JT Matthias Winterhalder. Wiley InterScience; 2007:437–460.

Lozano AC, Abe N, Liu Y, Rosset S: Grouped graphical Granger modeling for gene expression regulatory networks discovery.Bioinformatics 2009,25(12):i110–118.PubMedView Article

Marwan N, Romano MC, Thiel M, Kurths J: Recurrence plots for the analysis of complex systems.Physics Reports 2007,438(5–6):237–329.View Article

Albert R: Scale-free networks in cell biology.J Cell Sci 2005,118(21):4947–4957.PubMedView Article

Ihaka R, Gentleman R: R version 2.9.2. 2009.

Van den Bulcke T, Van Leemput K, Naudts B, van Remortel P, Ma H, Verschoren A, De Moor B, Marchal K: SynTReN: a generator of synthetic gene expression data for design and analysis of structure learning algorithms.BMC Bioinformatics 2006, 7:43.PubMedView Article

Van den Bulcke T, Van Leemput K, Naudts B, van Remortel P, Ma H, Verschoren A, De Moor B, Marchal K: syntren generator. 2006.

Aach J, Church GM: Aligning gene expression time series with time warping algorithms.Bioinformatics 2001,17(6):495–508.PubMedView Article

Ferre F, Clote P: BTW: a web server for Boltzmann time warping of gene expression time series.Nucleic Acids Res 2006, (34 Web Server):W482-W485.

Sakoe H, Chiba S: Dynamic Programming Algorithm Optimization for Spoken Word Recognition.IEEE Transactions on Acoustics, Speech and Signal Processing 1978,ASSP-26(1):43–49.View Article

Velichko V, Zagoruyko N: Automatic recognition of 200 words.International Journal of Man-Machine Studies 1970,2(3):223–234.View Article

Caiani E, Porta A, Baselli G, Turiel M, Muzzupappa S, Pagani M, Malliani A, Cerutti S: Analysis of cardiac left-ventricular volume based on time warping averaging.Medical and Biological Engineering and Computing 2002,40(2):225–233.PubMedView Article

Giorgino T: Computing and Visualizing Dynamic Time Warping Alignments in R: The dtw Package.Journal of Statistical Software 2009,31(7):1–24.

Giorgino T: Dynamic Time Warping algorithms cran.r-project. 2010. [Package version 1.14–3]

Tormene P, Giorgino T, Quaglini S, Stefanelli M: Matching Incomplete Time Series with Dynamic Time Warping: An Algorithm and an Application to Post-Stroke Rehabilitation.Artificial Intelligence in Medicine 2009, 45:11–34.PubMedView Article

Frenzel S, Pompe B: Partial Mutual Information for Coupling Analysis of Multivariate Time Series.Phys Rev Lett 2007,99(20):204101.PubMedView Article

Palus M, Komarek V, Hrncir Z, Sterbova K: Synchronization as adjustment of information rates: Detection from bivariate time series.Phys Rev E 2001,63(4):046211.View Article

Pfaff B: VAR, SVAR and SVEC Models: Implementation Within R Package vars.Journal of Statistical Software 2008,27(4):1–32.

Pfaff B: Analysis of Integrated and Cointegrated Time Series with R. 2nd edition. New York: Springer; 2008. [ISBN 0–387–27960–1]View Article

Akaike H: A new look at the statistical model identification.Automatic Control, IEEE Transactions on 2003,19(6):716–723.View Article

Wessel N, Suhrbier A, Riedl M, Marwan N, Malberg H, Bretthauer G, Penzel T, Kurths J: Detection of time-delayed interactions in biosignals using symbolic coupling traces.EPL (Europhysics Letters) 2009, 87:10004.View Article

Meyer PE, Lafitte F, Bontempi G: minet: Mutual Information Network Inference cran. 2009. [R package version 2.0.0]

Meyer PE, Kontos K, Lafitte F, Bontempi G: Information-Theoretic Inference of Large Transcriptional Regulatory Networks.EURASIP Journal on Bioinformatics and Systems Biology 2007, 2007:9.View Article

Fawcett T: An introduction to ROC analysis.Pattern Recogn Lett 2006,27(8):861–874.View Article

Fluss R, Faraggi D, Reiser B: Estimation of the Youden Index and its Associated Cutoff Point.Biometrical Journal 2005,47(4):458–472.PubMedView Article

Sing T, Sander O, Beerenwinkel N, Lengauer T: ROCR: visualizing classifier performance in R.Bioinformatics 2005,21(20):3940–3941.PubMedView Article

Wen X, Fuhrman S, Michaels GS, Carr DB, Smith S, Barker JL, Somogyi R: Large-scale temporal gene expression mapping of central nervous system development.Proceedings of the National Academy of Sciences of the United States of America 1998, 95:334–339.PubMedView Article

Gibbons FD, Roth FP: Judging the Quality of Gene Expression-Based Clustering Methods Using Gene Annotation.Genome Research 2002,12(10):1574–1581.PubMedView Article

Eisen MB, Spellman PT, Brown PO, Botstein D: Cluster analysis and display of genome-wide expression patterns.Proceedings of the National Academy of Sciences of the United States of America 1998,95(25):14863–14868.PubMedView Article

D'haeseleer P, Wen X, Fuhrman S, Somogyi R: Mining the gene expression matrix: inferring gene relationships from large scale gene expression data.Proceedings of the second international workshop on Information processing in cell and tissues, Plenum Press 1998, 203–212.

Veiga D, Vicente F, Nicolas M, Vasconcelos AT: Predicting transcriptional regulatory interactions with artificial neural networks applied to E. coli multidrug resistance efflux pumps.BMC Microbiology 2008, 8:101.PubMedView Article

Mukhopadhyay ND, Chatterjee S: Causality and pathway search in microarray time series experiment.Bioinformatics 2007,23(4):442–449.PubMedView Article

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.