Netter: re-ranking gene network inference predictions using structural network properties
- Joeri Ruyssinck^{1, 3},
- Piet Demeester^{1, 3},
- Tom Dhaene^{1, 3} and
- Yvan Saeys^{2, 4}
https://doi.org/10.1186/s12859-016-0913-0
© Ruyssinck et al. 2016
Received: 14 July 2015
Accepted: 20 January 2016
Published: 9 February 2016
Abstract
Background
Many algorithms have been developed to infer the topology of gene regulatory networks from gene expression data. These methods typically produce a ranking of links between genes with associated confidence scores, after which a certain threshold is chosen to produce the inferred topology. However, the structural properties of the predicted network do not resemble those typical for a gene regulatory network, as most algorithms only take into account connections found in the data and do not include known graph properties in their inference process. This lowers the prediction accuracy of these methods, limiting their usability in practice.
Results
We propose a post-processing algorithm that is applicable to any confidence ranking of regulatory interactions obtained from a network inference method and that uses, inter alia, graphlets and several other graph-invariant properties to re-rank the links into a more accurate prediction. To demonstrate the potential of our approach, we re-rank predictions of six different state-of-the-art algorithms using three simple network properties as optimization criteria and show that Netter can improve the predictions made on both artificially generated data and the DREAM4 and DREAM5 benchmarks. Additionally, the DREAM5 E. coli community prediction inferred from real expression data is further improved. Furthermore, Netter compares favorably to other post-processing algorithms and is not restricted to correlation-like predictions. Lastly, we demonstrate that the performance increase is robust for a wide range of parameter settings. Netter is available at http://bioinformatics.intec.ugent.be.
Conclusions
Network inference from high-throughput data is a long-standing challenge. In this work, we present Netter, which can further refine network predictions based on a set of user-defined graph properties. Netter is a flexible system which can be applied in unison with any method producing a ranking from omics data. It can be tailored to specific prior knowledge by expert users but can also be applied in general use cases. In conclusion, we believe that Netter is a valuable second step in the network inference process to further increase the quality of predictions.
Keywords
Gene regulatory networks; Network inference; Graphlets; Gene expression data
Background
Network representations are widely used and vital in many areas of science and engineering. They serve both as an endpoint for users to structure, visualize and handle large amounts of data and as a starting point for various algorithms that use networks for automated hypothesis generation. In systems biology, one of the long-standing challenges is the reverse engineering of the cell's transcriptome in the form of gene regulatory networks (GRNs). This has proven to be a daunting task, as the number of genes in the network vastly exceeds the number of available measurements. As a result, many computational methods have been developed [1–3] which try to overcome this challenge using various strategies. These methods differ not only in their accuracy to infer the network but also strike different balances between scalability and complexity [4, 5]. In a recent community-wide effort, a large blind assessment of unsupervised inference methods using microarray gene expression data was conducted [6]. This study concluded that no single inference method performs best across all data sets but that, in contrast, integrating several techniques into an ensemble 'community' prediction led to robust, top performance. In a collaborative effort between the DREAM organizers, the GenePattern team [7] and individual contributors, the web service GP-DREAM was set up to run and combine current state-of-the-art methods. To date, five methods are offered: ANOVerence [8], CLR [9], GENIE3 [10], the Inferelator [11] and TIGRESS [12].
Common inference strategies of GRN inference algorithms include the calculation of local pairwise measures between genes or the transformation of the problem into independent regression subproblems to derive connections between genes. It is clear that using these schemes, the algorithm is unaware that the goal is to infer an actual network topology. Therefore, the global network structure cannot influence the inference process. For example, relevance networks [13] are created by calculating the mutual information between each possible gene interaction pair. A high mutual information score between a gene pair is then considered putative evidence of a regulatory interaction. It is well known that this technique predicts a large number of false positive interactions due to indirect effects. Two widely-used methods, CLR [9] and ARACNE [14], acknowledge this weakness and implement strategies to mitigate this problem by incorporating a more global network context. ARACNE uses the Data Processing Inequality on every triplet of genes to filter out the weakest connection. CLR builds a background model for each pair of interacting genes and transforms the mutual information score into its likelihood within the network context. WGCNA [15] also incorporates a global network context in the network reconstruction step of the algorithm. Pairwise correlations are raised to the optimal power to maximally fit a scale-free topology property of the constructed network. In a more general context, Network Deconvolution [16] was proposed as a post-processing technique to infer direct effects from an observed correlation matrix containing both direct and indirect effects. Similarly, a post-processing method named Silencer [17] uses a matrix transformation to turn the correlation matrix into a highly discriminative 'silenced' matrix, which enhances only the terms associated with direct causal links.
However, to date almost none of the state-of-the-art algorithms make use of general or specific structural knowledge of gene regulatory networks to guide the inference process. In contrast, such structural properties of GRNs, and of biological networks in general, have been studied extensively in the literature [18, 19], introducing concepts such as modularity, hub nodes and scale-free biological networks. The topology of a specific GRN is highly dependent on the experimental conditions and the type of cell [20], although general topological properties have been reported. It has been described that both prokaryotic and eukaryotic transcription networks exhibit an approximately scale-free out-degree distribution, while the in-degree distribution follows a restricted exponential function [21]. Furthermore, the concept of relatively isolated sets of co-expressed genes under specific conditions, called modules, has been introduced, as discussed in [22]. Topological analyses of GRNs have also revealed the existence of network motifs [23], recurrent subgraphs which appear more frequently in the larger network than would be expected in randomized networks. The existence of such network motifs and their frequency of occurrence inevitably has an impact on the global network structure. Finally, prior knowledge about the topology of the specific GRN of the cell at hand can be available, in the simplest scenario in the form of already known regulatory links extracted from the literature. We believe that the current non-inclusion of such known structural properties in the inference process leads to predictions that do not achieve their full potential. Furthermore, these predictions are often characterized by different graph-invariant measures than networks described in the literature. Although it is hard to completely transform these predictions into more realistic networks, it is clear that the inclusion of structural knowledge is desirable and will benefit prediction accuracy.
However, including such complex and diverse topological information directly in the inference process of existing algorithms is non-trivial and offers little room for modification.
Instead, in this work we propose and validate a post-processing approach that aims to be easily modifiable and extendable. The resulting algorithm, named Netter, takes as input any ranking of regulatory links sorted by decreasing confidence, obtained by a network inference algorithm of choice. It then re-ranks the links based on graph-invariant properties, effectively penalizing regulatory links which are less likely to be true given the inferred network structure and boosting others. It is not the goal of this work to put forth structural properties of GRNs; instead, we wish to offer a flexible system in which the end user can decide which structural properties are to be included or emphasized. Expert users can easily design and include novel structural properties and consequently tune Netter to a specific use case. However, to validate our approach, we also introduce and motivate three simple structural properties and default settings which can be used in a more general setting in which specific knowledge of the GRN is unavailable. Using these proposed structural properties and settings, we demonstrate that Netter improves the predictions of six state-of-the-art inference methods on a wide array of synthetic and real gene expression datasets, including the DREAM4 and DREAM5 benchmarks. Netter also slightly improves the DREAM5 community prediction of the E. coli network inferred from real expression data. We compare and discuss the performance of Netter with other techniques that aim to incorporate the global network context or post-process GRN predictions. Next, we further investigate and discuss the characteristics of Netter's improvements. Lastly, we show that the performance gain of Netter is robust with regard to its parameter settings and the exact definition of its structural properties.
Methods
Input, problem definition and output
Most GRN inference methods return an absolute ranking of all possible edges, sorted by decreasing confidence that the link is a true regulatory interaction. This ranking is later transformed into a network representation by selecting a threshold determined by the algorithm or end user. Netter takes as input such an absolute ranking of potential gene regulatory links. The ranking can be incomplete; however, no regulatory links will be added, as Netter is strictly a re-ranking approach. No further assumptions are made about which algorithms or data sources were used. Although we focus here on unsupervised network inference methods which use microarray expression data to infer network topologies, Netter is generally applicable to any method producing a ranking from omics data. In practice, it only makes sense to re-rank the top of the ranking: networks consisting of only 100 genes already produce a complete ranking of 9900 potential regulatory links (excluding self-regulatory interactions). Therefore, in the first step of the algorithm (Fig. 1 a), the top x most confident links of the prediction are extracted, where x is a user-chosen value. The algorithm works on these links only, assigning them new ranks, whereas the remaining links keep their original ranks and cannot influence the decision process.
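To make these sizes concrete, a minimal Python sketch of the complete link space and the head/tail split; the gene names and the cutoff x = 750 are illustrative, not part of Netter's implementation:

```python
from itertools import permutations

def complete_link_space(genes):
    # every directed regulatory link, excluding self-regulation
    return [(a, b) for a, b in permutations(genes, 2)]

def split_ranking(ranking, x):
    # Netter re-ranks only the top-x most confident links;
    # the tail keeps its original ranks untouched
    return ranking[:x], ranking[x:]

genes = ["G%d" % i for i in range(100)]
links = complete_link_space(genes)   # 100 * 99 = 9900 candidate links
head, tail = split_ranking(links, 750)
```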
Formulation as an optimization problem
The re-ranking is cast as the minimization of a cost function f over candidate rankings l:

f(l) = s(l) + α · Δ(l, l_{0})

Here α is a global balance factor, s is a structural cost function giving a score to a ranking based on structural properties and Δ is a divergence function quantifying how much a ranking differs from the original ranking l_{0}. Intuitively, f strikes a balance between modifying the original ranking to obtain better structural properties and remaining true to the original ranking (Fig. 1 d).
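As an illustration of this trade-off, a small Python sketch of the cost function; the squared-rank-displacement form of Δ is an assumption made for illustration and is not necessarily the paper's exact definition:

```python
def divergence(new_order, orig_order):
    # Δ: one plausible choice — sum of squared rank displacements
    # (the paper's exact definition may differ)
    orig_pos = {link: i for i, link in enumerate(orig_order)}
    return sum((i - orig_pos[link]) ** 2 for i, link in enumerate(new_order))

def total_cost(new_order, orig_order, structural_cost, alpha=1e-5):
    # f(l) = s(l) + alpha * Δ(l, l0): alpha balances structural quality
    # against fidelity to the original ranking
    return structural_cost(new_order) + alpha * divergence(new_order, orig_order)
```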
Simulated annealing optimization
This optimization problem is solved using a simulated annealing approach. In a single step of the optimization process, we create a new ranking l′ by randomly moving γ links up or down the ranking by θ positions. γ is sampled uniformly from [1, Γ] in each step, while θ is sampled uniformly for each link from [−Θ, +Θ], with Γ and Θ being user-set integer values. In practice, the optimization process thus explores both minor and substantial changes to the ranking. The newly generated ranking l′ is accepted as the new ranking if f(l′) < f(l), or otherwise with a probability of e^{−(f(l′)−f(l))/T}, with T being the current temperature of the annealing process, as proposed by [24]. We use a traditional exponential cooling schedule in which the temperature is repeatedly lowered by a constant factor μ after each iteration. To avoid manual tuning of the annealing parameters for each network, Netter automatically adjusts the parameters and restarts if the acceptance ratio of bad mutations during a starting window (e.g. the first 10 % of iterations) is not within user-defined limits.
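The annealing loop described above can be sketched as follows; the mutation operator and the toy default parameters are illustrative simplifications, not Netter's actual implementation:

```python
import math
import random

def mutate(order, rng, gamma_max=50, theta_max=70):
    # move gamma ~ U[1, Γ] links, each by theta ~ U[-Θ, +Θ] positions
    order = list(order)
    for _ in range(rng.randint(1, gamma_max)):
        i = rng.randrange(len(order))
        j = min(max(i + rng.randint(-theta_max, theta_max), 0), len(order) - 1)
        order.insert(j, order.pop(i))
    return order

def anneal(order, cost, t0=1.0, mu=0.999, iters=20000, seed=0):
    # exponential cooling: T is multiplied by mu each iteration;
    # worse rankings are accepted with probability e^{-(Δf)/T}
    rng = random.Random(seed)
    current, f_cur, t = list(order), cost(order), t0
    for _ in range(iters):
        cand = mutate(current, rng)
        f_new = cost(cand)
        if f_new < f_cur or rng.random() < math.exp(-(f_new - f_cur) / t):
            current, f_cur = cand, f_new
        t *= mu
    return current
```

On a toy cost function (distance of each item from its sorted position), the loop behaves as expected: it steadily lowers the cost as the temperature drops.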
Assigning a structural cost function and a divergence cost function to a ranking
Structural property functions
Each structural property value y is mapped to a penalty by a v-shaped function of the form s_{struct}(y) = |a·y + b|. The parameters a and b can be specific to each s_{struct}, and the default values can be found in Additional files 1–3. In the Results section we discuss how changes in the relative weighting coefficients and in the exact shape of the individual structural penalty functions (by varying a and b) influence the performance of Netter.
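Assuming the v-shaped default mapping f(y) = |a·y + b| used later in the robustness analysis, the per-property penalty can be written in one line; a and b stand for the per-penalty defaults of Additional files 1–3:

```python
def v_penalty(y, a, b):
    # s_struct(y) = |a*y + b|: zero cost at y = -b/a,
    # cost growing linearly as y moves away on either side
    return abs(a * y + b)
```

For example, with a = 2 and b = −1 the penalty is zero when the structural property y equals 0.5 and rises symmetrically around that point.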
Graphlet-based structural penalty
Regulatory gene limiting penalty
When a list of known regulatory genes is not available, as in the DREAM4 benchmark, predictions tend to include outgoing links originating from almost every node in the inferred network. This is due to indirect effects: if, for example, a gene A regulates both genes B and C in the same process, most algorithms will also predict a regulatory link between B and C. Furthermore, in the absence of interventional or time-series data, the direction of a regulatory link is hard to predict, resulting in a large number of bi-directional links, as both directed edges will usually be close to each other in the ranking. In reality, it is improbable that every gene in the network has an outgoing link, as an outgoing link implies that the gene has a regulatory function. Although the graphlet-based structural penalty partially addresses these problems, we created a simple regulatory gene limiting penalty which defines y as the number of nodes in the network with at least one outgoing link relative to the total number of genes in the network. Parameters a and b were set such that a high cost is associated with networks in which a high percentage of nodes have outgoing links. Additional file 2 describes the exact default mapping and a more detailed performance stability analysis.
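The quantity y used by this penalty follows directly from an edge list; a minimal sketch (function and variable names are illustrative):

```python
def regulator_fraction(links, n_genes):
    # y = (#genes with at least one outgoing link) / (total #genes):
    # the value the regulatory gene limiting penalty maps to a cost
    regulators = {src for src, _ in links}
    return len(regulators) / n_genes
```

For instance, a three-gene network in which only A and B have outgoing links yields y = 2/3.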
Anti-dominating penalty
In some cases after re-ranking, we noticed that a regulatory gene and its target genes would completely dominate the top of the prediction list, leaving no room for other modules. This behavior is unwanted, as one wants to discover different areas of the network. This penalty counters the problem by penalizing rankings in which a disproportionately large fraction of links originates from the same gene. The anti-dominating penalty defines y as the ratio between the maximum number of links originating from a single gene in the network and the total number of links in the network. Additional file 3 describes the default mapping and a stability analysis of this penalty.
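The corresponding y is equally simple to compute; a minimal sketch (naming is illustrative):

```python
from collections import Counter

def dominance_ratio(links):
    # y = (max #links from any single gene) / (total #links):
    # the value the anti-dominating penalty maps to a cost
    out_degree = Counter(src for src, _ in links)
    return max(out_degree.values()) / len(links)
```

With four links of which three originate from gene A, the ratio is 3/4, signalling that A dominates the subnetwork.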
Computational aspects of Netter
The large search space of possible rankings makes it necessary to perform many steps to minimize the optimization function. Therefore, it is critical that a single step is performed as efficiently as possible. Two computationally expensive processes can be distinguished in a single iteration. First, the new candidate ranking l′ created from l needs to be transformed into new subnetworks g_{i}. Second, structural penalties need to be calculated on the newly created subnetworks, some of which, e.g. the graphlet count, can be computationally expensive. Executing both processes from scratch would result in an unacceptable runtime. However, because the new ranking l′ is similar to the current ranking l, an incremental approach can be used. Netter therefore uses an incremental update scheme to keep track of the subnetworks and can efficiently revert in case the new ranking is rejected. All penalty functions, including the graphlet enumerator, have been designed to work incrementally, and new structural penalties should also be implemented in this setting. Each optimization procedure in Netter is 'embarrassingly parallel'; Netter will therefore assign new optimization runs to each idle, available core. To give an estimate of the execution time of Netter: a typical set-up as described further, including 100 independent optimization runs, took 5 single-core hours on an Intel i3 CPU M350 clocked at 2.27 GHz with 8.00 GB of RAM and a 64-bit OS. However, the running time is highly dependent on the parameter settings and the list of included penalties. Furthermore, the number of independent runs (here 100) is conservative and can be lowered further if computing power is an issue. We discuss this in more detail in the Results and discussion section.
Selected network inference methods
To test Netter we performed a large number of experiments using a variety of network inference methods. We selected six network inference methods in total, with varying complexity and performance. In addition, for the DREAM5 networks, the community prediction networks as created and published in [27] were added. Of the six selected network inference methods, three are based on mutual information scores: CLR [9], ARACNE [14] and BC3NET [28]. The three other methods use machine learning techniques to infer the network: GENIE3 [10], NIMEFI [29] and TIGRESS [12].
Selected data sets and evaluation measures
Overview of the datasets used in the performance tests
| Name | Networks | Reg. genes | Genes | Samples | Type |
|---|---|---|---|---|---|
| DREAM4 | 5 | 100 | 100 | 100 | Artif. |
| DREAM5 artif. | 1 | 195 | 1643 | 805 | Artif. |
| DREAM5 E. coli | 1 | 334 | 4511 | 805 | Real |
| SynTReN-100 | 5 | 100 | 100 | 100 | Artif. |
| SynTReN-150 | 5 | 150 | 150 | 150 | Artif. |
| GNW-200 | 15 | 200 | 200 | 200 | Artif. |
Results and discussion
To interpret the performance results of Netter, it is important to note that, from a theoretical point of view, a post-processing approach can never improve every network prediction it is applied on. If that were the case, repeatedly applying the algorithm to the outcome of a previous re-ranking would eventually result in the perfect ranking. An analogy can be found in lossless compression, where one also tries to find general properties to obtain a good compression ratio for a large set of probable items sampled from the population. In the specific case of Netter, each consecutive re-ranking uses less information of the original prediction to guide the re-ranking process and should therefore be avoided. Furthermore, for a specific network it is hard to explain why a loss in prediction accuracy occurred. A possible reason is that the initial prediction was of insufficient quality to guide the optimization process in the right direction. It is known that these network inference algorithms achieve low accuracy and that different algorithms can produce different rankings even when they obtain similar performance metric scores [6, 29]. Further on in this section, we briefly discuss the performance gain of Netter with regard to the initial prediction accuracy. In the following subsections, we present the results of performance tests, compare Netter to other similar techniques, discuss the effect of successive applications of Netter and comprehensively investigate the influence of the various parameter settings and of the choice of the structural cost function definitions.
Performance tests
We ran Netter on all networks and all method predictions using the following settings. The cutoff value x was set to the first 750 links, or to the number of links with a non-zero score if fewer edges received one. The mutation parameters Γ and Θ were set to 70 links and 50 positions, respectively. The subnetwork size parameter n was set to 25 and the associated coefficients π_{i} were set to 0.5i, for i = 1…(number of subnetworks). The annealing scheme allowed an acceptance ratio of bad mutations of approximately 10 % after the first 3000 of 30,000 iterations. The optimization process was performed 100 times for each prediction before averaging, and all penalty functions discussed in the previous section were included. The relative weighting parameter was set to 25 for the graphlet penalty, 2 for the regulatory gene limiting penalty and 75 for the anti-dominating penalty; α was set to 10^{−5}. The influence of the individual penalty cost function shape, the relative weighting coefficients and other parameters on the performance is discussed in the next section. Each re-ranking experiment was repeated three times and, due to the ensemble approach of Netter, the resulting rankings were almost identical.
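The averaging over the 100 independent optimization runs can be done by mean rank per link; a minimal sketch (Netter's exact aggregation may differ in implementation detail):

```python
def average_rank(rankings):
    # aggregate independent optimization runs by sorting links
    # on their mean position across all runs
    links = rankings[0]
    mean_pos = {
        link: sum(run.index(link) for run in rankings) / len(rankings)
        for link in links
    }
    return sorted(links, key=lambda link: mean_pos[link])
```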
AUPR before and after re-ranking predictions of the DREAM5 dataset
| Net. | GENIE3 Orig. | GENIE3 New | NIMEFI Orig. | NIMEFI New | TIGRESS Orig. | TIGRESS New | Community Orig. | Community New |
|---|---|---|---|---|---|---|---|---|
| Artif. | 0.94 | 0.96 | 0.81 | 0.82 | 0.92 | 0.90 | 0.91 | 0.88 |
| E. coli | 0.15 | 0.21 | 0.18 | 0.21 | 0.20 | 0.16 | 0.13 | 0.15 |
Focusing on DREAM5, Table 2 shows an overview of the AUPR of GENIE3, NIMEFI, TIGRESS and the community network. We did not re-rank the predictions of the mutual information methods, as these were outperformed by the former in the DREAM5 challenge. The table shows that the original AUPR scores on the artificial network are already quite high and Netter is unable to improve the predictions consistently. However, on the E. coli network inferred from real expression data, Netter substantially improves the predictions of GENIE3 and NIMEFI, while the TIGRESS performance decreases. Netter is also able to slightly improve the community network as produced by the DREAM5 challenge.
Comparing Netter to similar techniques
In this subsection, we compare Netter with other post-processing approaches for GRN inference predictions and with other algorithms that incorporate global network information in their inference process. We are not aware of any other methods that use structural properties of the output network to guide the inference prediction on a large scale. However, as discussed in the introduction, both CLR and ARACNE can be considered extensions of relevance networks which correct the mutual information (MI) scores using a more global network context. Network Deconvolution and the Silencer, on the other hand, are post-processing techniques that attempt to separate direct causal effects from indirect effects and have been applied to GRN inference. As mentioned in the introduction, WGCNA raises a pairwise correlation matrix to a certain power to maximally fit the scale-free topology measure. Although the idea is similar to that of Netter, the two methods cannot be compared directly: WGCNA only changes the edge weight values but does not change the ranking of edges. As a baseline for our comparison, we infer networks by calculating MI scores for each pair of genes. Next, we also infer the networks using ARACNE and CLR. For each network, we post-process these three predictions using Netter, Network Deconvolution and the Silencer. This results in twelve different predictions per network. We use the same full dataset as in the performance tests. Again we use the AUROC and AUPR scores as evaluation metrics; however, we do not adopt the pre-processing procedure described in the 'Selected data sets and evaluation measures' subsection, as in this test we are interested in comparisons between methods as opposed to relative gains.
The figure shows that the ARACNE method has a higher AUPR score than the MI network, but at the cost of a decreased AUROC score. This is caused by ARACNE setting a large number of interactions to 0, a more aggressive approach than most other algorithms. CLR has both higher AUROC and AUPR scores than the original MI prediction. These performance gains are to be expected, as both algorithms are widely adopted and have been successfully applied to various use cases. Among the post-processing algorithms, Netter is clearly the most successful one. Applying Netter results in a substantial improvement in the AUPR score of the ARACNE and CLR predictions, as also shown in the previous subsections, and a small improvement in the AUPR score of the MI network. The smaller gain for the MI network can be explained by the lower accuracy of the initial prediction, as we discuss further in the following subsection. Netter does not seem to influence the AUROC score of the MI, ARACNE or CLR predictions. This is because Netter is a conservative approach, only re-ranking the first x links (in casu 750) and allowing no new links to enter the prediction. Applying Network Deconvolution results in a decrease in AUROC and AUPR in all but a few cases for the MI prediction. It has no effect on the ARACNE predictions and in general lowers the prediction accuracy of CLR. The Silencer is able to correct the loss in AUROC score originating from ARACNE but does not have a positive effect in the other cases. The performance of the Silencer has been subject to controversy [34]. Concluding, we believe that Netter compares favorably to other post-processing approaches. Furthermore, it has the advantage that it is not limited to correlation-like measures but can be applied to rankings or ranking ensembles of different algorithms.
Characteristics of improvement with regard to the initial prediction accuracy
Successive applications of Netter
Parameter and structure cost function stability analysis
The large number of parameters which can be set in Netter raises the questions of how to tune these parameters and how strongly they influence the prediction accuracy. Furthermore, one needs to be sure that a small change in the definition of the structural functions does not lead to a large change in the re-ranking accuracy. To address the first question, Netter is equipped with a logger system which can track, among other things, the prediction accuracy, the total cost function, the individual penalty functions and the accept/revert ratio of the simulated annealing process at desired intervals.
Stability tests of α, n, π _{ i } and the relative weights of the structural penalties
| Default: n=25, π_{i}=0.5i | n=50, π_{i}=0.5i | n=75, π_{i}=0.5i | n=50, π_{i}=0.25i | n=50, π_{i}=3i |
|---|---|---|---|---|
| 0.41 | 0.41 | 0.41 | 0.41 | 0.42 |

| Default: α=10^{−5} | α=10^{−2} | α=10^{−3} | α=10^{−4} | α=10^{−6} |
|---|---|---|---|---|
| 0.41 | 0.37 | 0.40 | 0.41 | 0.42 |

| Default: g4=2.0 | g4=0.0 | g4=1.0 | g4=5.0 | g4=10.0 |
|---|---|---|---|---|
| 0.41 | 0.39 | 0.41 | 0.41 | 0.41 |

| Default: r=25.0 | r=0.0 | r=1.0 | r=15.0 | r=35.0 |
|---|---|---|---|---|
| 0.41 | 0.36 | 0.38 | 0.41 | 0.41 |

| Default: a=25.0 | a=0.0 | a=50.0 | a=75.0 | a=100.0 |
|---|---|---|---|---|
| 0.41 | 0.40 | 0.41 | 0.41 | 0.41 |
Influence of the number of optimization runs on the convergence of Netter
Influence of the subnetwork size n and coefficients π _{ i }
When calculating the structural cost function, the ranking is divided into subnetworks of increasing size. The size is determined by the parameter n, and the impact of a single subnetwork g_{i} on the total structural cost function is determined by the associated coefficient π_{i}. Increasing the subnetwork size decreases the computation time, as there are fewer subnetworks whose structural properties need to be tracked. On the other hand, a larger subnetwork size leaves fewer options to structurally differentiate between links, possibly resulting in a lower accuracy. Table 3 shows the results for varying n and π_{i}. The performance is stable with regard to the choice of the coefficients π_{i} and the subnetwork size n over a wide range of values. Concluding, we recommend setting n to a small value (e.g. 25, or 1/30 of x) to allow for maximum differentiation while keeping the computational demands reasonable; in any case, the choice of n and π_{i} is not crucial to the performance.
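Assuming the subnetworks are nested prefixes of the ranking (g_{i} = the top i·n links; the exact division scheme may differ from Netter's), the weighted structural cost can be sketched as:

```python
def structural_cost(ranking, penalty, n=25, pi=lambda i: 0.5 * i):
    # evaluate the penalty on nested prefix subnetworks g_i consisting
    # of the top i*n links, weighting subnetwork i by its coefficient pi(i)
    k = len(ranking) // n
    return sum(pi(i) * penalty(ranking[: i * n]) for i in range(1, k + 1))
```

With a 50-link ranking, n = 25 and the default coefficients π_{i} = 0.5i, the toy penalty `len` gives 0.5·25 + 1.0·50 = 62.5, illustrating how each nested subnetwork contributes to the total.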
Influence of varying the global balance factor α
Probably the most important parameter in the re-ranking algorithm is α, which determines the trade-off between the divergence cost and the structural cost of a ranking. If this parameter is set too high, the algorithm will not allow any changes to be made to the original ranking; if it is set too low, the re-ranking process will not use the original ranking to guide the optimization process. We vary this parameter by setting the values 10^{−i}, with i=2…6. The results are shown in Table 3. For high values of α, the algorithm allows only small changes to the ranking, resulting in an accuracy between that of the original prediction and the maximum accuracy which can be achieved after re-ranking. Interestingly, the accuracy is stable for the values i=4…6. We believe this is due to the ensemble approach, in which we average over several optimization processes.
Influence of varying the relative weight of an individual structural penalty function
The impact of the individual penalty functions on the total structural cost function can be adjusted by changing the associated weight of each penalty function. These weights are typically set by running the algorithm several times with some initial settings and by tracking the individual penalty scores using the logging system. The influence of these parameters is shown in Table 3. For all three penalty functions, a performance loss can be seen if the penalty weight is set to zero, so that the penalty is not included in the structural cost function. The weight of all three penalties is shown to be robust over a wide range of values, meaning that a small change in a weight does not have a big effect on the outcome. As a rule of thumb, we suggest using the logger system to set the weights to values such that all penalties which the user designed and included contribute roughly equally to the decrease of the overall penalty function. With such settings, the exact weights of the individual penalty functions have little effect on the accuracy increase of the re-ranking process.
Influence of the individual structure cost penalty mappings
To test the robustness, we replaced the default v-shaped function (f(y)=|a·y+b|) of each structural penalty in a 4 by 4 grid search. b was set such that the function had zero cost at different values of the structural property y, and for each setting of b, four different slopes were selected by varying a. Additional files 1–3 contain the exact values of a and b, a visualization of the functions and the performance metrics of the networks re-ranked by Netter using these settings. For the graphlet-based and the regulatory gene limiting penalty, the decrease in average AUPR over the 15 networks was at most 0.02 and corresponded to the setting in which the penalty function was moved furthest from the original intercept. We therefore conclude that these penalties are stable over a wide range of possible mapping definitions. The anti-dominating penalty showed a slightly faster decrease in AUPR as the intersection with the x-axis was moved further to the right; in the extreme case the performance dropped from 0.41 to 0.38. The performance loss is slightly more pronounced because, unlike for the former penalties, the penalty cost associated with y-values left of the intercept has no meaning, as it does not make sense to discourage rankings which explore different regions of the network. Concluding, the exact shape of all three structural penalty functions is robust, and performance decreases only slowly as the mappings are moved away from their defaults. The individual network re-ranking scores can be found in Additional files 1–3.
Further exploration of the impact of the structural penalty function definition
AUPR results of re-ranking without penalty functions for a set number of iterations
| Initial | Default re-rank. | 300 iter. | 3000 iter. | 30,000 iter. |
| --- | --- | --- | --- | --- |
| 0.34 | 0.41 | 0.34 (±0.01) | 0.31 (±0.02) | 0.21 (±0.02) |
This experiment was repeated 10 times; the standard deviation between runs is shown in brackets. The table shows that performance drops as the number of iterations increases. This is expected, as the initial prediction is more confident about the top of the ranking, which consequently contains more true positive links. Randomly shuffling the ranking eventually leads to a uniform distribution of the true positive links, resulting in a worse AUPR score and an AUROC score of 0.5. Due to the ensemble nature of Netter, the standard deviation of the performance loss between the final rankings remains small, although the individual rankings diverge more from each other than in the default re-ranking case.
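The expected AUROC of 0.5 for a fully shuffled ranking can be checked with a small simulation; the `auroc` helper below is a sketch using the pair-counting formulation (fraction of positive/negative pairs with the positive ranked higher):

```python
import random

def auroc(ranking, positives):
    """AUROC as the fraction of (positive, negative) pairs ranked correctly."""
    pos_seen = 0    # positives encountered so far (ranked above the current link)
    concordant = 0  # pairs in which the positive outranks the negative
    for link in ranking:
        if link in positives:
            pos_seen += 1
        else:
            concordant += pos_seen
    n_pos = pos_seen
    n_neg = len(ranking) - n_pos
    return concordant / (n_pos * n_neg)

# Toy network: 100 candidate links, of which links 0-9 are the true positives.
links = list(range(100))
true_links = set(range(10))
perfect = sorted(links, key=lambda l: l not in true_links)  # positives first
```

A perfect ranking scores 1.0, while repeatedly shuffling `links` and averaging `auroc` over the shuffles converges to 0.5, matching the argument above.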
AUPR results of re-ranking with the inverse of the default structural properties
| Initial | Default re-rank. | Inv. graphlet | Inv. regulatory | Both inv. |
| --- | --- | --- | --- | --- |
| 0.34 | 0.41 | 0.32 | 0.30 | 0.26 |
Even in the extreme case in which both inverted functions, which are clearly not typical for a gene regulatory network, are used, the accuracy of the prediction remains higher than that of the randomly shuffled ranking. This is due to the divergence cost function, which attempts to keep the new ranking as close as possible to the original. If only one inverted function is used, the performance loss is less pronounced, suggesting that other structural properties can counter the effects of ill-chosen penalty functions to some extent. Overall, we believe that the performance gain is promising when well-motivated structural properties are used, and that the gain is robust to the exact transformation of a structural property into a penalty function.
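The divergence term that anchors the re-ranking to the original prediction could, for instance, penalize squared rank displacement; this is a hypothetical formulation for illustration, and Netter's exact definition may differ:

```python
def divergence_cost(new_ranking, original_ranking):
    """Penalize how far each link has moved from its original rank position."""
    original_rank = {link: i for i, link in enumerate(original_ranking)}
    return sum((i - original_rank[link]) ** 2
               for i, link in enumerate(new_ranking))
```

The quadratic growth means large displacements dominate the cost, which is what keeps even an ill-chosen structural penalty from scattering the ranking arbitrarily far from the original.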
Conclusions
In this work we presented Netter, a novel post-processing algorithm for gene regulatory network predictions. Our algorithm re-ranks a sorted list of predicted regulatory interactions using known structural properties of such networks. It does so by defining an optimization problem in which we minimize a weighted sum of desired structural properties and a regularization term penalizing divergence from the original prediction. This optimization problem is solved several times using simulated annealing, after which the obtained networks are aggregated by average rank to obtain the final output. We offer a flexible system in which desired structural properties can be developed and included. Expert users can tune the system to include specific prior knowledge, but we show that three suggested, more general penalty functions already yield a large accuracy gain on benchmark and artificial data. Using these settings, Netter outperforms other post-processing methods such as the Silencer and Network Deconvolution. Although our method is heavily parameterized, we have shown that the performance increase is robust over a wide range of parameter values and structural cost penalty functions. Furthermore, Netter especially improves the top of the ranking, making our method appealing for practical use. Finally, we have shown that Netter can further improve the DREAM5 community prediction of the E. coli network inferred from real expression data.
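The overall procedure summarized above (minimize a structural-plus-divergence cost with simulated annealing, then aggregate the ensemble of runs by average rank) can be sketched as follows; all names, the move proposal, and the cooling schedule are illustrative assumptions rather than Netter's actual implementation:

```python
import math
import random

def anneal_rerank(original, cost, n_iter=30_000, t0=1.0, cooling=0.999):
    """One simulated-annealing run: propose swaps, accept via the Metropolis rule."""
    current = list(original)
    current_cost = cost(current)
    temperature = t0
    for _ in range(n_iter):
        i, j = random.sample(range(len(current)), 2)
        candidate = list(current)
        candidate[i], candidate[j] = candidate[j], candidate[i]
        candidate_cost = cost(candidate)
        delta = candidate_cost - current_cost
        # Always accept improvements; accept worse moves with probability
        # exp(-delta / T), which shrinks as the temperature cools.
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            current, current_cost = candidate, candidate_cost
        temperature *= cooling
    return current

def average_rank(rankings):
    """Aggregate an ensemble of re-ranked lists by their average rank."""
    mean_rank = {link: sum(r.index(link) for r in rankings) / len(rankings)
                 for link in rankings[0]}
    return sorted(rankings[0], key=mean_rank.get)
```

In this sketch, `cost` would be the weighted sum of structural penalties plus the divergence regularizer; running `anneal_rerank` several times and passing the results to `average_rank` mirrors the ensemble aggregation step.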
Notes
Declarations
Acknowledgements
This work was supported by the Ghent University Multidisciplinary Research Partnership Bioinformatics: from nucleotides to networks.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.