Markov chain Monte Carlo for active module identification problem

Background: Integrative network methods are commonly used for interpretation of high-throughput experimental biological data: transcriptomics, proteomics, metabolomics and others. One of the common approaches is finding a connected subnetwork of a global interaction network that best encompasses significant individual changes in the data and represents a so-called active module. Usually, methods implementing this approach find a single subnetwork and thus solve a hard classification problem for vertices. This subnetwork inherently contains erroneous vertices, while no instrument is provided to estimate the confidence level of any particular vertex inclusion. To address this issue, in the current study we consider the active module problem as a soft classification problem.

Results: We propose a method to estimate the probability of each vertex to belong to the active module based on Markov chain Monte Carlo (MCMC) subnetwork sampling. As an example of the performance of our method on real data, we run it on two gene expression datasets. For the first, many-replicate expression dataset we show that the proposed approach is consistent with an existing resampling-based method. On the second dataset the jackknife resampling method is inapplicable due to the small number of biological replicates, but the MCMC method can be run and shows high classification performance.

Conclusions: The proposed method makes it possible to estimate the probability that an individual vertex belongs to the active module, as well as the false discovery rate (FDR) for a given set of vertices. Given the estimated probabilities, a connected subgraph can be provided in a consistent manner for any given FDR level: no vertex can disappear when the FDR level is relaxed. We show, on both simulated and real datasets, that the proposed method has good computational performance and high classification accuracy.

… for the resampled data. The calculated support values can then be used to distinguish robust signals from noise in the resulting module. However, this method is limited to experiments with a large number of replicates and cannot be applied to small-scale experiments.
The way different module confidence thresholds are handled is another problem of binary methods like BioNet. For example, a module of higher confidence could contain vertices not present in a module of lower confidence. Since in a real-case scenario several confidence thresholds are considered, such inconsistencies impede the interpretation of the results. To address this issue, we previously [12] considered the problem of connectivity-preserving vertex ranking: a ranking with the constraints that 1) each prefix of the ranking should induce a connected subgraph, and 2) smaller induced subgraphs correspond to more confident modules. In that paper we proposed a semi-heuristic ranking method that performed better (as measured by the area under the ROC curve, AUC ROC) compared to both baseline vertex ranking by individual input significance and ranking from multiple BioNet runs with different thresholds.
In the current study we consider the active module identification problem as a soft vertex classification problem. In this case, instead of providing a hard classification of vertices into either being in the module or not, we estimate the probability of each vertex to belong to the active module. To estimate these probabilities we propose a method based on Markov chain Monte Carlo (MCMC) module sampling from the posterior distribution. First, by producing the vertex probabilities, this approach directly resolves the question of individual vertex confidence. Moreover, we show that the estimated probabilities can be used to calculate the expected AUC ROC of any ranking, and thus the problem of finding the best connectivity-preserving ranking can be defined constructively. We prove that this problem is NP-complete, but show that in practice it can be solved accurately and efficiently using a heuristic algorithm. Finally, we show that the method can achieve high classification accuracy on simulated data and works well on real data.

Formalization of the active module problem in terms of soft classification
For the sake of clarity, in this paper we formulate the active module problem in terms of protein-protein interaction networks together with gene expression data. We formalize the protein-protein interaction network as a graph G, where vertices correspond to protein-encoding genes, and two vertices are linked by an edge if the corresponding proteins interact. Gene expression data are given for sets of samples for two biological conditions of interest (e.g. control and treatment), so that differential expression p-values can be calculated for each gene. While for some genes the null hypothesis of having zero expression change between conditions is true, and so the corresponding p-values are uniformly distributed on [0, 1], p-values for "interesting" genes that exhibit a difference in expression tend to be closer to zero. According to [8], those interesting genes form a connected subgraph in G, which is called an active (or functional) module.

Beta-uniform mixture model
As shown in [9,13], the distribution of all p-values can be approximated by the so-called beta-uniform mixture (BUM) distribution, where the beta component corresponds to a signal in the data, and the uniform component corresponds to noise. Let us recall that the beta distribution β(a, b) has support [0, 1] and is defined by its density f(x; a, b) = x^(a−1) (1 − x)^(b−1) / B(a, b), where B(a, b) is the Beta function. The BUM distribution is a mixture of the uniform and beta β(a, 1) distributions and is defined by its density f(x) = λ + (1 − λ) a x^(a−1), where λ is the mixture weight of the uniform component and a is the shape parameter of the beta distribution. We assign a weight w(v) to each vertex v of the graph G equal to the p-value assigned to the corresponding gene. Thus, in our model we have: a connected graph G on n = |G| vertices, its connected subgraph M, a family of independent random variables W_v, v ∈ V(G) \ V(M), uniformly distributed on [0, 1], and a family of independent random variables W_v, v ∈ V(M), with β(a, 1) distribution.
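As an illustration, the BUM density and a maximum-likelihood fit of its parameters can be sketched in a few lines. This is a minimal pure-Python sketch: `fit_bum` uses a coarse grid search instead of the numerical optimizer of [14], and all function names here are ours, not from any published implementation.

```python
import math
import random

def bum_density(x, lam, a):
    # f(x) = lam + (1 - lam) * a * x**(a - 1): uniform noise plus beta(a, 1) signal
    return lam + (1.0 - lam) * a * x ** (a - 1.0)

def fit_bum(pvalues, grid=30):
    # Coarse grid search over (lam, a) maximizing the log-likelihood.
    best_lam, best_a, best_ll = None, None, -math.inf
    for i in range(1, grid):
        lam = i / grid
        for j in range(1, grid):
            a = j / grid
            ll = sum(math.log(bum_density(p, lam, a)) for p in pvalues)
            if ll > best_ll:
                best_lam, best_a, best_ll = lam, a, ll
    return best_lam, best_a

# Simulated p-values: ~70% uniform noise, ~30% beta(0.2, 1) signal
# (beta(a, 1) is sampled by inverse CDF: u**(1/a)).
random.seed(1)
pvals = [random.random() if random.random() < 0.7 else random.random() ** 5.0
         for _ in range(1000)]
lam_hat, a_hat = fit_bum(pvals)  # should land near lam = 0.7, a = 0.2
```

In practice the grid search would be replaced by a proper optimizer, but the sketch shows how λ directly estimates the noise fraction and hence the module order via |M| = (1 − λ)|G|.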

MCMC approach
Our goal is to find out which vertices are likely to belong to the active module M.

Problem 1 (Soft Classification Active Module Problem, SCAMP) Given a connected graph G and vertex weights w_v ∈ [0, 1], find the probability P(v ∈ M | W = w) for each vertex v to belong to the module M.
We solve this problem in the following way. We generate a large sample S of random subgraphs S from the conditional distribution P(S = M | W = w) using the Metropolis-Hastings algorithm (see "Methods" section). While the probability P(S = M | W = w) of each individual subgraph is small and the multiplicity of each subgraph in S is low, the probabilities of individual vertices to belong to the module are cumulative statistics and show robust behavior. The same holds for the probabilities P(V ⊂ M | W = w), where V is a relatively small set of vertices.
The benefits of the soft classification approach are: 1) it allows one to estimate the level of confidence that a particular gene is differentially expressed, which is the probability that the corresponding vertex belongs to the active module; 2) it allows one to analyze alternative pathways by studying probabilities of the form P(V ⊂ M | W = w); 3) for any set of genes V reported as differentially expressed, our method allows one to compute the false discovery rate (FDR) as 1 − (1/|V|) Σ_{v∈V} P(v ∈ M | W = w); 4) it provides a vertex ranking that maximizes the expected area under the ROC curve (AUC ROC) (see "Features of soft classification solution" section); 5) it allows one to heuristically find a connectivity preserving vertex ranking that maximizes the expected AUC ROC (see "Connectivity preserving ranking" section).
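The inclusion probabilities and the FDR of a reported vertex set are straightforward to compute once subgraph samples are available. A pure-Python sketch with a made-up toy sample (function names are ours):

```python
from collections import Counter

def vertex_probabilities(samples):
    # Estimate P(v in M | W = w) as the fraction of sampled subgraphs containing v.
    counts = Counter(v for s in samples for v in s)
    n = len(samples)
    return {v: c / n for v, c in counts.items()}

def fdr(vertex_set, probs):
    # FDR of reporting vertex_set: 1 - (1/|V|) * sum of inclusion probabilities.
    return 1.0 - sum(probs.get(v, 0.0) for v in vertex_set) / len(vertex_set)

# Toy sample of four subgraphs over vertices 1..4.
samples = [{1, 2}, {1, 2}, {1, 3}, {1, 4}]
probs = vertex_probabilities(samples)  # {1: 1.0, 2: 0.5, 3: 0.25, 4: 0.25}
# Reporting {1, 2} as the module gives FDR = 1 - (1.0 + 0.5)/2 = 0.25
```

In practice `samples` would come from the MCMC sampler described in the next section rather than be written by hand.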

Markov chain Monte Carlo based method
We solve Problem 1 in the following way: we sample a set S of random subgraphs S from the conditional distribution P(S = M | W = w) using the Markov chain Monte Carlo (MCMC) approach, and estimate P(v ∈ M | W = w) as the fraction of sampled subgraphs that contain v. First, we estimate the beta-uniform mixture parameters a and λ = 1 − |M|/|G| with a maximum-likelihood estimator [14].
For MCMC sampling we implement the Metropolis-Hastings algorithm [15]. It starts at a random subgraph S_0 of order k = |M| = (1 − λ)|G|. On each step i we choose a candidate subgraph S′ by removing a vertex v− from S_i and adding another vertex v+ from the neighborhood of S_i (by the neighborhood of a subgraph S we mean the set of all vertices of G at distance 1 from S). We note that the candidate subgraph S′ always has the same number of vertices k as S_0. The proposal probability is Q(S′ | S_i) = 1 / (k · |N(S_i)|), where N(S_i) denotes the neighborhood of S_i. We set the acceptance probability in the Metropolis-Hastings algorithm to min(1, [P(S′ = M | W = w) Q(S_i | S′)] / [P(S_i = M | W = w) Q(S′ | S_i)]), where P(S′ = M | W = w) = 0 as soon as S′ is not connected. For any connected subgraph S of G, the value P(S = M | W = w) can be expressed in terms of the probability density p of the absolutely continuous random vector W. The fraction p(w | S′ = M) / p(w | S_i = M) equals L(S′) / L(S_i), where L(S) is the likelihood of the subgraph S. In our case the density of the weight of a module vertex is a w^(a−1) and that of a non-module vertex is 1, so L(S) = Π_{v ∈ V(S)} a w_v^(a−1). Assuming that the prior distribution of choosing the module P(S = M) is uniform¹ on the set of connected subgraphs S of the same order |M|, the acceptance ratio reduces to (w_{v+} / w_{v−})^(a−1) · |N(S_i)| / |N(S′)|. The proposed MCMC algorithm can visit all possible subgraphs S, and so it converges to the distribution P(S = M | W = w).

Algorithm 1: Metropolis-Hastings algorithm
Input: a connected graph G and vertex weights w_v ∈ R+; k, the number of vertices in the module M
Output: a random subgraph S_N sampled from the conditional distribution P(S = M | W = w)

S_0 ← a random connected subgraph on k = |M| vertices;
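A runnable sketch of this fixed-order sampler may look as follows. It is pure Python, assumes the graph is given as an adjacency dictionary, and all names (`mcmc_sample`, `is_connected`, `neighborhood`) are ours; the acceptance ratio combines the likelihood ratio (w_{v+}/w_{v−})^(a−1) with the proposal ratio |N(S_i)|/|N(S′)|.

```python
import random

def is_connected(vertices, adj):
    # Depth-first connectivity check on the subgraph induced by `vertices`.
    vs = set(vertices)
    if not vs:
        return False
    start = next(iter(vs))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w in vs and w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(vs)

def neighborhood(sub, adj):
    # All vertices of G at distance 1 from the subgraph `sub`.
    return {w for u in sub for w in adj[u]} - set(sub)

def mcmc_sample(adj, weights, k, a, n_iter, seed=0):
    # Metropolis-Hastings over connected k-vertex subgraphs;
    # likelihood L(S) = prod_{v in S} a * w_v**(a-1), proposal Q(S'|S) = 1/(k*|N(S)|).
    rng = random.Random(seed)
    s = {rng.choice(list(adj))}
    while len(s) < k:  # grow a random connected starting subgraph S_0
        s.add(rng.choice(sorted(neighborhood(s, adj))))
    for _ in range(n_iter):
        nb = neighborhood(s, adj)
        v_out = rng.choice(sorted(s))
        v_in = rng.choice(sorted(nb))
        cand = (s - {v_out}) | {v_in}
        if not is_connected(cand, adj):
            continue  # P(S' = M | W = w) = 0 for disconnected candidates
        # acceptance ratio: likelihood ratio times proposal ratio |N(S_i)|/|N(S')|
        ratio = (weights[v_in] / weights[v_out]) ** (a - 1.0)
        ratio *= len(nb) / len(neighborhood(cand, adj))
        if rng.random() < min(1.0, ratio):
            s = cand
    return s

# Tiny demo on a path graph 0-1-2-3: vertices 1 and 2 have the smallest p-values.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
weights = {0: 0.9, 1: 0.01, 2: 0.02, 3: 0.8}
module = mcmc_sample(adj, weights, k=2, a=0.2, n_iter=300, seed=1)
```

Since a − 1 < 0, small p-values yield large likelihood factors, so the chain concentrates on connected subgraphs covering the most significant vertices.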

Heuristic approach to arbitrary module order
Since on real data the estimated order of an active module tends to be too large (1 − λ is up to 0.5), we adjust our method to allow the number of vertices in the subgraph to change during the MCMC process. In this approach, on each step of the process one can either add one vertex to the subgraph or remove one vertex from it. In order to provide subgraphs of biologically relevant size, we penalize each additional vertex by a factor a τ^(a−1), where τ is a confidence threshold as described in [9]. More formally, this means that we consider a prior distribution on subgraphs such that P(S = M) / P(S+ = M) = a τ^(a−1), where S+ is the induced subgraph on the vertices V(S) ∪ {v+}. Thus our heuristic approach is very similar to Algorithm 1, but with proposals that can grow or shrink the subgraph by one vertex.

Algorithm 2: Metropolis-Hastings algorithm for subgraphs with an arbitrary number of vertices
Input: a connected graph G and vertex weights w_v ∈ R+; threshold τ
Output: a random subgraph S_N sampled from the conditional distribution

For estimating the probabilities P(v ∈ M | W = w) we need to choose a set S of subgraph samples. Here we consider two ways of doing this. For both we need to estimate the mixing time T of the algorithm: the number of Markov chain iterations such that the distribution of S_T approximates the target distribution well. We note that theoretically the Markov chain converges to the desired distribution since it is ergodic: it is recurrent (the number of states is finite), aperiodic (there is a positive probability that the process stays at the same state), and irreducible (by construction it can reach any subgraph from any other subgraph). In practice, the mixing time depends on multiple parameters, including the order of the graph G. The first way to choose S is to perform a number of independent MCMC runs of T iterations each and add each S_T to S. Here, all samples in S are independent, which can be used to calculate the accuracy of the vertex probability estimates.
Another way consists of doing one long run of MCMC and putting all S_i for i > T into S. Although consecutive samples are not independent, the probabilities estimated this way converge to the true probabilities given a sufficiently long series (see "Experiments on simulated data" section for a discussion of the convergence time on the simulated data).
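Under the penalized prior, the posterior ratio for adding a single vertex with weight w_{v+} reduces to the likelihood gain a w^(a−1) divided by the prior penalty a τ^(a−1), i.e. (w_{v+}/τ)^(a−1), ignoring the proposal-probability factor. A tiny sketch (the helper name is ours) makes the role of τ as a soft significance cutoff explicit:

```python
def add_vertex_posterior_ratio(w_new, a, tau):
    # Likelihood gain a * w_new**(a-1) divided by prior penalty a * tau**(a-1).
    # Since a - 1 < 0: vertices with p-value below tau are favored, above are penalized.
    return (w_new / tau) ** (a - 1.0)

# With the DLBCL parameters a = 0.18, tau = 1e-7:
r_low = add_vertex_posterior_ratio(1e-9, 0.18, 1e-7)   # > 1, addition favored
r_high = add_vertex_posterior_ratio(1e-3, 0.18, 1e-7)  # < 1, addition penalized
```

A vertex whose p-value equals τ is exactly neutral, which is why τ behaves as a confidence threshold on individual vertices.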

Features of soft classification solution
Having the set S of subgraphs sampled from the conditional distribution, one can easily answer the following questions. First, one can estimate for each vertex v_i the probability p_i that it belongs to the active module. In the case when the module order is not fixed, we can also estimate the modified probabilities p̃_i that take the varying module order into account.

Second, for any Boolean-valued function A on subgraphs, one can estimate the probability P(A(M) | W = w) as the fraction of samples S ∈ S for which A(S) holds.

Note that if each S from S was generated with an independent MCMC run, then each A(S) can be considered the result of a Bernoulli trial. Thus, all such probabilities can be estimated with a mean square error of the order p(1 − p)/|S|, where p = P(A(M) | W = w). We also show how the probabilities of each vertex to belong to the active module are related to the expected AUC ROC of a vertex ranking. For a vertex ranking v_1, v_2, ..., v_n, where n = |G|, and a module M of order m, the ROC curve is a step-curve in the square [0, 1] × [0, 1] which starts at (0, 0) and on the i-th step goes either up by 1/m (if v_i ∈ M) or right by 1/(n − m) (if v_i ∉ M). The AUC ROC shows how accurate the classification is.
We prove the following lemma:

Lemma 1 Let n be the number of vertices in G, m be the number of vertices in the module M, v_1, v_2, ..., v_n be some vertex ranking, and p_i be the probabilities p_i = P(v_i ∈ M | W = w). Then the expected value of the AUC ROC is equal to

E(AUC ROC) = (Σ_{i=1}^{n} (n − i) p_i − m(m − 1)/2) / (m(n − m)).

Proof See Section S1 in Appendix.
Lemma 1 implies that the vertex ranking with the largest expected value of the AUC ROC is the ranking by descending order of p_i (for any particular module order m). If the number m of vertices in the module is not known, the expected AUC ROC takes the same form with p_i replaced by the modified probabilities p̃_i, where E stands for the expected value over the module order. We note that the probabilities p̃_i can be estimated based on the sample S or approximated by p_i.
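To make this concrete: counting correctly ordered (module, non-module) pairs gives the closed form E(AUC ROC) = (Σ_i (n − i) p_i − m(m − 1)/2) / (m(n − m)), which is linear in the p_i. The sketch below (function names are ours) cross-checks this formula against a direct pair count on a deterministic membership vector, where the probabilities are 0 or 1 and the two computations must coincide.

```python
def expected_auc(ps, m):
    # E(AUC ROC) for ranking v_1..v_n with inclusion probabilities ps[i-1]
    # and module order m: (sum_i (n - i) * p_i - m*(m-1)/2) / (m * (n - m)).
    n = len(ps)
    s = sum((n - i) * p for i, p in enumerate(ps, start=1))
    return (s - m * (m - 1) / 2) / (m * (n - m))

def auc_from_labels(labels):
    # Direct AUC ROC: fraction of (module, non-module) pairs ranked correctly.
    n, m = len(labels), sum(labels)
    good = sum(1 for i in range(n) for j in range(i + 1, n)
               if labels[i] == 1 and labels[j] == 0)
    return good / (m * (n - m))

# On a deterministic membership vector the two computations must agree.
labels = [1, 0, 1, 1, 0, 0]
assert abs(expected_auc([float(x) for x in labels], sum(labels))
           - auc_from_labels(labels)) < 1e-12
```

Because the formula is linear in p_i with fixed coefficients (n − i), sorting vertices by descending p_i maximizes it, which is exactly the optimal unconstrained ranking mentioned above.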

Connectivity preserving ranking
While we provide the best solution for the soft classification problem in terms of the expected AUC ROC, one may be interested in a solution to the hard classification problem as well. Our method is easily transformed into a hard classification method. Namely, our goal is to define a module M(q) for any FDR level q in a consistent manner. That is, we do not allow a vertex to be included in a module with a small FDR level but excluded from a module with a larger FDR level (formally speaking, we demand that M(q_1) ⊂ M(q_2) as soon as q_1 < q_2). Any such family of modules M(q) corresponds to a connectivity preserving ranking.

Definition 1 For a graph G with vertices v_i, a connectivity preserving ranking is a ranking of the v_i such that for any k-prefix v_1, v_2, ..., v_k the induced graph on this set of vertices is connected.
We want to choose the best connectivity preserving ranking in terms of the expected value of the AUC ROC. Since the expected AUC ROC depends only on the p_i (or their modified versions p̃_i), we can formulate the following problem.

Problem 2 (Optimal Connectivity Preserving Ranking, OCPR) Given a graph G and the list of its vertices v_i, each equipped with a probability p_i, find the connectivity preserving ranking maximizing the expected AUC ROC.
Lemma 2 The OCPR problem is NP-hard.

Proof See Section S2 in Appendix.
As Problem 2 is NP-hard, here we provide a heuristic algorithm to solve it. On the k-th step of the algorithm we have an integer number r_k and a connected subgraph G_k of G (where r_1 = n and G_1 = G), and we define a connected subgraph G_{k+1} in the following way. For each vertex v we define a subgraph G(v)_{k+1} as the largest connected component of G_k \ {v}. Among these, G_{k+1} is chosen as a subgraph with the largest total vertex probability, and H = G_k \ G_{k+1} denotes the set of discarded vertices. Then we assign the rank r_k to all vertices in the subgraph H. The next rank r_{k+1} is then defined as r_k − |H|.
Each step requires O(n²) time, so the total running time is O(n³). On our real data example, the running time was 13 seconds on an Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz (implemented in C++).
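A simplified variant of this peeling idea is easy to prototype: repeatedly delete a minimum-probability vertex whose removal keeps the remaining subgraph connected, assigning ranks from worst to best. This pure-Python sketch is one plausible greedy strategy, not the exact algorithm above, and the names are ours:

```python
def is_connected(vertices, adj):
    # Depth-first connectivity check on the subgraph induced by `vertices`.
    vs = set(vertices)
    if not vs:
        return False
    start = next(iter(vs))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w in vs and w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(vs)

def cp_ranking(adj, probs):
    # Greedy connectivity preserving ranking (simplified sketch): peel off a
    # lowest-probability non-cut vertex at each step; every prefix of the
    # returned order induces a connected subgraph by construction.
    remaining = set(adj)
    peeled = []
    while len(remaining) > 1:
        cands = [v for v in sorted(remaining)
                 if is_connected(remaining - {v}, adj)]
        v = min(cands, key=lambda u: probs[u])
        remaining.remove(v)
        peeled.append(v)
    peeled.append(remaining.pop())
    peeled.reverse()  # best-ranked vertex first
    return peeled
```

On a star graph whose center has a low probability, the center is still ranked second: the connectivity constraint forces it in early, which is exactly the behavior Definition 1 demands.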

Baseline ranking methods
We compared our approach with three other ranking methods on simulated data in "Experiments on simulated data" section. The first method ranks vertices by the ascending order of their input weights w_v. In the second method vertices are ranked by the descending order of the number of occurrences in the modules found by running the BioNet method with 20 different significance thresholds. The thresholds are selected to be distributed at equal intervals between the maximum and minimum vertex log-likelihoods. The third method is a semi-heuristic ranking method from [12] developed for finding a good connectivity preserving ranking. Briefly, it first finds the maximum-likelihood connected subgraph and recursively ranks the sets of vertices inside and outside of that subgraph. The recursion step also involves finding a maximum-likelihood connected subgraph, but with a constraint ensuring consistent connectivity with the already defined ranking.
Note that we do not compare our method with the jackknife resampling method [11] on simulated data as it requires simulating gene expression data, which can induce artificial biases in the evaluation. Instead, we compare our results with the results of this method on real data (see "Experimental results on real data" section).

Experiments on simulated data
First, in order to show that our implementation is correct, we consider a toy example for which the desired probabilities can be computed directly. We generated a random graph G on 30 vertices and 65 edges. Then we chose an active module M on 9 vertices randomly and generated the weights from the beta distribution β(0.2, 1) for the vertices in M and from the uniform distribution for all other vertices. For such a toy example we can both compute the probabilities P(v ∈ M | W = w) directly from (1) and estimate them as P̂(v ∈ M | W = w) using our MCMC approach. We ran 10^6 iterations of the Metropolis-Hastings algorithm. The estimates approximated the actual probabilities very accurately, with a root-mean-square error of 2·10^−3. We also note that the ranking of vertices based on the estimates P̂(v ∈ M | W = w) was the same as the ranking based on the true probabilities. For both actual and estimated probabilities the AUC ROC is equal to 0.92.
Second, we considered MCMC convergence on a real-world protein-protein interaction graph. We used a graph with 2,034 vertices and 8,399 edges as constructed in [9] for a diffuse large B-cell lymphoma dataset. We chose an active module uniformly at random from connected subgraphs on 200 vertices. Weights of the vertices in the active module were generated from the beta distribution β(0.25, 1). We considered the behavior of log-likelihood values for samples during one MCMC run (Fig. 1, green line). This plot shows that the log-likelihood value stabilizes after about 25,000 iterations. Thus, we can estimate the MCMC mixing time T as 25,000 for this case. We also checked that 25,000 iterations is sufficient for estimating vertex probabilities. For different values of T we calculated AUC values for rankings based on 1,000 independent runs of MCMC for T iterations (Fig. 1, red line). The results show that 25,000 iterations is indeed enough to achieve high AUC values, and the saturation phase begins even earlier. Lastly, we compared rankings for one long MCMC run and for 1,000 independent samples (Fig. 1, blue line). For the long run we estimated probabilities using all generated MCMC samples except the first 25,000. It can be seen that AUC values saturate after about 50,000 total MCMC iterations, which justifies the usage of one long MCMC run. Practically, this means that good probability estimates can be achieved very quickly, given that 100,000 MCMC iterations took about one minute on a laptop.
Finally, we compared our MCMC-based connectivity preserving ranking method with three other ranking methods (see "Baseline ranking methods" section) on simulated data. Here we considered random instances of sizes 100, 500 and 2,034, with 50 instances of each size. For the cases with 100 and 500 vertices, scale-free graphs were randomly generated for each instance; for 2,034 vertices the previously constructed graph from the BioNet example was used. Then we generated active modules in two steps: 1) the number of vertices in a module was uniformly selected from 5% to 25% of the original graph order, and 2) a module was chosen uniformly at random from all connected subgraphs of the selected order. Vertex p-values were generated from the beta-uniform mixture distribution with values of the parameter a selected from the interval [0.01, 0.5]. The AUC measures for rankings produced by the four tested methods are shown in Fig. 2. For all values of a our approach shows significantly better performance compared to all three baseline methods.

Fig. 2 Ranking AUC values for simulated instances for graphs on 100, 500 and 2,034 vertices with different parameters of the beta-uniform mixture. The proposed MCMC method is compared with the following methods: ranking by input p-values (left), semi-heuristic ranking from [12] (middle) and BioNet-based ranking (right). One arrow corresponds to one experiment. The head of each arrow points to the AUC score of the MCMC method, while the tail indicates the AUC score of the corresponding alternative method. Thus, upward arrows indicate instances in which the MCMC method has a higher AUC score. Dots indicate instances for which the results are equal. Color depends on which method has the better AUC

Experimental results on real data
Similarly to the BioNet paper [9], we applied our method to the diffuse large B-cell lymphoma dataset and the protein-protein interaction graph constructed there. The p-values in the diffuse large B-cell lymphoma (DLBCL) dataset were the result of a differential expression t-test between the two tumor subgroups: germinal center B-cell-like (GCB) DLBCL and activated B-cell-like (ABC) DLBCL. The estimated BUM distribution parameters were λ = 0.48 and a = 0.18. As described in the "Heuristic approach to arbitrary module order" section, we penalized the addition of a new vertex to the module using the confidence threshold τ = 10^−7.
The obtained module is shown in Fig. 3. The module shows the prefix of the ranking with FDR = 0.25. Additionally, we highlighted submodules at stricter FDR values: 0.15 and 0.05. Note that, by construction, a stricter module is a subgraph of a less strict one. The genes in the modules are mostly known to be associated with cancer. For example, the genes BCL2, BMF, CASP3, PTK2 and WEE1 are involved in the cell apoptosis pathway. Some other genes, like LYN or KCNA3, are associated with proliferation and signalling in hematopoietic cells. Additionally, we compared our result to the one obtained by the jackknife resampling procedure as described in [11], with 100 resamples and the same threshold of τ = 10^−7. Resampling support values are consistent with MCMC-based probabilities (see Appendix).

Finally, we applied our method to hypoxia-induced gene expression changes in neural stem cells (GSE80070) [16]. There we calculated differential expression between normoxia and hypoxia conditions in cortex primary cells and used differential expression p-values as an input to our algorithm. The input graph is a protein-protein interaction graph derived from the InWeb network [17]. After removing proteins with more than 250 connections, selecting only the expressed genes, and selecting the largest connected component, we obtained a graph of 8,807 vertices and 103,933 edges. As a result of our method we obtained a ranking, the top-vertices subgraph of which is shown in Fig. 4. The module includes many known hypoxia-related genes, such as genes from the glycolysis pathway: TPI1, PGM2, LDHA, and, in particular, HIF1A, a known master regulator of the cellular response to hypoxia, appearing at the core of the module.
For this dataset, known hypoxia-related gene sets can be used as an approximation of the ground truth. Formally, we use a union of the REACTOME_CELLULAR_RESPONSE_TO_HYPOXIA and HALLMARK_HYPOXIA gene sets from MSigDB [18] as our reference hypoxia pathway. With an approximation of the ground truth set known, we were able to calculate quality metrics and compare them to other methods. For the comparison we used ranking by the nominal differential expression p-value, individual solutions by the BioNet method for different thresholds (the jackknife extension of BioNet is not applicable in this case, since the comparison is performed between groups of three biological replicates each), and individual subgraphs found by jActiveModules [8]. Confirming the results from [9], we show that jActiveModules solutions are dominated by BioNet solutions both in terms of true positive rate vs false positive rate and precision vs recall (Fig. 5). Our method, BioNet and p-value ranking show comparable performance, with our method showing marginally better ranking performance in terms of AUC.

Additionally, we systematically tested our method on a collection of human gene expression datasets based on [19]. We analyzed 59 datasets, where for each dataset a relevant KEGG disease pathway ("target" pathway) is known, so that AUC ROC and AUC PR metrics can be calculated. As a baseline we used p-value-based and BioNet-based ranking, as described previously. For the AUC ROC metric, our method was better than BioNet-based ranking in 52 cases, and better than p-value-based ranking in 55 cases. Similarly, for the AUC PR metric our method was better in 49 cases compared to BioNet and in 53 cases compared to p-value-based ranking. The full results are available in Supplementary Materials, Table S4. However, we note that these results should be taken with caution, as all the methods performed poorly in terms of the AUC PR metric (with a median value around 0.01). This means that these target pathways are far from perfect ground truth approximations, which highlights the importance of developing better benchmarking datasets.
While jackknife resampling support values [11] are consistent with our MCMC-based probabilities for the DLBCL dataset, they are not available for the hypoxia dataset and other datasets with a small number of replicates. Thus, our approach has two major advantages compared to the jackknife method. First, our method is more flexible: it starts with p-values and can be used even for experiments with a small number of replicates, where resampling is not feasible. Second, calculating individual vertex probabilities does not involve solving NP-hard problems and can easily be done in practice without dependency on external solver libraries like IBM ILOG CPLEX. Even the NP-hard problem of finding the AUC ROC optimal connectivity preserving ranking can be solved well on real data with the heuristic algorithm.

Conclusion
While the active module identification problem has been intensively studied in recent years, most of the approaches solve it as a hard classification problem. Here we study the question of assigning confidence values to individual vertices and thus consider a soft classification problem. All the hard classification methods can only provide a black box that tells us whether a particular gene belongs to the active module or not, and so, by design, they all share a flaw: they do not provide any way to distinguish between more and less confident inclusions. The soft classification approach addresses this issue.
We propose a method to estimate the probability of each vertex to belong to the active module based on Markov chain Monte Carlo sampling. Based on these probabilities, for any given FDR level we can provide a solution to the hard classification problem in a consistent manner: a module for a stricter FDR level is a subgraph of any module for a more relaxed FDR level. Overall, the proposed approach is very flexible: it starts with p-values, and thus can be used in many different contexts, and does not depend on external libraries for solving NP-hard problems.

¹ Since we do not make any additional assumptions about the module, the uniform distribution on subgraphs of equal order is the best choice of the prior distribution.

S1
In this section we prove Lemma 1.
Proof First of all, we introduce a rotated, non-normalized ROC curve (Fig. 6). On the i-th step this curve goes either up by 1 (if v_i ∈ M) or right by 1 (if v_i ∉ M). This linear transformation of the ROC curve is a function, so one can find not only the expected value of the area under it, but its pointwise expected value as well. Let h_i denote its height after i steps, h_i = Σ_{j≤i} [v_j ∈ M], so that E(h_i) = Σ_{j≤i} p_j. The area under the curve is linear in the membership indicators: each module vertex at position i contributes the number of right-steps after it, (n − i) − (m − h_i), so the total area equals Σ_{i=1}^n [v_i ∈ M](n − i) − m(m − 1)/2. Thus, we obtain E(area) = Σ_{i=1}^n (n − i) p_i − m(m − 1)/2. After normalization by m(n − m) we obtain the statement of the lemma.

S2
In this section we prove Lemma 2.
Proof To prove that OCPR is NP-complete, we use a modification of the method for proving that the Steiner tree problem in graphs is NP-complete. The referenced method is described in [20].
In decisional form, OCPR is equivalent to the following.

OCPR (decisional). Given a graph G, the list of its vertices v_i, each equipped with a probability p_i, and a real number k, determine whether there is a connectivity preserving ranking v_1, v_2, ..., v_n with Σ_{i=1}^n i·p_i ≤ k.

First, we need to show that OCPR is in NP. Given a hypothetical positive solution v_1, v_2, ..., v_n, it is trivial to check in polynomial time that this ranking is connectivity preserving and that Σ_{i=1}^n i·p_i ≤ k holds.
The Exact Cover by 3-Sets problem (X3C) is a well-known NP-complete problem mentioned in [21].

Problem X3C. Given a finite set X with |X| = 3q and a collection C of 3-element subsets of X (C = {C_1, ..., C_n}, C_i ⊆ X, |C_i| = 3 for 1 ≤ i ≤ n), determine whether C contains an exact cover for X, that is, a subcollection C′ ⊆ C such that every element of X occurs in exactly one member of C′.
Let us propose a reduction from X3C to OCPR by giving a set of rules to build an instance of OCPR starting from a generic instance of X3C. If we prove that this transformation is executable in polynomial time, we will also prove that OCPR is NP-complete. Given an instance of X3C, defined by the set X = {x_1, ..., x_3q} and a collection of 3-element sets C = {C_1, ..., C_n}, we have to build an OCPR instance specifying the graph G = (V, E), the probabilities p_i, and the upper bound k on the required sum.
The vertices of G are defined as V = {v} ∪ {c_1, ..., c_n} ∪ {x_1, ..., x_3q}. That is, we add a new node v, a node for each member of C, and a node for each element of X.
The edges of G are defined as E = {(v, c_i) : 1 ≤ i ≤ n} ∪ {(c_i, x_j) : x_j ∈ C_i}. That is, there is an edge from v to each node c_i, and an edge (c_i, x_j) if the element x_j belongs to the set C_i of the X3C instance.
Let ε = 1/(12q(q + 1)). The probabilities are defined as p_v = 1, p_{c_i} = 0 for 1 ≤ i ≤ n, and p_{x_j} = ε for 1 ≤ j ≤ 3q, and the upper bound on the required sum is defined as k = 3/2. The reduction from X3C to OCPR is easy to carry out in polynomial time. It remains to show that the X3C instance has an exact cover if and only if the constructed OCPR instance has a connectivity preserving ranking with Σ_{i=1}^n i·p_i ≤ k.

Proof. We split the proof into two parts, one for each implication.
• X3C ⇒ OCPR. Suppose C contains an exact cover C′; after renumbering we may assume C′ = {C_1, ..., C_q}. Consider the ranking v, c_1, x_{t_1}, x_{t_2}, x_{t_3}, c_2, x_{t_4}, x_{t_5}, x_{t_6}, ..., c_q, x_{t_{3q−2}}, x_{t_{3q−1}}, x_{t_{3q}}, c_{q+1}, ..., c_n, where x_{t_{3i−2}}, x_{t_{3i−1}}, x_{t_{3i}} are the three elements of C_i. This ranking is connectivity preserving, and the x-vertices occupy positions 3, 4, 5, 7, 8, 9, ..., 4q − 1, 4q, 4q + 1, so Σ_{i=1}^n i·p_i = 1 + (3 + 4 + 5 + 7 + 8 + 9 + ... + (4q − 1) + 4q + (4q + 1))ε = 1 + 6q(q + 1)ε = 3/2 = k.

• X3C ⇐ OCPR. Suppose there is a connectivity preserving ranking with Σ_{i=1}^n i·p_i ≤ k. Since k < 2 and p_v = 1, the first vertex in this ranking must be v. Let x_{t_1}, x_{t_2}, ..., x_{t_{3q}} be the order in which the vertices corresponding to the X3C instance elements appear in this ranking. For every j, vertex v and at least ⌈j/3⌉ vertices corresponding to 3-element sets of the X3C instance must appear before vertex x_{t_j} in the ranking, since every vertex c_i appearing in the ranking opens the way for at most three new vertices x_j to appear afterwards. Hence, the earliest position at which vertex x_{t_j} may appear in the ranking is j + 1 + ⌈j/3⌉. But if every x_{t_j} does appear at its earliest possible position, then, as in the former case, Σ_{i=1}^n i·p_i = 1 + (3 + 4 + 5 + 7 + 8 + 9 + ... + (4q − 1) + 4q + (4q + 1))ε = 1 + 6q(q + 1)ε = 3/2 = k.

If any vertex x_{t_j} appears at a later position, its index in the ranking will increase, and the sum Σ i·p_i will increase and exceed k (since the sum is already equal to k and p_{x_{t_j}} > 0). Therefore, every vertex x_{t_j} must appear at position j + 1 + ⌈j/3⌉, and the ranking looks as in the former case too: v, c_1, x_{t_1}, x_{t_2}, x_{t_3}, ..., c_q, x_{t_{3q−2}}, x_{t_{3q−1}}, x_{t_{3q}}, c_{q+1}, c_{q+2}, ..., c_n.

It is easy to prove that x_{t_{3i−2}}, x_{t_{3i−1}} and x_{t_{3i}} belong to C_i by induction on i from 1 to q. Hence, C′ = {C_1, C_2, ..., C_q} is an exact cover for the X3C instance.

Fig. 7 The support values and the probabilities of each vertex to belong to the active module are highly correlated
This concludes the proof.

S3
In this section we compare our method with the jackknife resampling method described in [11]. We apply both methods to the diffuse large B-cell lymphoma dataset and the protein-protein interaction graph constructed in [9]. The p-values in the DLBCL dataset are the result of a differential expression t-test between the two tumor subgroups: germinal center B-cell-like (GCB) DLBCL and activated B-cell-like (ABC) DLBCL. In our method we penalized adding a new vertex to the module using the confidence threshold τ = 10^−7. For the jackknife method the p-values are recalculated for each of 100 data resamples, and then for each collection of p-values the active module identification problem is solved with BioNet (with the same threshold τ = 10^−7). As a result of this procedure, for each vertex the support value (the occurrence frequency of the vertex in the solutions) is computed. As one can see (Fig. 7), the results of the two methods are consistent.