In this section, we introduce the NHMC method (Network CLUS-HMC; a preliminary version was presented in [15]), which is the major contribution of this paper. NHMC builds autocorrelation-aware models (trees) for HMC. We start with a brief description of the CLUS-HMC algorithm, which builds trees for HMC and is the starting point for the development of NHMC.
For the HMC task, the input is a dataset U consisting of instances (examples) of the form u_i = (x_i, y_i) ∈ X × 2^C, where X = X1 × X2 × … × Xm is the space spanned by m attributes or features (either continuous or categorical), while 2^C is the power set of C = {c1, …, cK}, the set of all possible class labels. C is hierarchically organized with respect to a partial order ≼ which represents the superclass relationship. Note that each y_i satisfies the hierarchical constraint:

c ∈ y_i ∧ c′ ≼ c ⇒ c′ ∈ y_i   (1)
The method we propose (NHMC) builds a generalized form of decision trees and is set in the Predictive Clustering (PC) framework [11]. The PC framework views a decision tree as a hierarchy of clusters: the top node corresponds to a single cluster containing all the data, which is recursively partitioned into smaller clusters while moving down the tree. Such a tree is called a predictive clustering tree (PCT). A PCT combines elements of both prediction and clustering. As in clustering, clusters of data points that are similar to each other are identified, but, in addition, a predictive model is associated with each cluster. This predictive model provides a prediction for the target property of new examples that are recognized to belong to the cluster. Besides the clusters themselves, PC approaches also provide symbolic descriptions of the constructed (hierarchically organized) clusters.
The original PC framework is implemented in the CLUS system [11] (http://sourceforge.net/projects/clus/), which can learn both PCTs and predictive clustering rules. The induction of PCTs is not very different from the induction of standard decision trees (as performed, e.g., by the C4.5 algorithm [16]). The algorithm takes as input a set of training instances and searches for the best acceptable test to put in a node and split the data. If such a test can be found, the algorithm creates a new internal node labeled with the test and calls itself recursively to construct a subtree for each subset (cluster) in the partition induced by the test on the training instances.
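For illustration, the following is a minimal sketch of such a top-down induction scheme (a hypothetical simplification in Python, not the actual CLUS implementation; the helper names and the simple size-based stopping rule are assumptions):

```python
# Minimal sketch of top-down induction of a predictive clustering tree.
# Hypothetical, simplified code: CLUS itself uses a significance-based
# stopping criterion (F-test) that is not reproduced here.

def induce_tree(instances, find_best_test, make_prediction, min_size=5):
    """Recursively grow a tree by greedily choosing the best test for each node."""
    test = find_best_test(instances)          # e.g., maximize variance reduction
    if test is None or len(instances) < min_size:
        return {"leaf": True, "prediction": make_prediction(instances)}
    left = [u for u in instances if test(u)]
    right = [u for u in instances if not test(u)]
    if not left or not right:                 # degenerate split: stop growing
        return {"leaf": True, "prediction": make_prediction(instances)}
    return {"leaf": False,
            "test": test,
            "yes": induce_tree(left, find_best_test, make_prediction, min_size),
            "no": induce_tree(right, find_best_test, make_prediction, min_size)}
```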
CLUS-HMC
The CLUS-HMC [4] algorithm builds HMC trees, i.e., PCTs for hierarchical multi-label classification (see Figure 1(c) for an example of an HMC tree). These are very similar to classification trees, but each leaf predicts a hierarchy of class labels rather than a single label. CLUS-HMC builds the trees in a top-down fashion and the outline of the algorithm is very similar to that of top-down decision tree induction algorithms (see the CLUS-HMC pseudo-code in Additional file 1). The main differences are in the search heuristics and in the way predictions are made. For the sake of completeness, both aspects are described below. Additional details on CLUS-HMC are given by Vens et al. [4].
Search heuristics
To select the best test in an internal node of the tree, the algorithm scores the possible tests according to the reduction in variance (defined below) they induce on the set U of examples associated with the node. In CLUS-HMC, the variance of class labels across a set of examples U is defined as follows:
Var(U) = (1 / |U|) · Σ_{u_i ∈ U} d(L_i, L̄)²   (2)

where L_i is the vector of class labels of example u_i (each element of L_i is binary and represents the presence/absence of a class label for u_i), L̄ is the average of all L_i vectors corresponding to the class labels of examples in U, and d(·,·) is a distance function on such vectors. The basic idea behind the use of variance reduction is to minimize intra-cluster variance.
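A minimal sketch of this computation, assuming class vectors given as equal-length lists and a generic distance function d (the hierarchy-aware distance is defined next); this is illustrative code, not the CLUS-HMC implementation:

```python
def variance(class_vectors, d):
    """Var(U) = (1/|U|) * sum_i d(L_i, mean)^2 for a list of equal-length class
    vectors and a distance function d, following Equation (2)."""
    n = len(class_vectors)
    k = len(class_vectors[0])
    # component-wise mean class vector (the vector written L-bar in the text)
    mean = [sum(v[j] for v in class_vectors) / n for j in range(k)]
    return sum(d(v, mean) ** 2 for v in class_vectors) / n
```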
In the HMC context, class labels at higher levels of the annotation hierarchy are more important than class labels at lower levels. This is reflected in the distance measure used in the above formula, which is a weighted Euclidean distance:
d(L1, L2) = √( Σ_k ω(c_k) · (L1,k − L2,k)² )   (3)

where Li,k is the k-th component of the class vector L_i and the class weights ω(c_k) associated with the labels decrease with the depth of the class in the hierarchy. More precisely, ω(c) = ω0 · avg_j{ω(p_j(c))}, where p_j(c) denotes the j-th parent of class c and 0 < ω0 < 1. This definition of the weights allows us to take into account a hierarchy of classes structured either as a tree or as a DAG (where a single label can have multiple parents).
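The following sketch illustrates the weight scheme and the weighted Euclidean distance of Equation (3); the dictionary-based hierarchy encoding, the convention ω(root) = 1, and the default value ω0 = 0.75 are assumptions made for illustration:

```python
import math

def class_weights(parents, root="all", w0=0.75):
    """omega(c) = w0 * average of the weights of c's parents.
    The convention omega(root) = 1 and the default w0 = 0.75 are assumptions here;
    with them, top-level classes get w0, their children w0**2, and so on."""
    weights = {root: 1.0}
    def weight(c):
        if c not in weights:
            ps = parents[c]                        # list of parents (DAGs allowed)
            weights[c] = w0 * sum(weight(p) for p in ps) / len(ps)
        return weights[c]
    for c in parents:
        weight(c)
    return weights

def weighted_distance(l1, l2, omega, classes):
    """Weighted Euclidean distance between two binary class vectors (Equation (3))."""
    return math.sqrt(sum(omega[c] * (a - b) ** 2
                         for c, a, b in zip(classes, l1, l2)))
```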
For instance, consider the small hierarchy in Figure 1(b), and two examples (x1, y1) and (x2, y2), where y1 = {all, B, B.1, C, D, D.2, D.3} and y2 = {all, A, D, D.2, D.3}. The class vectors for y1 and y2 are L1 = [1,0,0,0,1,1,1,1,0,1,1] and L2 = [1,1,0,0,0,0,0,1,0,1,1]. The distance between the two class vectors is then:
d(L1, L2) = √( ω(A) + ω(B) + ω(B.1) + ω(C) )   (4)

since the two class vectors differ exactly in the components corresponding to the classes A, B, B.1 and C.
At each node of the tree, the test that maximizes the variance reduction is selected. This is expected to maximize cluster homogeneity with respect to the target variable and improve the predictive performance of the tree. If no test can be found that significantly reduces variance (as measured by a statistical F-test), then the algorithm creates a leaf and labels it with a prediction, which can consist of multiple hierarchically organized labels.
Predictions
A classification tree typically associates a leaf with the "majority class", i.e., the label appearing most frequently in the training examples at the leaf. This label is later used for prediction when a test case reaches that leaf. However, in the case of HMC, where an example may have multiple classes, the notion of "majority class" cannot be straightforwardly applied. Instead, CLUS-HMC associates the leaf with the mean L̄ of the class vectors of the examples in the leaf. The value of the k-th component of L̄ is interpreted as the membership score of class c_k, i.e., the probability that an example arriving at the leaf is labeled with class c_k.
For an example arriving at a leaf, binary predictions for each class label can be obtained by applying a user-defined threshold τ to these membership scores: if the k-th component of L̄ is above the threshold τ_k, then the leaf predicts the class c_k. To ensure that the predictions satisfy the hierarchical constraint, i.e., whenever a class is predicted its super-classes are also predicted, it suffices to choose τ_j ≤ τ_k whenever c_j is an ancestor of c_k.
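A sketch of this prediction rule (the function name, the ancestors map, and the per-class threshold dictionary are hypothetical; the sanity check assumes membership scores that are consistent with the hierarchy, as leaf means in CLUS-HMC are):

```python
def predict_classes(mean_vector, thresholds, classes, ancestors):
    """Predict every class whose membership score exceeds its threshold.
    If tau[ancestor] <= tau[descendant] for every ancestor/descendant pair,
    the returned label set satisfies the hierarchical constraint."""
    predicted = {c for c, score in zip(classes, mean_vector)
                 if score > thresholds[c]}
    # Sanity check: every ancestor of a predicted class is also predicted
    # (holds when scores are hierarchy-consistent and thresholds are ordered).
    assert all(a in predicted for c in predicted for a in ancestors.get(c, []))
    return predicted
```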
NHMC
We first discuss the network setting considered in this paper. We then propose a new network autocorrelation measure for HMC tasks. Subsequently, building on the CLUS-HMC algorithm described above, we introduce its extension NHMC (i.e., Network CLUS-HMC), which takes into account the network autocorrelation (coming from PPI networks) when learning trees for HMC.
Network setting for HMC
Some uses of a PPI network in learning gene function prediction models include treating the interactions between pairs of genes as descriptive attributes (e.g., binary attributes [17]) and generating new features as combinations of PPI data and other descriptive attributes. Both approaches require that the data be pre-processed before applying a network-oblivious learning method (e.g., CLUS-HMC). However, the applicability of predictive models built in this way strongly depends on PPI network information being available for the testing data, i.e., for the proteins whose gene function we want to predict.
In order to learn general models, which can be used to make predictions for any test set, we use protein interactions as a form of background knowledge and exploit them only in the learning phase. More specifically, in the training phase, both gene properties and network structure are considered. In the testing phase, only gene properties are considered and the network structure is disregarded. This key feature of the proposed solution is especially attractive when function prediction concerns new genes, for which interactions with other genes are not known or are still to be confirmed.
Following Steinhaeuser et al. [18], we view a training set as a single network of labeled nodes. Formally, the network is defined as an undirected edge-weighted graph G = (V, E), where V is the set of labeled nodes and E is the set of edges. Each edge u ↔ v is assigned a non-negative real number w, called the weight of the edge. The network can be represented by a symmetric adjacency matrix W, whose entries w_ij are positive (w_ij > 0) if there is an edge connecting i and j in G, and zero (w_ij = 0) otherwise. In PPI networks, edge weights can express the strength of the interactions between proteins. Although the proposed method works with any non-negative weight values, in our experiments we mainly focus on binary (0/1) weights.
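For illustration, such a symmetric weighted adjacency matrix can be built from a list of interactions as follows (a sketch; the binary weights in the example reflect the 0/1 setting used in our experiments):

```python
import numpy as np

def adjacency_matrix(n_nodes, edges):
    """Build a symmetric adjacency matrix W from (i, j, weight) triples.
    With binary PPI data the weight is simply 1 for every interacting pair."""
    W = np.zeros((n_nodes, n_nodes))
    for i, j, w in edges:
        W[i, j] = W[j, i] = w
    return W

# e.g., three proteins and two interactions with binary weights
W = adjacency_matrix(3, [(0, 1, 1.0), (1, 2, 1.0)])
```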
Each node of the network is associated with an example pair u_i = (x_i, y_i) ∈ X × 2^C, where y_i is subject to the hierarchical constraint. Given a network G = (V, E) and a function η : V ↦ (X × 2^C) which associates each node with the corresponding example, we interpret the task of hierarchical multi-label classification as building a PCT which represents a multi-dimensional predictive function f : X ↦ 2^C that satisfies the hierarchical constraint, maximizes the autocorrelation of the observed classes y_i over the network G, and minimizes the prediction error on y_i for the training data η(V).
Network autocorrelation for HMC
Network autocorrelation for HMC is a special case of the general concept of network autocorrelation [19]. It can be defined as the statistical relationship between the observations of a variable (e.g., protein function) on distinct but related (connected) nodes in a network (e.g., interacting proteins). In HMC, the domain values of the variable form a hierarchy, such as the GO hierarchy for protein functions. Therefore, it is possible to define network autocorrelation for individual classes (nodes of the hierarchy) and for entire levels of the hierarchy.
In predictive modeling, network autocorrelation can be a problem, since the i.i.d. assumption is violated, but also an opportunity, if it is properly considered in the model. This is particularly true for the task of hierarchical multi-label classification considered in this work. Indeed, due to non-stationary autocorrelation, PPI network data can provide useful (and diverse) information for each single class at each level of the hierarchy. Intuitively, genes belonging to classes at higher levels of the hierarchy tend to participate in very general types of interactions, while genes belonging to classes at lower levels of the hierarchy tend to participate in very specific and localized interactions. In any case, the effect of autocorrelation changes from level to level (this aspect is also mentioned by Gillis and Pavlidis [20]). For this reason, we explicitly measure autocorrelation and we build a model such that its value is maximized.
Geary’s C for HMC
In order to measure the autocorrelation of the response variable Y in the network setting for HMC, we propose a new statistic, named A_Y(U), whose definition draws inspiration from Global Geary’s C [21]. Global Geary’s C is a measure of spatial autocorrelation for a continuous variable. Its basic definition (used in spatial data analysis [22]) is given in Additional file 2.
Let u_i = (x_i, y_i) ∈ U ⊆ X × 2^C be an example pair in a training set U of N examples. Let K be the number of classes in C, which possibly define a hierarchy. We represent y_i as a binary vector L_i of size K, such that Li,k = 1 if c_k ∈ y_i and Li,k = 0 otherwise; each L_i satisfies the hierarchical constraint. Let d(L_i, L_j) be a distance measure defined on the binary vectors associated with two examples u_i = (x_i, y_i) and u_j = (x_j, y_j), which takes the class-label hierarchy into account.
The network autocorrelation measure A_Y(U), based on Geary’s C, is defined as follows:

A_Y(U) = 1 − [ (N − 1) · Σ_{u_i, u_j ∈ U} w_ij · d(L_i, L_j)² ] / [ 4 · ( Σ_{u_i, u_j ∈ U} w_ij ) · Σ_{u_i ∈ U} d(L_i, L̄)² ]   (5)

where L̄ is the average class vector computed over all binary vectors associated with the example pairs in U. The constant 4 in the denominator is included for scaling purposes. The new autocorrelation measure A_Y(U) takes values in the unit interval [0,1], where 1 (0) means strong positive (negative) autocorrelation and 0.5 means no autocorrelation.
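A direct, non-incremental sketch of this statistic, following Equation (5) as reconstructed above (the quadratic double loop is kept for clarity only, and the handling of the degenerate case with no variation or no edges is our convention):

```python
import numpy as np

def network_autocorrelation(L, W, d):
    """A_Y(U) for an N x K binary class-vector matrix L, a symmetric weight
    matrix W (N x N), and a distance function d on class vectors: Geary's C
    computed on class-vector distances and rescaled to [0, 1]."""
    n = len(L)
    mean = L.mean(axis=0)
    num = sum(W[i, j] * d(L[i], L[j]) ** 2 for i in range(n) for j in range(n))
    den = 4.0 * W.sum() * sum(d(L[i], mean) ** 2 for i in range(n))
    if den == 0:
        return 0.5  # no variation or no edges: treated as no autocorrelation
    return 1.0 - (n - 1) * num / den
```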
The Algorithm
We can now proceed to describe the top-down induction algorithm for building Network HMC trees. The main difference with respect to CLUS-HMC is the heuristics used to evaluate candidate splits. The network is considered as background knowledge and is exploited only in the learning phase. Below, we first give an outline of the algorithm, then give details on the new search heuristics, which takes autocorrelation into account, and finally discuss how the new heuristics can be computed efficiently.
Outline of the algorithm
The top-down induction algorithm for building PCTs for HMC from network data is given below (Algorithm 1). It takes as input the network G = (V, E) and the corresponding HMC dataset U, obtained by applying η : V ↦ X × 2^C to the vertices of the network.
In practice, this means that for each gene u_i (see Figure 1(b)) there is a set of (discrete and continuous) attributes describing different aspects of the gene. For the experiments with the yeast genome, these include sequence statistics, phenotype, secondary structure, homology, and expression data (see the next Section), as well as a class vector L_i, i.e., the functional annotations associated with the gene.
The algorithm recursively partitions U until a stopping criterion is satisfied (Algorithm 1 line 2). Since the implementation of this algorithm is based on the implementation of the CLUS-HMC algorithm, we call this algorithm NHMC (Network CLUS-HMC).
Search space
As in CLUS-HMC, for each internal node of the tree, the best split is selected by considering all available attributes. Let X_i ∈ {X1, …, Xm} be an attribute and Dom_{X_i} its active domain. A split partitions the current sample space D according to a test of the form X_i ∈ B, where B ⊆ Dom_{X_i}. This means that D is partitioned into two sets, D1 and D2, on the basis of the value of X_i.
For continuous attributes, possible tests are of the form X ≤ β. For discrete attributes, they are of the form X ∈ B (where B is a non-empty subset of the domain Dom_X of X). In the former case, the possible values of β are determined by sorting the distinct values of X occurring in D and considering the midpoints between pairs of consecutive values: for b distinct values, b − 1 thresholds are considered. When selecting a subset of values for a discrete attribute, CLUS-HMC relies on the non-optimal greedy strategy proposed by Mehta et al. [23].
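For instance, the candidate thresholds for a continuous attribute can be generated as follows (an illustrative sketch, not the CLUS code):

```python
def candidate_thresholds(values):
    """Midpoints between consecutive distinct values: b distinct values
    yield b - 1 candidate thresholds."""
    distinct = sorted(set(values))
    return [(a + b) / 2.0 for a, b in zip(distinct, distinct[1:])]

# e.g., candidate_thresholds([0.2, 0.5, 0.5, 0.9]) == [0.35, 0.7]
```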
Heuristics
The major difference between NHMC and CLUS-HMC is in the heuristics they use to evaluate candidate splits. The variance reduction heuristics employed in CLUS-HMC (Additional file 1) aims at finding accurate models, since it considers the homogeneity of the target variable values and reduces the error on the training data. However, it does not consider the dependencies between the target variable values of related (connected) examples, and therefore neglects the possible presence of autocorrelation in the training data. To address this issue, we introduce network autocorrelation into the search heuristics and combine it with variance reduction to obtain a new heuristics (Algorithm 1).
More formally, the NHMC heuristics is a linear combination of the average autocorrelation measure A_Y(·) (first term) and variance reduction Var(·) (second term). For a test splitting U into U1 and U2:

h(U1, U2) = (1 − α) · [ (|U1|/|U|) · A_Y(U1) + (|U2|/|U|) · A_Y(U2) ] + α · [ Var′(U) − ( (|U1|/|U|) · Var′(U1) + (|U2|/|U|) · Var′(U2) ) ]   (6)
where Var′(U) is the min-max normalization of Var(U), required to keep the values of the linear combination in the unit interval [0,1], that is:

Var′(U) = ( Var(U) − δ_min ) / ( δ_max − δ_min )   (7)

with δ_max and δ_min being the maximum and the minimum values of Var(·) over all evaluated tests.
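The following sketch shows how the two terms can be combined for a candidate split of U into U1 and U2, under the reconstruction of Equations (6) and (7) given above; the size-weighted averaging of A_Y over the two subsets is our reading of the "average autocorrelation measure", so names and details are illustrative:

```python
def nhmc_heuristic(var_u, var_u1, var_u2, a_u1, a_u2, n1, n2,
                   delta_min, delta_max, alpha=0.5):
    """(1 - alpha) * size-weighted average autocorrelation
       + alpha * normalized variance reduction (sketch of Equations (6)-(7))."""
    n = n1 + n2
    def norm(v):  # min-max normalization over all evaluated tests (Equation (7))
        return (v - delta_min) / (delta_max - delta_min) if delta_max > delta_min else 0.0
    autocorr = (n1 * a_u1 + n2 * a_u2) / n
    var_reduction = norm(var_u) - (n1 * norm(var_u1) + n2 * norm(var_u2)) / n
    return (1 - alpha) * autocorr + alpha * var_reduction
```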
We point out that the heuristics in NHMC combines information on both the network structure, which affects A_Y(·), and the hierarchical structure of the classes, which is embedded in the computation of the distance d(·,·) used in formulas (5) and (2). We also note that the tree structure of the NHMC model makes it possible to consider different effects of the autocorrelation phenomenon at different levels of the tree model, as well as at different levels of the class hierarchy (non-stationary autocorrelation). In fact, the effect of the class weights ω(c_j) in Equation (3) is that higher levels of the tree will likely capture the regularities at higher levels of the hierarchy.
However, the efficient computation of the heuristics, which is based on the distances defined in Equation (3), is not straightforward. The difficulty comes from the need to compute A_Y(U1) and A_Y(U2) incrementally, i.e., from statistics already computed for other partitions. Indeed, computing A_Y(U1) and A_Y(U2) from scratch for each partition would increase the time complexity of the algorithm by an order of magnitude and would make the learning process too inefficient for large datasets.
Efficient computation of the heuristics
In our implementation, in order to reduce the computational complexity, Equation (6) is not computed from scratch for each test to be evaluated. Instead, the first test to be evaluated is the one which splits U into U2 ≠ ∅ and U1 = U − U2 such that |U2| is minimal (1 in most cases, depending on the first available test). Only for this partition is Equation (6) computed from scratch. The subsequent tests to be evaluated progressively move examples from U1 to U2. Consequently, A_Y(U1), A_Y(U2), Var(U1) and Var(U2) are computed incrementally, by removing/adding quantities to the values computed in the evaluation of the previous test.
Var(·) can be computed according to classical methods for the incremental computation of variance. As regards A_Y(·), its numerator (see Equation (5)) only requires distances that can be computed in advance. Therefore, the problem remains only for the denominator of Equation (5). To compute it incrementally, we consider the following algebraic transformations:
Σ_{u_i ∈ U} d(L_i, L̄)² = Σ_{u_i ∈ U′} d(L_i, L̄′)² + (|U′| / |U|) · d(L_t, L̄′)²

where U = U′ ∪ {u_t} and L̄ (L̄′) is the average class vector computed on U (U′).
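A sketch of the resulting incremental update of the denominator term when one example with class vector l_t is added to a subset (the removal step is symmetric); function and variable names are illustrative:

```python
def add_example(sum_sq, mean, n, l_t, d):
    """Update (sum_i d(L_i, mean)^2, mean, |subset|) when class vector l_t is added,
    using sum_U d(L_i, mean_U)^2 = sum_U' d(L_i, mean_U')^2
                                   + (|U'| / |U|) * d(l_t, mean_U')^2."""
    if n == 0:
        return 0.0, list(l_t), 1
    new_n = n + 1
    new_sum_sq = sum_sq + (n / new_n) * d(l_t, mean) ** 2
    new_mean = [(n * m + x) / new_n for m, x in zip(mean, l_t)]
    return new_sum_sq, new_mean, new_n
```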
This allows us to significantly optimize the algorithm, as described in the following section.
Time complexity
In NHMC, the time complexity of selecting a split test represents the main cost of the algorithm. In the case of a continuous split, a threshold β has to be selected for the continuous variable. If N is the number of examples in the training set, the number of distinct thresholds can be N - 1 at worst. Since the determination of candidate thresholds requires an ordering of the examples, its time complexity is O(m · N · logN), where m is the number of descriptive variables.
For each variable, the system has to compute the heuristic h for all possible thresholds. In general, this computation has time-complexity O((N - 1) · (N + N · s) · K), where N - 1 is the number of thresholds, s is the average number of edges for each node in the network, K is the number of classes, O(N) is the complexity of the computation of the variance reduction and O(N · s) is the complexity of the computation of autocorrelation.
However, according to the analysis reported before, it is not necessary to recompute autocorrelation values from scratch for each threshold. This optimization makes the complexity of the evaluation of the splits for each variable O(N · s · K). This means that the worst case complexity of creating a split on a continuous attribute is O(m · (N · logN + N · s) · K).
In the case of a discrete split, the worst case complexity (for each variable and in the case of optimization) is O((d - 1) · (N + N · s) · K), where d is the maximum number of distinct values of a discrete variable (d ≤ N). Overall, the identification of the best split node (either continuous or discrete) has a complexity of O(m · (N · logN + N · s) · K) + O(m · d · (N + N · s) · K), that is O(m · N · (logN + d · s) · K). This complexity is similar to that of CLUS-HMC, except for the s factor which equals N in the worst case, although such worst-case behavior is unlikely.
Additional remarks
The relative influence of the two parts of the linear combination in Formula (6) is determined by a user-defined coefficient α ∈ [0,1]. When α = 0, NHMC uses only autocorrelation; when α = 0.5, it weights variance reduction and autocorrelation equally; when α = 1, it works as the original CLUS-HMC algorithm. If autocorrelation is present, examples linked by high autocorrelation will fall in the same cluster and will have similar values of the response variable (gene function annotations). In this way, we are able to keep connected examples together without forcing splits on the network structure (which could result in a loss of generality of the induced models).
Finally, note that the linear combination that we use in this article (Formula (6)) was selected as a result of our previous work on network autocorrelation for regression [12]. The variance and autocorrelation can also be combined in some other way (e.g., by multiplying them). Investigating different ways of combining them is one of the directions for our future work.