Discriminative topological features reveal biological network mechanisms

Background: Recent genomic and bioinformatic advances have motivated the development of numerous network models intended to describe graphs of biological, technological, and sociological origin. In most cases the success of a model has been evaluated by how well it reproduces a few key features of the real-world data, such as degree distributions, mean geodesic lengths, and clustering coefficients. Often pairs of models can reproduce these features with indistinguishable fidelity despite being generated by vastly different mechanisms. In such cases, these few target features are insufficient to distinguish which of the different models best describes real-world networks of interest; moreover, it is not clear a priori that any of the presently existing algorithms for network generation offers a predictive description of the networks inspiring them.

Results: We present a method to assess systematically which of a set of proposed network generation algorithms gives the most accurate description of a given biological network. To derive discriminative classifiers, we construct a mapping from the set of all graphs to a high-dimensional (in principle infinite-dimensional) "word space". This map defines an input space for classification schemes which allow us to state unambiguously which models are most descriptive of a given network of interest. Our training sets include networks generated from 17 models either drawn from the literature or introduced in this work. We show that different duplication-mutation schemes best describe the E. coli genetic network, the S. cerevisiae protein interaction network, and the C. elegans neuronal network, out of a set of network models including a linear preferential attachment model and a small-world model.

Conclusions: Our method is a first step towards systematizing network models and assessing their predictability, and we anticipate its usefulness for a number of communities.


Introduction
The post-genomic revolution has ushered in an ensemble of novel crises and opportunities in rethinking molecular biology. The two principal directions in genomics, sequencing and transcriptome studies, have brought to light a number of new questions and forced the development of numerous computational and mathematical tools for their resolution. The sequencing of whole organisms, including Homo sapiens, has shown that there are in fact roughly the same number of genes in men and in mice [1]. Moreover, much of the coding regions of the chromosomes (the subsequences which are directly translated into proteins) are highly homologous. The complexity comes, then, not from a larger number of parts, or from more complex parts, but rather from the complexity of their interactions and interconnections.
Coincident with this biological revolution (the massive and unprecedented volume of biological data) has blossomed a technological revolution with the popularization and resulting exponential growth of the Internet. Researchers studying the topology of the Internet [2] and the World Wide Web [3] attempted to summarize their topologies via statistical quantities, primarily the distribution P(k) over nodes of given connectivity or degree k, which, it was found, was completely unlike that of a "random" or Erdos-Renyi graph. Instead, the distribution obeyed a power law P(k) ∼ k^{−γ} for large k. This observation created a flurry of activity among mathematicians at the turn of the millennium, both in (i) measuring the degree distributions of innumerable technological, sociological, and biological graphs (which, it turned out, generically obeyed such power-law distributions) and (ii) proposing myriad models of randomly generated graph topologies which mimicked these degree distributions (cf. [6] for a thorough review). The success of these latter efforts reveals a conundrum for mathematical modeling: a metric which is universal (rather than discriminative) cannot be used for choosing the model which best describes a network of interest. The question posed is one of classification, meaning the construction of an algorithm, based on training data from multiple classes, which can place data of interest within one of the classes with small test loss.
Figure 1: Ambiguity in network mechanisms: we plot the degree distributions of two graphs generated using radically different algorithms. The red line results from an algorithm of the Barabasi class [3]; the blue from the "static" model of Kim et al [8]. The distributions are indistinguishable, illustrating the insufficiency of degree distributions as a classifying metric.
In this paper, we present a natural mapping from a graph to an infinite-dimensional vector space using simple operations on the adjacency matrix. We then test a number of different classification (including density estimation) algorithms which prove effective in finding new metrics for classifying real-world data sets. We selected 17 different models proposed in the literature to model various properties of naturally occurring networks. Among them are various biologically inspired graph-generating algorithms which were put forward to model genetic or protein interaction networks. To assess their value as models of their intended referents, we classify data sets for the E. coli genetic network, the C. elegans neural network, and the yeast S. cerevisiae protein interaction network. We anticipate that this new approach will provide a general tool of analysis and classification in a broad diversity of communities.
The input space used for classifying graphs was introduced in our earlier work [9] as a technique for finding statistically significant features and subgraphs in naturally occurring biological and technological networks. Given the adjacency matrix A representing a graph (i.e., A_ij = 1 iff there exists an edge from j to i), multiplications of the matrix count the number of walks from one node to another (i.e., [A^n]_ij is the number of unique walks from j to i in n steps). Note that the adjacency matrix of an undirected graph is symmetric. The topological structure of a network is characterized by the number of closed and open walks of given length. These can be found by calculating the diagonal or nondiagonal components of the matrix, respectively. For this we define the projection operation D such that [D(M)]_ij = δ_ij M_ij, and its complement U = I − D. (Note that we do not use Einstein's summation convention. Indices i and j are not summed over.) We define the primitive alphabet {A; T, U, D} as the adjacency matrix A and the operations T, U, D, with the transpose operation T(A) ≡ A^T distinguishing walks "up" the graph from walks "down" the graph. From the letters of this alphabet we can construct words (a series of operations) of arbitrary length. A number of redundancies and trivial cases can be eliminated (for example, the projection operations satisfy DU = UD = 0), leading to the operational alphabet {A, AT, AU, AD, AUT}. The resulting word is a matrix representing a set of possible walks generated by the original graph. An example is shown in Figure 2.
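The primitives just defined can be sketched in code. The following minimal Python illustration (ours, not from the original work) implements D, U, and T and checks that matrix powers count walks:

```python
import numpy as np

# Primitives of the word construction. A is the adjacency matrix with
# A[i, j] = 1 iff there is an edge from node j to node i.

def D(M):
    """Diagonal projection: [D(M)]_ij = delta_ij M_ij (closed walks)."""
    return np.diag(np.diag(M))

def U(M):
    """Off-diagonal projection, the complement U = I - D (open walks)."""
    return M - D(M)

def T(M):
    """Transpose: walks 'up' the graph rather than 'down'."""
    return M.T

# A directed 3-node path: 0 -> 1 -> 2.
A = np.array([[0, 0, 0],
              [1, 0, 0],
              [0, 1, 0]])

# [A @ A][i, j] counts 2-step walks from j to i.
A2 = A @ A
print(A2[2, 0])  # exactly one 2-step walk, 0 -> 1 -> 2
```

By construction D(M) + U(M) = M for any word matrix M, which is the redundancy (sumD + sumU = sum) noted below.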
Each word determines two relevant statistics of the network: the number of distinct walks and the number of distinct pairs of endpoints. These two statistics are determined by either summing the entries of the matrix (sum) or counting the number of nonzero elements (nnz) of the matrix, respectively. Thus the two operations sum and nnz map words to integers. This allows us to plot any graph in a high-dimensional data space: the coordinates are the integers resulting from these path-based functionals of the graph's adjacency matrix.
The coordinates of the infinite-dimensional data space are given by integer-valued functionals of the form F(L_1 L_2 · · · L_n), where each L_i is a letter of the operational alphabet and F is an operator from the set {sum, sumD, sumU, nnz, nnzD, nnzU}. We found it necessary only to evaluate words with n ≤ 4 (counting all walks up to length 5) to construct low-test-loss classifiers. Therefore, our word space is a 6 × Σ_{i=1}^{4} 5^i = 4680-dimensional vector space, but since the words are not linearly independent (e.g., sumU + sumD = sum), the dimensionality of the manifold explored is actually much smaller. However, we continue to use the full data space since a particular word, though it may be expressed as a linear combination of other words, may be a better discriminator than any of its summands.
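The dimension count above can be reproduced mechanically; a small sketch (ours) enumerating all words of one to four letters over the operational alphabet:

```python
from itertools import product

# Counting the dimensionality of word space: words of one to four letters
# over the 5-letter operational alphabet, each evaluated by one of the
# 6 functionals {sum, sumD, sumU, nnz, nnzD, nnzU}.

alphabet = ["A", "AT", "AU", "AD", "AUT"]
functionals = ["sum", "sumD", "sumU", "nnz", "nnzD", "nnzU"]

words = [w for n in range(1, 5) for w in product(alphabet, repeat=n)]
dimension = len(functionals) * len(words)
print(len(words), dimension)  # 780 words, 6 * 780 = 4680 coordinates
```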
In [9], we discuss several possible interpretations of words, motivated by algorithms for finding subgraphs. Previously studied metrics can sometimes be interpreted in the context of words. For example, the transitivity of a network can be defined as 3 times the number of 3-cycles divided by the number of pairs of edges that are incident on a common vertex. For a loopless graph (without self-interactions), this can also be calculated as a simple expression in word space: sum(DAAA)/sum(UAA). Note that this expression of transitivity as the quotient of two words implies separation in two dimensions rather than in one. However, there are limitations to word space. For example, a similar measure, the clustering coefficient, defined as the average over all vertices of the number of 3-cycles containing the vertex divided by the number of paths of length two centered at that vertex, cannot be easily expressed in word space because vertices must be considered individually to compute this quantity. Of course, the utility of word space is not that it encompasses previously studied metrics, but that it can elucidate new metrics in an unbiased, systematic way, as illustrated below.
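The transitivity identity can be checked numerically. In this sketch (our illustration), trace(A³) plays the role of sum(DAAA) and the off-diagonal sum of A² the role of sum(UAA); a triangle is perfectly transitive and a path is not transitive at all:

```python
import numpy as np

# Transitivity for an undirected loopless graph via word-space quantities:
# sum(DAAA) = trace(A^3) counts closed 3-walks (6 per triangle);
# sum(UAA) counts ordered 2-walks with distinct endpoints.

def transitivity_words(A):
    A3 = A @ A @ A
    A2 = A @ A
    sum_DAAA = np.trace(A3)
    sum_UAA = A2.sum() - np.trace(A2)
    return sum_DAAA / sum_UAA

# A triangle (K3) is perfectly transitive.
K3 = np.array([[0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])
print(transitivity_words(K3))  # 1.0

# A path 0 - 1 - 2 has a connected triple but no triangle.
P3 = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]])
print(transitivity_words(P3))  # 0.0
```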

SVMs
A standard classification algorithm which has been used with great success in myriad fields is the support vector machine, or SVM [10]. This technique constructs a hyperplane in a high-dimensional feature space separating two classes from each other. Linear kernels are used for the analysis presented here; extensions to appropriate nonlinear kernels are possible.
We rely on a freely available C implementation, SVM-Light [11], which uses a working set selection method to solve the convex programming problem of minimizing the Lagrangian ½‖w‖² subject to the constraints y_i(w · x_i + b) ≥ 1 for all training examples (x_i, y_i).
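The analysis itself uses SVM-Light; purely as a self-contained illustration, a minimal linear soft-margin SVM can be trained by Pegasos-style subgradient descent on the hinge loss. The toy data, parameters, and the omission of a bias term (justified here because the toy classes are symmetric about the origin) are all ours, not the paper's setup:

```python
import numpy as np

# Minimal linear soft-margin SVM via Pegasos-style subgradient descent
# on the hinge loss (a stand-in sketch, not SVM-Light).

def train_linear_svm(X, y, lam=0.01, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    m, d = X.shape
    w, t = np.zeros(d), 0
    for _ in range(epochs):
        for i in rng.permutation(m):
            t += 1
            eta = 1.0 / (lam * t)          # decaying step size
            w *= (1 - eta * lam)           # regularization shrinkage
            if y[i] * (X[i] @ w) < 1:      # hinge-loss margin violated
                w += eta * y[i] * X[i]
    return w

# Two separable point clouds standing in for word-space samples of two models.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
w = train_linear_svm(X, y)
print((np.sign(X @ w) == y).mean())  # training accuracy on separable data
```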

Robustness
Our objective is to determine which of a set of proposed models most accurately describes a given real data set. After constructing a classifier enjoying low test loss, we classify our given real data set to find a 'best' model. However, the real network may lie outside the sampled distributions of all of the proposed models in word space. In this case we interpret our classification as a prediction of the least erroneous model.
We distinguish between the two cases by noting the following: consider building a classifier for apples and grapefruit which is then faced with an orange. The classifier may decide that, based on the feature 'size', the orange is an apple. However, based on the feature 'taste', the orange is classified as a grapefruit. That is, if we train our classifier on different subsets of words and always get the same prediction, the given real network must come closest to the predicted class based on any choice of features we might look at. We therefore define a robust classifier as one which consistently classifies a test datum in the same class, irrespective of the subset of features chosen. And we measure robustness as the ratio of the number of consistent predictions over the total number of subspace classifications.
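The robustness measure can be sketched as follows (an illustration with a toy classifier of our own; `classify` stands in for any classifier retrained or restricted to a given feature subset):

```python
import random

# Robustness: reclassify on random feature subsets and report the fraction
# of runs agreeing with the modal prediction.

def robustness(datum, all_features, classify, n_subsets=26, subset_size=500, seed=0):
    rng = random.Random(seed)
    votes = []
    for _ in range(n_subsets):
        feats = rng.sample(all_features, min(subset_size, len(all_features)))
        votes.append(classify(datum, feats))
    top = max(set(votes), key=votes.count)      # modal prediction
    return votes.count(top) / len(votes), top

# Toy classifier: predicts "Sole" only when the subset contains feature 13,
# otherwise "Kumar" -- mimicking a prediction that depends on feature choice.
def toy_classify(datum, feats):
    return "Sole" if 13 in feats else "Kumar"

score, winner = robustness(None, list(range(1000)), toy_classify, subset_size=5)
print(winner, round(score, 2))
```

A robustness score near 1 indicates that the assignment is insensitive to the choice of feature subset.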

Generative Classifiers
A generative model, in which one infers the distribution from which observations are drawn, allows a quantitative measure of model assignment: the probability of observing a given word value given the model. For a robust classifier, in which assignment is not sensitively dependent on the set of features chosen, the conditional probabilities should consistently be greatest for one class.
We perform density estimations with Gaussian kernels for each individual word, allowing calculation of p(C = c | X_j = x), the probability of being assigned to class c given a particular value x of word j. By comparing ratios of likelihood values among the different models, it is therefore possible, in the case of non-robust classifiers, to determine which features of an orange come closest to an apple and which features come closest to a grapefruit.
We compute the estimated density at a word value x_0 from the training data x_i (i = 1, ..., m) as

p(x_0, λ) = (1/m) Σ_{i=1}^{m} (2πλ²)^{-1/2} exp(−(x_0 − x_i)²/(2λ²)),

where we optimize the smoothing parameter λ by maximizing the probability of a hold-out set using 5-fold cross-validation. More precisely, we partition the training examples into 5 folds, where f_j is the set of indices associated with fold j (j = 1, ..., 5) and p_{−f_j}(·, λ) is the density estimated from all examples outside fold j. We then maximize

Q(λ) = Σ_{j=1}^{5} Σ_{i ∈ f_j} log p_{−f_j}(x_i, λ)

as a function of λ. In all cases we found that Q(λ) had a well-pronounced maximum as long as the data were not oversampled. Because words can only take integer values, too many training examples can lead to the situation that the data take exactly the same values with or without the hold-out set. In this case, maximizing Q(λ) corresponds to p(x, λ) having single peaks around the integer values, so that λ tends to zero. Therefore, we restrict the number of training examples to 4 N_v, where N_v is the number of unique integer values taken by the training set. With this restriction Q(λ) showed a well-pronounced maximum at a non-zero λ for all words and models.
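A sketch of this estimator and the cross-validated choice of λ (our own minimal implementation; the synthetic data and the λ grid are illustrative):

```python
import numpy as np

# Gaussian kernel density estimate with the smoothing parameter chosen by
# maximizing the 5-fold hold-out log-likelihood Q(lambda).

def kde(x0, data, lam):
    """p(x0, lam) = (1/m) sum_i N(x0; x_i, lam^2), vectorized over x0."""
    z = (np.atleast_1d(x0) - data[:, None]) / lam
    return np.exp(-0.5 * z ** 2).sum(axis=0) / (len(data) * lam * np.sqrt(2 * np.pi))

def choose_lambda(data, grid, n_folds=5, seed=0):
    """Maximize the hold-out log-likelihood over the given grid."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(data)), n_folds)

    def Q(lam):
        total = 0.0
        for f in folds:
            train = np.delete(data, f)  # density from examples outside the fold
            total += np.log(kde(data[f], train, lam) + 1e-300).sum()
        return total

    return max(grid, key=Q)

data = np.random.default_rng(2).normal(0.0, 1.0, 200)
grid = np.linspace(0.05, 2.0, 40)
lam = choose_lambda(data, grid)
print(lam)  # a non-degenerate bandwidth (Q peaks away from lambda -> 0)
```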

Word Ranking and Decision Trees
The simplest scheme to find new metrics which can distinguish among given models is to take a large number of training examples for a pair of network models and find the optimal split between both classes for every word separately.
We then test every one-dimensional classifier on a hold-out set and rank words by lowest test loss.
Below we show that this simple approach is already very successful.
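The single-word ranking just described can be sketched as follows (synthetic data and all names are ours; one feature is informative, the other pure noise):

```python
import numpy as np

# For each word (feature), find the one-dimensional threshold minimizing
# training error, then rank words by their loss on a hold-out set.

def best_split(x, y):
    """Best threshold and sign for labels y in {-1, +1} on a scalar feature x."""
    best = (1.0, x.min() - 1.0, 1)
    for c in np.concatenate([[x.min() - 1.0], np.sort(x)]):
        for s in (1, -1):
            err = np.mean(s * np.where(x > c, 1, -1) != y)
            if err < best[0]:
                best = (err, c, s)
    return best

def rank_words(X_tr, y_tr, X_ho, y_ho):
    """Return (hold-out loss, word index) pairs, best word first."""
    ranking = []
    for j in range(X_tr.shape[1]):
        _, c, s = best_split(X_tr[:, j], y_tr)
        ho_loss = np.mean(s * np.where(X_ho[:, j] > c, 1, -1) != y_ho)
        ranking.append((ho_loss, j))
    return sorted(ranking)

def make(m, rng):
    """Column 0 separates the classes; column 1 is pure noise."""
    y = rng.choice([-1, 1], m)
    X = np.column_stack([2.0 * y + rng.normal(0, 0.3, m), rng.normal(0, 1, m)])
    return X, y

rng = np.random.default_rng(3)
X_tr, y_tr = make(100, rng)
X_ho, y_ho = make(50, rng)
ranking = rank_words(X_tr, y_tr, X_ho, y_ho)
print(ranking[0][1])  # the informative word (column 0) ranks first
```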
Extending these results, one can ask how many words one needs to distinguish entire sets of different models, as estimated by building a multiclass decision tree and measuring its test loss for different numbers of terminal nodes. We use Matlab's Statistical Toolbox with a binary multiclass cost function to decide the splitting at each node. To avoid over-fitting the data, we prune trained trees and select the subtree with minimal test loss by 10-fold cross-validation.
Additionally, we propose a different approach using decision trees to find the most discriminative words. For every possible model pair (i, j) with 1 ≤ i < j ≤ N_mod, where N_mod is the total number of models, we build a binary decision tree, restricted so that at every level the same word has to be used for all the trees. At every level the best word is chosen according to the smallest average training loss over all binary trees. This scheme is not meant as a substitute for an ordinary multi-class decision tree. It merely represents another algorithm which may be useful for finding a fixed number of most discriminative words, for example for visualization of the distributions in a three-dimensional subspace.
In order to classify real data, we sample training examples of the given models with a fixed total number of nodes N_0, and allow a small interval I_M of 1-2% around the total number of edges M_0 of the considered real data set. All additional model parameters are sampled uniformly over a given range, which is specified by the model's creators in most cases and can otherwise be given reasonable bounds. A generated graph is accepted if its number of edges M falls into the specified interval I_M around M_0, thereby creating, for each model, a distribution of graphs which could describe the real data set with given N_0 and M_0.
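The sampling protocol can be sketched as follows (our illustration; an Erdos-Renyi generator stands in for any of the 17 models, and the toy N_0 and M_0 are ours):

```python
import random

# Rejection sampling: draw model parameters uniformly, generate a graph on
# n0 nodes, and accept it only if its edge count lands within the interval
# I_M (here 2%) around the target m0.

def erdos_renyi(n, p, rng):
    """Undirected G(n, p): return the edge list."""
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

def sample_matching_graphs(n0, m0, tol=0.02, n_samples=5, seed=0):
    rng = random.Random(seed)
    lo, hi = m0 * (1 - tol), m0 * (1 + tol)
    accepted = []
    while len(accepted) < n_samples:
        p = rng.uniform(0.0, 1.0)          # model parameter drawn uniformly
        edges = erdos_renyi(n0, p, rng)
        if lo <= len(edges) <= hi:         # accept only near the target m0
            accepted.append(edges)
    return accepted

# Toy target: N_0 = 60 nodes, M_0 = 200 edges, a 2% window.
graphs = sample_matching_graphs(n0=60, m0=200)
print([len(g) for g in graphs])  # all edge counts within [196, 204]
```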

Results
We apply our methods to three different real data sets: the E. coli genetic network [25] (directed), the S. cerevisiae protein interaction network [26] (undirected), and the C. elegans neural network [27] (directed).
Each node in E. coli's genetic network represents an operon coding for a putative transcription factor. An edge exists from operon i to operon j if operon i directly regulates j by binding to its operator site. This gives a very sparse adjacency matrix, with a total of 423 nodes and 519 edges.
The S. cerevisiae protein interaction network has 2114 nodes and 2203 undirected edges. Its sparseness is therefore comparable to that of E. coli's genetic network.
The C. elegans data set represents the organism's fully mapped neural network. Here, each node is a neuron, and each edge between two nodes represents a functional, directed connection between two neurons. The network consists of 306 neurons and 2359 edges, and is therefore about 7 times denser than the other two networks.
We create training data for undirected or directed models according to the real data set. All parameters other than the numbers of nodes and edges were drawn from a uniform distribution over their range. We sampled 1000 examples per model for each real data set, trained a pairwise multi-class SVM on 4/5 of the sampled data, and tested on the 1/5 hold-out set. We determine a prediction by counting votes for the different classes. All three classifiers show very low test loss, and two of them very high robustness. The average number of support vectors is relatively small. Indeed, some pairwise classifiers had as few as three support vectors, and more than half of them had zero test loss. All of this suggests the existence of a small subset of words which can distinguish among most of these models.
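The voting step can be sketched as follows (a toy pairwise rule of our own stands in for the trained pairwise SVMs):

```python
from collections import Counter
from itertools import combinations

# Each pairwise classifier casts one vote; the network is assigned to the
# model with the most votes.

def vote(datum, models, pairwise_classify):
    votes = Counter()
    for a, b in combinations(models, 2):
        votes[pairwise_classify(datum, a, b)] += 1
    winner = votes.most_common(1)[0][0]
    return winner, votes

# Toy pairwise rule: a fixed preference order stands in for trained SVMs.
order = ["Kumar", "MZ", "Sole", "Barabasi"]
rule = lambda d, a, b: a if order.index(a) < order.index(b) else b
winner, votes = vote(None, order, rule)
print(winner, votes["Kumar"])  # Kumar wins all 3 of its pairwise contests
```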
The predicted models Kumar, MZ, and Sole are based on very similar mechanisms of duplication and mutation. The model by Kumar et al was originally meant to explain various properties of the WWW. It is based on a duplication mechanism, where at every time step a prototype for the newly introduced node is chosen at random and connected to the prototype's neighbors or other randomly chosen nodes with probability p. It is therefore built on an imperfect copying mechanism which can also be interpreted as duplication-mutation, often evoked when considering genetic and protein-interaction networks. Sole is based on the same idea, but allows two free parameters: a probability controlling the number of edges copied and a probability controlling the number of random edges created. MZ is essentially a directed version of Sole. Moreover, we observe that none of the preferential attachment models came close to being a predicted model for any of our biological networks, even though they, and other preferential attachment models in the literature, were created to explain power-law degree distributions. The duplication-mutation scheme emerges as the more successful one.
Kumar and MZ were classified with almost perfect robustness against 500-dimensional subspace sampling. With 26 different choices of subspaces, E. coli was always classified as Kumar. We therefore assess with high confidence that Kumar and MZ come closest to modeling E. coli and C. elegans, respectively. In the case of Sole and the S. cerevisiae protein network, we observed fluctuations in the assignment to the best model. In 3 out of 22 trials S. cerevisiae was classified as Vazquez (duplication-mutation), and at other times as Barabasi (preferential attachment), Klemm (duplication-mutation), Kim (scale-free static), or Flammini (duplication-mutation), depending on the subset of words chosen. This clearly indicates that different features support different models. Therefore the confidence in classifying S. cerevisiae as Sole is limited.
The preference of individual words for individual models is investigated using the kernel density estimation of Section 2.3, by finding words which maximize p_i(x_0)/p_j(x_0) for two different models i and j at a word value x_0 of the real data set. Figure 4 shows the sampled distribution and estimated density for the word which most strongly disfavors the winning model relative to its follower. The opposite case is shown in Figure 3 for E. coli, where the word supports the winning model and disfavors its follower. More specifically, we are able to verify that most of the words of E. coli are most likely to be generated by Kumar. Indeed, out of 1897 words taking at least 2 integer values for all of the models (density estimation for a single value is not meaningful), the estimated density at the E. coli word value was highest for Kumar in 1297 cases, for Krapivsky-Bianconi in 535 cases, and for Krapivsky in only 65 cases.
Figure 3 shows the distributions for the word nnzDAUT AUT AUAUT A, which had a maximal ratio of the probability density of Kumar over that of Krapivsky-Bianconi at the E. coli position. E. coli in fact has a zero word count, meaning that none of the associated subgraphs shown in Figure 5 actually occur in E. coli. Four of those subgraphs have a mutual edge, which is absent in the E. coli network and also impossible to generate in a Kumar graph. Krapivsky-Bianconi graphs allow for mutual edges, which could be one reason for a higher count in this word. Another source might be that the fifth subgraph, showing a higher-order feedforward loop, is more probable to be generated in a Krapivsky-Bianconi graph than in a Kumar graph. This subgraph also has to be absent in the E. coli network since it gives a zero word value, showing that the Kumar and Krapivsky-Bianconi models both have a tendency to give rise to a topological structure that does not exist in E. coli. This analysis gives an example of how these findings are useful in refining network models and in deepening our understanding of real networks. For further discussion refer to our website [1].

The SVM results suggest that one may need only a small subset of words to separate most of the models with almost zero test loss. The simplest approach to finding such a subset is to look at every word for a given pair of models and compute the best split, then rank words by lowest training loss. We find that among the most discriminative words some occur very often, such as nnzAA or nnzAT A, which count the pairs of edges attached to the same vertex and either pointing in the same direction or pointing away from each other, respectively. Other frequent words include nnzDAA, nnzDAT A, and sumUAT A.
A striking feature of this single-word analysis is that the test losses associated with simple one-dimensional classifiers are comparable to the SVM test loss, confirming that most pairs of models are separable with only a few words. To consider all of the models at once, and not just in pairs, we apply both tree algorithms described in Section 2.4 to all three data sets. Figures 6 and 7 show scatter plots of the training data using the three most discriminative words. Taking those three words, the average training loss over all pairs of models is 1.7%, 0.8%, and 0.2% for the E. coli, C. elegans, and S. cerevisiae training data, respectively.

Conclusions
It is not surprising that models with different mechanisms are distinguishable; however, the fact that these models have not been separated in a systematic manner to date points to the inadequacy of current metrics popular in the network theory community. We have shown that a systematic enumeration of countably infinite features of graphs can be successfully used to find new metrics which are highly efficient in separating various kinds of models. Furthermore, they allow us to define a high-dimensional input space for classification algorithms which, for the first time, are able to decide which of a given set of models most accurately describes three exemplary biological networks.

Bianconi
Growth model with a probability of attaching to an existing node p ∼ η_i k_i, where η_i is a fitness parameter. Here we use a random fitness landscape, where η is drawn from a uniform distribution on (0, 1).

Callaway

Growth model adding one node and several edges between randomly chosen existing nodes (not necessarily the newly introduced one) at every time step. [19]

Kim
A "static" model giving rise to a scale-free network. Edges are created between nodes chosen with a probability p ∼ i^{−α}, where i is the label of the node and α is a constant parameter in (0, 1).
[12], [8], [13]

Erdos

Undirected random graph. [15]

Flammini

Growing graph based on duplication, modeling protein interactions. At every time step a prototype is chosen randomly. With probability q, edges of the prototype are copied. With probability p, an edge to the prototype is created. [17]

Klemm
Growing graph using sets of active and inactive nodes to model citation networks. [30], [31]

Small World

Interpolation between a regular lattice and a random graph. We replace edges in the regular lattice by random ones. [14]

Barabasi
Growing graph with a probability of attaching to an existing node p ∼ k_i ("Bianconi" with η_i = 1 for all i). [7]

Sole
Growing graph initialized with a 5-ring substrate. At every time step a new node is added and a prototype is chosen at random. The prototype's edges are copied with probability p. Furthermore, random nodes are connected to the newly introduced node with probability q/N, where p and q are given parameters in (0, 1) and N is the total number of nodes at the considered time step. [18]

Table 2: Undirected Network Models. k_i is the degree of the i-th node.

Kim 2

Directed version of "Kim": a "static" model giving rise to a scale-free network. Edges are created between nodes chosen with probabilities p ∼ i^{−α_in} and q ∼ j^{−α_out}, where α_in and α_out are fixed parameters chosen in (0, 1) and i (j) is the label of the i-th (j-th) node. [8]

Erdos

Directed random graph. [15]

Grindrod
Static graph. Edges are created between nodes i, j with probability p = b λ^{|i−j|}, where b and λ are fixed parameters.
[22], [21]

Krapivsky

Growing graph modeling the WWW. At every time step either a new edge, or a new node with an edge, is created. Nodes to connect are chosen with probabilities p ∼ k_{i,in} + a and q ∼ k_{j,out} + b, based on preferential attachment with fixed real-valued offsets a and b. [20]

Krapivsky-Bianconi
Extension of "Krapivsky" using a random fitness landscape multiplying the probabilities for preferential attachment. It is the directed analog of "Bianconi", which in turn is an extension of "Barabasi". (original)

Kumar
Growing graph based on a copying mechanism to model the WWW. At every time step a prototype P is chosen at random. Then, for every edge connected to P, with probability p an edge between the newly introduced node and P's neighbor is created, and with probability (1 − p) an edge between the new node and a randomly chosen other node is created. [23]

Middendorf-Ziv (MZ)

Growing directed graph modeling biological network dynamics. A prototype is chosen at random and duplicated. The prototype or progenitor node has edges pruned with probability β and edges added with probability α ≪ β. Based loosely on the undirected protein network model of Sole et al. [18]. (original)
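The copying mechanism of the Kumar model can be sketched as follows (the seed graph, its size, and the parameter values are our choices; note that pure copying preserves the prototype's out-degree, so "mutation" enters only through the random rewiring):

```python
import random

# Kumar-style copying model: each new node picks a random prototype and, for
# each of the prototype's out-edges, copies it with probability p or rewires
# it to a uniformly random existing node with probability 1 - p.

def kumar_graph(n, p, seed=0, n_seed=3):
    rng = random.Random(seed)
    # Seed: a small complete directed graph (every seed node links to the rest).
    out = {i: [j for j in range(n_seed) if j != i] for i in range(n_seed)}
    for new in range(n_seed, n):
        proto = rng.randrange(new)                   # pick a prototype node
        out[new] = []
        for neighbor in out[proto]:
            if rng.random() < p:
                out[new].append(neighbor)            # copy the prototype's edge
            else:
                out[new].append(rng.randrange(new))  # rewire to a random node
    return out

g = kumar_graph(200, 0.7)
n_edges = sum(len(v) for v in g.values())
print(len(g), n_edges)  # every node inherits the seed out-degree n_seed - 1
```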

Vazquez
Growth model based on a recursive 'copying' mechanism, continuing to 2nd nearest neighbors, 3rd nearest neighbors, etc. The authors call it a 'random walk' mechanism. [24]

Table 3: Directed Network Models. k_{i,in} (k_{i,out}) is the in- (out-)degree of the i-th node.

Figure 2 :
Figure 2: The elements of the matrix AT A count these two walks. T A corresponds to one step "up" the graph, the following A to one step "down". The last node can be either the same as the starting node, as in the first subgraph (accounted for by the diagonal part DAT A), or a different node, as in the second subgraph (accounted for by the non-diagonal part UAT A).

Figure 5 :
Figure 5: Subgraphs associated with the word nnzDAUT AUT AUAUT A. The word has a non-zero value iff at least one of these subgraphs occurs in the network.

Figure 6 :
Figure 6: E. coli and seven directed models. The distributions in word space are shown for a projection onto the subspace of the three most discriminative words. Subgraphs associated with every word are also shown.

Figure 7 :
Figure 7: S. cerevisiae and seven undirected models. The distributions in word space are shown for a projection onto the subspace of the three most discriminative words. Subgraphs associated with every word are also shown.

Table 9 :
SVM results for C. elegans. f(x) = w · x_{C. elegans} + b; L_tst is the test loss, L_tr the training loss, and N_sv the number of support vectors. Results are shown for SVMs trained between every pair of models. If f(x) > 0, C. elegans is classified as the row header; if f(x) < 0, as the column header.

Table 10 :
Ranking of words found by binary pairwise trees for the S. cerevisiae training data. L_tr for a word ranked n is the average training loss over all pairwise trees, where every tree has depth n and splits the data using words 1 to n in the given order.

Table 1 :
Results of the multi-class SVM. L_tr is the empirical training loss averaged over all pairwise classifiers; L_tst is the averaged empirical test loss. N_sv is the average number of support vectors. The winner is the model that received the highest number of votes when classifying the given real data set.

Table 4 :
SVM results for E. coli. f(x) = w · x_{E. coli} + b; L_tst is the test loss, L_tr the training loss, and N_sv the number of support vectors. Results are shown for SVMs trained between every pair of models. If f(x) > 0, E. coli is classified as the row header; if f(x) < 0, as the column header.

Table 5 :
Most discriminative words for the E. coli training data, based on lowest test loss by one-dimensional splitting for every pair of models. L_tst is the test loss and L_tr the training loss.

Table 6 :
Ranking of words found by binary pairwise trees for the E. coli training data. L_tr for a word ranked n is the average training loss over all pairwise trees, where every tree has depth n and splits the data using words 1 to n in the given order.

Table 7 :
Most discriminative words for the C. elegans training data, based on lowest test loss by one-dimensional splitting for every pair of models. L_tst is the test loss and L_tr the training loss.

Table 8 :
Ranking of words found by binary pairwise trees for the C. elegans training data. L_tr for a word ranked n is the average training loss over all pairwise trees, where every tree has depth n and splits the data using words 1 to n in the given order.