Open Access

Exact score distribution computation for ontological similarity searches

BMC Bioinformatics 2011, 12:441

DOI: 10.1186/1471-2105-12-441

Received: 20 May 2011

Accepted: 12 November 2011

Published: 12 November 2011

Abstract

Background

Semantic similarity searches in ontologies are an important component of many bioinformatic algorithms, e.g., finding functionally related proteins with the Gene Ontology or phenotypically similar diseases with the Human Phenotype Ontology (HPO). We have recently shown that the performance of semantic similarity searches can be improved by ranking results according to the probability of obtaining a given score at random rather than by the scores themselves. However, to date, there are no algorithms for computing the exact distribution of semantic similarity scores, which is necessary for computing the exact P-value of a given score.

Results

In this paper we consider the exact computation of score distributions for similarity searches in ontologies, and introduce a simple null hypothesis which can be used to compute a P-value for the statistical significance of similarity scores. We concentrate on measures based on Resnik's definition of ontological similarity. A new algorithm is proposed that collapses subgraphs of the ontology graph and thereby allows fast score distribution computation. The new algorithm is several orders of magnitude faster than the naive approach, as we demonstrate by computing score distributions for similarity searches in the HPO. It is shown that exact P-value calculation improves clinical diagnosis using the HPO compared to approaches based on sampling.

Conclusions

The new algorithm enables for the first time exact P-value calculation via exact score distribution computation for ontology similarity searches. The approach is applicable to any ontology for which the annotation-propagation rule holds and can improve any bioinformatic method that makes use of only the raw similarity scores. The algorithm was implemented in Java, supports any ontology in OBO format, and is available for non-commercial and academic usage under: https://compbio.charite.de/svn/hpo/trunk/src/tools/significance/

Background

Ontologies are knowledge representations using controlled vocabularies that are designed to help knowledge sharing and computer reasoning [1]. Many ontologies can be represented by directed acyclic graphs (DAGs), whereby the nodes of the DAG, which are also called terms of the ontology, are assigned to items in the domain and the edges between the nodes represent semantic relations. Ontologies are designed such that terms closer to the root are more general than their descendant terms. For the ontologies we consider in this paper, the annotation-propagation rule applies, that is, items are annotated to the most specific term possible but are assumed to be implicitly annotated to all ancestors of that term.

Examples of ontologies are the Foundational Model of Anatomy (FMA) ontology [2], the Sequence Ontology [3], the Cell Ontology [4], and the Chemical Entities of Biological Interest (ChEBI) ontology [5], which describe objects from the domains of anatomy, biological sequences, cells, and biologically relevant chemicals. In contrast, other ontologies are used to describe the attributes of the items of a domain. For instance, GO terms are used to annotate genes or proteins by describing their biological functions or characteristics. The Mammalian Phenotype Ontology (MPO) [6] and the Human Phenotype Ontology (HPO) [7] describe the attributes of mammalian and human diseases. In this case, the domain object is a disease such as Marfan syndrome, whose attributes are the clinical features of the disease, such as arachnodactyly and aortic dilatation. In other words, terms of phenotype ontologies such as the MPO and HPO can be conceived of as describing abnormal qualities (e.g. hypoplastic) of anatomical or biochemical entities [8].

Semantic similarity between any two terms within an ontology is based on the annotations to items in the domain and on the structure of the DAG. Different semantic similarity measures have been proposed [9, 10] and the measures have been used in many different applications in computational biology. For example, different studies show that semantic similarity between proteins annotated with GO terms correlates with sequence similarity [11-13]. Other studies investigated the correlation of gene coexpression with semantic similarity using GO terms [14, 15]. In addition, semantic similarity measures for GO terms have been used to predict protein subnuclear localization [16].

In another application we have implemented a semantic similarity search algorithm in the setting of medical diagnosis. A user enters HPO terms describing the clinical abnormalities observed in a patient and a ranked list of the best matching differential diagnoses is returned [17]. This kind of search can be performed using raw semantic similarity scores calculated with any of the semantic similarity measures [18-21, 12, 22]. However, among these different measures the node-based pairwise similarity measure defined by Resnik turned out to have the best performance in our previous study [17] and is therefore considered in this work.

The search is based on q attributes (HPO terms) that describe the phenotypic abnormalities seen in a patient for whom a diagnosis is being sought. For each entry of a database containing diseases annotated with HPO terms corresponding to their characteristic signs and symptoms, the best match between each of the q query terms and one of the terms annotating the disease is found, and the average of these semantic similarity scores is determined. The diseases are then ranked according to these scores and returned to the user as suggestions for the differential diagnosis.

The distribution of scores that a domain object can achieve varies according to the number and specificity of the ontology terms used to annotate it. In a recent study by Wang et al. [23], it was discovered that many of the commonly used semantic similarity measures, including the ones used in this work, are biased towards domain objects that have more annotations. The effect was termed annotation bias. Applications that use the scores alone therefore tend to preferentially select items with higher numbers of annotations, which may lead to wrong conclusions [23].

Previously, in order to compensate for the fact that different domain objects may have different numbers of annotations, we developed a statistical model that assigns P-values to the resulting similarity scores on the basis of the probability of a random query obtaining at least as high a score. Using extensive simulations, we showed that this approach outperformed searches based on the semantic similarity scores alone [17]. A disadvantage of that procedure was that extensive simulations using randomized queries were necessary in order to estimate the true distribution of the semantic similarity scores, which is needed in order to calculate a P-value for any given similarity score.

In this paper, we describe an algorithm to collapse a DAG representing an ontology into connected components of nodes corresponding to terms that make identical contributions to the semantic similarity score. The new algorithm reduces the amount of computational time needed to calculate the score distribution (and thereby P-values) by many orders of magnitude compared to a naive calculation. A preliminary description of the algorithm was presented in a conference paper [24]. Here, we validate the algorithm by comparing it to sampling-based approaches and show using simulations that the application of the exact P-value outperforms sampling-based approaches in the context of clinical diagnostics with the HPO.

Methods

Notation

We consider an ontology O composed of a set of terms that are linked via is-a or part-of relationships. The ontology O can then be represented by a DAG G = (V, E), where every term is a node in V and every link is a directed edge in E. A directed edge going from node n1 to n2 is denoted e1,2, and we refer to n2 as the parent of n1. An item i is defined as an abstract entity to which terms of the ontology are annotated. Let Anc(n) be defined as the ancestors of n, i.e., the nodes that are found on all paths from node n to the root of G, including n. We note that the annotation-propagation rule states that if an item is explicitly annotated to a term n, it is implicitly annotated to Anc(n). In order to describe the implicit annotations we define T_IMPL. Let T be the set of terms that have been explicitly annotated to item i; then T_IMPL = ∪_{n ∈ T} Anc(n), namely all terms that are annotated to item i and all their ancestors in G. Let the set of common ancestors of two nodes n1 and n2 be defined as ComAnc(n1, n2) = Anc(n1) ∩ Anc(n2). Let Desc(n) be the set of descendant nodes of n, again including n. Note that in this notation descendant nodes are considered only once, even if there are multiple paths leading to them.
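The ancestor closure Anc(n) and the implicit annotation set T_IMPL can be sketched as follows (a minimal illustration on a hypothetical chain-shaped DAG with made-up node names, not the paper's Java implementation):

```python
# Sketch: ancestor closure and implicit annotations on a toy DAG.
# Edges point from child to parent; "A" is the root. Names are hypothetical.

parents = {"A": [], "B": ["A"], "C": ["B"], "D": ["C"]}

def anc(n):
    """Anc(n): all nodes on paths from n to the root, including n itself."""
    seen = {n}
    stack = [n]
    while stack:
        for p in parents[stack.pop()]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def t_impl(T):
    """T_IMPL: the union of Anc(n) over all explicitly annotated terms n."""
    result = set()
    for n in T:
        result |= anc(n)
    return result

print(anc("C"))            # {'A', 'B', 'C'}
print(t_impl({"B", "C"}))  # {'A', 'B', 'C'}
```

Because Anc(n) includes n itself, every explicitly annotated term is also a member of T_IMPL.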

Multisets

In what follows we will also need to compute the similarity between a multiset and a set of terms. The concept of multisets [25] is a generalization of the concept of sets. In contrast to sets, in which elements can only have a single membership, the elements of multisets may appear more than once.

Formally, a multiset M is a set of pairs, M = {(s1, m1), ..., (sd, md)}, in which the s_i ∈ U = {s1, ..., sd} are the elements of the underlying set U. Furthermore, m_i defines the multiplicity of s_i in the multiset. The sum of the multiplicities of M is called the multiset cardinality of M, denoted |M|. Only multiplicities in the domain of positive integers are considered, i.e., m_i ∈ ℤ⁺. We define a multi subset relation between multiset N and multiset M, denoted N ⊆ M, as a generalization of the subset relation between two sets:

N ⊆ M ⇔ ∀(s, n) ∈ N : ∃m ≥ n : (s, m) ∈ M.

The multiset coefficient M(n, q) = C(n + q - 1, q) denotes the number of distinct multisets of cardinality q, with elements taken from a finite set of cardinality n. It describes how many ways there are to choose q elements from a set of n elements if repetitions are allowed.
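As a quick check of this definition, the multiset coefficient reduces to a single binomial coefficient (a sketch; the function name is ours):

```python
# Sketch: the multiset coefficient M(n, q) = C(n + q - 1, q), i.e. the number
# of distinct multisets of cardinality q drawn from a set of n elements.
from math import comb

def multiset_coefficient(n, q):
    return comb(n + q - 1, q)

# Choosing q = 2 elements from a 3-element set {a, b, c} with repetition:
# {a,a}, {a,b}, {a,c}, {b,b}, {b,c}, {c,c} -> 6 multisets.
print(multiset_coefficient(3, 2))  # 6
```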

Similarity measures

We will concentrate in this work on the class of similarity measures that are based on the information content (IC) of a node:
IC(n) = -log p(n),   (1)
where p(n) denotes the frequency among all items in the domain of annotations to n, which implicitly includes all annotations of descendants of n due to the annotation-propagation rule. The information content is a nondecreasing function on the nodes of G as we descend in the hierarchy and is therefore monotonic. The similarity between two nodes was defined by Resnik as the maximum information content among all common ancestors [19]:
sim(n1, n2) = max{IC(a) | a ∈ ComAnc(n1, n2)}.   (2)

Equation (2) provides a definition for the similarity between two terms. Other popular pairwise measures that additionally incorporate the IC of the query terms, for example [20, 21], are not considered here (see Discussion).
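Eqs. (1) and (2) can be sketched on a toy chain-shaped DAG (the annotation frequencies p(n) below are illustrative assumptions, not values from the paper; log base 2 is chosen only to keep the numbers round):

```python
# Sketch of Eqs. (1) and (2) on a toy chain A -> B -> C -> D.
from math import log2

parents = {"A": [], "B": ["A"], "C": ["B"], "D": ["C"]}
p = {"A": 1.0, "B": 0.25, "C": 0.0625, "D": 0.03125}  # assumed frequencies

def anc(n):
    seen, stack = {n}, [n]
    while stack:
        for q in parents[stack.pop()]:
            if q not in seen:
                seen.add(q)
                stack.append(q)
    return seen

def ic(n):
    return -log2(p[n])  # Eq. (1); base 2 for round illustrative values

def sim(n1, n2):
    common = anc(n1) & anc(n2)          # ComAnc(n1, n2)
    return max(ic(a) for a in common)   # Eq. (2): Resnik similarity

print(sim("D", "C"))  # IC(C) = 4.0
print(sim("B", "D"))  # IC(B) = 2.0
```

Note how sim(B, D) is IC(B): since B is itself an ancestor of D, B is the most informative common ancestor.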

One can extend this concept to define a similarity between two domain objects that are each annotated by multiple ontology terms by taking the average of the best pairwise similarities for all terms [11]:
sim_avg(T1, T2) = (1/|T1|) Σ_{n1 ∈ T1} max_{n2 ∈ T2} sim(n1, n2).   (3)

Note that Eq. (3) is not symmetric [12], i.e., it is not necessarily true that sim_avg(T1, T2) = sim_avg(T2, T1). We point out that in other works average often refers to a symmetric definition. Using the nomenclature of Pesquita et al. [9], Eq. (3) may be referred to as the asymmetric best-match average, here average for short.
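The asymmetry is easy to see in a sketch: even with a symmetric pairwise measure, averaging over the query side only makes the two directions differ (the pairwise values below are a hypothetical lookup table standing in for Resnik's sim):

```python
# Sketch of Eq. (3): the asymmetric best-match average.
# A symmetric, hypothetical pairwise similarity table:
pair = {frozenset(("x", "y")): 3.0,
        frozenset(("x", "z")): 1.0,
        frozenset(("y", "z")): 0.0}

def sim(n1, n2):
    return pair[frozenset((n1, n2))]

def sim_avg(T1, T2):
    # Eq. (3): for each query term, take its best match in T2, then average.
    return sum(max(sim(n1, n2) for n2 in T2) for n1 in T1) / len(T1)

# Even though sim itself is symmetric, sim_avg is not:
print(sim_avg(["x"], ["y", "z"]))  # 3.0  (x's best match is y)
print(sim_avg(["y", "z"], ["x"]))  # 2.0  (average of 3.0 and 1.0)
```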

Instead of taking the average, the maximum similarity between a term annotating one domain object and a term annotating the other can be used to define the following symmetric measure:
sim_max(T1, T2) = max_{n1 ∈ T1, n2 ∈ T2} sim(n1, n2).   (4)
Equation (4) can be considered a simplified case of Eq. (3), because instead of averaging over the best pairwise score for each n1 ∈ T1 compared to n2 ∈ T2, only the highest similarity over all possible pairs is retained. Therefore, we will show the algorithm applied to Eq. (3) and sketch the changes for Eq. (4) later. One can use Eq. (3) or (4) to define a similarity between a set of query terms Q, i.e., T1 = Q, and an object in a database. Then, Q can represent any set of terms from the ontology O, whereas T2 refers to database objects (such as diseases annotated to HPO terms). As we are using this setup for the similarity queries we will omit the index and refer to T2 as the target set T. See Figure 1 for an example computation of sim_avg.
Figure 1

Example computation of sim_avg. Computation of sim_avg on a DAG with six nodes. The target set is T = {B, C} (black nodes) and the query set is Q = {D, F} (nodes with horizontal lines). The IC value of a node is shown in a small, dashed, attached oval. The most similar terms for D and F are B and C, respectively, because IC(B) > IC(A) and IC(C) > IC(B). Therefore, sim_avg(Q, T) = (sim(D, B) + sim(F, C))/2 = (IC(B) + IC(C))/2 = (2 + 2.5)/2. Note that only terms in T_IMPL = {A, B, C} were considered in the calculation.

Because we later make use of scores derived at the maximization step in Eq. (3) we define:
sim(n1, T) = max_{n2 ∈ T} sim(n1, n2),   (5)

to be the target set similarity score of n1 against a target set T. To avoid confusion we will denote scores of the score distribution of sim_avg by S and target set similarity scores sim(n, T) by s.

Definition of statistical significance for semantic similarity scores

In this paper we will present methods for analytically calculating the probability distribution of similarity scores for comparisons of a query set Q with q terms against an item that has been annotated with a target set T of nodes. For example, if a clinician chooses a set Q of HPO terms describing abnormalities seen in a patient and uses Eq. (3) to calculate an observed score S_obs to a disease that has been annotated with terms of the HPO, we would like to know the probability of a randomly chosen set of q nodes achieving a score of S_obs or greater. In this case, each disease in the database represents a target set (for instance, there are currently over 5000 diseases in the clinical database used by the Phenomizer at the HPO Web site).

In other words, our methods will be used to calculate a P-value for the null hypothesis that a similarity score of S_obs or greater for a set of q query terms Q and a target set T has been observed by chance. We take all queries to be equally likely and define the P-value to be the proportion of queries having a score of at least S_obs:

P_{q,T}(S ≥ S_obs) = |{Q | sim(Q, T) ≥ S_obs, Q = {n1, ..., nq} ⊆ V}| / C(|V|, q).   (6)

In this definition all nodes of V can be part of a query, even if one node is an ancestor of another. Note that the number of distinct scores in the complete score distribution of P_{q,T} depends on q, T, and the similarity measure.
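Eq. (6) can be computed literally by enumerating all C(|V|, q) query sets, which is feasible only for tiny ontologies. A sketch on a hypothetical four-node example (the target set similarities 0, 2, 4, 4 are assumed values):

```python
# Sketch of Eq. (6): the exact P-value as the fraction of all C(|V|, q)
# query sets scoring at least S_obs. Brute force; tiny examples only.
from itertools import combinations
from math import comb

# Assumed target set similarities s = sim(n, T) for a four-node ontology:
target_sim = {"A": 0, "B": 2, "C": 4, "D": 4}

def sim_avg(Q):
    return sum(target_sim[n] for n in Q) / len(Q)

def exact_p_value(q, s_obs):
    V = sorted(target_sim)
    hits = sum(1 for Q in combinations(V, q) if sim_avg(Q) >= s_obs)
    return hits / comb(len(V), q)

print(exact_p_value(2, 2))  # 5/6: of the 6 pairs, only {A, B} scores below 2
```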

Simulation of patients for clinical diagnosis

Similar to our previous work [17], we use simulations to compare the different approaches. Using the 1701 OMIM diseases currently annotated with 2-5 HPO terms in the Phenotypic abnormality subontology, we generated artificial queries by (i) taking all terms annotated to the disease as the query, with no noise or imprecision (NONE); (ii) randomly exchanging one term if q = 3 or q = 4 and two terms if q = 5 (NOISE); (iii) with probability 0.5, exchanging a term with one of its parent terms if possible (IMPRECISION); or (iv) applying first IMPRECISION and then NOISE.

For each of the 1701 OMIM diseases we generate the query as described above and rank all diseases using one of the measures (Score; P-value sampled 10^3, 10^4, or 10^5 times; or exact P-value). We then calculate the rank of the disease from which the query was generated. In case of ties we take the average rank (e.g., if four diseases rank first with the same value, all four get rank 2.5). Note that for the rankings using P-values (sampled or exact) we ranked first by P-value and then by score.
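The tie handling described above can be sketched as follows (a minimal illustration of average ranks, not the evaluation code used in the paper):

```python
# Sketch: tied items receive the average of the ranks they span,
# so four items tied for first each get rank (1+2+3+4)/4 = 2.5.

def average_ranks(values):
    """Rank values in descending order; ties share their average rank."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend the tie group while the value stays the same.
        while j < len(order) and values[order[j]] == values[order[i]]:
            j += 1
        avg = (i + 1 + j) / 2  # average of ranks i+1 .. j
        for k in range(i, j):
            ranks[order[k]] = avg
        i = j
    return ranks

print(average_ranks([9, 9, 9, 9, 5]))  # [2.5, 2.5, 2.5, 2.5, 5.0]
```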

Results

A naive algorithm: exhaustive computation of score distributions

We represent the score distribution as SD = {(S1, F1), ..., (Sk, Fk)}. Every pair (S_i, F_i) ∈ SD contains a unique score S_i and a count F_i that defines its frequency within the distribution.

A naive approach to calculating the complete score distribution is to determine the similarity of each possible term combination Q ⊆ V of size q with the fixed target set T. The complete procedure is outlined in Algorithm 1. It requires two basic operations that are applied to the set SD. The first operation, getScorePair, returns the pair that represents the given score, or nil if no such entry exists. The second operation, putScorePair, puts the given pair into the set SD, overwriting any previously added pair with the same score. For further analyses we assume that both operations have constant running time.

   Input: V, q, T
   Output: Score distribution SD = {(S1, F1), ..., (Sk, Fk)}
1  SD ← ∅
2  foreach Q = {n1, n2, ..., nq} ⊆ V do
3     S_new ← sim_avg(Q, T)
4     (S, F) ← getScorePair(SD, S_new)
5     if (S, F) ≠ nil then
6        putScorePair(SD, (S_new, F + 1))
7     else
8        putScorePair(SD, (S_new, 1))
9  return SD

Algorithm 1: Naive score distribution computation for sim_avg

As the number of possible term combinations is C(|V|, q) and each similarity computation (line 3) costs O(q·|T|) operations for Eq. (3), Algorithm 1 runs in O(C(|V|, q)·q·|T|) time. A typical size of |V| = 10000, as for the HPO, demonstrates that the naive approach is impractical for values q > 2. The naive approach neglects the relationships of the nodes in G and T. We will exploit these relationships in the next section and group nodes in G according to their contribution to the score distribution computation.
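Algorithm 1 can be sketched in Python (a minimal illustration, not the paper's Java implementation; a counting dict plays the role of SD, so getScorePair/putScorePair are both O(1), and the target set similarities are the assumed values 0, 2, 4, 4 of a four-node example):

```python
# Sketch of Algorithm 1: naive score distribution over all C(|V|, q) queries.
from itertools import combinations
from collections import Counter

def naive_score_distribution(V, q, sim_avg):
    SD = Counter()                        # line 1: SD <- empty
    for Q in combinations(sorted(V), q):  # line 2: every query set of size q
        SD[sim_avg(Q)] += 1               # lines 3-8: tally the score
    return dict(SD)                       # line 9

# Hypothetical four-node ontology with target set similarities 0, 2, 4, 4:
target_sim = {"A": 0, "B": 2, "C": 4, "D": 4}
sim_avg = lambda Q: sum(target_sim[n] for n in Q) / len(Q)

print(naive_score_distribution(target_sim, 2, sim_avg))
# {1.0: 1, 2.0: 2, 3.0: 2, 4.0: 1}
```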

A faster algorithm: exploiting redundant computations

Recall that all terms from the target set T are contained in T_IMPL. We will now prove that only the IC values of nodes in T_IMPL are relevant for the score distribution computation.

Lemma 1. Given a DAG G = (V, E) and a target set T = {n1, ..., nk} ⊆ V, all scores in the score distribution of the similarity measure of Eq. (3) are derived from IC values of the nodes in T_IMPL.

Proof. Computing the complete score distribution involves repeatedly evaluating sim_avg(Q, T) in Alg. 1 using Eq. (3). The first step in the computation of Eq. (3) is to maximize sim(n1, n2) for each node n1 ∈ Q compared to the nodes n2 ∈ T. The maximum IC value for sim(n1, n2) must be taken from a node in T_IMPL, because by definition Anc(n2) ⊆ T_IMPL.

Lemma 1 implies that the computations in the naive algorithm, which enumerates all nodes in V, are highly redundant, as the size of T_IMPL is an upper bound on the number of different target set similarities encountered during score distribution computation. Figure 2 shows the contribution of all possible queries of size q = 2 for an example ontology. For instance, whenever node C or D is part of a query, the target set similarity score obtained from Eq. (5) is IC(C) = 4, highlighted in red in Figure 2, and used for computing sim_avg(Q, T).
Figure 2

Redundancy in naive score distribution computation with sim_avg for queries of size two. Computation of the score distribution for sim_avg on a DAG G with four nodes for all possible queries of size two. The target set T = {A, B, C} is shown as black nodes. Note that T = T_IMPL here. The IC value of each node is shown in a small dashed oval. All computations of Eq. (5) that result in the same target set similarity score are colored in blue, green, and red for the target set similarity scores 0, 2, and 4, respectively.

Therefore, instead of enumerating the nodes in V, we will first group nodes that have the same target set similarity score s in the maximization step of Eq. (3). Denote by N_s all nodes n ∈ V that have the same target set similarity score s for a given target set T:

N_s = {n | n ∈ V, sim(n, T) = s}.   (7)

Example 1. It can be seen in Figure 2 that N_0 = {A}, N_2 = {B}, and N_4 = {C, D} for G with T = {A, B, C}.

Observe that two nodes n_i, n_j ∈ T_IMPL with n_i ≠ n_j belong to the same set N_s if IC(n_i) = IC(n_j). This observation will be essential when we devise an algorithm for computing N_s.
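Grouping nodes into the components N_s of Eq. (7) is a one-pass bucketing step, sketched below with the target set similarities of the Figure 2 example (all the faster algorithm will need are the component sizes |N_s|):

```python
# Sketch of Eq. (7): bucket nodes by their target set similarity s = sim(n, T).
from collections import defaultdict

target_sim = {"A": 0, "B": 2, "C": 4, "D": 4}  # s per node, as in Figure 2

def group_by_score(target_sim):
    N = defaultdict(set)
    for n, s in target_sim.items():
        N[s].add(n)  # nodes with equal s collapse into one component N_s
    return dict(N)

print(group_by_score(target_sim))
# {0: {'A'}, 2: {'B'}, 4: {'C', 'D'}}
```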

The intuition behind the fast computation is that instead of selecting combinations of all nodes of V and constructing the score distribution one by one, we focus on combinations of different target set similarity scores s and use their frequencies |N_s| to avoid redundant enumeration. For any T the set U of distinct target set similarity scores is defined as:

U = {IC(n) | n ∈ T_IMPL}.   (8)
Instead of considering sets of nodes in V we will now consider multisets M q of target set similarity scores in U https://static-content.springer.com/image/art%3A10.1186%2F1471-2105-12-441/MediaObjects/12859_2011_Article_4959_IEq39_HTML.gif, where |M q | = q. In order to do that we define as M https://static-content.springer.com/image/art%3A10.1186%2F1471-2105-12-441/MediaObjects/12859_2011_Article_4959_IEq40_HTML.gif the multiset induced by all target similarity scores s and their corresponding multiplicities m, that is,
M = { (s_1, m_1), ..., (s_d, m_d) | s_i ∈ U, m_i = |N_{s_i}| }.
(9)
Then M_all^q denotes the set of all multisubsets of M that have multiset cardinality q, i.e.,
M_all^q = { M_q | M_q ⊆ M, |M_q| = q }.
(10)

The value of sim_avg computed for a particular M_q is the same for all query sets of nodes that correspond to M_q (see Figure 2, Example 2). Therefore, if we can calculate the number of such sets as well as the score corresponding to each multiset M_q of target set similarity scores in U, we can determine the distribution of similarity scores sim_avg over all possible queries of any given size q.

Denote the similarity for a multiset M_q as:
sim_avg(M_q) = (1/q) Σ_{(s,m) ∈ M_q} m · s.
(11)
The number of ways of drawing m nodes from a component of size |N_s| is given by the binomial coefficient C(|N_s|, m). The total number of combinations is then the product of these binomial coefficients, denoted the multiset frequency of a multiset M_q:
freq(M_q) = Π_{(s,m) ∈ M_q} C(|N_s|, m).
(12)

Example 2. In total there are 2 query sets with sim_avg(Q, T) = 2 for the DAG in Figure 2, namely {A, C} and {A, D}. After preprocessing, we obtain N_0 = {A}, N_2 = {B}, and N_4 = {C, D} (Example 1). Alg. 2 enumerates all valid multisets of cardinality 2 for the sets N_s, considering their sizes |N_s|. The only way of attaining an average score of 2 is to select one node each from N_0 and N_4, represented by the multiset M_2 = {(0, 1), (4, 1)}, for which sim_avg(M_2) = 2. The multiset frequency of M_2 gives the same result as shown in Figure 2: freq(M_2) = C(|N_0|, 1) · C(|N_4|, 1) = 1 · 2 = 2. Instead of iterating over two sets, we consider one multiset.
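As a concrete sketch (not the authors' released Java implementation, which is linked in the Conclusions), Eqs. (11) and (12) can be coded directly. A multiset is represented as a map from score s to multiplicity m; the component sizes |N_0| = 1, |N_2| = 1, |N_4| = 2 of Example 2 are taken as given.

```java
import java.util.Map;

public class MultisetScore {
    // Eq. (11): sim_avg(M_q) = (1/q) * sum of m*s over all (s, m) in M_q.
    static double simAvg(Map<Double, Integer> mq, int q) {
        double weighted = 0.0;
        for (Map.Entry<Double, Integer> e : mq.entrySet())
            weighted += e.getKey() * e.getValue();
        return weighted / q;
    }

    // Binomial coefficient C(n, k); the division is exact at every step.
    static long binom(int n, int k) {
        long r = 1;
        for (int i = 1; i <= k; i++) r = r * (n - k + i) / i;
        return r;
    }

    // Eq. (12): freq(M_q) = product of C(|N_s|, m) over all (s, m) in M_q;
    // nSize maps each score s to the component size |N_s|.
    static long freq(Map<Double, Integer> mq, Map<Double, Integer> nSize) {
        long f = 1;
        for (Map.Entry<Double, Integer> e : mq.entrySet())
            f *= binom(nSize.get(e.getKey()), e.getValue());
        return f;
    }
}
```

For M_2 = {(0, 1), (4, 1)} this yields sim_avg(M_2) = 2 and freq(M_2) = 2, matching Example 2.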

Theorem 1. Let SD = {(S_1, F_1), ..., (S_k, F_k)} be the score distribution computed with sim_avg for an ontology DAG G = (V, E), target set T ⊆ V, and query size q. The frequency F with which any given score S occurs amongst all possible queries of size q is then:
F = Σ_{M_q ∈ M_all^q, sim_avg(M_q) = S} freq(M_q).
(13)

A proof of Theorem 1 is provided in Appendix A, and a faster algorithm based on Theorem 1 is shown in Alg. 2. Instead of iterating over all sets of size q as in Alg. 1, we enumerate all distinct multisets of M_all^q and add their frequencies to the score distribution SD, thereby reducing the number of operations. To apply the algorithm to score distribution computation for sim_max, line 3 of Alg. 2 needs to be replaced: instead of the average of all scores in the multiset, the maximum among them is assigned to S_new.

Preprocessing of the DAG for faster computation

So far we have neglected how the values |N_s|, s ∈ U, can be computed; in this section we introduce an efficient algorithm for this task. We refer to this algorithm as preprocessing because the computation of |N_s| is independent of q. The preprocessing divides the original graph into a set of connected components from which the |N_s| values can be deduced.

   Input: M_all^q
   Output: Score distribution SD = {(S_1, F_1), ..., (S_k, F_k)}
1  SD ← ∅
2  foreach multiset M_q ∈ M_all^q do
3      S_new ← sim_avg(M_q)
4      (S, F) ← getScorePair(SD, S_new)
5      if (S, F) ≠ nil then
6          putScorePair(SD, (S_new, F + freq(M_q)))
7      else
8          putScorePair(SD, (S_new, freq(M_q)))
9  return SD

Algorithm 2: Faster score distribution computation for sim_avg
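The loop structure of Alg. 2 can be sketched as a recursion over the distinct target set similarity scores: choosing a multiplicity m for each score enumerates exactly the multisets M_q ∈ M_all^q, and freq(M_q) is accumulated per score S_new as in lines 4-8. This is an illustrative sketch, not the authors' released implementation; a plain map plays the role of the getScorePair/putScorePair hash.

```java
import java.util.Map;
import java.util.TreeMap;

public class Alg2Sketch {
    // scores[i] = distinct target set similarity score s_i; sizes[i] = |N_{s_i}|.
    // Returns the score distribution SD as a map from score S to frequency F.
    public static Map<Double, Long> scoreDistribution(double[] scores, int[] sizes, int q) {
        Map<Double, Long> sd = new TreeMap<>();
        enumerate(scores, sizes, 0, q, q, 0.0, 1L, sd);
        return sd;
    }

    static long binom(int n, int k) {
        if (k > n) return 0;
        long r = 1;
        for (int i = 1; i <= k; i++) r = r * (n - k + i) / i;
        return r;
    }

    // Choose multiplicity m of score s_i (0 <= m <= min(left, |N_{s_i}|)),
    // multiplying C(|N_{s_i}|, m) into freq(M_q) as in Eq. (12).
    static void enumerate(double[] s, int[] n, int i, int q, int left,
                          double weightedSum, long freq, Map<Double, Long> sd) {
        if (i == s.length) {
            if (left == 0)                                   // a multiset of cardinality q
                sd.merge(weightedSum / q, freq, Long::sum);  // Alg. 2, lines 3-8
            return;
        }
        for (int m = 0; m <= Math.min(left, n[i]); m++)
            enumerate(s, n, i + 1, q, left - m,
                      weightedSum + m * s[i], freq * binom(n[i], m), sd);
    }
}
```

On the instance of Figure 2 (scores 0, 2, 4 with component sizes 1, 1, 2 and q = 2), the accumulated frequencies sum to C(4, 2) = 6, the total number of possible queries.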

First, we invert the direction of all edges in E such that the edges are directed from the root towards the leaves of the DAG, and introduce edge weights w_{i,j} on the edges of G. Let
w_{i,j} = IC(n_i)                          if n_i ∈ T_IMPL,
          max{ w_{h,i} | e_{h,i} ∈ E }     otherwise.
(14)

The edge weights are defined in a recursive manner. First, the weights of all edges emerging from nodes in T_IMPL are set. Then, for each node not in T_IMPL, the maximum weight among its incoming edges is propagated to all outgoing edges of the node, and thus propagated throughout the graph. The edge weights can be computed efficiently after the nodes of G have been sorted in topological order; see Alg. 3. We now iterate over all nodes n_i ∈ V. For each node n_i ∈ V with n_i ∉ T_IMPL, there is at least one path that leads to the node n_j = argmax_{n_k ∈ T} sim(n_i, n_k). If a node has multiple parents, then by construction of the edge weights, an edge with maximum weight is a member of a path to n_j. We therefore remove all other incoming edges. If there are multiple incoming edges with an identical maximum weight, one of them is chosen arbitrarily and the others are removed (Alg. 3, lines 7-9). We then iterate over all remaining edges e_{i,j} and remove all edges for which n_i, n_j ∈ T_IMPL holds (Alg. 3, lines 10-12).
Note that exactly |T_IMPL| connected components C_i remain, one for each n_i ∈ T_IMPL.

For all pairs of connected components C_i and C_j such that IC(n_i) = IC(n_j) for n_i, n_j ∈ T_IMPL, n_i ≠ n_j, the components are merged to arrive at the desired sets N_s, s ∈ U (Alg. 3, lines 13-16).

All these steps are summarized in Alg. 3 and Figure 3.
Figure 3

Overview of the Algorithm for Preprocessing. The general steps of Alg. 3 are shown on the DAG and T of Figure 1. Nodes in T_IMPL are colored black. The IC value of a node is depicted in a dashed oval.

Theorem 2. Given a DAG G = (V, E) and a target set T = {n_1, ..., n_k} ⊆ V, the score distribution of Eq. (3) is computed by Alg. 2 and Alg. 3 in O(|E| + |V| + M(|T_IMPL|, q)) time and space.

Proof. The preprocessing of the DAG in Alg. 3 involves inverting edges, topologically ordering V,

   Input: V, T_IMPL
   Output: Node sets with identical target set similarity score, i.e., N_s
1  for n_i ∈ V in topological order do
2      for j with e_{i,j} ∈ E do                        /* Set weights */
3          if n_i ∈ T_IMPL then
4              w_{i,j} ← IC(n_i)
5          else
6              w_{i,j} ← max{ w_{h,i} | e_{h,i} ∈ E }
7  for n_i ∈ V \ root do
8      choose e_{h,i} ∈ E s.t. w_{h,i} ≥ w_{h',i} for all edges e_{h',i} ∈ E
9      remove all incoming edges of n_i except e_{h,i}
10 for e_{i,j} ∈ E do                                   /* Connected components C_i */
11     if n_i, n_j ∈ T_IMPL then
12         remove e_{i,j} from E
13 for s ∈ { IC(n_i) | n_i ∈ T_IMPL } do                /* Merging */
14     N_s ← ∅
15     foreach n_i ∈ T_IMPL with IC(n_i) = s do
16         N_s ← N_s ∪ C_i
17 return N_s

Algorithm 3: Graph preprocessing for faster computation
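The net effect of Alg. 3 can be sketched without explicit edge bookkeeping: propagating, in topological order, the maximum weight reaching each node yields each node's target set similarity score, and counting nodes per score gives the component sizes |N_s|. The four-node DAG used in the test below is a hypothetical chain chosen only to match the counts of Example 1 (N_0 = {A}, N_2 = {B}, N_4 = {C, D}); it is not taken from Figure 2 itself.

```java
import java.util.Map;
import java.util.TreeMap;

public class Alg3Sketch {
    // parents[i] lists the parents of node i in the root-to-leaf orientation;
    // nodes must be indexed in topological order (every parent before its children).
    // ic[i] is the information content of node i; inTImpl[i] marks n_i in T_IMPL.
    public static Map<Double, Integer> componentSizes(int[][] parents, double[] ic,
                                                      boolean[] inTImpl) {
        int n = ic.length;
        double[] sim = new double[n];            // sim(n_i, T) per node
        for (int i = 0; i < n; i++) {
            if (inTImpl[i]) {
                sim[i] = ic[i];                  // Alg. 3, line 4: weight IC(n_i)
            } else {
                double best = 0.0;               // the root contributes IC 0
                for (int p : parents[i])
                    best = Math.max(best, sim[p]);
                sim[i] = best;                   // Alg. 3, line 6: max incoming weight
            }
        }
        Map<Double, Integer> sizes = new TreeMap<>();  // merging step, lines 13-16
        for (double s : sim) sizes.merge(s, 1, Integer::sum);
        return sizes;                            // score s -> |N_s|
    }
}
```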

introducing edge weights to E, removing edges in E, and computing the connected components of G. This can be done with depth-first search (DFS) traversals of G, with a worst-case performance of O(|E| + |V|) time and space.

Algorithm 2 runs in O(M(|T_IMPL|, q)) time and space. The outer foreach loop runs over all distinct multisets of cardinality q. The multiset coefficient M(|T_IMPL|, q) provides an upper bound on the number of these multisets. In each iteration, the computation of the similarity score (line 3) and of the multiset frequency freq(M_q) has constant cost, assuming a precomputed lookup table for binomial coefficients and that common partial sim_avg values are stored between iterations, avoiding recomputation for similar multisets. In total, Alg. 2 and Alg. 3 run in O(|E| + |V| + M(|T_IMPL|, q)) time and space.

The theorem quantifies the improvement over the naive algorithm. For example, on average |T_IMPL| ≈ 38 for the diseases currently annotated with terms of the HPO, which currently has approximately 10000 terms and 13000 relations. For a query with 5 terms, the naive algorithm would thus run in time proportional to 10000^5 · 5 · 38 ≈ 1.9 × 10^22, whereas the new algorithm runs in time proportional to 9000 + 11000 + 5 · M(38, 5) ≈ 4.3 × 10^6.
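The arithmetic of this comparison can be checked directly: the multiset coefficient is M(n, q) = C(n + q - 1, q), so M(38, 5) = C(42, 5) = 850668, and 9000 + 11000 + 5 · 850668 = 4273340 ≈ 4.3 × 10^6.

```java
public class MultisetCoefficient {
    // Multiset coefficient M(n, q) = C(n + q - 1, q): the number of multisets
    // of cardinality q drawn from n distinct values.
    static long multisetCoefficient(int n, int q) {
        return binom(n + q - 1, q);
    }

    // Binomial coefficient C(n, k) via the exact multiplicative formula.
    static long binom(int n, int k) {
        long r = 1;
        for (int i = 1; i <= k; i++) r = r * (n - k + i) / i;
        return r;
    }
}
```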

Experiments

We now show the results of the new algorithm applied to the HPO [7]. In our previous work we implemented the Phenomizer, a system for experts in differential diagnosis in medical genetics; the Phenomizer can be queried with a set of HPO terms to obtain a ranked list of the candidate diseases most similar to the query, based on P-values derived from Resnik similarity scores, Eq. (3) [17]. However, the Phenomizer used Monte Carlo sampling to approximate the score distribution, and we now investigate the difference between using the exact P-value and sampling.
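Once the complete score distribution is available, the P-value of an observed score follows by summation. Eq. (6) is not restated in this section, so the sketch below assumes the usual one-sided definition: the P-value of an observed score S is the fraction of all possible queries of size q whose score is at least S.

```java
import java.util.Map;

public class ExactPValue {
    // sd maps each score S_i of the exact score distribution to its frequency F_i.
    // Returns the fraction of queries scoring at least `observed` (assumed Eq. (6)).
    public static double pValue(Map<Double, Long> sd, double observed) {
        long total = 0, atLeast = 0;
        for (Map.Entry<Double, Long> e : sd.entrySet()) {
            total += e.getValue();               // all possible queries of size q
            if (e.getKey() >= observed)
                atLeast += e.getValue();         // queries at least as extreme
        }
        return (double) atLeast / total;
    }
}
```

On the score distribution of the Figure 2 example ({1: 1, 2: 2, 3: 2, 4: 1} over 6 queries), the best score 4 has P-value 1/6, illustrating why a fine-grained exact distribution matters precisely in the small-P tail that sampling covers poorly.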

As we are interested in ranking diseases for differential diagnosis, we take a simulation approach similar to [17] and generate sets of artificial patients for which the OMIM disease is known; see Methods. Figure 4 shows the results for the investigated scenarios NONE, NOISE, IMPRECISION, and NOISE + IMPRECISION. We compared ranking patients by the similarity score alone, by sampling-based P-values (10^3-10^5 repetitions, the latter used in the Phenomizer), and by exact computation using the algorithm introduced in this work. In all cases, exact P-value computation significantly outperforms the four alternative ranking methods (Mann-Whitney P-value < 0.001) and ranks the true disease first most of the time. The improvement is due to the fine-grained resolution of the exact score distribution, especially for small P-values, which sampling often underrepresents but which are important for selecting the best rank (see Additional File 1).
Figure 4

Impact of Exact P-value Computation for Clinical Diagnostics with the HPO. Patient (phenotype) data was simulated and queried against the complete database of all 4992 annotated diseases. The best result is obtained if the original disease is assigned rank one (y-axis) by the search algorithm. Different approaches are compared (x-axis). Data were generated without error (NONE) and with NOISE (top row, left and right), and with IMPRECISION and with both IMPRECISION and NOISE (bottom row, left and right), as explained in the Methods section. The mean rank is shown below each boxplot.

We then investigated the runtime for different values of q as compared to the naive algorithm and Monte Carlo sampling (Table 1). For this purpose we selected four diseases with different numbers of annotated HPO terms, and therefore different sizes of T_IMPL, and report the runtime of the three approaches in milliseconds. The naive algorithm cannot be used for q > 2. The exact P-value computation is faster than random sampling with 10^5 repetitions for q = 2, 3 and, independent of the analyzed q, for the disease with only 17 terms in T_IMPL. Starting from q = 4, the sampling-based approach is faster for large |T_IMPL| because of the huge size of the score distribution to be computed, but even for q = 5 the complete score distribution can be computed in under 4 seconds for diseases with many annotations. Note again that the average size of T_IMPL in the HPO is 38.
Table 1

Runtime in milliseconds averaged over 20 runs, comparing the naive, exact, and sampled distribution computation for q = 2, 3, 4, and 5

Runtime Analysis with the HPO

                                          runtime in milliseconds
OMIM ID   |T|   |T_IMPL|   |U|          naive          exact   sampled*

q = 2
264300      5         17    16           3779              4        50
613124      7         36    36           3794              6        53
113450     12         80    72           3789              6        65
129500     20         66    61           3702             15        89

q = 3
264300      5         17    16    ~ 1.2 · 10^7             4        49
613124      7         36    36    ~ 1.2 · 10^7             6        53
113450     12         80    72    ~ 1.2 · 10^7            19        66
129500     20         66    61    ~ 1.2 · 10^7            15        79

q = 4
264300      5         17    16              -              5        46
613124      7         36    36              -             20        55
113450     12         80    72              -            250        65
129500     20         66    61              -            135        77

q = 5
264300      5         17    16              -              7        48
613124      7         36    36              -            141        54
113450     12         80    72              -           3896        63
129500     20         66    61              -           1776        79

Four OMIM diseases with a varying number of annotated HPO terms (|T|) were used; 264300: 17-β hydroxysteroid dehydrogenase III deficiency; 613124: hydrops fetalis, nonimmune, with gracile bones and dysmorphic features; 113450: brachydactyly-distal symphalangism syndrome; 129500: ectodermal dysplasia 2, hidrotic. Entries denoted "-" were terminated after four hours. *Sampling with 10^5 repetitions.

Discussion

In this work we have tackled the previously unstudied problem of computing the score distribution for similarity searches with ontologies. We have devised an efficient preprocessing of the underlying DAG of the ontology that reduces the complexity for similarity measures based on Resnik's popular definition of similarity [19]. We have introduced a new algorithm based on multiset enumeration, which can be applied to score distribution computation for Eq. (3) as well as for variants based on maximum similarity, Eq. (4). In experiments with the HPO, as well as in theory, we have shown that the new algorithm is much faster than exhaustive enumeration of the score distribution or resampling approaches, and that it is applicable to current ontologies.

The algorithm we describe here can be used as a component of a procedure to find the best hit in a database, i.e., we need to calculate the score for each entry in the database and rank the results according to P-value. This allows users to enter a list of characteristics or features in order to identify objects whose characteristics best match the query using semantic similarity. We have implemented our algorithm in the setting of medical diagnostics, where the features are the signs and symptoms of diseases and the domain objects are diseases. We have previously shown that this kind of search is useful for medical differential diagnosis [17].

Summarizing all nodes that have the same target set similarity score exploits the fact that the pairwise similarity defined by Resnik considers only the common ancestors of the relevant terms (Lemma 1). Extending the proposed algorithm to other popular semantic similarity measures based on the information content of a node, such as those of Jiang and Conrath or Lin [20, 21], or to the symmetric definition of Eq. (3) [12], has not been considered here, because their definitions of pairwise similarity additionally incorporate the information content of the nodes in the query. Additional steps would therefore be necessary, rendering the computations more complicated. Although this can be considered a limitation of the current approach, we believe the methodology introduced here will prove useful for other measures as well. For example, the term overlap similarity measure [22] comparably considers only common ancestors of query and target set terms, so an algorithm of similar complexity appears possible based on the results presented in this paper. One reason why the P-value based rankings outperform the rankings based on scores is that the former account for the annotation bias observed by Wang et al. [23]: best-match average semantic similarity measures based on Resnik, such as Eq. (3), were shown to have a strong bias. This annotation bias is a further argument for using P-values instead of the similarity scores alone.

In the mentioned study by Wang et al. [23], the authors consider the comparison of two proteins via their annotated GO terms, instead of considering any possible subset of the ontology terms as query as in our search setup. Their approach is to compensate for the annotation bias by simulating the distribution of pairwise similarity scores for all annotated ontology term sets and normalizing using a power transformation. Similarly to our experiments, their method might improve when the exact score distribution is computed using our algorithm.

In a practical implementation of our algorithm, the P-values could be precomputed for each entry in the database (such as all the diseases in OMIM or each protein in the human proteome). For small q, the P-values could be calculated dynamically. This might be useful if users are allowed to filter out portions of the database from the search based on some predefined groups (for instance, in genetics, the differential diagnosis might be restricted to diseases showing a certain mode of inheritance).

Due to its simple structure the new algorithm could be parallelized to run with several threads with close to linear speedup, by keeping the scores in different hash structures for each thread and merging all hashes at the end to get the complete distribution. Also, as often only the P-value is of interest, a branch and bound formulation of the new algorithm might lead to a significant speedup in practice.

Conclusions

The algorithmic improvement reported here might prove useful for P-value computation of other semantic similarity measures that are based on the information content of a node as introduced by Resnik [12]. However, when the similarity score includes more dependencies the size of the complete score distribution may increase significantly. Further algorithmic development will be necessary to increase the class of similarity measures for which P-values can be computed efficiently.

We believe that our methods would be applicable to other applications in which users search for domain objects that best exemplify a set of desired attributes and that they can be used to improve bioinformatic methods that use the semantic similarity scores alone. For that purpose we implemented a software in Java that computes exact score distributions for both similarity measures discussed here. The software works with any ontology available in OBO format and is available for non-commercial and academic usage under: https://compbio.charite.de/svn/hpo/trunk/src/tools/significance/

Appendix A

In this Appendix we prove Theorem 1 for arbitrary q. In the following we outline the approach of the proof and introduce a few new definitions. We can calculate the P-values, Eq. (6), by computing the frequency F_i of each score S_i in the score distribution, i.e., by calculating, for each possible score, the number of queries that result in that score. We consider all query sets Q that result in score S, denoted Q_S in Eq. (15). These initial query sets consist of nodes of the ontology DAG G = (V, E). Subsequently, we substitute sets of nodes Q by multisets M_q(Q) over their target set similarity scores in Eq. (16). This is the important switch that establishes independence of the number of nodes in the graph, by considering only their target set similarity scores. Changing from sets to multisets is necessary at this step because the same target set similarity score may occur more than once among the nodes of a single Q.
However, the multisets induced by the sets in Q_S are not themselves unique, and therefore we use the multiset frequency, Eq. (12), over the set of distinct multisets M_S^q given Q_S to compute the desired quantity F in the proof.

We are interested in the set Q_S of all sets {n_1, ..., n_q} of nodes of V that result in the same average score S. That is, Q_S is the set of all queries of size q that result in the same average score S:
Q_S = { {n_1, ..., n_q} | {n_1, ..., n_q} ⊆ V, sim_avg({n_1, ..., n_q}, T) = S }.
(15)
The core message of Theorem 1 is that we can define a multiset M_q over the target set similarity scores s whose frequency can be used to compute the frequency F of score S in the score distribution. A necessary first step is therefore to express a query set Q = {n_1, ..., n_q} ⊆ V as a multiset M_q(Q):
M_q(Q) = { (s_1, m_1), ..., (s_o, m_o) | s_i ∈ U_Q, m_i = m^Q_{s_i} },
(16)
where
U_Q = { s_i | n_i ∈ Q, sim(n_i, T) = s_i }
(17)
and
m^Q_{s_i} = | { n_i | n_i ∈ Q, sim(n_i, T) = s_i } |.
(18)

The underlying set U_Q of a multiset M_q(Q) consists of all distinct target set similarity scores s_i of the nodes in Q, Eq. (17); the multiplicity of each score is the number of nodes in Q that share it, Eq. (18).

Now that we know how to create a multiset of target set similarity scores from any given set of nodes of V, we need another variable M_S^q to represent all distinct multisets that can be generated via Eq. (16) from the sets in Q_S. The set of distinct multisets M_S^q generated for a given Q_S is defined as:
M_S^q = { M_q(Q) | Q ∈ Q_S }.
(19)

We can now state the proof of Theorem 1 as follows.

Proof.
F = |Q_S|
(20)
  = Σ_{M_q ∈ M_S^q} Π_{(s,m) ∈ M_q} C(|N_s|, m)
(21)
  = Σ_{M_q ∈ M_S^q} freq(M_q)
(22)
  = Σ_{M_q ∈ M_all^q, sim_avg(M_q) = S} freq(M_q)
(23)

Eq. (20) merely restates the definition of the frequency $F$ given by Eq. (15), namely the number of all queries $Q \subseteq V$ that result in $\mathrm{sim}_{\mathrm{avg}} = S$. Note that Eq. (15) represents the number of such queries in terms of sets of nodes of the ontology. Eq. (21) switches the representation from node sets in $V$ to the multisets $M_q^S$ over the similarity scores of nodes in $V$, using Eq. (19) and the definition of multiset frequency given in Eq. (12). Eq. (22) follows directly from the definition of the multiset frequency in Eq. (12). The equality between Eq. (22) and Eq. (23) is a direct consequence of Eqs. (15) and (19).
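The computational benefit of the multiset representation can be sketched in a few lines of code. The sketch below (with hypothetical score classes; `score_distribution` and its arguments are illustrative names, not part of the published implementation) enumerates multisets over the distinct scores instead of all $\binom{|V|}{q}$ node subsets, and weights each multiset by its frequency as in Eqs. (21) and (22):

```python
from itertools import combinations_with_replacement
from math import comb
from collections import Counter

def score_distribution(class_sizes, q):
    """Illustrative sketch: exact distribution of sim_avg for queries of size q.

    class_sizes maps each distinct target set similarity score s to |N_s|,
    the number of ontology nodes attaining that score. Each multiset M_q of
    q scores contributes freq(M_q) = prod_{(s,m) in M_q} C(|N_s|, m) queries.
    """
    dist = Counter()
    for multiset in combinations_with_replacement(sorted(class_sizes), q):
        mult = Counter(multiset)                 # multiplicities m per score s
        freq = 1
        for s, m in mult.items():
            freq *= comb(class_sizes[s], m)      # choose m of the |N_s| nodes
        if freq:                                 # skip infeasible multisets (m > |N_s|)
            dist[sum(multiset) / q] += freq      # sim_avg is the mean of the scores
    return dist

# toy score classes: 3 nodes with score 0.0, 2 with 0.5, 1 with 1.0
dist = score_distribution({0.0: 3, 0.5: 2, 1.0: 1}, q=2)

# every one of the C(|V|, q) = C(6, 2) queries is accounted for exactly once
assert sum(dist.values()) == comb(6, 2)
```

Only the number of distinct score classes, not $|V|$, drives the enumeration, which is why collapsing nodes with equal target set similarity scores makes the exact computation tractable.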

Declarations

Acknowledgements

We thank Martin Vingron for his insights that led to an earlier version of this manuscript. The authors would also like to thank the two anonymous reviewers for insightful comments.

Funding

MHS was funded by the International Max Planck Research School for Computational Biology and Scientific Computing. SK and PNR were supported by the Berlin-Brandenburg Center for Regenerative Therapies (BCRT) (Bundesministerium für Bildung und Forschung, project number 0313911). SB and PNR were supported by the Deutsche Forschungsgemeinschaft (DFG RO 2005/4-1).

Authors’ Affiliations

(1)
Max Planck Institute for Molecular Genetics
(2)
Ray and Stephanie Lane Center for Computational Biology, Carnegie Mellon University
(3)
Institute for Medical Genetics and Human Genetics, Charité-Universitätsmedizin Berlin
(4)
Berlin-Brandenburg Center for Regenerative Therapies (BCRT), Charité-Universitätsmedizin Berlin


Copyright

© Schulz et al; licensee BioMed Central Ltd. 2011

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.