- Research article
- Open Access
Efficient algorithms for biological stems search
- Tian Mi^{1} and
- Sanguthevar Rajasekaran^{1}
https://doi.org/10.1186/1471-2105-14-161
© Mi and Rajasekaran; licensee BioMed Central Ltd. 2013
- Received: 30 July 2012
- Accepted: 6 May 2013
- Published: 16 May 2013
Abstract
Background
Motifs are significant patterns in DNA, RNA, and protein sequences that play an important role in biological processes and functions, such as the identification of open reading frames, RNA transcription, and protein binding. Several versions of the motif search problem have been studied in the literature. One such version is called the Planted Motif Search (PMS) or (l, d)-Motif Search problem. PMS is known to be NP-complete. The time complexities of most planted motif search algorithms depend exponentially on the alphabet size. Recently, a new version of the motif search problem was introduced by Kuksa and Pavlovic. We call this version the Motif Stems Search (MSS) problem. A motif stem is an l-mer (for some relevant value of l) with some wildcard characters and hence corresponds to a set of l-mers (without wildcards), some of which are (l, d)-motifs. Kuksa and Pavlovic have presented an efficient algorithm to find motif stems for inputs from large alphabets. Ideally, the number of stems output should be as small as possible, since the stems form a superset of the motifs.
Results
In this paper we propose an efficient algorithm for MSS and evaluate it on both synthetic and real data. This evaluation reveals that our algorithm is much faster than Kuksa and Pavlovic’s algorithm.
Conclusions
Our MSS algorithm outperforms the algorithm of Kuksa and Pavlovic in terms of the run time as well as the number of stems output. Specifically, the stems output by our algorithm form a proper (and much smaller) subset of the stems output by Kuksa and Pavlovic’s algorithm.
Keywords
- Hash Table
- Exact Algorithm
- Suffix Tree
- Input String
- Motif Search
Background
Motifs, or sequence motifs, are patterns of nucleotides or amino acids. Motifs are often related to primer selection, transcription factor binding sites, mRNA processing, transcription termination, etc. Sequence motifs in proteins are typically involved in functions such as binding to a target protein, protein trafficking, and post-translational modifications. The motif search problem has been studied extensively owing to its pivotal biological significance, and several classes of algorithms have been proposed for it. In one such class of methods, putative motifs in an input biological query sequence are predicted based on a database of known motifs; examples include [1-3]. In another class of methods, motifs are assumed to appear frequently in a set of sequences, such as the same protein sequence from different species. Here the problem of motif search is reduced to that of finding subsequences that occur in many of the input sequences. Planted motif search (PMS) is one such formulation.
Numerous algorithms have been proposed to solve the PMS problem. The WINNOWER algorithm uses a graph to transform this problem into one of finding large cliques in the graph [4]. The PatternBranching algorithm introduces a scoring technique for all the motif candidates [5]. The PROJECTION algorithm repeatedly picks several random positions and uses a hash table with a threshold to limit the motif candidates [6]. Bailey and Elkan [7] employ expectation maximization, while Gibbs sampling is used in [8, 9]. MULTIPROFILER [10], MEME [11], and CONSENSUS [12] are other well-known PMS algorithms.
An exact PMS algorithm always outputs all the motifs present in a given set of sequences. MITRA employs a mismatch tree structure to generate the motif candidates efficiently [13]. RISOTTO constructs a suffix tree to compare sequences [14]. PMS1 [15] considers all the motif candidates and evaluates them using sorting. Voting uses a hash table to locate the motifs [16]. PMS2 is based on PMS1 and extends smaller motifs to get longer motifs, while PMS3 forms a motif of a desired length using two smaller motifs [15]. PMSPrune introduces a tree structure for the motif candidates and uses a branch-and-bound algorithm to reduce the search space [17]. PMS4 is a speedup technique that finds a superset of the motifs using a subset of the input sequences and then validates those candidates [18]. PMS5 employs Integer Linear Programming (ILP) within a branch-and-bound algorithm for a speedup [19]. PMS6 uses the solutions of such ILPs to generate motif candidates [20].
Most of the work on exact algorithms for PMS has focused on DNA or RNA sequences, where |Σ|=4. Little work has been done on larger alphabets, such as proteins. A recent work focuses on protein sequences [21]. However, the stemming algorithm proposed in that paper does not solve the PMS problem: it does not find motifs but rather motif stems. A motif stem (referred to simply as a stem from here on) can be thought of as an l-mer (for some relevant value of l) with some wildcards present in it. As a result, a stem stands for a set of motifs. A stemming algorithm generates stems (i.e., motifs with wildcards) to represent motifs for large-alphabet inputs [21]. The stemming algorithm of [21] generates a very large set of stems (and hence a very large superset of motifs). In this paper we propose two algorithms for Motif Stems Search, MSS1 and MSS2, which outperform the stemming algorithm of [21]. The new algorithms generate a much smaller set of stems. The stems generated by the algorithm of [21], as well as by MSS1 and MSS2, are guaranteed to form supersets of all the motifs present in a given set of input sequences.
Methods
Motif search on large alphabets
In this section we provide some definitions pertinent to the PMS and MSS problems.
Definition 1
A sequence x = x[1, 2, …, l] (|x| = l) over Σ (x[i] ∈ Σ, 1 ≤ i ≤ l) is called an l-mer.
Definition 2
The Hamming distance HD(x, y) between two l-mers x and y is the number of positions in which they differ.
Definition 3
The Hamming distance HD(x, s) between an l-mer x and a sequence s is the minimum of HD(x, y) taken over all l-mers y of s.
Definition 4
Given n sequences s_1, s_2, …, s_n, each of length m, and two integers l and d, the PMS problem is to find every l-mer x such that each s_i (1 ≤ i ≤ n) contains at least one l-mer at a Hamming distance of ≤ d from x. Any such x is called an (l, d)-motif. Any l-mer of s_i that is at a Hamming distance of ≤ d from x is called an occurrence or instance of x.
Definition 5
Given an l-mer x, the d-neighbors of x are defined to be {y : |y| = l and HD(x, y) ≤ d}. The d-neighbors of x in any sequence s are defined to be the intersection of the d-neighbors of x with the set of l-mers of s.
Observation 1
If the Hamming distance between two l-mers x_1 and x_2 is larger than 2d (i.e., HD(x_1, x_2) > 2d), then no l-mer x_3 exists such that HD(x_1, x_3) ≤ d and HD(x_2, x_3) ≤ d.
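Observation 1 is simply the triangle inequality for the Hamming distance. As a concrete illustration (a minimal helper sketch; the function name is ours, not from the paper):

```python
def hamming(x, y):
    """Hamming distance between two equal-length strings."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

# With d = 1: HD("aaaaa", "aattt") = 3 > 2d, so by the triangle
# inequality HD(x1, x2) <= HD(x1, x3) + HD(x3, x2), no 5-mer x3
# can satisfy HD(x1, x3) <= 1 and HD(x2, x3) <= 1 simultaneously.
print(hamming("aaaaa", "aattt"))  # 3
```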
PMS algorithms are typically tested on random data with n=20 and m=600. Each input string is randomly generated such that each symbol in each string is equally likely to be any character of the alphabet. A motif is generated randomly, and randomly mutated versions of this motif are planted in the input strings (one mutated copy per string). For a given value of l, we call the pair (l, d) a challenging instance if d is the smallest value for which the expected number of (l, d)-motifs occurring in the input strings by random chance is ≥ 1. Some of the challenging instances are (9, 2), (11, 3), (13, 4), (15, 5), (17, 6), (19, 7), and so on. One performance measure of interest for any exact algorithm is the largest challenging instance it can solve. MITRA can solve the instance (13, 4) [13], and RISOTTO [14] and Voting [16] successfully run on (15, 5). PMSPrune solves up to (19, 7) [17]. PMS5 [19] and PMS6 [20] can handle (23, 9). These statistics are for DNA sequences, where |Σ|=4.
The time complexities of exact algorithms typically depend exponentially on the size of Σ. PMS0 takes $O\left(m^2 nl\binom{l}{d}|\Sigma|^d\right)$ time, and PMS1 takes $O\left(mn\binom{l}{d}|\Sigma|^d\frac{l}{w}\right)$ time, where w is the word length of the computer [15]. RISOTTO takes $O\left(mn^2 l^d\binom{l}{d}|\Sigma|^d\right)$ time [14], Voting takes $O\left(mn\binom{l}{d}|\Sigma|^d\right)$ time [16], and PMSPrune takes $O\left(m^2 n\binom{l}{d}|\Sigma|^d\right)$ time [17].
When the size of the alphabet is large (e.g., |Σ|=20 for proteins), the above exact algorithms take a very long time. Kuksa and Pavlovic have introduced a new version of the motif search problem and proposed an efficient algorithm to solve it on large alphabets. A motif stem is an l-mer with wildcards; thus a stem represents a set of l-mers without wildcards. For example, if g∗aac is a DNA stem, it represents the following 5-mers without wildcards: gaaac, gcaac, ggaac, and gtaac. Given a set of strings over some alphabet, the problem of finding motif stems in them is known as the Motif Stem Search (MSS) problem. We focus on MSS in this paper.
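The expansion of a stem into the l-mers it represents can be sketched as follows (the helper name `expand_stem` is ours, not part of the paper):

```python
from itertools import product

DNA = "acgt"

def expand_stem(stem, alphabet=DNA, wildcard="*"):
    """Enumerate all l-mers (without wildcards) represented by a stem."""
    # Each wildcard position ranges over the whole alphabet;
    # every other position is fixed to the stem's character.
    slots = [alphabet if c == wildcard else c for c in stem]
    return ["".join(p) for p in product(*slots)]

print(expand_stem("g*aac"))  # ['gaaac', 'gcaac', 'ggaac', 'gtaac']
```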
Definition 6
Motif Stem Search (MSS) Problem: The input consists of n sequences and two integers l and d. The problem is to find a set of stems such that the set of l-mers represented by these stems is a superset of all the (l, d)-motifs present in the n sequences.
As stated above, there are many possible solutions to the MSS problem. The challenge, then, is to come up with as small a superset as possible that covers all the (l, d)-motifs. In other words, we want the number of l-mers (without wildcards) represented by the stems to be as small as possible.
MSS1 - A basic Algorithm
By OBSERVATION 1, if the Hamming distance between an l-mer x and a sequence s is larger than 2d, then no l-mer x′ exists such that HD(x, x′) ≤ d and HD(x′, s) ≤ d. This leads us to the following observation.
Observation 2
Given an l-mer x, if there exists a sequence s_i such that HD(x, s_i) > 2d, then none of x's d-neighbors can be a motif.
The stemming algorithm of [21] works as follows. It makes crucial use of OBSERVATION 2, which states that an l-mer x in any input string cannot be an instance of an (l, d)-motif if there exists at least one input string s such that HD(x, s) > 2d. The algorithm of [21] first identifies a set I of possible motif instances: an l-mer x in any input string is included in I if and only if HD(x, s′) ≤ 2d for every input string s′. Having found such a set I, the algorithm then uses I to generate stems as follows: for every x, y ∈ I, it generates the common d-neighbors of x and y as stems. The union of all such stems constitutes the candidate motif stems, which form a superset of the motif stems. Finally, the algorithm checks each candidate stem for correctness, and all valid stems that pass this test are output.
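The filtering step that builds I can be sketched as follows (a simplified, unoptimized rendering; `min_hd_to_seq` and `candidate_instances` are our names, not from [21]):

```python
def hd(x, y):
    """Hamming distance between equal-length strings."""
    return sum(a != b for a, b in zip(x, y))

def min_hd_to_seq(x, s):
    """Minimum Hamming distance between l-mer x and any l-mer of s."""
    l = len(x)
    return min(hd(x, s[i:i+l]) for i in range(len(s) - l + 1))

def candidate_instances(seqs, l, d):
    """Set I of possible motif instances: l-mers of the input that are
    within distance 2d of every input sequence (Observation 2)."""
    I = set()
    for s in seqs:
        for i in range(len(s) - l + 1):
            x = s[i:i+l]
            if all(min_hd_to_seq(x, t) <= 2 * d for t in seqs):
                I.add(x)
    return I
```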
Consider two l-mers x and x′ with d_x = HD(x, x′). The positions where x and x′ differ form the non-matching region; the remaining positions form the matching region. The number of wildcards that can be placed in each region is summarized below.

Numbers of wildcards

Case | Non-matching region | Matching region
---|---|---
d_x ≤ d | 0 ≤ i ≤ d_x | d − max(i, d_x − i)
d_x > d | d_x − d ≤ i ≤ d | d − max(i, d_x − i)
- 1. d_x ≤ d: The number of wildcards i in the non-matching region can vary from 0 to the size of the non-matching region. To keep the total number of mismatches against x within d, at most d − i wildcards can be placed in the matching region. Similarly, to keep the total number of mismatches against x′ within d, at most d − (d_x − i) wildcards can be placed in the matching region.
- 2. d_x > d: At least d_x − d wildcards have to be placed in the non-matching region to eliminate the excess mismatches. As in case 1, at most d − max(i, d_x − i) wildcards can be placed in the matching region.
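The two cases above can be sketched as a wildcard-placement routine for a single pair (x, x′). Since placements with fewer than the maximum number of matching-region wildcards are subsumed by the maximal placement, this sketch enumerates only j = d − max(i, d_x − i) wildcards in the matching region (function names are ours; this is an illustrative sketch, not the paper's Algorithm 2):

```python
from itertools import combinations

def hd_positions(x, y):
    """Positions where two equal-length strings differ."""
    return [i for i in range(len(x)) if x[i] != y[i]]

def place_wildcards(x, xp, d, wildcard="*"):
    """Generate candidate stems for the pair (x, xp) following the table:
    i wildcards in the non-matching region plus j = d - max(i, d_x - i)
    wildcards in the matching region; other positions keep x's characters."""
    diff = hd_positions(x, xp)
    match = [i for i in range(len(x)) if x[i] == xp[i]]
    dx = len(diff)
    if dx > 2 * d:
        return set()                 # Observation 1: no common d-neighbor
    stems = set()
    lo = 0 if dx <= d else dx - d    # range of i from the two cases
    hi = min(d, dx)
    for i in range(lo, hi + 1):
        j = d - max(i, dx - i)       # maximal wildcards in the matching region
        for wpos_n in combinations(diff, i):
            for wpos_m in combinations(match, j):
                s = list(x)
                for p in wpos_n + wpos_m:
                    s[p] = wildcard
                stems.add("".join(s))
    return stems
```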
Algorithm 1 MSS1
In the matching region, d − max(i, d_x − i) is an upper bound on the number of wildcards; it is not necessary to enumerate all the counts from 0 to d − max(i, d_x − i). Similarly, it is not necessary to repeat stem generation for all pairs in I. For any x, let $x_1^i, x_2^i, \dots, x_j^i \in I$ be x's 2d-neighbors in sequence s_i (i.e., $I_i = \{x_1^i, x_2^i, \dots, x_j^i\}$), and let O_i be the set of motif instances in s_i. Then, clearly, O_i ⊆ I_i. The motifs form a subset of the stems that can be obtained between x and the elements of O_i. The motif stems we generate are the stems obtainable from the l-mer pairs $(x, x_1^i), (x, x_2^i), \dots, (x, x_j^i)$. To minimize the number of stems generated from I, we use the sequence with the smallest j.
Observation 3
For any l-mer x, let its 2d-neighbors in sequence s_i be $I_i = \{x_1^i, x_2^i, \dots, x_j^i\}$ (for 1 ≤ i ≤ n). Then the (l, d)-motifs are included in the set of stems generated from the pairs $(x, x_1^i), (x, x_2^i), \dots, (x, x_j^i)$.
The detailed MSS1 algorithm is given in Algorithm 1 and Algorithm 2.
In lines 2-18 we find the sequence in which x has the minimum number of 2d-neighbors. If some sequence has no 2d-neighbor of x, the current l-mer x is skipped (line 12). The stems are generated by placing wildcards in each pair (x, y) with y ∈ I_min, as shown in Algorithm 2.
The Hamming distance routine is called m²n times. Therefore, excluding wildcard placement, Algorithm 1 takes O(m²nl) time.
The wildcard placement procedure is called (m − l + 1) times, and in the worst case |I_min| = m − l + 1 each time. Therefore, in this case, wildcard placement (lines 4-16) in Algorithm 2 is invoked O(m²) times. The number of wildcards is at most d, so lines 4-16 take $O\left(\binom{l}{d}\right)$ time in the worst case. As a result, wildcard placement in MSS1 takes $O\left(m^2\binom{l}{d}\right)$ time. In the best case, the wildcard placement procedure is called only once, when all other l-mers of s_1 have no 2d-neighbors and |I_min| = 1. The best case for lines 4-16 arises when d_x = 2d, in which case they take $O\left(\binom{2d}{d}\right)$ time (see DISCUSSION for more analysis).
In summary, MSS1 takes O(m²nl + |stems|) time, where |stems| is the total number of stems generated.
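The overall flow of MSS1 can be sketched as follows, with the wildcard-placement routine (Algorithm 2) passed in as a parameter (a simplified sketch under our naming, not the paper's pseudocode):

```python
def hd(x, y):
    """Hamming distance between equal-length strings."""
    return sum(a != b for a, b in zip(x, y))

def lmers(s, l):
    """All l-length substrings of s."""
    return [s[i:i+l] for i in range(len(s) - l + 1)]

def mss1(seqs, l, d, make_stems):
    """For each l-mer x of the first sequence, find the sequence with the
    fewest 2d-neighbors of x (I_min), skipping x if some sequence has none;
    stems come from the pairs (x, y) with y in I_min. make_stems(x, y, d)
    stands in for the wildcard-placement routine."""
    stems = set()
    for x in lmers(seqs[0], l):
        i_min = None
        for s in seqs[1:]:
            nbrs = [y for y in lmers(s, l) if hd(x, y) <= 2 * d]
            if not nbrs:              # no 2d-neighbor in s: skip this x
                i_min = None
                break
            if i_min is None or len(nbrs) < len(i_min):
                i_min = nbrs
        if i_min is None:
            continue
        for y in i_min:
            stems |= make_stems(x, y, d)
    return stems
```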
Algorithm 2 PlaceWildcards
MSS2 - A speedup Algorithm
Assume that we have computed the Hamming distance between x_1 in s_1 and $x_j^\prime$ in s_i; let $\mathit{HD}(x_1, x_j^\prime) = d_1$. Then $\mathit{HD}(x_2, x_{j+1}^\prime)$ can be obtained by comparing: 1) the first characters of x_1 and $x_j^\prime$; and 2) the last characters of x_2 and $x_{j+1}^\prime$. Observe that the (l−1)-length prefix of x_2 is the (l−1)-length suffix of x_1, and $x_j^\prime$ and $x_{j+1}^\prime$ also share the same (l−1)-mer.
If the first characters of x_1 and $x_j^\prime$ match, then all d_1 mismatches occur in the remaining (l−1)-long suffixes of x_1 and $x_j^\prime$. In this case, $HD(x_2, x_{j+1}^\prime) = d_1$ if the last characters of x_2 and $x_{j+1}^\prime$ match; otherwise $HD(x_2, x_{j+1}^\prime) = d_1 + 1$. Similarly, if the first characters of x_1 and $x_j^\prime$ do not match, then there are (d_1 − 1) mismatches in the remaining (l−1)-long suffixes of x_1 and $x_j^\prime$. In this case, $HD(x_2, x_{j+1}^\prime) = d_1 − 1$ if the last characters of x_2 and $x_{j+1}^\prime$ match; otherwise $HD(x_2, x_{j+1}^\prime) = d_1$. This observation is also mentioned in [4].
Observation 4
Given $HD(x_p, x_q^\prime)$, the Hamming distance $HD(x_{p+1}, x_{q+1}^\prime)$ of the next pair along the same alignment can be computed in O(1) time.
However, the distances $HD(x_p, x_q^\prime)$ with p > q are left out when OBSERVATION 4 is applied repeatedly until the end of s_i is reached. We simply append a copy of s_i to s_i to cover all the pairwise alignments (Figure 1A). Then, by calculating the Hamming distance only once per diagonal and applying OBSERVATION 4 repeatedly, each diagonal in the matrix of Figure 1B can be computed in O(l + m) time.
We do the above for the m diagonals from cell $(x_1, x_1^\prime)$ to $(x_1, x_m^\prime)$ in Figure 1B. Then the first and last (m − l + 1) rows are used to form a complete (m − l + 1) × (m − l + 1) matrix; the l rows in the middle are eliminated, since they are the extra rows caused by appending a copy of s_i. Therefore, the 2d-neighbors in any sequence s_i of all the l-mers of s_1 can be computed in O(m·(m + l)) = O(m²) time, and the computation for all the sequences from s_2 to s_n takes a total of O(m²n) time.
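The diagonal-wise computation can be sketched as follows. Instead of physically appending a copy of s_i, indices are taken modulo m; entries whose window wraps around correspond to the extra rows that are later discarded (an illustrative sketch, not Algorithm 3 itself):

```python
def diagonal_hds(s1, si, l):
    """All Hamming distances HD(s1[p:p+l], si[q:q+l]) computed diagonal by
    diagonal: one O(l) distance per diagonal, then O(1) updates per cell
    (Observation 4). si is indexed cyclically (the 'appended copy' trick);
    out[(p, start)] is only meaningful when q = (start + p) % m <= m - l."""
    m = len(si)
    n1 = len(s1) - l + 1
    out = {}
    for start in range(m):                  # one diagonal per start offset
        # initial O(l) Hamming distance for the first cell of the diagonal
        d = sum(s1[k] != si[(start + k) % m] for k in range(l))
        out[(0, start)] = d
        for p in range(1, n1):              # slide both windows by one
            d -= s1[p - 1] != si[(start + p - 1) % m]          # char leaving
            d += s1[p + l - 1] != si[(start + p + l - 1) % m]  # char entering
            out[(p, start)] = d
    return out
```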
The pseudocode is given in Algorithm 3. In lines 6-10, the Hamming distance is calculated for the alignment of s_1 with the j-th position of s_i. Each of the remaining Hamming distances in this alignment is obtained in constant time using OBSERVATION 4 (lines 12-26). Instead of appending a copy of s_i, the mod operation is used.
N_{2d}[k][i] stores the 2d-neighbors of the k-th l-mer of s_1 in the i-th sequence s_i. Building the matrix of 2d-neighbors of all the l-mers of s_1 (N_{2d}[k][i]) takes O(m²n) time (lines 3-28). Lines 29-41 search the 2d-neighbors of each l-mer of s_1. If some sequence s_j has no 2d-neighbors, the current (i-th) l-mer of s_1 is skipped (lines 32-34). Otherwise, the 2d-neighbors in the sequence with the smallest set, I_min, are used and the wildcards are placed.
MSS2 takes O(m²n + |stems|) time.
Optionally, a post-processing phase can be applied after both MSS1 and MSS2 to refine the output stems: a stem is retained only if it has at least one neighbor in each sequence at a distance of ≤ d. This phase takes a total of O(mnl·|stems|) time.
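A sketch of this post-processing filter, under the assumption that a wildcard position matches any character when computing a stem's distance to an l-mer (names are ours):

```python
def stem_hd(stem, y, wildcard="*"):
    """Hamming distance between a stem and an l-mer, where wildcard
    positions of the stem always count as matches."""
    return sum(a != b for a, b in zip(stem, y) if a != wildcard)

def post_process(stems, seqs, d):
    """Keep a stem only if every sequence contains an l-mer within
    distance d of it."""
    kept = set()
    for stem in stems:
        l = len(stem)
        if all(any(stem_hd(stem, s[i:i+l]) <= d
                   for i in range(len(s) - l + 1)) for s in seqs):
            kept.add(stem)
    return kept
```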
An estimation on the number of stems
Example values of p given |Σ|=4 and |Σ|=20

(l,d) | |Σ|=4 | |Σ|=20
---|---|---
(7,1) | 1.29×10^{−2} | 6.03×10^{−6}
(9,2) | 4.89×10^{−2} | 3.32×10^{−5}
(11,3) | 1.15×10^{−1} | 1.11×10^{−4}
(13,4) | 6.38×10^{−2} | 8.88×10^{−5}
(15,5) | 4.27×10^{−4} | 8.22×10^{−7}
(17,6) | 5.82×10^{−10} | 2.76×10^{−20}
(19,7) | 3.64×10^{−12} | 1.91×10^{−25}
(21,8) | 1.43×10^{−3} | 1.21×10^{−5}
Algorithm 3 MSS2
Challenging instances
Consider a PMS instance with n sequences of length m each. For a given value of l, let d be the smallest integer such that the expected number of (l, d)-motifs occurring by random chance is ≥ 1. We refer to (l, d) as a challenging instance. Challenging instances can be computed as follows. Let the alphabet be Σ with |Σ| = σ. The probability that two random characters from this alphabet match is 1/σ. Assuming an IID background, the probability that the Hamming distance between two random l-mers is no more than d is $p = \sum_{i=0}^{d}\binom{l}{i}\left(\frac{\sigma-1}{\sigma}\right)^i\left(\frac{1}{\sigma}\right)^{l-i}$. For each sequence, the probability that a random l-mer has at least one d-neighbor (i.e., an l-mer at a Hamming distance of no more than d) in that sequence is P = 1 − (1 − p)^{m−l+1}. Hence the expected number of randomly occurring (l, d)-motifs in the n sequences is σ^l P^n. From this we can compute the challenging instances. For σ = 4, the challenging instances are (7,1), (9,2), etc. For σ = 20, the challenging instances are (7,4), (9,5), etc. Because of Observation 1, in our tests on challenging instances of protein sequences we have used the cases (7,3), (9,4), and (11,5).
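The calculation above can be carried out directly (a small sketch; the function names are ours):

```python
from math import comb

def expected_random_motifs(sigma, l, d, m=600, n=20):
    """Expected number of (l, d)-motifs occurring by random chance in
    n IID sequences of length m over an alphabet of size sigma."""
    p = sum(comb(l, i) * ((sigma - 1) / sigma) ** i * (1 / sigma) ** (l - i)
            for i in range(d + 1))
    P = 1 - (1 - p) ** (m - l + 1)   # at least one d-neighbor per sequence
    return sigma ** l * P ** n

def challenging_d(sigma, l, m=600, n=20):
    """Smallest d for which the expectation is >= 1."""
    d = 0
    while expected_random_motifs(sigma, l, d, m, n) < 1:
        d += 1
    return d

print(challenging_d(4, 9))  # 2
```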
Results
We have evaluated our algorithms on the standard benchmark with n = 20, m = 600, and |Σ| = 20 (for proteins). We have used (l, d) values starting from (7,1) and going up to (21,8).
The test data was generated as follows: 1) 20 sequences of length 600 each were generated such that each character in each sequence is equally likely to be any character of the alphabet; 2) a motif of length l was generated randomly; 3) a random number of mismatch positions (at most d) was selected, and the characters in these positions were replaced by other amino acids chosen randomly, forming a motif instance; 4) step 3) was repeated 20 times to generate 20 such instances, which were planted in the 20 sequences (at random positions, one instance per sequence).
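Steps 1)-4) can be sketched as follows (a minimal generator; function and parameter names are ours, not from the paper):

```python
import random

def hd(x, y):
    """Hamming distance between equal-length strings."""
    return sum(a != b for a, b in zip(x, y))

def plant_motif_data(n=20, m=600, l=9, d=2, alphabet="acgt", seed=0):
    """Generate the standard (l, d) benchmark: n random sequences of
    length m, each with one randomly mutated copy of a random motif
    planted at a random position."""
    rng = random.Random(seed)
    motif = "".join(rng.choice(alphabet) for _ in range(l))
    seqs = []
    for _ in range(n):
        s = [rng.choice(alphabet) for _ in range(m)]
        inst = list(motif)
        # mutate at most d randomly chosen positions to other characters
        for p in rng.sample(range(l), rng.randint(0, d)):
            inst[p] = rng.choice([c for c in alphabet if c != inst[p]])
        pos = rng.randrange(m - l + 1)
        s[pos:pos + l] = inst
        seqs.append("".join(s))
    return motif, seqs
```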
Time comparison of MSS, RISOTTO, and stemming algorithms
(l,d) | MSS1(s) | MSS2(s) | Post-process(s) | RISOTTO(s) | Stemming(s) |
---|---|---|---|---|---|
(7,1) | 23.2 | 3.7 | 0.03 | 4.7 | 48.0 |
(9,2) | 24.64 | 3.7 | 0.3 | 242.3 | 359.9 |
(11,3) | 27.5 | 3.7 | 2.0 | 7166.1 | 4674.2 |
(13,4) | 34.5 | 3.9 | 50.4 | - | - |
(15,5) | 38.8 | 4.7 | 74.5 | - | - |
(17,6) | 60.2 | 15.3 | 1459.0 | - | - |
(19,7) | 200.8 | 117.3 | 18364.1 | - | - |
(21,8) | 719.6 | 563.2 | 69340.8 | - | - |
Number of stems generated by MSS and stemming algorithms
(l,d) | MSS1/MSS2 | Post-process | Stemming |
---|---|---|---|
(7,1) | 2 | 1 | 928 |
(9,2) | 22 | 2 | 17894 |
(11,3) | 130 | 44 | 265587 |
(13,4) | 2250 | 1452 | - |
(15,5) | 5222 | 1032 | - |
(17,6) | 60168 | 23829 | - |
(19,7) | 521658 | 422019 | - |
(21,8) | 2255690 | 1297576 | - |
Comparison of MSS, RISOTTO, and stemming algorithms on challenging instances

(l,d) | MSS1(s) | MSS2(s) | RISOTTO | Stemming |
---|---|---|---|---|
(7,3) | 225.9 | 615.7 | 7068.6 | >4hours |
(9,4) | 1051.0 | 1477.4 | >4hours | >4hours |
(11,5) | 5129.4 | 5503.0 | >4hours | >4hours |
Statistics on different alphabet sizes
|Σ| | MSS1(s)/|stems| | MSS2(s)/|stems| | Post-process(s)/|stems| | Stemming(s)/|stems|
---|---|---|---|---
40 | 25.1/190 | 3.6/190 | 2.4/45 | 2125.5/16669665 |
60 | 26.2/400 | 3.6/400 | 6.9/169 | 3023.4/18465345 |
80 | 23.6/50 | 3.6/50 | 0.4/4 | 3493.0/11380993 |
100 | 27.1/260 | 3.6/260 | 5.6/216 | 4464.9/17733385 |
Motif search on protein data
Protein motifs | #Source proteins | (l,d) | MSS2(s) | RISOTTO(s) | Stemming(s) |
---|---|---|---|---|---|
CPTINEPCC | 7 | (9,2) | 2.0 | 100.0 | 244.0 |
CRFYNCHHLHEPGC | 10 | (14,4) | 22.2 | >4hours | >4hours |
HTHPTQTAFLSSVD | 8 | (14,4) | 10.3 | >4hours | >4hours |
ILPPVPVPK | 14 | (9,2) | 3.8 | 105.8 | 582.0 |
PEPNGYLHIGH | 134 | (11,3) | 51.1 | 12827.0 | >4hours |
PSPTGFIHLGN | 36 | (11,3) | 6.5 | 4336.6 | 4561.0 |
PTVYNYAHIGN | 19 | (11,3) | 3.6 | 3358.9 | 4917.0 |
PYANGSIHLGH | 110 | (11,3) | 52.1 | 11363.2 | >4hours |
PYPSGQGLHVGH | 18 | (12,3) | 10.4 | >4hours | >4hours |
QELFKRISEQFTAMF | 9 | (15,4) | 47.6 | >4hours | >4hours |
QIKTLNNKFASFIDK | 9 | (15,4) | 20.3 | >4hours | >4hours |
SGYSSPGSPGTPGSR | 9 | (15,4) | 32.6 | >4hours | >4hours |
SSSSLEKSYELPDGQ | 10 | (15,4) | 41.3 | >4hours | >4hours |
VTVYDYCHLGH | 8 | (11,3) | 2.9 | 3145.8 | 2235.0 |
MSS2 vs. PMSPrune on DNA data
(l,d) | MSS2(s) | PMSPrune(s) |
---|---|---|
(7,1) | 4.1 | 3.3 |
(9,2) | 10.7 | 3.4 |
(11,3) | 87.2 | 8.1 |
Discussion and conclusions
The analysis in [21] shows that, assuming an IID background, the expected number of (l, 2d)-motifs depends strongly on the alphabet size |Σ|. Therefore, when |Σ| is large, the expected number of 2d-neighbors in the n sequences of length m is very small compared with the total number of l-mers, n(m − l + 1).
The proposed algorithms consider an even smaller set of candidates by introducing I_min: for any given l-mer x, we focus on the sequence that has the smallest number of 2d-neighbors of x. The expected size of I_min is at most 1/n times the total number of 2d-neighbors of x across all the sequences. Note that doing so does not miss any of the valid motifs.
On the other hand, when generating the stems, as shown in Table 1, once i wildcards are placed in the non-matching region, the upper bound on the number of wildcards in the matching region is d − max(i, d_x − i). It is not necessary to enumerate all the counts from 0 to d − max(i, d_x − i) in the matching region: as long as the placement with (d − max(i, d_x − i)) wildcards is not eliminated, the placements with 0 to (d − max(i, d_x − i) − 1) wildcards are subsumed by it. Therefore, the proposed algorithms do not enumerate the placements with 0 to (d − max(i, d_x − i) − 1) wildcards in the output.
In the computation of the 2d-neighbors, MSS1 takes O(m^{2}n l)time and O(m)space. MSS2 takes O(m^{2}n)time and O(m^{2})space. The stemming algorithm of [21] uses sorting to compute the set I.
The proposed algorithms MSS1 and MSS2 provide an efficient way to solve the Motif Stems Search problem in terms of both time and space. Moreover, the stems generated by MSS1 and MSS2 form a much smaller subset of the stems generated by the algorithm of [21], with fewer false predictions.
Declarations
Acknowledgements
This work has been supported in part by the following grants: NSF 0829916 and NIH R01-LM010101.
References
- Mi T, Merlin JC, Deverasetty S, Gryk MR, Bill TJ, Brooks AW, Lee LY, Rathnayake V, Ross CA, Sargeant DP, Strong CL, Watts P, Rajasekaran S, Schiller MR: Minimotif Miner 3.0: database expansion and significantly improved reduction of false-positive predictions from consensus sequences. Nucleic Acids Res. 2012, 40: D252-D260.
- Gould CM, Diella F, Via A, Puntervoll P, Gemund C, Chabanis-Davidson S, Michael S, Sayadi A, Bryne JC, Chica C, Seiler M, Davey NE, Haslam NJ, Weatheritt RJ, Budd A, Hughes T, Pas J, Rychlewski L, Trave G, Aasland R, Helmer-Citterich M, Linding R, Gibson TJ: ELM: the status of the 2010 eukaryotic linear motif resource. Nucleic Acids Res. 2010, 38: 167-180.
- Obenauer JC, Cantley LC, Yaffe MB: Scansite 2.0: proteome-wide prediction of cell signaling interactions using short sequence motifs. Nucleic Acids Res. 2003, 31: 3635-3641.
- Pevzner PA, Sze S-H: Combinatorial approaches to finding subtle signals in DNA sequences. Proceedings of the Eighth International Conference on Intelligent Systems for Molecular Biology. 2000, Menlo Park: AAAI Press, 269-278.
- Price A, Ramabhadran S, Pevzner PA: Finding subtle motifs by branching from sample strings. Bioinformatics. 2003, 19: 149-155.
- Buhler J, Tompa M: Finding motifs using random projections. J Comput Biol. 2002, 9: 225-242.
- Bailey TL, Elkan C: Unsupervised learning of multiple motifs in biopolymers using expectation maximization. Mach Learning. 1995, 21: 51-80.
- Lawrence CE, Altschul SF, Boguski MS, Liu JS, Neuwald AF, Wootton JC: Detecting subtle sequence signals: a Gibbs sampling strategy for multiple alignment. Science. 1993, 262: 208-214.
- Rocke E, Tompa M: An algorithm for finding novel gapped motifs in DNA sequences. Proceedings of the Second Annual International Conference on Computational Molecular Biology, RECOMB '98. 1998, New York: ACM, 228-233.
- Keich U, Pevzner P: Finding motifs in the twilight zone. Bioinformatics. 2002, 18: 1374-1381.
- Bailey TL, Elkan C: Fitting a mixture model by expectation maximization to discover motifs in biopolymers. Proceedings of the Second International Conference on Intelligent Systems for Molecular Biology. 1994, Stanford, CA: AAAI Press, 28-36.
- Hertz GZ, Stormo GD: Identifying DNA and protein patterns with statistically significant alignments of multiple sequences. Bioinformatics. 1999, 15: 563-577.
- Eskin E, Pevzner PA: Finding composite regulatory patterns in DNA sequences. Bioinformatics. 2002, 18: 354-363.
- Pisanti N, Carvalho A, Marsan L, Sagot MF: RISOTTO: fast extraction of motifs with mismatches. LATIN 2006: Theoretical Informatics, Volume 3887 of Lecture Notes in Computer Science. Edited by: Correa J, Hevia A, Kiwi M. 2006, Berlin/Heidelberg: Springer, 757-768.
- Rajasekaran S, Balla S, Huang C-H: Exact algorithm for planted motif challenge problems. J Comput Biol. 2005, 12: 249-259.
- Chin FYL, Leung HCM: Voting algorithms for discovering long motifs. Proceedings of the Third Asia-Pacific Bioinformatics Conference (APBC2005). 2005, London: Imperial College Press, 261-271.
- Davila J, Balla S, Rajasekaran S: Fast and practical algorithms for planted (l, d) motif search. IEEE/ACM Trans Comput Biol Bioinformatics. 2007, 4: 544-552.
- Rajasekaran S, Dinh H: A speedup technique for (l, d)-motif finding algorithms. BMC Res Notes. 2011, 4: 54.
- Dinh H, Rajasekaran S, Kundeti V: PMS5: an efficient exact algorithm for the (l, d)-motif finding problem. BMC Bioinformatics. 2011, 12: 410.
- Bandyopadhyay S, Sahni S, Rajasekaran S: PMS6: a fast algorithm for motif discovery. Proc. 2012 IEEE 2nd International Conference on Computational Advances in Bio and Medical Sciences (ICCABS). 2012, New York: IEEE, 1-6.
- Kuksa P, Pavlovic V: Efficient motif finding algorithms for large-alphabet inputs. BMC Bioinformatics. 2010, 11: S1.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.