RefSelect: a reference sequence selection algorithm for planted (l, d) motif search
BMC Bioinformatics volume 17, Article number: 266 (2016)
Abstract
Background
The planted (l, d) motif search (PMS) is an important yet challenging problem in computational biology. Pattern-driven PMS algorithms usually use k out of t input sequences as reference sequences to generate candidate motifs, and they can find all the (l, d) motifs in the input sequences. However, most of them simply take the first k sequences in the input as reference sequences without an elaborate selection process, and thus they may exhibit sharp fluctuations in running time, especially for large alphabets.
Results
In this paper, we formulate the reference sequence selection problem and propose a method named RefSelect to quickly solve it by evaluating the number of candidate motifs for the reference sequences. RefSelect can bring a practical time improvement to the state-of-the-art pattern-driven PMS algorithms. Experimental results show that RefSelect (1) makes the tested algorithms solve the PMS problem steadily in an efficient way, (2) in particular, makes them achieve a speedup of up to about 100× on the protein data, and (3) is also suitable for large data sets which contain hundreds or more sequences.
Conclusions
The proposed algorithm RefSelect can be used to solve the problem that many pattern-driven PMS algorithms exhibit execution-time instability. RefSelect requires a small amount of storage space and is capable of selecting reference sequences efficiently and effectively. Also, a parallel version of RefSelect is provided for handling large data sets.
Background
Motif discovery, as a main means to locate conserved fragments in biological sequences, is a fundamental problem in computational biology. The conserved fragments usually have special biological significance. For example, transcription factor binding sites in DNA sequences [1, 2] play a key role in gene expression regulation and they usually range from 5 to 25 base pairs; short protein sequence signatures [3, 4], which usually range from 10 to 36 residues, can be used in identifying potential interaction sites of proteins.
The planted (l, d) motif search (PMS) [5] is a well-known formulation of motif discovery: given a data set D = {s _{1}, s _{2}, …, s _{ t }} with t n-length sequences over an alphabet Σ, q satisfying 0 < q ≤ t, and l and d satisfying 0 ≤ d < l < n, the goal is to find one or more l-length strings m such that m occurs in at least q sequences in D with up to d mismatches. The string m is called an (l, d) motif, and each occurrence of m is called a motif instance. Finding all (l, d) motifs present in the input sequences is NP-complete [6].
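As a concrete illustration of this definition, the occurrence test can be sketched as a minimal brute-force check (function and variable names are ours; this is not one of the algorithms discussed below):

```python
def hamming(u, v):
    """Number of positions at which two equal-length strings differ."""
    return sum(a != b for a, b in zip(u, v))

def is_motif(m, sequences, d, q):
    """True if m occurs in at least q sequences with up to d mismatches."""
    l = len(m)
    count = 0
    for s in sequences:
        # m "occurs" in s if some l-mer of s is within Hamming distance d of m
        if any(hamming(m, s[i:i + l]) <= d for i in range(len(s) - l + 1)):
            count += 1
    return count >= q

# toy example: the motif ACGT planted with at most one mismatch per sequence
seqs = ["TTACGTTT", "GGACCTGG", "CCAAGTCC"]
print(is_motif("ACGT", seqs, d=1, q=3))  # -> True
```

Exhaustively testing every l-length string this way takes O(|Σ|^l) candidate checks, which is exactly the search-space size discussed below.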
In the PMS problem, the value of q determines the corresponding sequence model of motif discovery (i.e., the distribution of motif occurrences in the input sequences). The usual sequence models are OOPS, ZOOPS and TCM [7], in which each input sequence contains one occurrence, zero or one occurrence, and zero or more occurrences, respectively. When q = t, the PMS problem corresponds to the OOPS sequence model; when 0 < q < t, it corresponds to the ZOOPS or TCM models.
There have been numerous motif discovery algorithms [8, 9]. They are either approximate or exact, based on whether the algorithm can always find all motifs or the optimum motif. The approximate algorithms usually adopt probability analysis and statistical methods. For instance, the two classical algorithms MEME [10] and Gibbs Sampling [11] identify motifs by using expectation maximization and Gibbs sampling techniques, respectively. In general, these approximate algorithms can solve the problem in a short time but cannot guarantee global optimum.
In this paper, we mainly focus on exact motif discovery algorithms, which can find all (l, d) motifs by traversing the whole search space. The main indicator for assessing exact algorithms is time performance, and researchers usually compare exact algorithms on challenging PMS problem instances, for which the expected number of random (l, d) motifs present in the sequences is more than 1 [12]. For the exact algorithms proposed in earlier years, such as WINNOWER [5], DPCFG [13] and RecMotif [14], the search space is composed of the (n – l + 1)^{t} possible alignments of motif instances. In recent years, exact algorithms verify all the l-length patterns in the O(|Σ|^{l}) search space, and then output the patterns with the motif property; we call them pattern-driven PMS algorithms [15–24].
The pattern-driven PMS algorithms have so far shown better time performance than other exact algorithms in identifying both short motifs and long motifs with weak signal. Their basic idea is to generate candidate motifs by using several reference sequences in the input, and then verify the candidate motifs one by one. Specifically, they generate candidate motifs by using all possible h-tuples T = (x _{1}, x _{2}, …, x _{ h }) composed of h l-length strings coming from h distinct reference sequences. In existing pattern-driven PMS algorithms, h is 1 for PMSP [15] and PMSPrune [16]; h is 2 for StemFinder [17], PairMotif [18], qPMS7 [19] and TravStrR [20]; h is 3 for iTriplet [21] and PMS5 [22]; for PMS8 [23] and qPMS9 [24], h is greater than or equal to 3 and self-adaptive in dealing with different PMS problem instances. Moreover, these algorithms use k = t – q + h reference sequences to generate candidate motifs, ensuring that there exists at least one h-tuple T such that each l-length string in T is a motif instance.
Although pattern-driven PMS algorithms outperform other exact algorithms, most of them use the first k sequences in the input as reference sequences, without considering the effect of different reference sequences on time performance. In this study we found that, given a data set, different reference sequences may lead to quite different numbers of candidate motifs, especially for large alphabets. So, in dealing with different inputs of the same scale, the pattern-driven PMS algorithms may exhibit sharp fluctuations in running time. For instance, we randomly generated multiple groups of data sets with |Σ| = 20, t = 20, q = 20 and n = 600 following the method described in the results and discussion section. When solving the (19, 9) problem instance, qPMS7 sometimes consumes 6.1 minutes, but sometimes over 48 hours. Some other pattern-driven PMS algorithms, like TravStrR and PMS8, suffer from the same problem (see Supplement 1 for more examples).
To solve this problem, we propose a method named RefSelect to quickly select reference sequences that generate a small number of candidate motifs. RefSelect can bring a practical time improvement to the state-of-the-art pattern-driven PMS algorithms, without requiring any modification to them.
Methods
Problem description and notations
Reference sequence selection problem
Given a data set D = {s _{1}, s _{2}, …, s _{ t }} over an alphabet Σ that contains t sequences of length n, the (l, d) problem instance (0 ≤ d < l < n) and the number of reference sequences k (1 < k < t) required by the pattern-driven PMS algorithms, the task is to select k reference sequences from D to form the reference sequence set D', such that when using D' the pattern-driven PMS algorithms can efficiently solve the (l, d) problem instance without sharp fluctuations in running time.
In the reference sequence selection problem, the provided value of k should be greater than 1. If k is 1, the candidate motifs are generated from single l-mers; in this case, no matter how we select a reference sequence, the number of generated candidate motifs is fixed. In fact, k is greater than 1 for all the efficient and recently proposed pattern-driven PMS algorithms.
We evaluate a reference sequence selection algorithm from two perspectives. One is time performance. The time cost of the reference sequence selection algorithm should be as small as possible because it is a preprocessing step for pattern-driven PMS algorithms; it would be meaningless if selecting reference sequences cost too much time. The other is validity, namely whether the reference sequence selection algorithm brings a good speedup for pattern-driven PMS algorithms. The speedup is the ratio of T _{1} to T _{ rs } + T _{2}, where T _{1} is the running time of the pattern-driven PMS algorithms on the input sequences in their original order, T _{ rs } is the running time of the reference sequence selection algorithm, and T _{2} is the running time of the pattern-driven PMS algorithms on the input sequences in the new order generated by the reference sequence selection algorithm.
Table 1 summarizes the notations used in this paper. Notice that a sequence refers specifically to an n-length string in a data set, and an l-mer refers to a short string of length l (l < n).
Overview of RefSelect
We now explain why and how to select reference sequences for the pattern-driven PMS algorithms. Let us consider the following two observations, which indicate how the Hamming distance between pairs of l-mers affects the number of candidate motifs. Examples and a detailed discussion of the two observations are given in the results and discussion section.
Observation 1. For two l-mers x and x', the smaller their Hamming distance d _{ H }(x, x'), the larger the number of their common candidate motifs M _{ d }(x, x').
Observation 2. For a tuple T of h l-mers, when it contains pairs of l-mers with a relatively small Hamming distance, it generates a relatively large number of candidate motifs.
Based on the two observations, different reference sequences may lead to different numbers of candidate motifs. The pattern-driven PMS algorithms utilize all tuples of h l-mers in k (0 < h ≤ k) reference sequences to generate candidate motifs. Once there are relatively more pairs of l-mers with small Hamming distance in these h-tuples, more candidate motifs will be generated.
Since the time performance of pattern-driven PMS algorithms mainly depends on the number of generated candidate motifs, we should select the reference sequence set that generates a small number of candidate motifs. For example, as shown in Fig. 1, assume the input sequence set D is {s _{1}, s _{2}, s _{3}, s _{4}} where each sequence has two l-mers, and we select k = 3 reference sequences from D. In the figure, the thicker the dotted line, the more candidate motifs are generated by the associated two l-mers. Obviously, {s _{2}, s _{3}, s _{4}} is the optimal reference sequence set.
Naturally, we select reference sequences by evaluating the number of generated candidate motifs, ensuring that the selected reference sequences generate a small number of candidate motifs. When we evaluate the number of candidate motifs generated from a tuple T of h l-mers, it is difficult to directly compute the number of common candidate motifs shared by all the h l-mers in T, denoted by N _{1}. An alternative is to compute the sum of the numbers of common candidate motifs shared by each pair of l-mers in T, denoted by N _{2} = ΣM _{ d }(x _{ i }, x _{ j }) for 1 ≤ i < j ≤ h. As shown in Fig. 5, N _{1} and N _{2} show a consistent tendency as the Hamming distance between the pairs of l-mers in T varies.
Furthermore, we use (1) and (2) to evaluate N _{ r }(D'), the number of candidate motifs generated from the reference sequence set D'. It is defined as the sum of the numbers of common candidate motifs shared by each pair of l-mers in D' (i.e., in every two sequences in D').
Our method of selecting reference sequences includes two steps. The first step is to compute the number of candidate motifs generated from every two sequences in D, in order to quickly evaluate the number of candidate motifs generated from the specified k reference sequences in the next step. The second step is to select k sequences from D to form the reference sequence set D' such that the number of candidate motifs generated from D' is as small as possible. In the following, we describe the two steps in detail.
Step 1: computing the number of candidate motifs
We compute the number of candidate motifs generated from every two sequences s _{ i } and s _{ j } in D according to (2). For an l-mer x in s _{ i } and an l-mer x' in s _{ j }, the number of their common candidate motifs M _{ d }(x, x') depends on their Hamming distance d _{ H }(x, x') [18]. The details of how to compute M _{ d }(x, x') are described in [18]. In implementing RefSelect, we store the values of M _{ d }(x, x') under different d _{ H }(x, x') in a table in advance. Once we know d _{ H }(x, x'), we can immediately get M _{ d }(x, x') by looking up the table in O(1) time.
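Both Observation 1 and the pre-computed table can be illustrated by brute force on a tiny instance. The closed-form computation of M _{ d }(x, x') is given in [18]; the sketch below (names and parameters are ours) simply enumerates all l-mers directly, which suffices to show that M _{ d } depends on x and x' only through their Hamming distance and shrinks as that distance grows:

```python
from itertools import product

def hamming(u, v):
    """Number of positions at which two equal-length strings differ."""
    return sum(a != b for a, b in zip(u, v))

def common_candidates(x, xp, d, alphabet="ACGT"):
    """Brute-force M_d(x, x'): l-mers within distance d of both x and x'."""
    l = len(x)
    return sum(1 for y in product(alphabet, repeat=l)
               if hamming(y, x) <= d and hamming(y, xp) <= d)

# M_d depends only on d_H(x, x'); tabulate it once per distance (l = 5, d = 2)
x = "AAAAA"
table = {h: common_candidates(x, "C" * h + "A" * (5 - h), 2) for h in range(5)}
print(table)  # M_d shrinks as the Hamming distance h grows
```

In RefSelect such a table is computed once in advance (via the formula of [18], not by enumeration), so each later query costs O(1).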
Thus, the core operation of (2) is to compute the Hamming distance between every two l-mers x ∈_{ l } s _{ i } and x' ∈_{ l } s _{ j }. For any two sequences of length n, we have O(n ^{2}) pairs of l-mers. A simple method is to traverse all these pairs of l-mers; for each pair of l-mers x and x', the Hamming distance can be computed in O(l) time by comparing the characters x[i] and x'[i] for 1 ≤ i ≤ l. The time complexity of this method is O(ln ^{2}).
We introduce a more efficient method to compute the Hamming distance between every pair of l-mers in s _{ i } and s _{ j }. We fill an n × n matrix M, where the element in row a (1 ≤ a ≤ n) and column b (1 ≤ b ≤ n) is denoted by M[a, b]. Let l _{ min } = min(a, b), str _{1} = s _{ i }[a – l _{ min } + 1…a] and str _{2} = s _{ j }[b – l _{ min } + 1…b]; then, M[a, b] is the number of positions i (1 ≤ i ≤ l _{ min }) such that str _{1}[i] = str _{2}[i], namely M[a, b] = l _{ min } − d _{ H }(str _{1}, str _{2}). For example, in Fig. 2, a < b, str _{1} = s _{ i }[1…a], str _{2} = s _{ j }[b – a + 1…b], and then M[a, b] = a − d _{ H }(s _{ i }[1…a], s _{ j }[b – a + 1…b]), which is the number of positions where the two characters are identical in the alignment of str _{1} and str _{2}.
In filling the matrix M, we initialize M[a, b] with 0 for the case of min(a, b) = 0, and obtain M[a + 1, b + 1] based on M[a, b]:

M[a + 1, b + 1] = M[a, b] + 1 if s _{ i }[a + 1] = s _{ j }[b + 1], and M[a + 1, b + 1] = M[a, b] otherwise, (3)

where both a and b range from 0 to n − 1.
With the matrix M, we use (4) to compute the Hamming distance between a pair of l-mers str _{1}' and str _{2}', where str _{1}' = s _{ i }[a – l + 1…a] and str _{2}' = s _{ j }[b – l + 1…b] are the l-mers ending at position a (a ≥ l) of s _{ i } and position b (b ≥ l) of s _{ j }, respectively:

d _{ H }(str _{1}', str _{2}') = l − (M[a, b] − M[a − l, b − l]). (4)
Our method thus mainly consists of filling the matrix M; that is, we compute its n ^{2} elements one by one for any two n-length sequences s _{ i } and s _{ j }. In computing each element M[a, b] by (3) in O(1) time, we simultaneously compute, by (4) in O(1) time, the Hamming distance between the l-mer ending at position a (a ≥ l) of s _{ i } and the one ending at position b (b ≥ l) of s _{ j }. Therefore, the time complexity of computing the Hamming distance for all pairs of l-mers in two sequences is reduced to O(n ^{2}).
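The filling of M and the simultaneous distance computation of (4) can be sketched as follows (a sketch with 0-based Python indexing and names of our choosing; it assumes both sequences have the same length n):

```python
def all_lmer_distances(si, sj, l):
    """Hamming distance between every pair of l-mers of si and sj in O(n^2).

    M[a][b] = number of matching positions in the alignment of the prefixes
    ending at si[a] and sj[b] (1-based), over their last min(a, b) characters.
    """
    n = len(si)
    M = [[0] * (n + 1) for _ in range(n + 1)]
    dist = {}
    for a in range(1, n + 1):
        for b in range(1, n + 1):
            # recurrence (3): extend the diagonal by one aligned character
            M[a][b] = M[a - 1][b - 1] + (si[a - 1] == sj[b - 1])
            if a >= l and b >= l:
                # formula (4): matches inside the l-mers ending at a and b
                dist[(a - l, b - l)] = l - (M[a][b] - M[a - l][b - l])
    return dist  # dist[(i, j)] = d_H(si[i:i+l], sj[j:j+l]), 0-based starts

d = all_lmer_distances("ACGTAC", "ACGGAC", 3)
```

The subtraction in (4) is valid because M[a][b] and M[a − l][b − l] lie on the same diagonal, so their difference counts matches in exactly the l aligned positions of the two l-mers.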
Step 2: selecting reference sequences
After getting the number of candidate motifs generated from every two sequences in D, we can evaluate the number of candidate motifs generated from a set of reference sequences according to (1). In this section, we introduce how to select a set of reference sequences that generates a small number of candidate motifs.
First, let us consider the exhaustive search strategy. For a data set D consisting of t sequences, after evaluating the number of candidate motifs for every possible set of k sequences in D, the exhaustive search takes the k sequences corresponding to the minimum number of candidate motifs as the reference sequences. Since there are C(t, k) possible sets and evaluating each of them involves its C(k, 2) sequence pairs, the time complexity of the exhaustive search is O(C(t, k) × k ^{2}).
Obviously, the running time of the exhaustive search grows dramatically with the number of input sequences t and the number of selected reference sequences k. A simple experiment shows that the exhaustive search is too time-consuming for large inputs: we set |Σ| = 4, n = 200, q = t = 200 and k = t × 5 % = 10, and the running time of the exhaustive search exceeds one day on a personal computer.
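The sheer number of pairwise evaluations already explains this cost. Assuming, as above, that each candidate set is scored by summing over its C(k, 2) sequence pairs, a quick count for the setting just described gives (the accounting below is our own sketch):

```python
from math import comb

def exhaustive_evaluations(t, k):
    """Pairwise evaluations done by exhaustive search: C(t, k) candidate
    sets, each scored by summing over its C(k, 2) sequence pairs."""
    return comb(t, k) * comb(k, 2)

# the setting from the text: t = 200, k = 5% of t = 10
print(f"{exhaustive_evaluations(200, 10):.3e}")  # roughly 1e18 pair evaluations
```

Even at a billion table lookups per second this is decades of work, which motivates the clustering heuristic introduced next.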
In order to quickly select reference sequences, we convert the problem to graph clustering. The t sequences in D are taken as t nodes in a graph. The similarity between two nodes s _{ i } and s _{ j } is related to the number of candidate motifs generated from s _{ i } and s _{ j } (i.e., N _{ r }(s _{ i }, s _{ j })). We hope that the nodes corresponding to the reference sequences with a small number of candidate motifs form a dense subgraph and belong to the same cluster after graph clustering. We use the MCL algorithm [25] to perform clustering, and set the inflation parameter to 1.8 following [26]. We describe the details involved in clustering as follows.
Similarity measure
We design the similarity of two nodes (sequences) s _{ i } and s _{ j } based on N _{ r }(s _{ i } , s _{ j }). Simultaneously, we consider the following two factors.
First, we further increase the effect of the pairs of l-mers in s _{ i } and s _{ j } with small Hamming distance on the total number of candidate motifs generated from s _{ i } and s _{ j }. This helps the clustering process distinguish different reference sequence sets that lead to different numbers of candidate motifs. Specifically, we use (5) instead of (2) to evaluate the number of candidate motifs generated from two sequences s _{ i } and s _{ j }.
Second, we aim to put a set of sequences D' to the same cluster such that every two sequences s _{ i } and s _{ j } in D' generate a small number of candidate motifs. So, we should ensure that s _{ i } and s _{ j } have a larger similarity when they generate a smaller number of candidate motifs. Finally, we compute the similarity of s _{ i } and s _{ j } as follows:
Cluster refinement
The clustering process may produce more than one cluster, and a cluster may not contain exactly k nodes. We refine each obtained cluster C in order to get a set of k reference sequences. Then, we score the resulting sets of reference sequences and output the set with the highest score.
We take a cluster C with only one node as an invalid cluster, since its node has a low similarity with all other nodes. A cluster C with two or more nodes corresponds to three cases: (a) there are exactly k nodes in C; (b) there are more than k nodes in C; (c) there are fewer than k nodes in C. For Case (a), we can get the reference sequence set D' directly by using the k sequences in C. Next, we introduce how to refine C under Cases (b) and (c).
For Case (b), we use a greedy strategy to select k sequences from C (|C| > k) to form D'. First, we initialize D' with {s _{ a }, s _{ b }} such that sim(s _{ a }, s _{ b }) = max{sim(s _{ i }, s _{ j })} over all s _{ i }, s _{ j } ∈ C with s _{ i } ≠ s _{ j }. Then, we repeatedly choose a node s _{ r } satisfying (7) from C − D' and add it to D' until |D'| = k.
For Case (c), we use a similar method to choose k − |C| nodes from D − C and add them to C to form D'. First, D' is initialized with C. Then, we repeatedly choose a node s _{ r } satisfying (8) from D − D' and add it to D' until |D'| = k.
Figures 3 and 4 show examples under Case (b) with k = 3 and Case (c) with k = 4, respectively. Differences between the two cases are: in Case (b), we get D' by selecting reference sequences (nodes) from the subgraph corresponding to the cluster C; while in Case (c), we get D' by selecting reference sequences (nodes) from the whole graph and adding them to C.
We describe how to refine a cluster C in Algorithm 1. Because the greedy selection of reference sequences is similar to Prim's algorithm for computing a minimum spanning tree, the time complexity of Algorithm 1 is O(|C| ^{2} lg |C|) and O((t − |C|) ^{2} lg(t − |C|)) under Case (b) and Case (c), respectively.
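Since criteria (7) and (8) are not reproduced here, the Case (b) refinement can only be sketched under an assumption: the sketch below (names are ours) reads (7) as "add the outside node with the largest total similarity to the nodes already selected":

```python
def refine_case_b(C, k, sim):
    """Greedy selection of k sequences from a cluster C with |C| > k
    (a sketch of Case (b); sim(i, j) is the pairwise similarity)."""
    # seed with the most similar pair in the cluster
    a, b = max(((i, j) for i in C for j in C if i < j),
               key=lambda p: sim(*p))
    D = {a, b}
    while len(D) < k:
        # our assumed reading of criterion (7): maximize total similarity to D
        r = max((s for s in C if s not in D),
                key=lambda s: sum(sim(s, x) for x in D))
        D.add(r)
    return D

# toy similarity: nodes 0..4, where nodes 0-2 are mutually similar
S = {(i, j): 1.0 if i < 3 and j < 3 else 0.1
     for i in range(5) for j in range(5)}
sim = lambda i, j: S[(min(i, j), max(i, j))]
print(refine_case_b(set(range(5)), 3, sim))  # -> {0, 1, 2}
```

Case (c) is symmetric: start from C and greedily pull nodes in from the rest of the graph instead of from the cluster.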
After cluster refinement, if we obtain more than one reference sequence set D', we score each D' by (9), and then output the D' with the highest score.
Whole algorithm
This section gives the whole algorithm of RefSelect.
In line 1 of the pseudocode, we initialize D' with an empty set. Lines 2 to 5, corresponding to the first step of RefSelect, compute the similarity of every two nodes (sequences). The core operation of this step is to compute the Hamming distance from all l-mers in s _{ i } to all l-mers in s _{ j } in O(n ^{2}) time, for any two sequences s _{ i } and s _{ j } in D. Therefore, with C(t, 2) sequence pairs, the time complexity of this step is O(t ^{2} n ^{2}).
Lines 6 to 10, corresponding to the second step of RefSelect, cluster the t sequences in D by using the MCL algorithm and refine each obtained cluster. The time complexity of clustering is O(t ^{3}). The time complexity of refining clusters is negligible with respect to that of the first step. So, the time complexity of RefSelect is O(t ^{2} n ^{2} + t ^{3}).
In executing RefSelect, we need O(tn) space to store the input sequence set D, O(n ^{2}) space to store the matrix M for computing Hamming distance, and O(t ^{2}) space to store the similarity matrix of t input sequences. So, the space complexity of RefSelect is O(tn + n ^{2} + t ^{2}).
Parallel implementation
To efficiently deal with large data sets, we can further accelerate RefSelect by using parallel processing. RefSelect consists of two steps, and we mainly use parallel processing for the first step. The reasons are (1) the first step is the bottleneck of the whole RefSelect algorithm in running time as shown in Table 6, and (2) the first step is easy to parallelize, because it repeatedly calculates the number of candidate motifs generated from two sequences s _{ i } and s _{ j } in D and each calculation is independent of the others.
We implement the parallel version of RefSelect by using OpenMP [27, 28], which provides a flexible programming model for shared memory architectures and makes it easy to add parallelism to serial code by inserting one or several OpenMP directives. The pseudocode is shown in Algorithm 3, where we add an OpenMP "for" directive before the inner iterations of the first step to split the parallel iteration space across threads.
The reason why we add the directive before the inner iterations (line 4) rather than the outer iterations (line 2) is load balancing among threads. Note that the number of inner iterations is not fixed for each outer iteration. If we added the directive before the outer iterations, the smaller the value of i, the more computational work would be needed by the thread processing the ith outer iteration.
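The imbalance can be illustrated numerically. With a pair loop over i < j, splitting the outer loop into contiguous blocks gives early threads far more inner iterations, while splitting at the pair level evens the load (a Python sketch of the counting argument, using a flattened-pair split as a stand-in for the inner-level directive; the actual implementation uses OpenMP):

```python
def work_per_thread(t, threads, split="outer"):
    """Pairs (i, j), i < j, assigned to each thread under two splittings:
    contiguous blocks of the outer loop vs. an even split of the
    flattened pair list."""
    total_pairs = t * (t - 1) // 2
    loads = [0] * threads
    if split == "outer":
        # contiguous blocks of outer indices i; early blocks get long inner loops
        per = -(-t // threads)  # ceil(t / threads)
        for i in range(t):
            loads[min(i // per, threads - 1)] += t - 1 - i
    else:
        # even split of the flattened list of pairs
        for idx in range(total_pairs):
            loads[idx * threads // total_pairs] += 1
    return loads

print(work_per_thread(16, 4, "outer"))  # -> [54, 38, 22, 6]
print(work_per_thread(16, 4, "pairs"))  # -> [30, 30, 30, 30]
```

With an outer split, the first thread does roughly nine times the work of the last; balancing at the inner level removes this skew, which is why the directive is placed on line 4.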
Results and discussion
Analysis of effect of Hamming distance on candidate motif number
First, we consider the case of two lmers x and x', and analyze the effect of their Hamming distance d _{ H }(x, x') on the number of their common candidate motifs M _{ d }(x, x'). Tables 2 and 3 give the values of M _{ d }(x, x') with d _{ H }(x, x') varying from 0 to 2d under the DNA and protein data, respectively. In both tables, the values are obtained by using two challenging PMS problem instances. We can find that M _{ d }(x, x') increases with the decrease of d _{ H }(x, x') for both the DNA and protein data.
Second, we consider the case of h (h > 2) l-mers containing pairs of l-mers with different Hamming distances, and analyze the effect of Hamming distance on the number of common candidate motifs shared by the h l-mers. In our example, we set h to 3, so the three l-mers x _{1}, x _{2} and x _{3} form three pairs of l-mers; then, we fix d _{ H }(x _{1}, x _{2}) = 2d − 2 and vary d _{ H }' = (d _{ H }(x _{1}, x _{3}) + d _{ H }(x _{2}, x _{3}))/2 from 2d − 2 to 2d − 7. Figure 5(a) and (b) show the tendency of the number of common candidate motifs M _{ d }(x _{1}, x _{2}, x _{3}) with the decrease of d _{ H }' on the (19, 7) problem instance for the DNA data and the (19, 9) problem instance for the protein data, respectively. The y-axis is in log scale. We can see that, for both the DNA and protein data, M _{ d }(x _{1}, x _{2}, x _{3}) increases with the decrease of d _{ H }'. In other words, when h (h > 2) l-mers contain some pairs of l-mers with a relatively small Hamming distance, they generate a relatively large number of candidate motifs. Also, the tendency of M _{ d }'(x _{1}, x _{2}, x _{3}) = M _{ d }(x _{1}, x _{2}) + M _{ d }(x _{1}, x _{3}) + M _{ d }(x _{2}, x _{3}) is given in Fig. 5. Both M _{ d }'(x _{1}, x _{2}, x _{3}) and M _{ d }(x _{1}, x _{2}, x _{3}) increase with the decrease of d _{ H }', namely they have a consistent tendency.
In addition, Tables 2 and 3 also give the probability that the Hamming distance between two random l-mers x and x' is i (0 ≤ i ≤ 2d), denoted by p _{ i } and calculated by (10). As seen from the tables, p _{ i } decreases with the decrease of the Hamming distance i.
For two n-length sequences s _{ i } and s _{ j }, let E _{ i } denote the expected number of pairs of l-mers x ∈_{ l } s _{ i } and x' ∈_{ l } s _{ j } with d _{ H }(x, x') = i. It is calculated by (11). The bold values in Tables 2 and 3 indicate that E _{ i } is less than 1 in the case of n = 600.
Although pairs of l-mers with E _{ i } < 1 rarely occur between two given sequences, they are usually contained in the whole data set. The reasons are: first, the whole data set forms multiple pairs of sequences, which increases the probability of the occurrence of pairs of l-mers with Hamming distance i; second, the conservation of motifs makes some highly similar motif instances form pairs of l-mers with E _{ i } < 1.
From the tables, when E _{ i } is less than 1, the value of M _{ d }(x, x') is relatively large, especially for the protein data. Thus, the more pairs of l-mers with E _{ i } < 1 in the reference sequence set, the more candidate motifs will be generated by the algorithms.
Results on practical time improvement of PMS algorithms
In this section we check the validity of RefSelect as follows. First, we use RefSelect to select k reference sequences from the given t input sequences, and adjust the order of the t input sequences by moving the k selected sequences to the front; RefSelect is implemented in C++ and its running time is denoted by T _{ rs }. Second, we test the pattern-driven PMS algorithms on the input sequences in the original order and in the new order, obtaining the running times T _{1} and T _{2}, respectively. Finally, we compare T _{1} with T _{ rs } + T _{2}.
Three pattern-driven PMS algorithms, qPMS7 [19], TravStrR [20] and PMS8 [23], are chosen for the test. They are all recently proposed algorithms and outperform the previous exact algorithms on challenging instances. Notice that qPMS9 [24] is also a recently proposed PMS algorithm with good time performance; we do not choose it as a tested algorithm, and the related discussion is given in the applicability of RefSelect section. All the tested algorithms are executed on a 2.67 GHz single core with 4 GB of memory, except for PMS8, which is executed on a 16-core platform when solving the (21, 10) and (23, 11) instances of the protein data.
In the experiments, we generate data sets following [5]. First, we randomly generate t sequences of length n and a motif m of length l, and randomly choose q (0 < q ≤ t) out of the t sequences; then, for each of the q sequences, we generate a random motif instance m' that differs from m in at most d positions, and implant m' into a random position of the sequence. For each specific test instance, we generate five data sets to get an average result.
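The generation scheme just described can be sketched as follows (parameter names are ours; an illustration of the planted-motif scheme of [5], not the authors' exact generator):

```python
import random

def plant_dataset(t, n, l, d, q, alphabet="ACGT", seed=0):
    """Generate one (l, d) data set: t random n-length sequences, with a
    motif instance (at most d mutations) implanted into q of them."""
    rng = random.Random(seed)
    seqs = [[rng.choice(alphabet) for _ in range(n)] for _ in range(t)]
    motif = [rng.choice(alphabet) for _ in range(l)]
    for s in rng.sample(seqs, q):
        instance = motif[:]
        # mutate at most d randomly chosen positions of the motif
        for pos in rng.sample(range(l), rng.randint(0, d)):
            instance[pos] = rng.choice(alphabet)
        # implant the instance at a random position of the sequence
        start = rng.randrange(n - l + 1)
        s[start:start + l] = instance
    return ["".join(s) for s in seqs], "".join(motif)

seqs, m = plant_dataset(t=20, n=600, l=13, d=4, q=20)
```

Repeating this five times with different seeds yields the averaged data sets used in the tables below.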
First, we fix t = 20, n = 600 and q = 20, and give in Tables 4 and 5 the results on challenging instances of the protein and DNA data, respectively. For qPMS7 and TravStrR, k is set as 2, while for PMS8 k is set dynamically under different (l, d) instances according to [23]. From this experiment, we find that:
(1) The running time of RefSelect on each of these data sets is less than one second, which is nearly negligible and therefore not listed in Tables 4 and 5.

(2) RefSelect makes the tested algorithms solve the PMS problem steadily in an efficient way. For example, for the (19, 9) problem instance in Table 4, the minimum and maximum running times of qPMS7 are reduced from 820.00 s and 37121.00 s to 140.00 s and 287.00 s after using the reference sequences selected by RefSelect.

(3) The speedup on the protein data is significantly larger than that on the DNA data. We explain this by using Tables 2 and 3. The fact that pattern-driven PMS algorithms sometimes show poor performance is mainly caused by the pairs of l-mers with E _{ i } < 1 in the reference sequences; these l-mers can generate more candidate motifs. The larger the difference between the number of candidate motifs for E _{ i } < 1 and that for E _{ i } ≥ 1, the larger the speedup that can be achieved. As can be seen from Tables 2 and 3, this difference on the protein data (large alphabet) is significantly larger than that on the DNA data (small alphabet).

(4) A larger speedup is achieved on larger (l, d) instances for the protein data. This can also be explained by E _{ i }. That is, as shown in Tables 2 and 3, the difference between the number of candidate motifs for E _{ i } < 1 and that for E _{ i } ≥ 1 increases with l and d.
Second, we discuss the case of q < t by fixing |Σ| = 20 (protein data), t = 20 and n = 600. Of the above three algorithms, we choose only qPMS7 as the tested algorithm, because PMS8 cannot solve the PMS problem with q < t and TravStrR usually quits unexpectedly in our test environment. We select t – q + 2 reference sequences for qPMS7, and test it by using the same (l, d) instances as [19]. On the one hand, we set q to 15 and test qPMS7 on different (l, d) instances; as shown in Fig. 6(a), RefSelect makes qPMS7 perform better, and the speedup increases with l and d. On the other hand, we fix (l, d) = (19, 8) and test qPMS7 by varying q from 10 to 19; as shown in Fig. 6(b), RefSelect can effectively accelerate qPMS7 under different values of q.
Finally, we test the effect of the sequence length n on the speedup brought by RefSelect for existing algorithms. In the experiment, we fix |Σ| = 4 (DNA data) and q = t = 20, and vary n from 100 to 500; the tested algorithm is qPMS7 and the PMS problem instances are (21, 8) and (23, 9). The results are shown in Fig. 7. Overall, the speedup increases with the decrease of n. This is because, according to (11) and Table 2, the smaller the value of n, the larger the difference between the number of candidate motifs for E _{ i } < 1 and that for E _{ i } ≥ 1.
The fact that RefSelect works better for short input sequences is meaningful for motif discovery in next-generation or high-throughput sequencing data sets, such as ChIP-chip [29] and ChIP-seq [30] data sets. These data sets provide a better resolution for locating motifs than traditional promoter sequences. For example, the length of each sequence in ChIP-seq data sets is usually 200 base pairs, while the length of a promoter sequence is about 1000 base pairs. Therefore, RefSelect can bring a better practical time improvement to the pattern-driven PMS algorithms on ChIP-seq data sets than on traditional promoter sequences.
Assessment of RefSelect on large data sets
All the experiments in the previous section focus on data sets of small scale, namely those where the number of input sequences t is small. In recent years, with the rapid development of high-throughput technologies, which allow genome-wide identification of motifs, data sets such as ChIP-seq [30] contain hundreds or more sequences. Thus, it is necessary to further assess the time performance and validity of RefSelect on large data sets.
First, we make the following settings in the experiment: we set the maximum value of t to 600, as the ChIP-tailored version of MEME can effectively identify motifs by using 600 sequences randomly selected from a whole ChIP-seq data set [31]; we set k to 5 % × t, as most of the sequences in ChIP-seq data sets contain motif instances; and we set the sequence length n to 200, as ChIP-seq sequences localize motifs at a higher resolution than traditional promoter sequences [9].
Since no exact algorithm can efficiently handle large data sets, we assess the validity of RefSelect as follows. Let N_original denote the number of candidate motifs generated from the first k sequences of the original input, and N_improved denote the number of candidate motifs generated from the k reference sequences selected by RefSelect. Then, we compute the ratio N_original/N_improved; a larger ratio indicates that more candidate motifs are eliminated.
On this basis, we measure the running time of RefSelect and N_original/N_improved on both the DNA and protein data sets, varying t from 50 to 600. From the results shown in Table 6, we find that: (1) RefSelect quickly selects reference sequences from these data sets, and its running time is independent of the alphabet size; (2) the running time of RefSelect increases with the number of input sequences t and exceeds one minute for t = 600; (3) RefSelect still reduces the number of generated candidate motifs, especially for the protein data (large alphabet). Besides the running time of the whole RefSelect algorithm, we also list the running time of its first step, which shows that the first step is the bottleneck of the whole algorithm.
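To make the N_original/N_improved metric concrete, the following sketch counts candidate motifs for a pair of reference sequences by brute-force enumeration of common d-neighbors. This is only an illustration: `d_neighborhood`, `candidate_count`, and the toy sequences are hypothetical names and data, and the brute-force enumeration stands in for the optimized counting used inside pattern-driven PMS algorithms.

```python
from itertools import combinations, product

def d_neighborhood(lmer, d, alphabet="ACGT"):
    """All strings within Hamming distance d of lmer (brute force)."""
    neighbors = {lmer}
    for positions in combinations(range(len(lmer)), d):
        for letters in product(alphabet, repeat=d):
            s = list(lmer)
            for p, c in zip(positions, letters):
                s[p] = c
            neighbors.add("".join(s))
    return neighbors

def candidate_count(ref1, ref2, l, d):
    """Number of distinct candidate motifs from two reference sequences:
    the union, over all l-mer pairs (x from ref1, y from ref2), of the
    common d-neighbors of x and y."""
    candidates = set()
    xs = {ref1[i:i + l] for i in range(len(ref1) - l + 1)}
    ys = {ref2[i:i + l] for i in range(len(ref2) - l + 1)}
    for x in xs:
        nx = d_neighborhood(x, d)
        for y in ys:
            candidates |= nx & d_neighborhood(y, d)
    return len(candidates)

# Toy illustration of the ratio N_original / N_improved
# (hypothetical sequences; real inputs are DNA/protein data sets):
seqs = ["ACGTGCAT", "ACGTCCAT", "TTTTGGGG"]
n_original = candidate_count(seqs[0], seqs[1], l=4, d=1)  # first k = 2 sequences
n_improved = candidate_count(seqs[0], seqs[2], l=4, d=1)  # an alternative pair
ratio = n_original / n_improved
```

The brute-force neighborhood enumeration grows combinatorially in l and d, which is precisely why the number of candidate motifs, and hence the choice of reference sequences, dominates the running time of pattern-driven algorithms.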
Second, we test the parallel version of RefSelect. It is common for a ChIP-seq data set to contain more than one thousand sequences, and parallel processing is a good choice in this case. In the experiment, we use the protein data sets with k = 5% × t and n = 200. Table 7 gives the running time as t varies from 200 to 1600 and the number of threads from 1 to 8. The acceleration of RefSelect through parallel processing is evident, and the speedup is nearly linear in the number of threads.
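The parallelization pattern can be sketched as follows. The actual implementation uses OpenMP [27, 28]; this Python sketch only illustrates the idea of distributing the independent pairwise sequence evaluations across a pool of worker threads. `pair_distance` is a hypothetical stand-in for RefSelect's per-pair candidate-motif estimate, and CPython threads will not run this CPU-bound scoring truly in parallel, so the near-linear speedup reported in Table 7 applies to the OpenMP version, not to this sketch.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

def pair_distance(task):
    """Hypothetical per-pair score: minimum Hamming distance over all
    l-mer pairs of the two sequences (a stand-in for RefSelect's
    candidate-motif estimate)."""
    (i, si), (j, sj), l = task
    best = min(
        sum(a != b for a, b in zip(si[p:p + l], sj[q:q + l]))
        for p in range(len(si) - l + 1)
        for q in range(len(sj) - l + 1)
    )
    return (i, j), best

def all_pair_scores(seqs, l, workers=4):
    """Score every sequence pair; the pairs are independent of each
    other, so they map directly onto a pool of worker threads."""
    tasks = [(a, b, l) for a, b in combinations(enumerate(seqs), 2)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(pair_distance, tasks))
```

For example, `all_pair_scores(["ACGT", "ACGA", "TTTT"], l=2)` returns a score for each of the three sequence pairs, and the result is identical regardless of the number of workers.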
Applicability of RefSelect
To guide its proper use, we summarize the applicability of RefSelect as follows.

(1) RefSelect can accelerate pattern-driven PMS algorithms that use random sequences, or the first k ≥ 2 sequences of the input, as reference sequences to generate candidate motifs. Among the efficient and recently proposed PMS algorithms, RefSelect is applicable to qPMS7, TravStrR and PMS8, but not to qPMS9, which does not use a fixed set of k reference sequences to obtain h-tuples.

(2) RefSelect can deal with large data sets containing hundreds or even more sequences.

(3) The speedup brought by RefSelect for PMS algorithms is affected by the alphabet size: the larger the alphabet, the larger the speedup.

(4) The speedup brought by RefSelect for PMS algorithms is also affected by the sequence length n: it increases as n decreases.

(5) RefSelect works better on challenging instances with large l and d. For challenging instances with small l and d, however, it is not necessary to use RefSelect, as they can be quickly solved by existing PMS algorithms.
Moreover, two points need to be clarified. First, the instability of time performance is not reported in the previous literature [19, 20, 23]. This is because, as we find, the implanted motif instances in their experimental data differ from the motif in exactly d positions. In this case, the probability that the reference sequences contain pairs of l-mers with E_i < 1 is small, and accordingly the number of generated candidate motifs is also small. It should be pointed out, however, that in reality motif instances differ from the motif in at most d positions, which leads to execution time instability for some of the existing algorithms.
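The distinction between the two mutation models can be illustrated with a small instance generator. This is a hypothetical sketch for exposition only (`plant_instance` is not part of RefSelect or of the benchmarks in [19, 20, 23]): the benchmark data mutate each planted instance in exactly d positions, while the general (l, d) definition allows any number of mismatches up to d.

```python
import random

def plant_instance(motif, d, exactly_d=False, alphabet="ACGT"):
    """Create one motif instance by mutating the motif in exactly d
    positions (the benchmark style) or in at most d positions (the
    general planted (l, d) definition)."""
    k = d if exactly_d else random.randint(0, d)
    positions = random.sample(range(len(motif)), k)
    inst = list(motif)
    for p in positions:
        # replace with a different letter so each position counts as a mismatch
        inst[p] = random.choice([c for c in alphabet if c != inst[p]])
    return "".join(inst)
```

Under the "at most d" model, some instances lie close to the motif (and to each other), making pairs of l-mers with E_i < 1 far more likely; this is the case that triggers the execution time instability discussed above.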
Second, although RefSelect is not applicable to qPMS9, which can solve challenging instances with larger l and d than previous algorithms, our research is still valuable. The reason is that qPMS9 cannot serve as a substitute for other PMS algorithms: we found in our experiments that qPMS9 sometimes exits unexpectedly with an out-of-memory error. This phenomenon becomes particularly frequent on challenging PMS instances with large (l, d), such as (l, d) = (21, 10) and (23, 11).
Conclusions
We formulate the reference sequence selection problem and propose a method named RefSelect to select reference sequences for pattern-driven PMS algorithms, in order to solve the problem that many pattern-driven PMS algorithms exhibit execution time instability. RefSelect requires a small amount of storage space and is capable of selecting reference sequences efficiently and effectively. A parallel version of RefSelect is also provided for handling large data sets. For the state-of-the-art algorithms qPMS7, TravStrR and PMS8, RefSelect enables them to solve PMS problems steadily and efficiently without any modification to these algorithms.
Our work in this paper focuses only on selecting reference sequences for pattern-driven PMS algorithms. We recommend that further research be undertaken on selecting reference sequences for the iterative optimization algorithms that find motifs in large data sets. These algorithms, such as MEME-ChIP [31], usually select hundreds of sequences at random from a large input to perform motif discovery, with a low chance of discovering infrequent motifs [32]. Thus, an elaborate selection of sequences may help them obtain more motif information.
References
1. Tompa M, Li N, Bailey TL, Church GM, De Moor B, Eskin E, Favorov AV, Frith MC, Fu Y, Kent WJ, Makeev VJ, Mironov AA, Noble WS, Pavesi G, Pesole G, Régnier M, Simonis N, Sinha S, Thijs G, van Helden J, Vandenbogaert M, Weng Z, Workman C, Ye C, Zhu Z. Assessing computational tools for the discovery of transcription factor binding sites. Nat Biotechnol. 2005;23(1):137–44.
2. D'haeseleer P. How does DNA sequence motif discovery work? Nat Biotechnol. 2006;24(8):959–61.
3. Fang J, Haasl RJ, Dong Y, Lushington GH. Discover protein sequence signatures from protein-protein interaction data. BMC Bioinform. 2005;6:277.
4. Redhead E, Bailey TL. Discriminative motif discovery in DNA and protein sequences using the DEME algorithm. BMC Bioinform. 2007;8:385.
5. Pevzner PA, Sze SH. Combinatorial approaches to finding subtle signals in DNA sequences. In: Altman R, Bailey TL, editors. Proceedings of the Eighth International Conference on Intelligent Systems for Molecular Biology. California: AAAI Press; 2000. p. 269–78.
6. Evans PA, Smith A, Wareham HT. On the complexity of finding common approximate substrings. Theor Comput Sci. 2003;306:407–30.
7. Bailey TL, Elkan C. Fitting a mixture model by expectation maximization to discover motifs in biopolymers. In: Altman R, Brutlag D, editors. Proceedings of the 2nd International Conference on Intelligent Systems for Molecular Biology. California: AAAI Press; 1994. p. 28–36.
8. Das M, Dai H. A survey of DNA motif finding algorithms. BMC Bioinform. 2007;8 Suppl 7:S21.
9. Zambelli F, Pesole G, Pavesi G. Motif discovery and transcription factor binding sites before and after the next generation sequencing era. Brief Bioinform. 2013;14(2):225–37.
10. Bailey TL, Williams N, Misleh C, Li WW. MEME: discovering and analyzing DNA and protein sequence motifs. Nucleic Acids Res. 2006;34:369–73.
11. Lawrence CE, Altschul SF, Boguski MS, Liu JS, Neuwald AF, Wootton JC. Detecting subtle sequence signals: a Gibbs sampling strategy for multiple alignment. Science. 1993;262:208–14.
12. Buhler J, Tompa M. Finding motifs using random projections. J Comput Biol. 2002;9:225–42.
13. Yang X, Rajapakse JC. Graphical approach to weak motif recognition. Genome Inform. 2004;15(2):52–62.
14. Sun H, Low MYH, Hsu WJ, Rajapakse JC. RecMotif: a novel fast algorithm for weak motif discovery. BMC Bioinform. 2010;11 Suppl 11:S8.
15. Davila J, Balla S, Rajasekaran S. Space and time efficient algorithms for planted motif search. In: Yi P, Zelikovsky A, editors. Proceedings of the Second International Workshop on Bioinformatics Research and Applications. UK: LNCS; 2006. p. 822–9.
16. Davila J, Balla S, Rajasekaran S. Fast and practical algorithms for planted (l, d) motif search. IEEE/ACM Trans Comput Biol Bioinform. 2007;4(4):544–52.
17. Yu Q, Huo H, Vitter JS, Huan J, Nekrich Y. An efficient exact algorithm for the motif stem search problem over large alphabets. IEEE/ACM Trans Comput Biol Bioinform. 2015;12(2):384–94.
18. Yu Q, Huo H, Zhang Y, Guo H. PairMotif: a new pattern-driven algorithm for planted (l, d) DNA motif search. PLoS ONE. 2012;7(10):e48442.
19. Dinh H, Rajasekaran S, Davila J. qPMS7: a fast algorithm for finding (l, d)-motifs in DNA and protein sequences. PLoS ONE. 2012;7(7):e41425.
20. Tanaka S. Improved exact enumerative algorithms for the planted (l, d)-motif search problem. IEEE/ACM Trans Comput Biol Bioinform. 2014;11(2):361–74.
21. Ho ES, Jakubowski CD, Gunderson SI. iTriplet, a rule-based nucleic acid sequence motif finder. Algorithms Mol Biol. 2009;4:14.
22. Dinh H, Rajasekaran S, Kundeti VK. PMS5: an efficient exact algorithm for the (l, d)-motif finding problem. BMC Bioinform. 2011;12:410.
23. Nicolae M, Rajasekaran S. Efficient sequential and parallel algorithms for planted motif search. BMC Bioinform. 2014;15:34.
24. Nicolae M, Rajasekaran S. qPMS9: an efficient algorithm for quorum planted motif search. Sci Rep. 2015;5:7813.
25. van Dongen S. Graph clustering by flow simulation. PhD thesis. The Netherlands: University of Utrecht; 2000.
26. Brohee S, van Helden J. Evaluation of clustering algorithms for protein-protein interaction networks. BMC Bioinform. 2006;7:488.
27. Dagum L, Menon R. OpenMP: an industry-standard API for shared-memory programming. IEEE Comput Sci Eng. 1998;5(1):46–55.
28. Sato M. OpenMP: parallel programming API for shared memory multiprocessors and on-chip multiprocessors. In: Aboulhamid EM, editor. Proceedings of the 15th International Symposium on System Synthesis. New York, USA: ACM Press; 2002. p. 109–11.
29. Lee TI, Johnstone SE, Young RA. Chromatin immunoprecipitation and microarray-based analysis of protein location. Nat Protoc. 2006;1(2):729–48.
30. Mardis ER. ChIP-seq: welcome to the new frontier. Nat Methods. 2007;4:613–4.
31. Machanick P, Bailey TL. MEME-ChIP: motif analysis of large DNA datasets. Bioinformatics. 2011;27(12):1696–7.
32. Quang D, Xie X. EXTREME: an online EM algorithm for motif discovery. Bioinformatics. 2014;30(12):1667–73.
33. Yu Q, Huo H, Zhao R, Feng D, Vitter JS, Huan J. Reference sequence selection for motif searches. In: Ma B, Rajasekaran S, editors. Proceedings of the IEEE International Conference on Bioinformatics and Biomedicine. Washington D.C., USA: IEEE Press; 2015. p. 569–74.
Acknowledgements
A preliminary version [33] of this work appeared in the proceedings of the IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 9–12 November 2015, Washington D.C., USA.
Declarations
Publication of this article was funded by the National Natural Science Foundation of China under Grants 61173025, 61373044 and 61502366, the China Postdoctoral Science Foundation under Grant 2015M582621, and the Fundamental Research Funds for the Central Universities under Grants JB150306 and XJS15014.
This article has been published as part of BMC Bioinformatics Vol 17 Suppl 9 2016: Selected articles from the IEEE International Conference on Bioinformatics and Biomedicine 2015: genomics. The full contents of the supplement are available online at https://sites.google.com/site/feqond/refselect.
Availability of data and material
The executable program of RefSelect and all supplements are available at https://sites.google.com/site/feqond/refselect.
Authors’ contributions
Initial idea of the research was from QY. QY, RZ and HH designed and implemented the proposed algorithm. All authors participated in analysis and manuscript preparation.
Competing interests
The authors declare that they have no competing interests.
Consent for publication
Not applicable.
Ethics approval and consent to participate
Not applicable.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Cite this article
Yu, Q., Huo, H., Zhao, R. et al. RefSelect: a reference sequence selection algorithm for planted (l, d) motif search. BMC Bioinformatics 17, 266 (2016). doi:10.1186/s12859-016-1130-6
Keywords
 Planted (l, d) motif search
 Pattern-driven
 Reference sequences