Basic definitions
Let w be a string of length n of symbols over an alphabet \Sigma. w[i] denotes the ith symbol of w and w[i…j] the substring of w from position i to j, 1 ≤ i, j ≤ n. w[1…i] is the prefix of w ending at position i and w[j…n] is the suffix of w starting at position j. A substring of w is proper if it is different from w. A substring of w is internal if it is neither a prefix nor a suffix of w.
A read r is a string over the alphabet {A, C, G, T}, which is assumed to be ordered alphabetically such that A < C < G < T. ⊴ denotes the lexicographic order of all substrings of the reads induced by the alphabetical order <. Let n be the length of r. The reverse complement of r, denoted by \overline{r}, is the sequence \overline{r[n]}\dots \overline{r[1]}, where \overline{a} denotes the Watson-Crick complement of base a.
Computing suffix- and prefix-free read sets
The first step of our approach for assembling a collection of reads is to eliminate reads that are prefixes or suffixes of other reads. Here we describe a method to recognize these reads. Consider an ordered set \mathcal{R} = (r_1, …, r_m) of reads, possibly of variable length, in which some reads may occur more than once (so \mathcal{R} is in fact a multiset). We assume that, for all i, 1 ≤ i ≤ m, the ith read r_i in \mathcal{R} is virtually padded by a sentinel symbol $_i at the right end and that the alphabetical order < is extended such that A < C < G < T < $_1 < $_2 < ⋯ < $_m.
We define a binary relation ≺ on \mathcal{R} such that r_i ≺ r_j if and only if i < j. That is, ≺ reflects the order of the reads in the input. \mathcal{R} is prefix-free if for all reads r in \mathcal{R} there is no read r′ in \mathcal{R} \ {r} such that r is a prefix of r′. \mathcal{R} is suffix-free if for all r in \mathcal{R} there is no read r′ in \mathcal{R} \ {r} such that r is a suffix of r′.
To obtain a prefix- and suffix-free set of reads we lexicographically sort all reads using a modified radixsort for strings, as described in [14]. In this algorithm, the strings to be sorted are first inserted into buckets according to their first character. Each bucket is then sorted recursively according to the next character of all reads in the bucket. A bucket always consists of reads which have a common prefix. Once a bucket is smaller than some constant, the remaining suffixes of the reads in the bucket are sorted by insertion sort [15].
During the sorting process, the length of the longest common prefix (lcp) of two lexicographically consecutive reads is calculated as a byproduct. For two lexicographically consecutive reads r and r′ with an lcp of length ℓ = |r|, we can conclude that r is a prefix of r′. If ℓ < |r′|, then r is a proper prefix of r′ and we mark r. If ℓ = |r′|, then r and r′ are identical and we mark the read which is larger according to the binary relation ≺.
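The marking step above can be sketched in Python as follows. This is a simplified reference that uses Python's built-in sort instead of the modified radixsort and compares only lexicographic neighbours, as in the text; all names are illustrative, not the authors' implementation.

```python
# Simplified sketch of the marking step: sort the reads and compare
# lexicographic neighbours; illustrative names, not the authors' code.
def mark_prefix_reads(reads):
    """Return indices of reads to eliminate: proper prefixes of other
    reads, and all but the first occurrence of duplicated reads."""
    order = sorted(range(len(reads)), key=lambda i: (reads[i], i))
    marked = set()
    for a, b in zip(order, order[1:]):
        r, s = reads[a], reads[b]
        if s.startswith(r):              # lcp length equals |r|
            if len(r) < len(s):          # r is a proper prefix of s
                marked.add(a)
            else:                        # identical: mark the later read
                marked.add(max(a, b))
    return marked
```

Applied to the reverse-complement-extended multiset described below, the same routine would also catch suffix containments.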
To handle reverse complements and to mark reads which are suffixes of other reads, one simply applies this method to the multiset \overline{\mathcal{R}} = (r_1, …, r_m, r_{m+1}, …, r_{2m}), where r_{m+i} = \overline{r_i} for all i, 1 ≤ i ≤ m. As \overline{\mathcal{R}} includes the reverse complements of the reads, the method also marks reads which are suffixes of other reads. This is due to the observation that if read r is a suffix of read r′, then \overline{r} is a prefix of \overline{r′}.
In a final step of the algorithm one eliminates all reads from \mathcal{R} which have been marked. The remaining unmarked reads from \mathcal{R} are processed further. The algorithm to compute a suffix- and prefix-free set of reads runs in O(m·λ_max/ω) time, where λ_max is the maximum length of a read and ω is the machine's word size. As we consider λ_max to be a constant (which does not imply that the reads are all of the same length), the algorithm runs in O(m) time.
Computing suffix-prefix matches
Suppose that \mathcal{R} is a suffix- and prefix-free set of m reads. Let ℓ_min > 0 be the minimum length parameter. The set SPM(\mathcal{R}) of suffix-prefix matches (SPMs, for short) is the smallest set of triples 〈r, r′, ℓ〉 such that r, r′ ∈ \mathcal{R} and strings u, v, w exist such that r = uv, r′ = vw, and |v| = ℓ ≥ ℓ_min. ℓ is the length of a suffix-prefix match 〈r, r′, ℓ〉. The suffix-prefix matching problem is to find all suffix-prefix matches. As reads of length smaller than ℓ_min cannot, by definition, contribute any SPM, we can ignore them and thus assume that \mathcal{R} only contains reads of length at least ℓ_min.
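The definition can be restated as a brute-force reference in Python. This hypothetical sketch enumerates all pairs and overlap lengths directly and is only meant to pin down the semantics of SPM(\mathcal{R}); the method presented below is far more efficient. Self-matches (r = r′) are skipped, which is an assumption of this sketch.

```python
# Brute-force reference for the SPM definition; illustrative names only.
def spm_set(reads, l_min):
    """All triples (i, j, l): the suffix of reads[i] of length l >= l_min
    equals the prefix of reads[j] of the same length (i != j)."""
    spms = set()
    for i, r in enumerate(reads):
        for j, rp in enumerate(reads):
            if i == j:
                continue
            for l in range(l_min, min(len(r), len(rp)) + 1):
                if r[-l:] == rp[:l]:     # suffix of r == prefix of r'
                    spms.add((i, j, l))
    return spms
```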
The method to solve the suffix-prefix matching problem presented here consists of two main algorithms. The first algorithm identifies and lexicographically sorts all SPM-relevant suffixes, i.e. a subset of all suffixes of all reads from which one can compute all suffix-prefix matches. The second algorithm enumerates these matches given the sorted list of all SPM-relevant suffixes.
Consider a suffix-prefix match 〈r, r′, ℓ〉. By definition, the suffix of length ℓ of r exactly matches the prefix of length ℓ of r′. Obviously, the suffix of r involved in the match starts at some position j, 2 ≤ j ≤ |r| − ℓ_min + 1, in r. This implies that r must be of length at least ℓ_min + 1. The suffix cannot start at the first position in r, as otherwise r would be a prefix of some other read, contradicting our assumption that \mathcal{R} is prefix-free.
To enumerate the set of all suffix-prefix matches of length at least ℓ_min, we preprocess all reads and determine all proper suffixes of the reads which may be involved in a suffix-prefix match. More precisely, for all reads r we determine all matching candidates, i.e. all proper suffixes s of r such that the length of s is at least ℓ_min and there is a read r′ such that s and r′ have a common prefix of length at least k, where k is a user-defined parameter satisfying k ≤ min{ℓ_min, ω/2}. There are two reasons for imposing this constraint on k: First, we want to represent a string of length k over an alphabet of size 4 in one machine word, thus k ≤ ω/2. Second, the suffixes of the reads from which we take the prefixes of length k have minimum length ℓ_min, thus we choose k ≤ ℓ_min.
The set of all matching candidates and all reads forms the set of all (ℓ_min, k)-SPM-relevant suffixes. For simplicity's sake, we use the notion SPM-relevant suffixes if ℓ_min and k are clear from the context. While all SPMs can be constructed from the SPM-relevant suffixes, not all SPM-relevant suffixes lead to an SPM.
An efficient algorithm for identifying and sorting all SPM-relevant suffixes
The first two phases of our algorithm follow a strategy borrowed from the counting sort algorithm [15]. Like counting sort, our algorithm has a counting phase and an insertion phase. However, in our problem, the elements (i.e. SPM-relevant suffixes) to be sorted are only determined during the algorithm. Moreover, the number of keys (i.e. initial k-mers) whose occurrences are counted is on the order of the number of elements to be sorted. Therefore, in a space-efficient solution, it is not trivial to access a counter given a key. We have developed a time- and space-efficient method to access the counter for a key, exploiting the fact that counting and inserting the SPM-relevant suffixes does not have to be done immediately. Instead, the items to be counted/inserted are first buffered and then sorted. A linear scan then performs the counting or inserting step.
In contrast to counting sort, our algorithm uses an extra sorting step to obtain the final order of elements presorted in the insertion phase. Under the assumption that the maximum read length is a constant (which does not imply that the reads are all of the same length), our algorithm runs in O(n) time and space, where n is the total length of all reads. To the best of our knowledge a method employing a similar strategy has not yet been developed for the suffixprefix matching problem.
We first give a description of our algorithm using string notation. In a separate section, we explain how to efficiently implement the algorithm. In the following, we only consider the reads in the forward direction. However, it is not difficult to extend our method to also incorporate the reverse complements of the reads and we comment on this issue at the end of the methods section.
The initial k-mer of some sequence s is the prefix of s of length k. To determine the matching candidates efficiently, we first enumerate the initial k-mers of all reads and store them in a table of size m. This can be done in O(m) time. The notion table size always refers to the number of entries in the table. The next step lexicographically sorts the k-mers in the table in ascending order. This string sorting problem can be transformed into an integer sorting problem (see Implementation) which can be solved by radixsort [15] in O(m) time and O(m) extra working space.
In the next step, a linear scan of the sorted k-mers removes duplicates from the table and counts how many times each k-mer occurs in the table. This scan requires O(m) time. Let d ≤ m be the number of different k-mers. These can be stored in a table K of size d.
The counts for the elements in K require another table C of size d. In addition to the duplicate removal and counting, the linear scan of the sorted k-mers constructs two sets P and Q, the sizes of which depend on two user-defined parameters k′ ≤ k and k″ ≤ k. P is the set of all initial k′-mers of the reads. Q is the set of all k″-mers r[k − k″ + 1…k] for some r ∈ \mathcal{R}. We assume that elements can be added to P and Q in constant time and that membership in these sets can be decided in constant time. Thus the linear scan constructs P and Q in O(m) time. As P is a subset of a set of size 4^{k′}, P can be stored in 4^{k′} bits. Q requires 4^{k″} bits.
Up until now, only the initial k-mers of the reads were considered, resulting in a sorted table K of d non-redundant keys (i.e. initial k-mers of reads), a table C of size d for counting k-mers and two sets P and Q. By construction, each count in C is at least 1 and the sum of the counts is m. The next task is to enumerate, for all reads r, the suffixes of r at all positions j, 2 ≤ j ≤ |r| − ℓ_min + 1. r has |r| − ℓ_min such suffixes. For each such suffix s (which by construction is of length ≥ ℓ_min), one extracts two strings v = s[1…k′] and w = s[k − k″ + 1…k]. If v does not occur in P, then v is not a prefix of any read in \mathcal{R} and thus s is not a matching candidate and can be discarded. If w does not occur in Q, then w ≠ r[k − k″ + 1…k] for all reads r ∈ \mathcal{R} and thus s is not a matching candidate and can be discarded. Thus P and Q serve as filters to efficiently detect suffixes which can be discarded. For read r the suffixes s and the corresponding strings v and w can be enumerated in O(|r| − ℓ_min) time. Checking membership in P and in Q requires constant time. Therefore, each read r is processed in O(|r| − ℓ_min) time. Thus the enumeration and checking requires O(Σ_{r∈\mathcal{R}} (|r| − ℓ_min)) time altogether.
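The filtering described above can be sketched as follows. In this sketch Python sets stand in for the bit vectors of the actual implementation, string slicing is 0-based, and all names are illustrative assumptions.

```python
# Sketch of the P- and Q-filters; sets stand in for bit vectors.
def build_filters(reads, k, kp, kpp):
    P = {r[:kp] for r in reads}            # initial k'-mers of all reads
    Q = {r[k - kpp:k] for r in reads}      # 1-based positions k-k''+1..k
    return P, Q

def candidate_suffixes(reads, l_min, k, kp, kpp):
    """Yield (read_number, offset) of proper suffixes of length >= l_min
    that pass both filters (0-based offsets)."""
    P, Q = build_filters(reads, k, kp, kpp)
    for p, r in enumerate(reads):
        # offsets 1..|r|-l_min, i.e. 1-based positions 2..|r|-l_min+1
        for q in range(1, len(r) - l_min + 1):
            s = r[q:]
            if s[:kp] in P and s[k - kpp:k] in Q:
                yield p, q
```

A suffix surviving both filters is only a candidate; whether its initial k-mer actually occurs in K is decided in the counting phase.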
The next task is to process a suffix, say s, which has passed the P-filter and the Q-filter. That is, v = s[1…k′] ∈ P and w = s[k − k″ + 1…k] ∈ Q holds. One now has to check if u = s[1…k] occurs in K to verify whether s is a matching candidate. If the latter is true, the appropriate counter needs to be incremented. Hence this is the counting phase of our algorithm. The simplest way to check the occurrence of u in K is to perform a binary search, taking u as the key. However, this would require O(log₂ d) time for each k-mer passing the filters. This is too slow. Using a hash table turned out to be too slow as well and would require too much extra space, which we do not want to afford.
We propose an efficient method that works as follows: Store each k-mer s[1…k] passing the P- and the Q-filter in a buffer B of fixed size b = d/γ for some constant γ > 1. Once B is full or all k-mers have been added to B, sort the elements in B in ascending lexicographic order. Then perform a binary search in K, but only for the first element in B, say x. As B is sorted, x is the smallest element. The binary search for x in K finds the smallest element in K greater than or equal to x using O(log₂ d) time. If such an element occurs in K, say at index e, then simultaneously scan B beginning with the first index and K beginning at index e. For any element in B that is equal to an element in K, say at index i in K, increment the counter in C at the same index.
This simultaneous linear scan of B and (a part of) K takes O(b + d) time and finds all k-mers from B occurring in K. Once the scan and the associated increments are done, the buffer is emptied for the next round. Suppose that in total b* k-mers have passed B. Then there are ⌈b*/b⌉ rounds filling the buffer. Each round is associated with a sorting step, a binary search and a linear scan. Sorting requires O(b) time using radixsort. This gives a running time of O((b*/b)(b + log₂ d + (b + d))) = O((b*/b)(b + d)) = O((b*/b)(b + bγ)) = O(b*(1 + γ)) = O(b*). As b* ≤ n := Σ_{r∈\mathcal{R}} |r|, the running time is linear in the total length of the reads.
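The buffered counting technique can be sketched like this. Plain integers stand in for k-mer codes and Python's sort for radixsort; all names are illustrative assumptions.

```python
import bisect

# Sketch of buffered counting: queries accumulate in a fixed-size
# buffer, are sorted, and counted against the sorted key table K
# in one merge-like scan per round.
def count_with_buffer(K, queries, b):
    """K: sorted list of distinct keys; queries: iterable of keys;
    b: buffer capacity. Returns the count table C aligned with K."""
    C = [0] * len(K)
    buf = []

    def flush():
        buf.sort()
        e = bisect.bisect_left(K, buf[0])   # one binary search per round
        i = 0
        for x in buf:                       # simultaneous scan of buf, K[e:]
            while e + i < len(K) and K[e + i] < x:
                i += 1
            if e + i < len(K) and K[e + i] == x:
                C[e + i] += 1
        buf.clear()

    for u in queries:
        buf.append(u)
        if len(buf) == b:
            flush()
    if buf:
        flush()
    return C
```

Only one binary search is performed per round; all other lookups in the round are absorbed by the merge-like scan, which is the source of the O(b*) bound derived above.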
Once all reads have been processed, for any initial k-mer u of any read, the following holds: If u is the ith initial k-mer in K, then C[i] is the number of SPM-relevant suffixes of which u is a prefix. To prepare for the insertion phase, compute the partial sums of C in an array π of size d + 1, such that π[0] = C[0], π[i] = π[i − 1] + C[i] for all i, 1 ≤ i ≤ d − 1, and π[d] = π[d − 1]. π[d] is the number of all SPM-relevant suffixes. One creates a table S of size g := π[d] to hold pairs of read numbers and read offsets. As in the counting phase, enumerate all suffixes of reads of length at least ℓ_min passing the P- and the Q-filter. Suppose that s is such a suffix of read number p and with read offset q. Let u be the initial k-mer of s. Then we store (p, q, u) in a buffer B′ of fixed size b/2. We choose this buffer size, as the elements in B′ require twice as much space as the elements in B. As in the counting phase, sort the buffer in lexicographic order of the k-mers it stores, and then process the buffer elements using the k-mer, say u, as a key to determine if u matches some element in K, say at index i. Then insert (p, q) at index π[i] − 1 in S and decrement π[i].
After all b* elements have passed the buffer and have been processed, S holds all SPM-relevant suffixes (represented by read numbers and read offsets) in lexicographic order of their initial k-mers. Let u be the ith k-mer in K. Then all SPM-relevant suffixes with common prefix u are stored in S from index π[i] to π[i + 1] − 1. Thus S can uniquely be divided into buckets of SPM-relevant suffixes with the same initial k-mer. Each such bucket can be sorted independently from all other buckets. Moreover, each SPM-relevant suffix not occurring in the ith bucket has an initial k-mer different from u and thus cannot have a common prefix of length ≥ ℓ_min with the suffixes in the ith bucket. As a consequence, all suffix-prefix matches are derivable from pairs of SPM-relevant suffixes occurring in the same bucket. Thus, the suffix-prefix matching problem can be divided into d subproblems, each consisting of the computation of suffix-prefix matches from a bucket of SPM-relevant suffixes. This problem is considered later.
To sort the ith bucket one extracts the remaining suffixes relevant for sorting the bucket and stores them in a table. This strategy minimizes the number of slow random accesses to the reads. Consider the ith bucket and let (p, q) be one of the suffixes in the bucket, referring to the suffix of read r_p at read offset q. Then extract the suffix of r_p starting at position q + k. As the maximum read length is considered to be constant, the total size of the remaining suffixes to be stored is O(π[i + 1] − π[i]). The remaining suffixes can be sorted using radixsort in O(π[i + 1] − π[i]) time. An additional linear-time scan over the sorted suffixes of the bucket delivers a table L of size π[i + 1] − π[i] − 1, such that L_j is the length of the longest common prefix of the suffixes S[π[i] + j − 1] and S[π[i] + j] for all j, 1 ≤ j ≤ π[i + 1] − π[i] − 1.
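The per-bucket sorting and lcp-table construction can be sketched as follows, using Python's sort in place of radixsort; all names are illustrative assumptions, and offsets are 0-based.

```python
# Sketch: sort one bucket by its remaining suffixes and derive table L.
def lcp(a, b):
    """Length of the longest common prefix of strings a and b."""
    i = 0
    n = min(len(a), len(b))
    while i < n and a[i] == b[i]:
        i += 1
    return i

def sort_bucket(reads, bucket, k):
    """bucket: list of (read_number, offset) sharing the initial k-mer.
    Returns the sorted bucket H and the lcp table L; the shared k is
    added back to each lcp value, since only remaining suffixes are
    compared."""
    key = lambda pq: reads[pq[0]][pq[1] + k:]   # remaining suffix
    H = sorted(bucket, key=key)
    L = [k + lcp(key(H[j - 1]), key(H[j])) for j in range(1, len(H))]
    return H, L
```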
Sorting all remaining suffixes and computing the lcp-table L thus requires O(β_max) space and O(Σ_{i=0}^{d−1}(π[i + 1] − π[i])) = O(g) time, where β_max is the maximum size of a bucket and g is the total number of SPM-relevant suffixes. The bucket of sorted SPM-relevant suffixes and the corresponding table L are processed by Algorithm 2, described after the following implementation section, and Algorithm 3, described in Additional file 1, Section 7.
All in all, our algorithm runs in O(m + n + g) = O(n) time and O(m + 4^{k′} + 4^{k″} + β_max + g + n) space. As we choose k″ ≤ k′ ∈ O(log₂ n) and m, g, and β_max are all smaller than n, the space requirement is O(n). Thus the algorithm for identifying and sorting all (ℓ_min, k)-SPM-relevant suffixes is optimal.
Implementation
We will now describe how to efficiently implement the algorithm described above. An essential technique used in our algorithm are integer codes for k-mers. These are widely used in sequence processing. As we have three different mer-sizes (k, k′, and k″) and dependencies between the corresponding integer codes, we shortly describe the technique here. In our problem, a k-mer always refers to a sequence of which it is a prefix. Therefore, we introduce integer codes for strings of length ≥ k: For all strings s of length at least k define the integer code φ_k(s) = Σ_{i=1}^{k} 4^{k−i} φ(s[i]), where φ is the mapping [A → 0, C → 1, G → 2, T → 3] uniquely assigning numbers from 0 to 3 to the bases in the alphabetical order of the bases. Note that only the first k symbols of s determine φ_k(s), which is an integer in the range [0…4^k − 1]. For all strings s and s′ of length at least k, s ⊴ s′ implies φ_k(s) ≤ φ_k(s′), where ⊴ denotes the lexicographic order of strings and ≤ denotes the order of integers.
Besides φ_k, we use the encodings φ_{k′} and φ^k_{k″} for some k′, k″ ≤ k. φ_{k′} encodes the prefix of s of length k′ and is defined in analogy to φ_k (replacing k by k′). φ^k_{k″}(s) encodes the suffix s[k − k″ + 1…k] of s[1…k] of length k″, i.e. φ^k_{k″}(s) = Σ_{i=1}^{k″} 4^{k″−i} φ(s[k − k″ + i]). φ_{k′}(s) and φ^k_{k″}(s) can be computed from φ_k(s) according to the following equations:
φ_{k′}(s) = ⌊φ_k(s) / 4^{k−k′}⌋
(1)
φ^k_{k″}(s) = φ_k(s) mod 4^{k″}
(2)
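In Python, the codes and Equations (1) and (2) can be sketched as follows; the function names are illustrative assumptions, and the shift/mask forms mirror the bitwise implementation described below.

```python
# Sketch of the integer codes; phi maps the bases to 0..3, and
# Equations (1) and (2) become a right shift and a bitwise and.
PHI = {"A": 0, "C": 1, "G": 2, "T": 3}

def code_k(s, k):
    """phi_k(s) = sum over i = 1..k of 4^(k-i) * phi(s[i])."""
    c = 0
    for ch in s[:k]:
        c = (c << 2) | PHI[ch]
    return c

def code_kprime(c, k, kp):
    """Equation (1): code of the prefix of length k' (right shift)."""
    return c >> (2 * (k - kp))

def code_kdoubleprime(c, kpp):
    """Equation (2): code of s[k-k''+1...k] (bitwise and with 2^(2k'')-1)."""
    return c & ((1 << (2 * kpp)) - 1)
```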
We implement k-mers by their integer codes. Each integer code can be computed in constant time by extracting the appropriate sequence of consecutive bit pairs from a 2-bit-per-base encoding of the read set. In our implementation, we use the representation and the appropriate access functions from the GtEncseq software library [16]. As k ≤ ω/2, we can store each integer code in an integer of the machine's word size. We sort the m integer codes for the initial k-mers using quicksort, adapting the code from [17]. Our implementation works without recursion and uses an extra stack of size O(log₂ m) to sort m integers. This small additional space requirement is the main reason to choose quicksort instead of radixsort, which is usually more than twice as fast, but requires O(m) extra working space, which we do not want to afford.
The sets P and Q are implemented by bit vectors v_P and v_Q of 4^{k′} and 4^{k″} bits, respectively. Bit v_P[q] is set if and only if q = φ_{k′}(r) for some r ∈ \mathcal{R}. Bit v_Q[q] is set if and only if q = φ^k_{k″}(r) for some read r ∈ \mathcal{R}. To obtain the bit index, one computes φ_{k′}(s) and φ^k_{k″}(s) from φ_k(s) using Equations (1) and (2). Equation (1) can be implemented by a bitwise right shift of 2(k − k′) bits. Equation (2) can be implemented by a bitwise and-operation with the integer 2^{2k″} − 1. Thus, given the integer code for s, both φ_{k′}(s) and φ^k_{k″}(s) can be computed in constant time. Therefore, the sets P and Q can be constructed in O(m) time and each access takes constant time.
When determining the k-mer codes in the counting phase and in the insertion phase, we sweep a window of width k over the sequence reads and compute the integer code for the sequence in the current window in constant time.
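The window sweep can be sketched as follows; a real implementation reads from the 2-bit encoding, and the names here are illustrative. The code of the next window drops the leftmost base by masking and appends one base on the right by shifting, in constant time per step.

```python
# Sketch of the sliding-window code update over one read.
PHI = {"A": 0, "C": 1, "G": 2, "T": 3}

def window_codes(r, k):
    """Yield phi_k of every window r[i-k+1..i] in constant time per step."""
    mask = (1 << (2 * k)) - 1        # keeps the 2k low-order bits
    c = 0
    for i, ch in enumerate(r):
        c = ((c << 2) | PHI[ch]) & mask   # shift in the new base
        if i >= k - 1:
            yield c
```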
We implement the counts by a byte array of size d and store counts larger than 255 in an additional hash table. Additional file 1, Section 1 gives the details.
The partial sums in table π are bounded by g, the number of SPM-relevant suffixes. For large read sets, g can be larger than 2^32 − 1. However, as the partial sums are strictly increasing, one can implement π by a 32-bit integer table PS of size d + 1, such that PS[i] = π[i] mod 2^32 for any i, 0 ≤ i ≤ d, and an additional integer table of size 2^{max{0, ⌈log₂ g⌉ − 32}} marking the boundaries of carry bits. Details are given in Additional file 1, Section 2.
For the insertion phase we need a representation of the read set (2n bits), table K (2kd bits), sets P and Q (4^{k′} and 4^{k″} bits, respectively), table π (32(d + 1) bits) and table S of size g. As S holds pairs of read numbers and read offsets, each entry in S is stored compactly in σ = ⌈log₂ m⌉ + ⌈log₂(λ_max − ℓ_min)⌉ bits. This would give an integer array of size ⌈gσ/ω⌉ if we stored S completely. But we do not, as we employ a partitioning strategy, explained next.
Although the data structures representing tables S, K, P and π are of different sizes, their access follows the same scheme: Suppose that i is the smallest index such that g/2 ≤ π[i]. Roughly half of the suffixes to be inserted in S are placed in buckets of lower order (with index ≤ i) and the other half are placed in buckets of higher order (with index > i). The buckets of lower order are associated with the k-mers in K up to index i. Hence, for these, one needs table K and PS only up to index i. Let s be some suffix of length ≥ ℓ_min such that φ_k(s) ≤ K[i]. To apply the P-filter to s, one checks v_P at index φ_k(s)/4^{k−k′} ≤ K[i]/4^{k−k′}, which is in the first half of vector v_P. This strategy, dividing tables S, K, P and π into q = 2 parts of roughly the same size, can be generalized to q > 2 parts. Each part is defined by a lower and an upper integer code and by corresponding lower and upper boundaries referring to sections of the four mentioned tables. Partitioning S means to only allocate the maximum space for holding all buckets belonging to a single part.
The four tables that can be split over q parts require h(g, k, d, k′, σ) = 2kd + 4^{k′} + 32(d + 1) + gσ bits. Hence, in the insertion phase, our method requires 2n + 4^{k″} + h(g, k, d, k′, σ)/q bits, where 2n + 4^{k″} bits are for the representation of the reads and the set Q (which cannot be split). As gσ dominates all other terms, h(g, k, d, k′, σ) is much larger than 2n + 4^{k″}, so that the space gain of our partitioning strategy is obvious. As the space required for the insertion phase for any number of parts can be precalculated, one can choose a memory limit and calculate the minimal number of parts such that the limit is not exceeded. In particular, choosing the space peak of the counting phase as a memory limit for the insertion phase allows for balancing the space requirements of both phases. More details on the partitioning technique are given in Additional file 1, Section 3.
An obvious disadvantage of the partitioning strategy (with, say, q parts) is the requirement of q scans over the read set. However, the sequential scan over the read set is very fast in practice and makes up only a small part of the running time of the insertion phase.
The expected size of a bucket to be sorted after the insertion phase is smaller than the average read length. The maximum bucket size (determining the space requirement for this phase) is 1 to 2 orders of magnitude smaller than d. As we can store ω/2 bases in one integer of ω bits, the remaining suffixes (which form the sort keys) can be stored in β_max(⌈2(λ_max − k)/ω⌉ + 2) integers, where β_max is the maximum size of a bucket and λ_max is the maximum length of a read. The additional constant 2 accounts for the length of the remaining suffix, the read number and the read offset. The sort keys are thus sequences of integers of different length which have to be compared up to the longest prefix of the strings they encode. We use quicksort in which ω/2 bases are compared using a single integer comparison. As a side effect of the comparison of the suffixes, we obtain the longest common prefix of two compared suffixes in constant extra time, and store this in a table L of the size of the bucket. The suffixes in the bucket and the table L are passed to Algorithm 2, described next, and to Algorithm 3 (Additional file 1, Section 7).
An efficient algorithm for computing suffix-prefix matches from buckets of sorted SPM-relevant suffixes
The input to the algorithm described next is a bucket of sorted SPM-relevant suffixes, with the corresponding table L, as computed by the algorithm of the previous subsection. Consider the ith bucket in S and let H_j = S[π[i] + j] be the jth suffix in this bucket for all j, 0 ≤ j ≤ β − 1, where β = π[i + 1] − π[i] is the size of the bucket. By construction, we have H_{j−1} ⊴ H_j, L_j ≥ k, and L_j is the length of the longest common prefix of H_{j−1} and H_j for all j, 1 ≤ j ≤ β − 1.
Note that the bucket-wise computation does not deliver the lcp-values of pairs of SPM-relevant suffixes on the boundary of the buckets. That is, for all i > 0, the length of the longest common prefix of S[π[i] − 1] and S[π[i]] is not computed, because S[π[i] − 1] is the last suffix of the (i − 1)th bucket and S[π[i]] is the first suffix of the ith bucket. However, as the two suffixes belong to different buckets, their longest common prefix is shorter than k (and thus shorter than ℓ_min) and therefore not of interest for the suffix-prefix matching problem.
The suffixes occurring in a bucket will be processed in nested intervals, called lcp-intervals, a notion introduced for enhanced suffix arrays by [18]. We generalize this notion to tables H and L as follows: An interval [e.f], 0 ≤ e < f ≤ β − 1, is an lcp-interval of lcp-value ℓ if the following holds:

e = 0 or L_e < ℓ,
(3)

L_q ≥ ℓ for all q, e + 1 ≤ q ≤ f,
(4)

L_q = ℓ for at least one q, e + 1 ≤ q ≤ f,
(5)

f = β − 1 or L_{f+1} < ℓ.
(6)
We will also use the notation ℓ − [e.f] for an lcp-interval [e.f] of lcp-value ℓ. If ℓ − [e.f] is an lcp-interval such that w = H_e[1…ℓ] is the longest common prefix of the suffixes H_e, H_{e+1}, …, H_f, then [e.f] is called the w-interval.
An lcp-interval ℓ′ − [e′.f′] is said to be embedded in an lcp-interval ℓ − [e.f] if it is a proper subinterval of ℓ − [e.f] (i.e., e ≤ e′ < f′ ≤ f) and ℓ′ > ℓ. The lcp-interval ℓ − [e.f] is then said to be enclosing [e′.f′]. If [e.f] encloses [e′.f′] and there is no interval embedded in [e.f] that also encloses [e′.f′], then [e′.f′] is called a child interval of [e.f] and [e.f] is the parent interval of [e′.f′]. We distinguish lcp-intervals from singleton intervals [e′] for any e′, 0 ≤ e′ ≤ β − 1. [e′] represents H_{e′}. The parent interval of [e′] is the smallest lcp-interval [e.f] with e ≤ e′ ≤ f.
This parent–child relationship of lcp-intervals with other lcp-intervals and singleton intervals constitutes a virtual tree which we call the lcp-interval tree for H and L. The root of this tree is the lcp-interval 0 − [0.(β − 1)]. The implicit edges to lcp-intervals are called branch-edges. The implicit edges to singleton intervals are called leaf-edges. Additional file 1, Section 10 gives a comprehensive example illustrating these notions.
Abouelhoda et al. ([18], Algorithm 4.4) present a linear-time algorithm to compute the implicit branch-edges of the lcp-interval tree in bottom-up order. When applied to a bucket of sorted suffixes, the algorithm performs a linear scan of tables H and L. In the eth iteration, 0 ≤ e ≤ β − 2, it accesses the values L_{e+1} and H_e. We have non-trivially extended the algorithm to additionally deliver leaf-edges. The pseudocode, with some additions in the lines marked as new, is given in Algorithm 1 (Figure 1). We use the following notation and operations:

A stack stores triples (ℓ, e, f) representing an lcpinterval ℓ − [e.f]. To access the elements of such a triple, say sv, we use the notation sv.lcp (for the lcpvalue ℓ), sv.lb (for the left boundary e) and sv.rb (for the right boundary f).

stack.push(e) pushes an element e onto the stack.

stack.pop pops an element from the stack and returns it.

stack.top returns a reference to the topmost element of the stack.

⊥ stands for an undefined value.

process_leafedge(firstedge, itv, (p, q)) processes an edge from the lcp-interval itv to the singleton interval representing the suffix r_p[q…|r_p|]. firstedge is true if and only if the edge is the first processed edge outgoing from itv.

process_branchedge(firstedge, itv, itv′) processes an edge from the lcp-interval itv to the lcp-interval itv′. The value itv′.rb is defined, and firstedge is true if and only if the edge is the first edge outgoing from itv.

process_lcpinterval(itv) processes the lcp-interval itv. The value itv.rb is defined.
Depending on the application, we use different functions process_leafedge, process_branchedge, and process_lcpinterval.
Additional file 1, Section 4, explains why Algorithm 1 also delivers the leaf-edges of the lcp-interval tree in the correct bottom-up order.
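The core of the bottom-up enumeration can be sketched as follows. This is a simplified variant in the spirit of the cited Algorithm 4.4, without the leaf- and branch-edge callbacks of Algorithm 1; it uses 0-based indices and illustrative names.

```python
# Sketch: enumerate all lcp-intervals bottom-up (children before
# parents) from the lcp table L of one bucket of size beta.
def lcp_intervals(L, beta):
    """L[j-1] is the lcp of H[j-1] and H[j]. Returns (lcp, lb, rb)
    triples; the right boundary becomes known when an interval is
    popped from the stack."""
    out = []
    stack = [[0, 0]]                  # [lcp-value, left boundary]
    for j in range(1, beta):
        lb = j - 1
        while L[j - 1] < stack[-1][0]:
            lcp, left = stack.pop()
            out.append((lcp, left, j - 1))
            lb = left                 # popped interval extends the next one
        if L[j - 1] > stack[-1][0]:
            stack.append([L[j - 1], lb])
    while stack:                      # remaining intervals end at beta-1
        lcp, left = stack.pop()
        out.append((lcp, left, beta - 1))
    return out
```

For the lcp table L = [3, 4] of a three-suffix bucket, the sketch reports the embedded 4-interval first, then its enclosing 3-interval, then the root, matching the bottom-up order required by Algorithm 2.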
Consider a path in the lcp-interval tree from the root to a singleton interval [e′] representing H_{e′} = r_p[q…|r_p|]. Let ℓ − [e.f] be an lcp-interval on this path, and consider the edge on this path outgoing from ℓ − [e.f]. If the edge goes to an lcp-interval of, say, lcp-value ℓ′, then the edge is implicitly labeled by the non-empty sequence r_p[q + ℓ…q + ℓ′ − 1]. Suppose the edge goes to a singleton interval: Then the edge is implicitly labeled by the non-empty sequence r_p[q + ℓ…|r_p| − 1]$_p. If q + ℓ = |r_p|, then r_p[q + ℓ…|r_p| − 1] is the empty string, which implies that the edge to the singleton interval is labeled by the sentinel $_p. Such an edge is a terminal edge for r_p. If the read offset q is 0, we call [e′] a whole-read interval for r_p, and the path in the lcp-interval tree from the root to [e′] a whole-read path for r_p.
Consider a suffix-prefix match 〈r_p, r_j, ℓ〉, such that the suffix w of r_p of length ℓ has a prefix u of length k. Recall that u is the common prefix of all suffixes in the ith bucket. Due to the implicit padding of reads at their end, the symbol following w as a suffix of r_p is $_p. By definition, w is also a prefix of r_j and the symbol in r_j following this occurrence of w is different from $_p. Thus, there is a w-interval [e..f] in the lcp-interval tree for H and L. [e..f] is on the path from the root interval to the whole-read leaf interval for r_j. Moreover, there is a terminal edge for r_p outgoing from [e..f]. Vice versa, an lcp-interval of lcp-value ℓ on the path to the whole-read leaf interval for r_j and with a terminal edge for r_p identifies the suffix-prefix match 〈r_p, r_j, ℓ〉. This observation about suffix-prefix matches is exploited in Algorithm 2 (Figure 2), which performs a bottom-up traversal of the lcp-interval tree for H and L, collecting whole-read leaves and terminal edges for lcp-intervals of lcp-value at least ℓ_min. More precisely, whenever a whole-read leaf for r_p, 1 ≤ p ≤ m, is found (line 9), p is appended to the list W. With each lcp-interval itv on the stack used in the bottom-up traversal, an integer itv.firstinW is associated. The elements in W[itv.firstinW…|W|] are exactly the read numbers of whole-read leaves collected for itv. The value of itv.firstinW is set whenever the first edge outgoing from itv is detected: if the first edge outgoing from itv is a leaf-edge, no previous whole-read leaf for itv has been processed, and thus |W| + 1 is the first index in list W where the whole-read leaf information (if any) for itv will be stored (see line 8). If the first edge is a branch-edge to an lcp-interval itv′, then the corresponding subset of W for itv′ must be inherited by itv. Technically, this is achieved by inheriting the firstinW attribute from itv′ to itv; see line 18 of Algorithm 2.
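As a point of reference for what Algorithm 2 computes, a naive quadratic enumeration of suffix-prefix matches can be written down directly from the definition. This sketch is purely illustrative (the whole point of the bucket-wise traversal is to avoid this cost); the read set is assumed suffix- and prefix-free, so overlap lengths are kept strictly below both read lengths.

```python
def naive_spms(reads, l_min):
    """All suffix-prefix matches <p, j, l> with l >= l_min, by definition:
    the length-l suffix of read p equals the length-l prefix of read j.

    Quadratic reference implementation only. Reads are assumed suffix- and
    prefix-free, hence proper overlaps satisfy l < min of the read lengths.
    """
    spms = []
    for p, r in enumerate(reads):
        for j, s in enumerate(reads):
            if p == j:
                continue
            for l in range(l_min, min(len(r), len(s))):
                if r[len(r) - l:] == s[:l]:   # suffix of r == prefix of s
                    spms.append((p, j, l))
    return spms
```

Note that a single ordered pair of reads may contribute several SPMs of different lengths; the transitivity filter described below is what later thins this set out.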
Whenever a terminal edge for a read r_p, outgoing from an lcp-interval itv, is processed (line 11), p is added to the list T. The first symbol of the label of the terminal edge is $_p. Suppose there is a branch-edge outgoing from itv to some lcp-interval itv′. Then the first symbol, say a, of the implicit label of this edge must occur more than once. Thus it cannot be a sentinel, as sentinels are considered pairwise distinct in the lexicographic ordering of the suffixes. Hence the first symbol a is either A, C, G or T. As these symbols are, with respect to the lexicographic order, smaller than the sentinels, the branch-edge from itv to itv′ appears before the terminal edge from itv. Hence the terminal edges outgoing from itv′ have already been processed in line 25, and so a single list T suffices for the entire algorithm.
As soon as all edges outgoing from itv have been processed, we have collected the terminal edges in T and the whole-read leaves in W. If itv.lcp exceeds the minimum length, Algorithm 2 computes the Cartesian product of T with the appropriate subset of W and processes the corresponding suffix-prefix matches of length itv.lcp in line 25. At this point suffix-prefix matches may be output or post-processed to check for additional constraints, such as transitivity. Once the Cartesian product has been computed, the elements of T are no longer needed and T is emptied (line 26). Note that the algorithm empties W once an lcp-interval of lcp-value smaller than ℓ_min is processed. After this event, there will only be terminal edges from v-intervals such that the longest common prefix of v and the reads in W is shorter than ℓ_min. Therefore there will be no suffix-prefix match of the form 〈_, w, ℓ〉 such that ℓ ≥ ℓ_min and w is a read represented in W, so the list can safely be emptied.
The lcp-interval tree for H and L contains β leaf-edges. As all lcp-intervals have at least two children, there are at most β − 1 branch-edges and β lcp-intervals. As each of the three functions specified in Algorithm 2 is called once for every corresponding item, the number of function calls per bucket is at most 3β − 1. Recall that Algorithm 2 is applied to each bucket and the total size of all buckets is g. Hence there are at most 3g − d calls to the three functions, where d is the number of buckets. process_leafedge and process_branchedge run in constant time. The running time of process_lcpinterval is determined by the number of SPMs processed. Assuming that the processing of a single SPM takes constant time, the overall running time of Algorithm 2 for all buckets is O(g + z), where z is the number of processed SPMs.
Handling reverse complements of reads
Reads may originate from either strand of a DNA molecule. For this reason, suffix-prefix matches must also be computed between reads and the reverse complements of other reads. Handling the reverse complements of all reads is conceptually easy to integrate into our approach: one just has to process \overline{\mathcal{R}} instead of \mathcal{R}.
The three steps which involve scanning the reads are extended to process both strands of all reads. This does not require doubling the size of the read set representation, as all information for the reverse-complemented reads can efficiently be extracted from the forward reads. Additional file 1, Section 5, shows how to compute the integer codes for the reversed reads from the integer codes of the forward reads in constant time.
The scan of the reverse-complemented reads has a negligible impact on the runtime. Of course, the sizes of the tables S, K and PS roughly double when additionally considering reverse complements.
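For illustration, with the usual 2-bit encoding A=0, C=1, G=2, T=3, the complement of a base code a is simply 3 − a, so the code of a reverse complement can be derived from the forward code without touching the sequence itself. The following loop is a simple O(k) sketch of this idea; the constant-time derivation actually used is the subject of Additional file 1, Section 5.

```python
def revcomp_code(code, k):
    """Integer code of the reverse complement of a k-mer, derived from the
    forward code alone. Bases are 2-bit encoded as A=0, C=1, G=2, T=3,
    so the complement of a base code a is 3 - a. This loop runs in O(k);
    it only illustrates the encoding, not the constant-time variant.
    """
    rc = 0
    for _ in range(k):
        rc = (rc << 2) | (3 - (code & 3))  # complement the last base, append it
        code >>= 2                          # and drop it from the forward code
    return rc
```

Applying the function twice returns the original code, as reverse complementation is an involution.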
When computing suffix-prefix matches, some minor modifications are necessary: applying Algorithm 2 to \overline{\mathcal{R}} finds all SPMs, including some redundant ones, which we want to omit. This is formalized as follows: an SPM 〈r, s, ℓ〉 ∈ SPM(\overline{\mathcal{R}}) is non-redundant if and only if one of the following conditions is true:

r ∈ \mathcal{R}, s ∈ \mathcal{R}

r ∈ \mathcal{R}, s ∈ \overline{\mathcal{R}}, r ≺ \overline{s}

r ∈ \overline{\mathcal{R}}, s ∈ \mathcal{R}, s ≺ \overline{r}.
(7)
For any SPM, these conditions can easily be checked in constant time; see Algorithm 3 (Additional file 1, Section 7).
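Assuming each read and each reverse complement is identified by a read number plus a strand flag (our encoding, not necessarily Algorithm 3's), the three conditions translate into a constant-time predicate: the input order ≺ becomes a comparison of read numbers, and an SPM between two reverse complements is always redundant, since its mirrored SPM between the two forward reads is the one that is kept.

```python
def is_nonredundant(r_forward, r_num, s_forward, s_num):
    """Non-redundancy check for an SPM <r, s, l> over reads and their
    reverse complements. Each of r and s is given as a strand flag plus a
    read number; the relation "precedes" of the article becomes a
    comparison of read numbers. The encoding is an assumption of this sketch.
    """
    if r_forward and s_forward:
        return True               # both forward: always kept
    if r_forward and not s_forward:
        return r_num < s_num      # r precedes the forward version of s
    if not r_forward and s_forward:
        return s_num < r_num      # s precedes the forward version of r
    return False                  # both reverse: the mirrored SPM is kept instead
```

The last test below checks the intended invariant: of an SPM and its mirror (reverse complement), exactly one is classified as non-redundant.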
Recognition of transitive and irreducible suffixprefix matches
For the construction of the string graph, we do not need transitive SPMs. An SPM 〈r, t, ℓ″〉 is transitive if and only if there are two SPMs 〈r, s, ℓ〉 and 〈s, t, ℓ′〉 such that ℓ + ℓ′ = |s| + ℓ″. Figure 3 shows a concrete example of a transitive SPM. An SPM which is not transitive is irreducible.
The following theorem characterizes an SPM by a read and a single irreducible SPM satisfying a length constraint and a match constraint, see Figure 4 for an illustration.
Theorem 1. Let 〈r, t, ℓ″〉 be an SPM. Then 〈r, t, ℓ″〉 is transitive if and only if there is an s ∈ \mathcal{R} and an irreducible SPM 〈s, t, ℓ′〉 such that ℓ′ > ℓ″, |r| − ℓ″ ≥ |s| − ℓ′ and s[1…|s| − ℓ′] = r[|r| − ℓ″ − (|s| − ℓ′) + 1…|r| − ℓ″].
The proof of Theorem 1 can be found in Additional file 1, Section 6.
If the SPM 〈r, t, ℓ″〉 is transitive and 〈s, t, ℓ′〉 is the SPM satisfying the conditions of Theorem 1, then we say that 〈r, t, ℓ″〉 is derived from 〈s, t, ℓ′〉.
Theorem 1 suggests a way to decide the transitivity of an SPM 〈r, t, ℓ″〉: check if there is an irreducible SPM 〈s, t, ℓ′〉 from which it is derived. The check involves the comparison of up to |s| − ℓ′ symbols to verify whether s[1…|s| − ℓ′] is a suffix of r[1…|r| − ℓ″]. As there may be several irreducible SPMs from which 〈r, t, ℓ″〉 may be derived, it is necessary to store the corresponding left contexts: for any sequence s and any ℓ′, 1 ≤ ℓ′ < |s|, the left context LC(s, ℓ′) of s is the non-empty string s[1…|s| − ℓ′].
Due to the bottom-up nature of the traversal in Algorithm 2, the SPMs involving the different prefixes of a given read are enumerated in order of match length, from the longest to the shortest. Thus, Algorithm 2 first delivers the irreducible SPM 〈s, t, ℓ′〉 from which 〈r, t, ℓ″〉 is possibly derived, because ℓ′ > ℓ″.
From Theorem 1 one can conclude that the first SPM, say 〈s, t, ℓ′〉, found on a whole-read path for t is always irreducible. Hence, one stores LC(s, ℓ′). An SPM 〈r, t, ℓ″〉 detected later while traversing the same whole-read path for t is classified as transitive if and only if LC(s, ℓ′) is a suffix of LC(r, ℓ″) (see Figure 5 for an illustration). If 〈r, t, ℓ″〉 is transitive, it is discarded. Otherwise, LC(r, ℓ″) must be stored as well, to check the transitivity of the SPMs found later on the same whole-read path. So each SPM is either classified as transitive, or as irreducible, in which case its left context is stored. To implement this method, we use a dictionary D of left contexts, with an operation LCsearch(D, s), which returns true if there is some t ∈ D such that t is a suffix of s. Otherwise, it adds s to D and returns false. Such a dictionary can, for example, be implemented by a trie [19] storing the left contexts in reverse order. In our implementation we use a blind trie [20]. In Additional file 1, Section 7, we present a modification of Algorithm 2 that outputs non-redundant irreducible SPMs only.
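A minimal version of the dictionary D can be built as a plain trie over the reversed left contexts (the blind trie [20] used in our implementation compresses unary paths and defers symbol comparisons; this sketch skips that optimization). LCsearch walks the reversed query and reports true as soon as it passes a node where a previously stored context ends:

```python
class LCDict:
    """Dictionary D of left contexts, as a plain trie over reversed strings.

    lcsearch(s) returns True if some stored t is a suffix of s; otherwise it
    stores s and returns False. Storing the strings reversed turns the
    suffix test into a prefix walk from the root.
    """
    TERM = "$"  # marks a node where a stored left context ends

    def __init__(self):
        self.root = {}

    def lcsearch(self, s):
        node = self.root
        for c in reversed(s):
            if LCDict.TERM in node:   # a stored t ends here: t is a suffix of s
                return True
            node = node.setdefault(c, {})
        if LCDict.TERM in node:       # s itself was stored earlier
            return True
        node[LCDict.TERM] = True      # remember s for later queries
        return False
```

In the DNA setting the terminal marker "$" cannot clash with a sequence symbol; for arbitrary alphabets a dedicated sentinel object would be needed.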
Recognition of internally contained reads
At the beginning of the Methods section we have shown how to detect reads which are prefixes or suffixes of other reads. When constructing the string graph, we also have to discard internally contained reads, which are contained in other reads without being a suffix or a prefix. More precisely, r ∈ \mathcal{R} is internally contained if a read r′ ∈ \mathcal{R} exists such that r′ = urv for some non-empty strings u and v. In Additional file 1, Section 8, we show how to efficiently detect internally contained reads.
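The defining condition r′ = urv with non-empty u and v can be checked naively in quadratic time, which serves as a reference for the efficient method of Additional file 1, Section 8 (this sketch is ours, not that method):

```python
def internally_contained(reads):
    """Indices of reads that occur strictly inside another read, i.e. with
    at least one symbol on both sides (r' = u r v, u and v non-empty).
    Quadratic reference implementation only.
    """
    contained = set()
    for i, r in enumerate(reads):
        for j, s in enumerate(reads):
            # trimming one symbol from both ends of s enforces that the
            # occurrence of r is neither a prefix nor a suffix of s
            if i != j and r in s[1:-1]:
                contained.add(i)
    return contained
```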
Construction of the assembly string graph
Consider a read set \mathcal{R} which is suffix- and prefix-free. The assembly string graph [8] is a graph of the relationships between the reads, constructed from SPM^nr(\overline{\mathcal{R}}), the set of all non-redundant irreducible SPMs from SPM(\overline{\mathcal{R}}), restricted to reads which are not internally contained.
For each r ∈ \mathcal{R} the graph contains two vertices, denoted by r.B and r.E, representing the two extremities of the read. B stands for begin, E stands for end.
For each non-redundant irreducible SPM 〈r, s, ℓ〉 ∈ SPM^nr(\overline{\mathcal{R}}) satisfying ℓ ≥ ℓ_min, the graph contains two directed edges, defined as follows:

1. if 〈r, s, ℓ〉 ∈ SPM^nr(\overline{\mathcal{R}}) there are two edges:

2. if 〈r, \overline{s}, ℓ〉 ∈ SPM^nr(\overline{\mathcal{R}}) there are two edges:

3. if 〈\overline{s}, r, ℓ〉 ∈ SPM^nr(\overline{\mathcal{R}}) there are two edges:
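As an illustration of the general pattern behind such edge pairs, the sketch below follows the usual string-graph convention [8] for the forward-forward case: an SPM 〈r, s, ℓ〉 induces an edge from r.E to s.E labeled with the unmatched part of s, and a mirror edge from s.B to r.B labeled with the reverse complement of the unmatched part of r. The vertex encoding and label choices here are assumptions of this sketch, not necessarily the exact definitions of cases 1 to 3 above.

```python
def revcomp(s):
    """Watson-Crick reverse complement of a DNA string."""
    comp = {"A": "T", "C": "G", "G": "C", "T": "A"}
    return "".join(comp[c] for c in reversed(s))

def spm_edges(r_id, s_id, reads, l):
    """Edge pair for a forward-forward SPM <r, s, l> (length-l suffix of r
    equals length-l prefix of s) under a common string-graph convention:
    vertices are (read id, 'B'|'E'), an edge is (source, target, label).
    Assumed convention, following Myers' string graph.
    """
    r, s = reads[r_id], reads[s_id]
    return [
        ((r_id, "E"), (s_id, "E"), s[l:]),                    # walk on into s
        ((s_id, "B"), (r_id, "B"), revcomp(r[:len(r) - l])),  # same overlap on the other strand
    ]
```

Both labels have lengths |s| − ℓ and |r| − ℓ respectively, which is consistent with the label-length bit budget described below.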
In our implementation of the string graph, vertices are represented by integers from 0 to 2m − 1. To construct the graph from the list of non-redundant irreducible SPMs, we first calculate the out-degree of each vertex. From the counts we calculate partial sums. In a second scan over the list of SPMs, we insert the edges into an array of size 2ρ, where ρ = |SPM^nr(\overline{\mathcal{R}})|. This strategy allows us to allocate exactly the necessary space for the edges and to access the first edge outgoing from a vertex in constant time. The array of edges is stored compactly using 2ρ(⌈log₂(2m)⌉ + ⌈log₂(λ_max − ℓ_min)⌉) bits, where λ_max is the maximum length of a read. ⌈log₂(2m)⌉ bits are used for the destination of an edge (the source of the edge is clear from the array index where the edge is stored); ⌈log₂(λ_max − ℓ_min)⌉ bits are used for the length of the edge label.
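The two-scan construction described above is essentially a counting sort into a compact adjacency array. A small Python sketch (our simplification, storing tuples instead of bit-packed fields):

```python
def build_edge_array(num_vertices, spm_edges):
    """Place directed edges (src, dst, label_len) into one compact array.

    First scan: count the out-degree of every vertex. Partial sums turn the
    counts into offsets, so that the edges of vertex v occupy the slots
    offsets[v] .. offsets[v+1]-1. Second scan: drop each edge into the next
    free slot of its source vertex.
    """
    offsets = [0] * (num_vertices + 1)
    for src, _, _ in spm_edges:            # first scan: out-degrees
        offsets[src + 1] += 1
    for v in range(num_vertices):          # partial sums
        offsets[v + 1] += offsets[v]
    nxt = offsets[:num_vertices]           # next free slot per vertex (copy)
    edge_array = [None] * len(spm_edges)
    for src, dst, label_len in spm_edges:  # second scan: insert edges
        edge_array[nxt[src]] = (dst, label_len)
        nxt[src] += 1
    return offsets, edge_array
```

Exactly as in the bit-packed version, the source vertex of an edge is implicit in the slot where it is stored, and the edges of a vertex are reachable in constant time via its offset.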
To output the contigs, we first write references (such as read numbers and edge lengths) to a temporary file. Once this is completed, the memory for the string graph is deallocated, and the read sequences are mapped into memory. Finally, the sequences of the contigs are derived from the references and the contigs are output.
To verify the correctness of our string graph implementation and to allow comparison with other tools, we have implemented the graph cleaning algorithms described in [9] as an experimental feature. More sophisticated techniques, such as the network flow approach described in [8], are left for future work, as the main focus of this paper lies in the efficient computation of the irreducible SPMs and the construction of the string graph.