How genome complexity can explain the difficulty of aligning reads to genomes

Background: Although it is frequently observed that aligning short reads to genomes becomes harder if they contain complex repeat patterns, there has not been much effort to quantify the relationship between the complexity of genomes and the difficulty of short-read alignment. Existing measures of sequence complexity seem unsuitable for understanding and quantifying this relationship.
Results: We investigated several measures of complexity and found that length-sensitive measures of complexity had the highest correlation with alignment accuracy. In particular, the rate of distinct substrings of length k, where k is similar to the read length, correlated very highly with alignment performance in terms of precision and recall. We showed how to compute this measure efficiently in linear time, making it useful in practice for quickly estimating the difficulty of alignment for new genomes without having to align reads to them first. We also showed how the length-sensitive measures could provide additional information for choosing aligners that align consistently accurately on new genomes.
Conclusions: We formally established a connection between genome complexity and the accuracy of short-read aligners. The relationship between genome complexity and alignment accuracy provides additional useful information for selecting suitable aligners for new genomes. Further, this work suggests that the complexity of genomes should sometimes be thought of in terms of specific computational problems, such as the alignment of short reads to genomes.


Background
Advances in next-generation sequencing technologies have driven the development of computational approaches to address the problem of aligning short reads to reference genomes [1][2][3][4][5][6][7][8][9][10]. Even so, the alignment problem remains challenging due to the presence of genomic repeats that are much longer than reads. Yu et al. [11] evaluated alignment performance of several aligners on repetitive regions selected from CpG islands and concluded that long repeats seriously degraded alignment performance.
Researchers generally believe that the difficulty of aligning short reads is very much related to the complexity of genomes; it is easier to misalign short reads when the genomes of interest have long and complicated repeat patterns. While there has been an interest in measuring complexity of strings, recent attention has been focused on complexity of DNA sequences [12][13][14][15]. Whiteford et al. [15] utilized k-mer frequencies as a way to visualize and understand the complexity of genomes. Kurtz et al. [14], similarly, annotated plant genomes with k-mer frequencies so that repeat structures and characteristics can be easily visualized. With the same approach to understanding genome complexity, Chor et al. [13] analyzed k-mer spectra of over 100 species and observed multimodal spectra for regions with specific CG content characteristics. Unfortunately, these measures cannot be easily quantified and immediately adopted to study how complexity affects the difficulty of short-read alignment.
In a recent study, Becher et al. [12] introduced a measure known as the I-complexity, which seems most promising as a tool to correlate sequence complexity to difficulty of short-read alignment. The authors showed several interesting properties of this measure, including its closeness to the Lempel-Ziv complexity and its efficient computation in linear time. The I-complexity can be easily adopted for our purpose and will be used among others to understand how genome complexity affects the difficulty of short-read alignment.
In this paper, we propose measures of complexity that are best suited for analyzing and understanding the difficulty of short-read alignment, and we show how such measures might be helpful in selecting appropriate aligners for new genomes. The inspiration for this work lies in the observation that the complex repeat structures in DNA that affect the performance of computational tasks are length specific. For instance, in finding regulatory motifs in DNA sequences, the repeated structures of interest are around 8-25 characters long; in aligning reads to genomes, by contrast, such short repeats probably have little effect on the performance of aligners. This means that measures such as the I-complexity, which are general and not length-specific, might not be best suited.

I-complexity and D-complexity
Becher et al. [12] introduced the I-complexity as a measure of complexity of strings. It is proportional to the number of distinct substrings of the input string. Specifically, the I-complexity of a DNA sequence g is defined to be

    I(g) = \sum_{i=1}^{|g|} \log_4(|g| - S[i] + 2 - LCP[i]),

where LCP[i] is the length of the longest common prefix of the suffixes of g starting at positions S[i] and S[i - 1], and S is the suffix array of g. The suffix array S of g implicitly stores the lexicographically sorted suffixes of g; i.e., for i < j, g_{S[i]···|g|} (the suffix starting at index S[i]) is lexicographically smaller than g_{S[j]···|g|} (the suffix starting at index S[j]).
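As an illustrative sketch (not the authors' implementation), the I-complexity can be computed directly from the suffix and LCP arrays. Both arrays are built naively here for clarity; in practice, the linear-time constructions cited below [16,17] would be used:

```python
import math

def suffix_array(g):
    # Naive O(n^2 log n) construction, for clarity only;
    # linear-time algorithms exist.
    return sorted(range(len(g)), key=lambda i: g[i:])

def lcp_array(g, sa):
    # lcp[i] = length of the longest common prefix of the suffixes
    # starting at sa[i] and sa[i-1]; lcp[0] is defined to be 0.
    lcp = [0] * len(sa)
    for i in range(1, len(sa)):
        a, b = g[sa[i]:], g[sa[i - 1]:]
        k = 0
        while k < min(len(a), len(b)) and a[k] == b[k]:
            k += 1
        lcp[i] = k
    return lcp

def i_complexity(g):
    # I(g) = sum over i of log4(|g| - S[i] + 2 - LCP[i]);
    # with 0-based indices the i-th term is (suffix length) - lcp[i] + 1.
    n = len(g)
    sa = suffix_array(g)
    lcp = lcp_array(g, sa)
    return sum(math.log((n - sa[i]) - lcp[i] + 1, 4) for i in range(n))
```

For example, i_complexity("AAAA") evaluates to 2.0, since each of the four summands is log4(2) = 0.5.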
The somewhat non-intuitive definition of the I-complexity has some advantages. The authors established upper and lower bounds for I(g), and showed that it was close to the Lempel-Ziv complexity of g. Further, it can be computed in linear time because the suffix and LCP arrays can be constructed in linear time [16,17].
Although the I-complexity will be used in our attempt to explore the relationship between complexity and difficulty of alignment, we introduce a similar measure, D(g), which directly counts the rate of distinct substrings:

    D(g) = |{x : f(x) >= 1}| / \sum_{x} f(x),

where f(x) denotes the number of occurrences of the substring x in g. In other words, D(g) is equal to the total number of distinct substrings divided by the total number of substrings of g. D(g) can be computed in linear time, due to the following lemma.
Suppose that a substring s of g occurs exactly k times. Then there is a block of size k in the suffix array corresponding to the k suffixes that have s as a common prefix. More specifically, assume that s is the common prefix of the suffixes of g starting at positions S[i], S[i+1], ..., S[i+k-1]. We call the occurrence of s at position S[i] its representative occurrence, and its occurrences at S[i+1], ..., S[i+k-1] its repeat occurrences. Each repeat occurrence of s, at some position S[j] with i < j <= i+k-1, is a prefix of the longest common prefix of the suffixes starting at S[j-1] and S[j]. This means each repeat occurrence of s is accounted for uniquely by the values LCP[i+1], ..., LCP[i+k-1]. If we focus on one position, for example i+1, we can see that the longest common prefix between the suffixes at S[i+1] and S[i] (call it p_{1···j}) accounts uniquely for j repeat occurrences, namely p_1, p_{1···2}, ..., p_{1···j}. One of these is s; the rest are repeat occurrences of other substrings. Thus, each repeat occurrence is accounted for uniquely in some entry of LCP, and each entry of LCP accounts uniquely for some repeat occurrences. This implies that \sum_{i=1}^{|g|} LCP[i] equals the total number of repeat occurrences over all substrings of g.
Further, \sum_{i=1}^{|g|} i is the total number of substrings of g, since there are exactly i substrings ending at position i. Thus, if we remove all repeat occurrences from the total number of substrings, we get precisely the total number of representative occurrences, i.e., the number of distinct substrings. This means

    D(g) = (\sum_{i=1}^{|g|} i - \sum_{i=1}^{|g|} LCP[i]) / \sum_{i=1}^{|g|} i.
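A direct sketch of this linear-time computation: the sum of the LCP array counts all repeat occurrences, and subtracting it from the total number of substrings leaves the distinct ones. The suffix array is again built naively here for brevity:

```python
def d_complexity(g):
    # D(g) = (total substrings - repeat occurrences) / total substrings,
    # where the number of repeat occurrences is the sum of the LCP array.
    n = len(g)
    sa = sorted(range(n), key=lambda i: g[i:])   # naive suffix array
    total_lcp = 0
    for i in range(1, n):
        a, b = g[sa[i]:], g[sa[i - 1]:]
        k = 0
        while k < min(len(a), len(b)) and a[k] == b[k]:
            k += 1
        total_lcp += k
    total = n * (n + 1) // 2    # sum_{i=1}^{n} i = total number of substrings
    return (total - total_lcp) / total
```

For instance, "AAAA" has 4 distinct substrings out of 10 in total, giving D = 0.4, while "ACGT" has all 10 substrings distinct, giving D = 1.0.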

Length-sensitive measures of complexity
In addition to I and D, we introduce two length-sensitive measures of genome complexity. The motivation is that, depending on which computational task is affected by the complexity of genomes, only a narrow range of repeat lengths plays an important role; for instance, one would expect repeats that affect the finding of regulatory motifs to be much shorter than those that affect the alignment of reads to genomes. Given a number k, we define D_k and R_k as follows:

    D_k(g) = |{x : |x| = k, f(x) >= 1}| / (|g| - k + 1),
    R_k(g) = \sum_{|x| = k, f(x) > 1} f(x) / (|g| - k + 1),

where f(x) is the number of occurrences of x in g. D_k and R_k measure the rates of distinct substrings and of repeats, respectively, of length k. R_k and D_k are not exact "opposites" because R_k does not account for non-repeats, whereas D_k does. R_k is related to the function C(k, r) proposed by Whiteford et al. [15], the count of k-mers repeating exactly r times: up to the normalization by |g| - k + 1, R_k = \sum_{r>1} r · C(k, r). D_k and R_k can be computed in linear time and space using suffix and LCP arrays.
Proof A k-substring of g must start at an index between 1 and |g| - k + 1. Further, if LCP[j] < k, the k-prefix of the suffix starting at S[j] is different from the k-prefix of the suffix starting at S[j - 1]. Thus, each j for which S[j] <= |g| - k + 1 and LCP[j] < k represents exactly one distinct k-substring.
On the other hand, if S[j] > |g| - k + 1 or LCP[j] >= k, then the k-substring starting at S[j] either does not exist or is not distinct. Since LCP runs through all positions of g, all distinct k-substrings are uniquely accounted for.
Proof A k-repeat is a substring x of length k with f(x) > 1. Since the suffix array S is sorted lexicographically, the occurrences of each k-repeat form a consecutive run in S, as k-prefixes of the suffixes stored implicitly by S. More specifically, each interval [i, j] ∈ I_k, where I_k denotes the set of maximal intervals of consecutive indices whose LCP values are all at least k, corresponds to all occurrences of exactly one k-repeat. The number of occurrences of that k-repeat is exactly j - i + 2, since the interval covers the suffixes at S[i - 1], S[i], ..., S[j].
I_k can be computed in linear time by scanning the LCP array once. Note that the index of LCP runs from 1 to |g|, and LCP[1] = 0.
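Both lemmas translate into a single scan over the LCP array. The sketch below (naive suffix-array construction, 0-based indices) computes D_k and R_k together; a run of L consecutive LCP values >= k corresponds to an interval in I_k covering L + 1 suffixes:

```python
def dk_rk(g, k):
    # Compute (D_k, R_k) for string g via one scan of the LCP array.
    n = len(g)
    sa = sorted(range(n), key=lambda i: g[i:])   # naive suffix array
    lcp = [0] * n
    for i in range(1, n):
        a, b = g[sa[i]:], g[sa[i - 1]:]
        m = 0
        while m < min(len(a), len(b)) and a[m] == b[m]:
            m += 1
        lcp[i] = m
    total_k = n - k + 1      # number of k-substrings, counted with multiplicity
    distinct = 0
    repeats = 0
    run = 0                  # length of the current run with lcp >= k
    for j in range(n):
        # suffix long enough to hold a k-mer, and its k-prefix is new
        if sa[j] <= n - k and lcp[j] < k:
            distinct += 1
        if lcp[j] >= k:
            run += 1
        else:
            if run:
                repeats += run + 1   # interval of length `run` covers run+1 suffixes
            run = 0
    if run:
        repeats += run + 1
    return distinct / total_k, repeats / total_k
```

For "AAAA" and k = 2, the only 2-mer is AA, occurring three times, so D_2 = 1/3 and R_2 = 1.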
Relating genome complexity to difficulty of aligning short reads to genomes
I, D, D_k, and R_k provide quantitative measures of complexity for each genome. Intuitively, the more distinct substrings a reference genome has (i.e., high values of I, D, and D_k), the easier it is to align reads to it. Conversely, the more long repeats the genome has, the more difficult it is to align reads to it correctly, since the probability of matching a read to a wrong location is higher. We measure the performance of an alignment algorithm using precision and recall, where precision is the fraction of aligned reads that are correct (i.e., the number of correctly aligned reads divided by the total number of aligned reads), and recall is the fraction of all reads that are correctly aligned (i.e., the number of correctly aligned reads divided by the total number of reads). These definitions were also used by Liu et al. [8].
To correlate complexity values with difficulty of alignment, for each measure of complexity we computed the linear correlation between the complexity values of the sequences in a diverse data set of 100 genomic sequences and the actual performance of each of 10 popular aligners. A good measure of complexity should correlate highly with alignment performance.
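A minimal sketch of the two accuracy metrics and the linear (Pearson) correlation coefficient used here; the function names are illustrative, not from the paper:

```python
import math

def precision_recall(n_correct, n_aligned, n_total):
    # precision: fraction of aligned reads that are correctly placed
    # recall:    fraction of all reads that are correctly placed
    return n_correct / n_aligned, n_correct / n_total

def pearson(xs, ys):
    # Pearson linear correlation coefficient between two samples,
    # e.g. per-genome complexity values vs. per-genome precision.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```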

Aligners and genomic data
We selected from the NCBI and EMBL-EBI databases a total of 100 genomic sequences from bacteria, plants, and other eukaryotes (including human chromosomes) with diverse complexity. Detailed information on these sequences is given in Tables 1, 2, and 3. "N" bases were removed from these genomic sequences because they do not represent real content and would constitute spurious long repeats that distort the true complexity of the genomes.
We used default parameters to run these aligners because they appeared to perform well and consistently over the 100 genomes at such settings.
It is not possible to compute the number of correctly aligned reads for real reads because the positions of real reads in reference genomes are not known; consequently, precision and recall cannot be computed using real reads. For this reason, we simulated reads for each genome at 2x coverage and read lengths 50, 75, and 100 using the wgsim program [18]. Reads were otherwise generated with default parameters; sequencing error rates were 0.5%, 1%, and 2%, and mutation rates ranged between 0.1% and 1%, of which 15% were indels. These parameters should be sufficiently realistic for current sequencing technologies and a large range of organisms.
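As a small illustrative helper (not part of the published pipeline), the number of simulated reads implied by a target coverage follows from coverage × genome length = reads × read length; for a paired-end simulator such as wgsim, the number of pairs is half that:

```python
def reads_for_coverage(genome_length, read_length, coverage=2.0, paired=False):
    # Number of reads (or read pairs, if paired=True) such that
    # reads * read_length is approximately coverage * genome_length.
    per_unit = read_length * (2 if paired else 1)
    return round(coverage * genome_length / per_unit)
```

For a 1 Mbp genome at 2x coverage with 100 bp reads, this gives 20,000 reads (or 10,000 pairs).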
Overview of aligner performance
Figure 1 compares the running times of the aligners as a function of genome size (with 2x coverage). To take advantage of multiple CPU cores, one could manually partition reads into separate sets and run multiple instances across the cores. But since some of the aligners were not designed for multiple cores, it made more sense to compare them in single-threaded mode. We found that SHRiMP2 was roughly an order of magnitude slower than the fastest aligners for larger genomes; it was therefore excluded from the figure. Based on running time, Masai, SOAP2 and SeqAlto were among the fastest. In terms of precision and recall, the average performance over the 100 genomic sequences for read lengths 50, 75 and 100 is summarized in Table 4. All aligners were generally very accurate, and increasingly so at longer read lengths. On average, CUSHAW2, Masai, and Smalt performed consistently well across read lengths 50-100, whereas Bowtie2, BWA-SW and SeqAlto performed equally well at read lengths 75-100 but were slightly inferior at read length 50. Strictly based on the numbers, SHRiMP2 had very good accuracy (in terms of precision and recall), but for larger genomes it ran very slowly. The performance of GASSST seemed peculiar, with some recall values larger than 1. This is possible if a read is aligned to multiple locations by the aligner and counted as correct more than once by the SAMtools evaluation package, which allows a gap (default value of 20) between predicted and actual read positions.
In brief, many of these aligners (e.g. Bowtie2, CUSHAW2, SeqAlto) performed similarly accurately on the 100 tested genomes. Without additional information, it can be difficult to decide between these high-performing aligners, so it would be useful if we could predict how accurately they perform on new genomes. To explore the aligners' performance on new genomes, we will examine the correlation between various measures of genome complexity and alignment accuracy.

Correlation between genome complexity and alignment performance
Our experiments showed that an appropriate choice of length-sensitive complexity measure correlated highly with the short-read alignment performance of most aligners across read lengths, mutation rates, and sequencing error rates. Figures 2, 3, and 4 show the correlation between the complexity measures D_k, R_k, D, and I and alignment performance (precision and recall) at read lengths 50, 75, and 100, respectively, for the 100 genomes. Surprisingly, the D-complexity had no correlation with performance across all aligners. The I-complexity (Becher et al. [12]) had better but still very low correlation, with correlation coefficients between 0 and -0.3.
We can see that there is a value of k for which D_k correlated highly with performance, for both precision and recall, across all read lengths of 50, 75, and 100. For most aligners, the correlation coefficients were approximately 0.95. The only noticeable exception is GASSST, whose correlation coefficients were comparatively lower than those of the others. We think the explanation lies in GASSST's peculiar performance reported earlier, whereby its recall was above 1 for many of the 100 genomes. Additionally, when recall was comparatively lower for mrFAST and BWA-SW at read length 50, their correlations were also comparatively lower than the other aligners'. It is important to note that some aligners were designed to work optimally with longer reads and consequently do not work very well with shorter reads. One can conclude that if aligners perform predictably in their comfort zones, D_k (or R_k) is a good complexity measure that correlates highly with the accuracy of aligning reads to genomes. A close examination of the results shows that the value of k for which D_k correlated highest with performance was very close to the read length. For example, at read length 100, D_100 had the highest correlations across aligners; at both read lengths 50 and 75, D_50 had the highest correlations, although D_25 also had very high correlations at read length 50. Thus, the most accurate measure of complexity for understanding the difficulty of short-read alignment is length sensitive. Intuitively, this is because repeats of length close to 75, for example, influence the accuracy of the alignment of reads of length 75.
Figure 1 Running time (in seconds) of aligners as a function of genome size with 2x coverage, read length equal to 100, sequencing error at 2%, mutation rate at 0.1%.
The fact that the best value of k is less than or equal to the read length, and not larger, implies that D_k accounts for approximate repeats. To see this, observe that a 75-mer repeat might not be part of a repeated 100-mer, but surely contains several 50-mer repeats (26 of them, to be precise). This means that D_100 neglects to account for several 50-mers, whereas D_50 accounts for all of them, and these 50-mer repeats directly influence the accuracy of aligning reads of length 75. This is probably why D_50 had a better correlation profile with alignment performance than D_100 did. The fact that D_k accounts for approximate repeats longer than k can be explained formally by the so-called q-gram lemma, which states that two strings of length n within edit distance e share at least n - q + 1 - qe common q-grams. An estimate of complexity that counts approximate repeats directly might give better correlation. However, computing approximate repeats is computationally expensive compared to the linear-time computation of D_k. The best computation of approximate repeats we know of, using a lossless filter [19], has an average running time of O(n^2). For long genomes, this running time is not desirable. Since D_k and R_k already correlated quite highly (approximately 0.95) for many aligners, the more efficient running time (linear instead of quadratic) seemed to be a better trade-off than a potentially better correlation.
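The q-gram lemma can be checked concretely. The sketch below counts the q-grams shared between two illustrative sequences (as a multiset intersection) and compares the count against the n - q + 1 - qe lower bound:

```python
from collections import Counter

def shared_qgrams(a, b, q):
    # Number of q-grams shared between a and b (multiset intersection).
    ca = Counter(a[i:i + q] for i in range(len(a) - q + 1))
    cb = Counter(b[i:i + q] for i in range(len(b) - q + 1))
    return sum(min(ca[x], cb[x]) for x in ca)

def qgram_lower_bound(n, q, e):
    # q-gram lemma: two length-n strings within edit distance e
    # share at least n - q + 1 - q*e common q-grams.
    return n - q + 1 - q * e

a = "ACGTACGTAC"
b = "ACGTTCGTAC"   # one substitution: edit distance 1
assert shared_qgrams(a, b, 3) >= qgram_lower_bound(len(a), 3, 1)
```

For these two sequences the bound is tight: both sides equal 5, since the single substitution destroys exactly q = 3 of the eight 3-grams of a.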
At different rates of sequencing error and mutation, respectively, we observed similarly high correlations between the performance of the aligners and the length-sensitive measures of complexity. To study the correlation at different sequencing error rates and mutation rates, we chose to correlate D_100 with alignment performance on reads of length 100. This case is representative of the correlation between the most appropriate length-sensitive measure and the aligners' performance at a given read length. Figures 5 and 6 show the correlation between D_100 and the aligners' performance on reads of length 100 at different sequencing error rates and different mutation rates, respectively. Across all rates of sequencing error and mutation, the correlation between the precision of all aligners and D_100 ranged from high to very high. The lowest correlation was obtained for GASSST, at about 0.75; correlations for the other aligners were around 0.95. Similarly, the correlation between recall and D_100 was also high for almost all aligners, although recall was overall not as highly correlated as precision. This might be explained by some aligners' conservative strategies, which aim to make few false positive alignments at the expense of making more false negatives. Further, as expected, at higher rates of sequencing error and mutation, the correlation between performance and complexity decreased, although not equally for increasing sequencing errors and increasing mutations.

Predicting aligner performance for unknown genomes
The existence of many short-read aligners makes it challenging for researchers to pick the best one for their genomes of interest. Surveys such as [20] compared popular software packages on a few known genomes and serve as a good starting point. But when it comes to adopting a particular software package, the decision seems to be a mixture of many factors, including the authors' reputation, past familiarity with the software, its alignment accuracy, its quality and ease of use, its resource usage (running time and RAM), and the recommendations of fellow researchers. Our focus is on accuracy, defined in terms of precision and recall.
To predict the accuracy of a particular aligner on unknown genomes, researchers currently rely on its accuracy on known genomes. Such prediction can be based on summary statistics such as those in the top panel of Figure 7. This figure shows the precision and recall of the aligners across the 100 genomes as boxplots, which display medians and interquartile ranges among other statistics. Considering the statistics on both precision and recall, we can see that, with the exception of mrFAST and SOAP2 (and perhaps GASSST), the aligners had similar precision and recall across the 100 genomes. While the aligners' performance appeared similar on known genomes, what remains uncertain is how well they perform on new genomes.
Figure 5 Correlation coefficients between D_100 and aligners' performance (precision and recall) of aligning reads of length 100 at sequencing error rates of 0.5%, 1%, and 2%.
To remove this uncertainty and make more informed decisions, we might additionally incorporate the correlation between genome complexity and accuracy. To illustrate this strategy, consider the bottom panel of Figure 7, which shows the correlation between D_100 (the best measure at read length 100) and the aligners' precision and recall across the 100 genomes. Comparing the high-performing aligners identified in the previous step (those other than mrFAST and SOAP2), we can see that they have different levels of correlation. For instance, Bowtie2 had noticeably lower correlations (0.89 for precision and 0.90 for recall) than CUSHAW2. Thus, although Bowtie2 and CUSHAW2 had similar accuracies on the 100 genomes, we expect CUSHAW2 to be more likely to maintain similar accuracy on unknown genomes.
The effect of k on D_k and R_k
Measures D_k and R_k are length specific and may have different characteristics for different values of k. The cumulative distributions of D_k and R_k with k = 12, 25, 50, 100 show that the two distributions are similar, but mirrored. For D_12 or R_12, the distribution of complexity across the 100 genomes is quite uniform over the range from 0 to 1. With k > 12, however, the distribution is quite non-uniform: as k becomes larger, the distribution of D_k (or R_k) becomes much more concentrated toward 1 (or 0, respectively).