S-conLSH: alignment-free gapped mapping of noisy long reads

Background: The advancement of SMRT technology has unfolded new opportunities for genome analysis with its longer read length and low GC bias. Alignment of the reads to their appropriate positions in the respective reference genome is the first but costliest step of any analysis pipeline based on SMRT sequencing. However, the state-of-the-art aligners often fail to identify distant homologies due to a lack of conserved regions caused by frequent genetic duplication and recombination. Therefore, we developed a novel alignment-free method of sequence mapping that is fast and accurate.

Results: We present a new mapper called S-conLSH that uses Spaced context based Locality Sensitive Hashing. With multiple spaced patterns, S-conLSH facilitates a gapped mapping of noisy long reads to the corresponding target locations of a reference genome. We have examined the performance of the proposed method on 5 different real and simulated datasets. S-conLSH is at least 2 times faster than the recently developed method lordFAST. It achieves a sensitivity of 99%, without using any traditional base-to-base alignment, on simulated human sequence data. By default, S-conLSH provides an alignment-free mapping in PAF format. However, it has an option of generating aligned output as a SAM file, if this is required for any downstream processing.

Conclusions: S-conLSH is one of the first alignment-free reference genome mapping tools to achieve a high level of sensitivity. The spaced-context is especially suitable for extracting distant similarities. The variable-length spaced-seeds or patterns add flexibility to the proposed algorithm by introducing gapped mapping of the noisy long reads. Therefore, S-conLSH may be considered a prominent step towards alignment-free sequence analysis.

mapping caused by repetitive regions. Low GC bias and the ability to detect DNA methylation [1] from native DNA have made SMRT data appealing for many real-life applications. However, the high sequencing error rate of 13-15% per base [3] poses a real challenge in sequence analysis. Specialized methods like BWA-MEM [4], BLASR [5], rHAT [6], Minimap2 [7], lordFAST [8], etc., have been designed to align noisy long reads back to the respective reference genomes. BLASR [5] clusters the matched words from the reads and the genome after indexing using suffix arrays or BWT-FM [9]. It uses a probability-based error optimization technique to find the alignment. BWA-MEM [4], originally designed for short-read mapping, has been extended to PacBio and Oxford Nanopore reads (with the options -x pacbio and -x ont2d, respectively) by efficient seeding and chaining of short exact matches. However, both methods are too slow to achieve the desired level of sensitivity [6]. This issue was addressed by rHAT [6] using a regional hash table where windows from the reference genome with the highest k-mer matches are chosen as candidate sites for further extension using a directed acyclic graph. Unfortunately, this method has a large memory footprint when used with the default word length of k = 13, and it fails to accommodate longer k-mers to resolve repeats. Minimap2 [7], a recently developed method, uses a concave gap cost, efficient chaining, and a fast implementation using SSE or NEON instructions to align reads with high sensitivity and speed. Another new method, lordFAST [8], has been introduced to align PacBio's continuous long reads with improved accuracy. MUMmer4 [10], a versatile genome alignment system, also has an option for PacBio read alignment (-l 15 -c 31), although it is less sensitive and accurate than the specialized aligners.
However, all the above-mentioned methods come with large computational costs, where time and memory consumption are dominated by the alignment overhead. On top of that, alignment algorithms are often unable to correctly align distant homologs in the "twilight zone" of 20-35% sequence identity, as such weak similarities are difficult to distinguish from random similarities. For these reasons, alignment-free methods have become popular in recent years; see [11][12][13] for recent review papers and [14] for a systematic evaluation of these approaches. The alignment-free method Minimap [15] was developed in 2016 for mapping reads to their appropriate positions in a reference genome. Minimap groups approximate colinear hits using single-linkage clustering to map the reads. However, Minimap suffers from low specificity. In this article, a new alignment-free method called S-conLSH is proposed to overcome the above-mentioned problems. Being suitable for regions of low conservation and computationally inexpensive, S-conLSH is both sensitive and fast.
A large proportion of the sequencing errors in SMRT data are indels rather than mismatches [3]. This makes it even more complicated to differentiate genomic variations from sequencing errors. To resolve this issue, the concept of 'context-based' Locality Sensitive Hashing (conLSH) was introduced by Chakraborty and Bandyopadhyay [16]. Locality Sensitive Hashing (LSH) [17,18] has been successfully applied in many real-life applications, ranging from genome comparison [19,20] to large-scale genome assembly [21]. In LSH, points close to each other in the feature space are hashed into localized slots of the hash table. In practice, however, the neighborhood or context of an object plays a key role in measuring its similarity to another object. Chakraborty and Bandyopadhyay [16] have shown that the contexts of symbols (bases, in the case of DNA) are important for deciding the closeness of strings. They proposed conLSH to group sequences into localized slots of the hash table if they share a common context. However, a match over the entire context is a stringent criterion, considering the error profile of SMRT data. Even a single mismatch or indel caused by a sequencing error may mislead the aligner.
Therefore, to address this problem, the idea of a spaced-context is introduced in this article. Unlike conLSH [16], which produces base-level alignments of the sequences using a Sparse Dynamic Programming (SDP) based algorithm, the proposed method (S-conLSH) is an alignment-free tool. It employs multiple spaced-seeds or patterns to find gapped mappings of noisy SMRT reads to reference genomes. The spaced-seeds are strings of 0's and 1's, where '1' represents a match position and '0' denotes a don't-care position where a match between the symbols is not mandatory. The substring formed by extracting the symbols corresponding to the '1' positions of the pattern is defined as the spaced-context of a sequence. A spaced-context can therefore minimize the effect of erroneous bases and thereby enhance the quality of mapping, because it does not check all the bases for a match. This differentiates the proposed method from conLSH, which uses the entire context to compute the hash values.
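To make the notion concrete, the extraction of a spaced-context can be sketched in a few lines of Python (an illustrative sketch, not the S-conLSH implementation; the sequence and pattern below are made up for the example):

```python
def spaced_context(seq: str, pattern: str) -> str:
    """Extract the spaced-context: the symbols of `seq` at the '1'
    (match) positions of `pattern`; '0' (don't-care) positions are skipped."""
    return "".join(s for s, p in zip(seq, pattern) if p == "1")

# A sequencing error at a don't-care position does not change the
# spaced-context, so both variants hash to the same value:
print(spaced_context("ACGTACGT", "11011010"))  # -> ACTAG
print(spaced_context("ACATACGT", "11011010"))  # -> ACTAG (error at a '0' position)
```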
A pattern-based approach was originally proposed in [22] with the development of PatternHunter, a fast and sensitive homology search tool. Later, multiple patterns or "spaced seeds" were proposed by the same authors [23]. Efficient algorithms to find optimal sets of patterns have been introduced in [24] and [25]. A fast alignment-free sequence comparison method using multiple spaced seeds is described in [26]; see also [27] and [28].
The algorithm S-conLSH, described in this article, is an alignment-free tool designed for mapping noisy long reads to a reference genome. The concept of the spaced-context is elaborated in the Methods section, along with a description of the proposed algorithm.

Results
Six different real and simulated datasets of E. coli, A. thaliana, O. sativa, S. cerevisiae, and H. sapiens have been used to benchmark the performance of S-conLSH against other state-of-the-art aligners, viz., Minimap2 [7], lordFAST [8], Minimap [15], conLSH [16], and MUMmer4 [10]. All these methods were executed in a setting designed for PacBio read alignment (see Table 1). The default parameter settings used for S-conLSH are K = 2, context size (2λ + 1) = 7, L = 2, and z = 5 (refer to the Methods section of this article for details of the S-conLSH parameters). The two patterns used in our experiment in the default setup of L = 2 are 11111110011111110000 and 111111100000111111100. The datasets used in the experiment are summarized in Table 2. For the sake of simplicity, the results are reported for the different aligners executed in a single thread. Please refer to Tables [S-1] to [S-5] of Additional file 1: Note 1 for a detailed review of their performance in multi-threaded mode.
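Since the patterns must follow the structure given later in Definition 5 (K runs of (2λ + 1) ones, padded and separated by runs of at most z zeros), the two default seeds can be sanity-checked with a short script (a hedged sketch; the regular expression mirrors the definition and is not actual S-conLSH code):

```python
import re

def pattern_ok(pattern: str, ctx: int, K: int, z: int) -> bool:
    """True if the spaced-seed consists of K runs of `ctx` ones, padded and
    separated by runs of at most `z` zeros (its weight is then ctx * K)."""
    gap = "0{0,%d}" % z
    return re.fullmatch(gap + ("1{%d}" % ctx + gap) * K, pattern) is not None

for p in ("11111110011111110000", "111111100000111111100"):
    print(p, pattern_ok(p, ctx=7, K=2, z=5))  # both True
```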
The aligner rHAT [6] has been excluded from the study, as it has been reported to malfunction in certain scenarios [7]. The PacBio read alignment module of BWA-MEM [4] has been replaced by Minimap2, as the latter retains all the main features of BWA-MEM while being 50× faster and more accurate. Therefore, the results of BWA-MEM have been omitted.

Table 1 Parameter settings and commands used by different methods for mapping of PacBio SMRT reads
The results demonstrated in this article are organized into three categories: (1) performance on simulated datasets, (2) study on real PacBio reads, and (3) robustness of S-conLSH under different parameter settings.

Experiment on simulated dataset
To study the accuracy of SMRT read mapping, a total of 146,932 noisy long reads were simulated from the hg38 human genome using the PBSIM [30] command "pbsim --data-type CLR --depth 1 --length-min 1 --length-max 200000 --seed 0 --sample-fastq real.fastq hg38.fa". The error profile was sampled from three real human PacBio RS II P5/C3 read files listed below, concatenated as real.fastq.
The simulated reads from 5 different human chromosomes are used to test the performance of S-conLSH in comparison to the other standard aligners. The sensitivity and precision have been computed based on the ground truth obtained from the .maf files of PBSIM. A read is considered to be mapped correctly (as defined by [8]) if (1) it is mapped to the correct chromosome and strand, and (2) the target subsequence of the reference genome where the read maps overlaps the true mapping by at least 1 bp. Sensitivity is measured as the fraction of correctly mapped reads out of the total number of reads. Precision is defined, analogously, as the fraction of correctly mapped reads out of the total number of mapped reads.

Table 3 summarizes the number of correct mappings, sensitivity, precision, and running time of Minimap, Minimap2, lordFAST, MUMmer4, conLSH, and S-conLSH for a total of 146,932 reads simulated from five different human chromosomes. The number of reads extracted from each chromosome is listed in Table 3. The results show that S-conLSH produces the highest number of correct mappings among all the aligners for the different chromosomes of the Human-sim dataset. S-conLSH maps 32,111 out of the total 32,290 reads of Chr#1, among which 31,964 mappings are found to be correct when compared with the ground truth. Minimap2 is the second best in producing correct mappings in this case. A similar scenario is generally observed for the four other chromosomes as well. Minimap2 always aligns all the reads to some location in the reference genome, but produces more incorrect mappings than S-conLSH. Evidently, S-conLSH provides the highest sensitivity for all the chromosomes considered. Minimap, on the other hand, exhibits higher precision but lower sensitivity, as it leaves a large number of reads unaligned. The number of reads left unaligned by Minimap increases for large and complicated real datasets.
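The correctness criterion and the two measures above can be computed as sketched below (illustrative Python; the interval representation and helper names are our own, not part of any published tool):

```python
def evaluate(mappings, truth):
    """mappings: read_id -> (chrom, strand, start, end), or absent if unmapped.
    truth:    read_id -> (chrom, strand, start, end) from the PBSIM .maf files.
    A mapping is correct if chromosome and strand agree and the mapped
    interval overlaps the true interval by at least 1 bp."""
    mapped = correct = 0
    for rid, (c2, s2, b2, e2) in truth.items():
        if rid not in mappings:
            continue
        mapped += 1
        c1, s1, b1, e1 = mappings[rid]
        if c1 == c2 and s1 == s2 and b1 <= e2 and b2 <= e1:
            correct += 1
    sensitivity = correct / len(truth)
    precision = correct / mapped if mapped else 0.0
    return sensitivity, precision

truth = {"r1": ("chr1", "+", 100, 200),
         "r2": ("chr1", "+", 500, 600),
         "r3": ("chr2", "-", 10, 50)}
mappings = {"r1": ("chr1", "+", 150, 260),   # correct: overlaps the truth
            "r2": ("chr1", "-", 500, 600)}   # wrong strand; r3 unmapped
print(evaluate(mappings, truth))  # sensitivity 1/3, precision 1/2
```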
As can be seen, S-conLSH takes 38 CPU seconds to map the reads of chromosome 1, which is slightly slower than Minimap. Minimap owes its speed to mapping a smaller number of reads than the other aligners. Interestingly, S-conLSH has a smaller mapping time than all the remaining algorithms, while producing the maximum number of correctly mapped reads. As no separate indexing and aligning times are available for MUMmer4, its total time is reported as "Mapping time". MUMmer4 consumes a large amount of time to achieve the desired level of sensitivity. It is evident from Table 3 that the indexing time is quite low for both Minimap and Minimap2. The indexing time of S-conLSH is relatively higher, though much smaller than that of lordFAST. It may be noted here that indexing is performed only once for a given reference genome, while read mapping must be performed every time a different individual is sequenced. The compressed and memory-efficient B-tree indexing of conLSH makes it the fastest at processing reference genomes. However, the mapping time of conLSH is large, as it performs base-to-base alignments using Sparse Dynamic Programming. Its stringent ungapped matching requirement over the entire context of the sequences results in lower sensitivity, ranking just after lordFAST. The proposed alignment-free tool, S-conLSH, is useful in such cases, as it obtains gapped mappings of the noisy reads using spaced-contexts.
While this section reports results of single-threaded execution, Tables [S-1] to [S-5] of Additional file 1: Note 1 exhibit the speed-up of S-conLSH on multithreaded systems. S-conLSH achieves more than a 50% reduction in mapping time when run in 4 concurrent threads compared to its single-threaded version. The indexing time of S-conLSH also improves with a higher degree of parallelism and becomes comparable with that of Minimap2 when the number of threads is 8. Moreover, this performance gain comes with almost no additional memory requirement. Please refer to Additional file 1: Note 1 for a detailed report.

Experiment on real PacBio reads

The real PacBio datasets used in this experiment are summarized in Table 2. A comparative study of running time, percentage of reads aligned, and coverage by different aligners is detailed in Table 4 for the real human SMRT subread file m130929_024849_42213_c100518541*_s1_p0.1.subreads.fastq, consisting of 23,235 reads. Results for MUMmer4 are excluded, since it takes inordinately long. It can be seen that S-conLSH provides the highest coverage value among the five standard methods used in the experiment. Minimap has no coverage statistics, as it is unable to produce alignments as a SAM file. The performance in terms of indexing and mapping time, as shown in Table 4, is similar to that already observed for the simulated datasets. The percentage of reads aligned is highest for Minimap2 and lordFAST. This resembles the scenario on the simulated datasets, where Minimap2 and lordFAST align all the reads against the reference genome, even though the output may contain some incorrect mappings. S-conLSH, on the other hand, has a mapping ratio of 99.9%, which is lower than that of Minimap2 and lordFAST. This is because S-conLSH gives higher priority to mapping accuracy and leaves a few reads unaligned if no potential target locations are found. S-conLSH has a higher memory footprint of about 13 GB for indexing the entire human genome.

Table 5 Comparative study of running time, percentage of reads aligned by different aligners for four datasets of E. coli-real, A. thaliana-real, O. sativa-real and S. cerevisiae-real
Similar results are observed for the E. coli-real, A. thaliana-real, O. sativa-real, and S. cerevisiae-real PacBio datasets, as can be seen in Table 5. It is clear that S-conLSH is among the fastest in terms of mapping time, after Minimap. However, Minimap fails to align a good portion of the reads for large datasets like A. thaliana and S. cerevisiae. The aligner conLSH, on the other hand, requires lower indexing time but higher mapping time to align a reasonable number of reads to the reference genome. Moreover, the alignment quality of conLSH is often compromised, as studied in the previous subsections. The percentage of reads mapped by S-conLSH is generally lower than that of Minimap2 and lordFAST, as it tries to ensure the best possible mapping quality. It is, however, difficult to measure the quality of read mappings for real datasets, as no ground truth is available in such cases.

Robustness of S-conLSH for different parameter settings
An exhaustive experiment with different values of K, λ, L, and z has been carried out to study the robustness of the proposed method. Table 6 summarizes the indexing and mapping time, along with the percentage of reads aligned, for different values of the S-conLSH parameters on the real human SMRT dataset m130929_024849_42213_c100518541*_s1_p0.1.subreads.fastq. As can be observed, the best performance (highest percentage of reads mapped in minimum time) is achieved with the settings K = 2, (2λ + 1) = 7, L = 2, and z = 5. The mapping time increases with L, as it directly corresponds to the number of hash tables (one for each of the L different spaced-seeds) used to retrieve the target locations. The search therefore becomes more rigorous, as it considers all spaced-contexts obtained from the L different patterns. This is, however, useful for highly sensitive applications, at the expense of a few more seconds of mapping time. An efficient solution could be the use of S-conLSH with higher values of L, distributed over multiple concurrent threads. Please refer to Additional file 1: Note 2 regarding the performance of S-conLSH for different values of L on a multi-threaded system. Indexing time, on the other hand, is proportional to the product (2λ + 1)K. The parameter z appears to have little effect on the running time and the percentage of reads aligned, which mostly stay invariant with z. However, z is important for enhancing the sensitivity of the method. This is reflected in Table 7 for the simulated reads. The highest number of correct mappings is obtained with the default settings (shown in italics), when z = 5. The zeros in the spaced-seed help to find distant similarities, as they let the pattern span a larger portion of the sequence while the weight ((2λ + 1)K) of the pattern remains the same. However, a very large value of z may degrade the accuracy, as it joins unrelated contexts together.
It is evident from Tables 6 and 7 that the performance of S-conLSH remains reasonably good irrespective of the variation of the parameter values. Therefore, it can be concluded that the algorithm S-conLSH is quite robust even though it requires some tuning of different parameters for the best performance.

Discussion and conclusions
S-conLSH is one of the first alignment-free reference genome mapping tools to achieve a high level of sensitivity. Earlier, Minimap was designed to map reads against the reference genome without performing an actual base-to-base alignment. However, the low sensitivity of Minimap precluded its application in real-life domains. Minimap2 is one of the best-performing state-of-the-art alignment-based methods, providing an excellent balance of running time and sensitivity. The method described in this article, S-conLSH, has been observed to outperform Minimap2 with respect to sensitivity, precision, and mapping time. However, it has a longer indexing time and a higher memory footprint. Nevertheless, sequence indexing is a one-time affair, and memory is inexpensive nowadays.
The spaced-context in S-conLSH is especially suitable for extracting distant similarities. The variable-length spaced-seeds or patterns add flexibility to the proposed algorithm. Multiple patterns (higher values of L) increase the sensitivity, but at the cost of more time. Moreover, with the introduction of don't-care positions, the patterns become longer, thus providing better performance in resolving conflicts that arise from repetitive regions. The provision of rehashing for chimeric read alignment and reverse-strand mapping makes S-conLSH ideal for applications in real-life sequence analysis pipelines.
A memory-efficient version of S-conLSH can be developed in the future. The algorithm, at its current stage, cannot determine an optimal selection of the patterns. A study on finding the optimal set of spaced-seeds could be carried out in the future to improve the performance of the algorithm. Though the experiments demonstrated in this article are confined to the noisy long reads of PacBio datasets, the method can be further extended to ONT reads as well. Finally, we conclude with the expectation that the proposed method, S-conLSH, will draw the attention of peers as one of the best-performing reference mapping tools designed so far.

Methods
The algorithm S-conLSH for mapping noisy long reads to the reference genome essentially consists of two steps, reference genome indexing and read mapping. The complete workflow of S-conLSH is provided in Fig. 1 and the entire procedure is detailed below.

Reference genome indexing
The reference genome is sliced into overlapping windows, and these windows are hashed into hash tables using suitably designed S-conLSH functions (see Definition 5), as shown in Fig. 1. For each noisy long read, S-conLSH computes hash values with the same functions and retrieves the sequences of the reference genome that are hashed into the same positions as the read. Finally, the locations of the sequences with the highest hits are chained and reported as the alignment-free mapping of the query read (see Fig. 1). By default, S-conLSH provides alignment-free mappings of the SMRT reads to the reference genome. If a base-level alignment is required, S-conLSH provides an option (--align 1) to generate alignments in SAM format using the ksw library (https://github.com/attractivechaos/klib). Some key aspects of S-conLSH are detailed in the following subsections.
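The two-step workflow above can be illustrated with a toy sketch (a simplification under assumed parameters: a single hash function, a tiny made-up genome, and no chaining, reverse-strand, or chimeric handling as in the real tool):

```python
from collections import defaultdict

def build_index(genome, window, step, hash_fns):
    """Slice the genome into overlapping windows and record each window's
    start position under its hash value, one table per hash function."""
    tables = [defaultdict(list) for _ in hash_fns]
    for start in range(0, len(genome) - window + 1, step):
        w = genome[start:start + window]
        for table, h in zip(tables, hash_fns):
            table[h(w)].append(start)
    return tables

def map_read(read, tables, hash_fns):
    """Hash the read with the same functions and return the reference
    position with the most hits (None if nothing collides)."""
    votes = defaultdict(int)
    for table, h in zip(tables, hash_fns):
        for pos in table.get(h(read), ()):
            votes[pos] += 1
    return max(votes, key=votes.get) if votes else None

# Toy run: one hash function that extracts a spaced-context.
pattern = "110110"
h = lambda s: "".join(c for c, p in zip(s, pattern) if p == "1")
genome = "TTTTACGTACTTTT"
tables = build_index(genome, window=6, step=1, hash_fns=[h])
# "ACTTAC" equals the window at position 4 up to an error at a '0' position.
print(map_read("ACTTAC", tables, [h]))  # -> 4
```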

Context based locality sensitive hashing
Locality Sensitive Hashing [17,18] is an approximate near-neighbor search algorithm in which points that are close in the feature space have a high probability of colliding in the hash table. Under this assumption, a query is compared only to the objects having the same hash value, rather than to all items in the database, which makes the search sublinear in time. In the definitions below, we use the following notation: for a string x of length d over some set of symbols and 1 ≤ i ≤ j ≤ d, x[i] denotes the ith symbol of x, and x[i..j] denotes the (contiguous) substring of x from position i to position j. If H is a finite set of functions defined on some set X, then for any h ∈ H drawn uniformly at random and x, y ∈ X, Pr_H[h(x) = h(y)] denotes the probability that h(x) = h(y).
The definition of Locality Sensitive Hashing, as introduced in [17,18], is given below:

Definition 1 (Locality Sensitive Hashing) [17,18] Let (X, D) be a metric space, let H be a family of hash functions mapping X to some set U, and let R, c, P1, P2 be real numbers with c > 1 and 0 ≤ P2 < P1 ≤ 1. H is said to be (R, cR, P1, P2)-sensitive if for any x, y ∈ X and h ∈ H:
• Pr_H[h(x) = h(y)] ≥ P1 whenever D(x, y) ≤ R, and
• Pr_H[h(x) = h(y)] ≤ P2 whenever D(x, y) ≥ cR.
To illustrate the concept of locality sensitive hashing for DNA sequences, let us consider the finite alphabet Σ = {A, T, C, G} together with an integer d > 0. Let X be the set of all length-d words over Σ, endowed with the Hamming distance D, and let U = Σ. For 1 ≤ i ≤ d, let the function h_i : X → U be defined by h_i(x) = x[i], ∀x ∈ X. Next, let R and cR be real numbers with c > 1 and 0 ≤ R < cR ≤ d, and define P1 = (d − R)/d and P2 = (d − cR)/d. Then the set H = {h_i : 1 ≤ i ≤ d} is (R, cR, P1, P2)-sensitive. To see this, observe that for any two words p, q ∈ X, the probability of a collision under a uniformly drawn h_i is the fraction of positions at which p and q agree:

Pr_H[h_i(p) = h_i(q)] = (d − D(p, q))/d.

Hence this probability is at least (d − R)/d = P1 whenever D(p, q) ≤ R, and at most (d − cR)/d = P2 whenever D(p, q) ≥ cR. Moreover, P1 > P2 as cR > R. This proves that the family of hash functions H is (R, cR, P1, P2)-sensitive.

In biological applications, it is often useful to consider the local context of sequence positions and to look for matching subwords, as shown in conLSH [16], which groups similar sequences into localized slots of the hash tables by considering the neighborhoods, or contexts, of the data points. To define context based locality sensitive hashing, the above example is generalized such that, for a given subword length (2λ + 1) < d, each hash function in H maps words containing the same length-(2λ + 1) subword at some position to the same bucket in U. The subword length (2λ + 1) is called the context size, where λ is the context factor.
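The collision probability of this family can be verified directly: for a uniformly drawn h_i, Pr[h_i(p) = h_i(q)] equals the fraction of agreeing positions, i.e., (d − D(p, q))/d (a small verification sketch with made-up sequences):

```python
def hamming(p, q):
    """Hamming distance D(p, q) between two equal-length words."""
    return sum(a != b for a, b in zip(p, q))

def collision_prob(p, q):
    """Probability that a uniformly drawn h_i from H = {h_i(x) = x[i]}
    collides on p and q: the fraction of positions where they agree."""
    return sum(a == b for a, b in zip(p, q)) / len(p)

p, q = "ACGTACGT", "ACGAACGG"   # d = 8, D(p, q) = 2
print(collision_prob(p, q))      # (d - D)/d = 6/8 = 0.75
```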

Definition 3 (Context based Locality Sensitive Hashing (conLSH))
Let Σ be a set called the alphabet. Let λ and d be integers with (2λ + 1) < d. Let X be the set of all length-d words over Σ and U be the set of all length-(2λ + 1) words over Σ. For R, cR, P1, and P2 as above, an (R, cR, P1, P2)-sensitive family H of functions mapping X to U is called (R, cR, λ, P1, P2)-sensitive if, for each h ∈ H, there are positions i_h and j_h with λ + 1 ≤ i_h, j_h ≤ d − λ such that for all p, q ∈ X one has h(p) = h(q) whenever p[i_h − λ .. i_h + λ] = q[j_h − λ .. j_h + λ] holds.
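A simple member of such a family extracts the length-(2λ + 1) context at a fixed position (an illustrative reading of Definition 3; positions are 1-indexed as in the definition, and the sequences are made up):

```python
def conlsh_hash(x: str, i: int, lam: int) -> str:
    """Hash value of x under a context-based function: the length-(2*lam + 1)
    subword of x centred at position i (1-indexed, lam + 1 <= i <= len(x) - lam)."""
    return x[i - lam - 1 : i + lam]

# Two sequences sharing the context "GTA" at positions i_h = 4 and j_h = 2
# receive the same hash value and land in the same slot of the hash table.
print(conlsh_hash("AAGTACC", 4, 1))  # -> GTA
print(conlsh_hash("GTACCCC", 2, 1))  # -> GTA
```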
Gapped read mapping using spaced-context based locality sensitive hashing

The proposed method, S-conLSH, uses spaced-seeds or patterns of 0's and 1's in connection with the S-conLSH function. For a pattern P, the spaced-context of a DNA sequence is defined as follows:

Definition 4 (Spaced-context) Let P be a pattern of length ℓ and x a sequence of length at least ℓ. The spaced-context of x with respect to P, denoted sw(x), is the subsequence formed by the symbols of x at the '1' (match) positions of P.

Sequences sharing a similar spaced-context with respect to a pre-defined pattern P are hashed together in S-conLSH.
The concept of gap amplification is used in locality sensitive hashing to ensure that dissimilar items are well separated from similar ones. To this end, the gap between the probability values P1 and P2 needs to be increased. This is achieved by choosing L different hash functions, g_1, g_2, ..., g_L, such that g_j is the concatenation of K randomly chosen hash functions from H, i.e., g_j = (h_{1,j}, h_{2,j}, ..., h_{K,j}), for 1 ≤ j ≤ L. This procedure is known as "gap amplification", and K is called the "concatenation factor" [18]. For every hash function g_j, 1 ≤ j ≤ L, there is a pattern P_j associated with it. The spaced-context based Locality Sensitive Hashing is now defined as follows:

Definition 5 (Spaced-context based Locality Sensitive Hashing (S-conLSH)) Let sw_j(x) be the spaced-context of a sequence x with respect to the binary pattern P_j of length ℓ, 1 ≤ j ≤ L. Let P_j be defined by the regular expression 0*(1^(2λ+1) 0*)^K. The weight of P_j, i.e., its number of 1's, is therefore ℓ_w = (2λ + 1)K. The maximum value of ℓ is (2λ + 1)K + z(K + 1), assuming that runs of at most z zeros occur before, between, and after the K contexts of 1's in P_j, where z ≥ 0 is an integer parameter. Let d be an integer with ℓ ≤ d, let X be the set of all length-d words over Σ, and let U be the set of all length-ℓ_w words over Σ. For R, cR, λ, P1, and P2 as introduced in Definition 3, an (R, cR, λ, P1, P2)-sensitive hash function g_j = (h_{1,j}, h_{2,j}, ..., h_{K,j}), where h_{i,j} ∈ H, 1 ≤ i ≤ K, mapping X to U is called (R, cR, λ, z, P1, P2)-sensitive if for any p, q ∈ X one has g_j(p) = g_j(q) whenever sw_j(p) = sw_j(q) holds with respect to the pattern P_j.

Fig. 2 A schematic illustration of gapped mapping using S-conLSH. a Multiple patterns having context size (2λ + 1) = 3 and K = 2. b, c Hashing of the strings "ATTCGGTAA" and "TTCTAAGTA", respectively, using different patterns. d Final hash table and gapped mapping of the two strings due to the collision at "TTCTAA"
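The gapped mapping of Fig. 2 can be reproduced in miniature. The two strings below are those of the figure; the two patterns (L = 2, context size 3, K = 2) are our own illustrative choices and need not match the figure's exact patterns:

```python
def spaced_context(seq: str, pattern: str) -> str:
    """Symbols of `seq` at the '1' positions of `pattern`."""
    return "".join(s for s, p in zip(seq, pattern) if p == "1")

# L = 2 illustrative patterns with context size (2λ + 1) = 3 and K = 2.
patterns = ["011100111", "111111000"]

x1, x2 = "ATTCGGTAA", "TTCTAAGTA"    # the two strings of Fig. 2
table = {}
for x in (x1, x2):
    for pat in patterns:
        table.setdefault(spaced_context(x, pat), []).append(x)

# The strings share the spaced-context "TTCTAA" under different patterns,
# so they collide in the hash table, yielding a gapped mapping.
print(table["TTCTAA"])  # -> ['ATTCGGTAA', 'TTCTAAGTA']
```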