Optimal assembly for high throughput shotgun sequencing

We present a framework for the design of optimal assembly algorithms for shotgun sequencing under the criterion of complete reconstruction. We derive a lower bound on the read length and the coverage depth required for reconstruction in terms of the repeat statistics of the genome. Building on earlier works, we design a de Bruijn graph based assembly algorithm that comes very close to the lower bound for the repeat statistics of a wide range of sequenced genomes, including the GAGE datasets. The results are based on a set of necessary and sufficient conditions on the DNA sequence and the reads for reconstruction. These conditions can be viewed as the shotgun sequencing analogue of Ukkonen-Pevzner's necessary and sufficient conditions for Sequencing by Hybridization.


Problem statement
DNA sequencing is the basic workhorse of modern-day biology and medicine. Since the sequencing of the Human Reference Genome ten years ago, there has been an explosive advance in sequencing technology, resulting in several orders of magnitude increase in throughput and decrease in cost. Multiple "next-generation" sequencing platforms have emerged. All of them are based on the whole-genome shotgun sequencing method, which entails two steps. First, many short reads are extracted from random locations on the DNA sequence, with the length, number, and error rates of the reads depending on the particular sequencing platform. Second, the reads are assembled to reconstruct the original DNA sequence.
Assembly of the reads is a major algorithmic challenge, and over the years dozens of assembly algorithms have been proposed to solve this problem [28]. Nevertheless, the assembly problem is far from solved, and it is not clear how to compare algorithms nor where improvement might be possible. The difficulty of comparing algorithms is evidenced by the recent assembly evaluations Assemblathon 1 [2] and GAGE [23], where which assembler is "best" depends on the particular dataset as well as the performance metric used. In part this is a consequence of metrics for partial assemblies: there is an inherent tradeoff between larger contiguous fragments (contigs) and fewer mistakes in merging contigs (misjoins). But more fundamentally, independent of the metric, performance depends critically on the dataset, i.e. the length, number, and quality of the reads, as well as the complexity of the genome sequence. With an eye towards the near future, we seek to understand the interplay between these factors by using the intuitive and unambiguous metric of complete reconstruction. (The notion of complete reconstruction can be thought of as a mathematical idealization of the notion of "finishing" a sequencing project as defined by the National Human Genome Research Institute [18], where finishing a chromosome requires at least 95% of the chromosome to be represented by a contiguous sequence.) Note that this objective of reconstructing the original DNA sequence from the reads contrasts with the many optimization-based formulations of assembly, such as shortest common superstring (SCS) [7], maximum-likelihood [16], [11], and various graph-based formulations [22], [14]. When solving one of these alternative formulations, there is no guarantee that the optimal solution is indeed the original sequence.
Given the goal of complete reconstruction, the most basic questions are: 1) feasibility: given a set of reads, is it possible to reconstruct the original sequence? 2) optimality: which algorithms can successfully reconstruct whenever it is feasible to reconstruct? The feasibility question is a measure of the intrinsic information each read provides about the DNA sequence, and for given sequence statistics it depends on characteristics of the sequencing technology such as read length and noise statistics. As such, it can provide an algorithm-independent basis for evaluating the efficiency of a sequencing technology. Equally important, algorithms can be evaluated on their relative read length and data requirements, and compared against the fundamental limit.
In studying these questions, we consider the most basic shotgun sequencing model, in which N noiseless reads (reads are thus exact subsequences of the DNA) of a fixed length L base pairs are uniformly and independently drawn from a DNA sequence of length G. In this statistical model, feasibility is rephrased as the question of whether, for given sequence statistics, the correct sequence can be reconstructed with probability 1 − ε when N reads of length L are sampled from the genome. We note that answering the feasibility question of whether each (N, L) pair is sufficient to reconstruct is equivalent to finding the minimum required N (or the coverage depth c = NL/G) as a function of L.
A lower bound on the minimum coverage depth needed was obtained by Lander and Waterman [9]. Their lower bound c_LW = c_LW(L, ε) is the coverage depth at which the randomly located reads cover the entire DNA sequence with a given target success probability 1 − ε. While this is clearly a necessary condition, it is
in general not tight: merely requiring the reads to cover the entire genome sequence does not guarantee that consecutive reads can actually be stitched back together to recover the original sequence. Characterizing when the reads can be reliably stitched together, i.e. determining feasibility, is an open problem. In fact, the ability to reconstruct depends crucially on the repeat statistics of the DNA sequence. An earlier work [13] answered the feasibility and optimality questions under an i.i.d. model for the DNA sequence. However, real DNA sequences, especially those of eukaryotes, have much longer and more complex repeat structures. Here, we are interested in determining feasibility and optimality given arbitrary repeat statistics. This allows us to evaluate algorithms on statistics from already sequenced genomes, and gives confidence in predicting whether the algorithms will be useful for an unseen genome with similar statistics.

Results
Our approach results in a pipeline, which takes as input a genome sequence and a desired success probability 1 − ε, computes a few simple repeat statistics, and from these statistics computes a feasibility plot that indicates for which (L, N) reconstruction is possible. Fig. 1 displays the simplest of the statistics, the number of repeats as a function of the repeat length ℓ. Fig. 2 shows the resulting feasibility plot produced for the statistics of human chromosome 19 (henceforth hc19) with success probability 99%. The horizontal axis signifies the read length L and the vertical axis signifies the normalized coverage depth c̄ := c/c_LW, the coverage depth c normalized by c_LW, the coverage depth required, as per Lander-Waterman [9], in order to cover the sequence.
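The repeat statistic of Fig. 1 can be sketched in a few lines. The helper below (our own naming, not code from the paper) counts, for each length ℓ, the distinct ℓ-mers occurring at least twice; this is a naive O(G·ℓmax) proxy for the maximal-repeat counts, and a real pipeline would use a suffix array instead.

```python
from collections import Counter

def repeated_lmer_counts(s, max_l):
    """For each length l, count distinct l-mers that occur at least
    twice in s -- a simple proxy for the repeat statistics a_l.
    (Naive sketch; a real pipeline would use a suffix array.)"""
    stats = {}
    for l in range(1, max_l + 1):
        counts = Counter(s[i:i + l] for i in range(len(s) - l + 1))
        stats[l] = sum(1 for c in counts.values() if c >= 2)
    return stats

stats = repeated_lmer_counts("ATCGATCGTT", 4)
print(stats[4])  # only "ATCG" repeats at length 4 -> 1
```

Plotting `stats` against ℓ would reproduce the shape of Fig. 1 for a toy sequence.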
Since the coverage depth must satisfy c ≥ c_LW, the normalized coverage depth satisfies c̄ ≥ 1, and we plot the horizontal line c̄ = 1. This lower bound holds for any assembly algorithm. In addition, there is another lower bound, shown as the thick black nearly vertical line in Fig. 2. In contrast to the coverage lower bound, this lower bound is a function of the repeat statistics. It has a vertical asymptote at L_crit := max{ℓ_interleaved, ℓ_triple} + 1, where ℓ_interleaved is the length of the longest pair of interleaved repeats in the DNA sequence and ℓ_triple is the length of the longest triple repeat (see Section 2 for precise definitions). Our lower bound can be viewed as a generalization of a result of Ukkonen [26] for Sequencing by Hybridization to the shotgun sequencing setting.
Each colored curve in the feasibility plot is the lower boundary of the set of feasible (N, L) pairs for a specific algorithm. The rightmost curve is the one achieved by the greedy algorithm, which merges reads with largest overlaps first (used, for example, in TIGR [25], CAP3 [5], and more recently SSAKE [27]). As seen in Fig. 2, its performance curve asymptotes at L = ℓ_repeat, the length of the longest repeat. De Bruijn graph based algorithms (e.g. [6] and [22]) take a more global view via the construction of a de Bruijn graph out of all the K-mers of the reads. The performance curves of all K-mer graph based algorithms asymptote at read length L = L_crit, but different algorithms use read information in a variety of ways to resolve repeats in the K-mer graph and thus have different coverage depth requirements beyond read length L_crit. By combining ideas from several existing algorithms (including [22], [19]) we designed MultiBridging, which comes very close to the lower bound for this dataset. Thus Fig. 2 answers, up to a very small gap, the feasibility of assembly for the repeat statistics of hc19, where successful reconstruction is desired with probability 99%.
We produce similar plots for a dozen or so datasets (see supplementary material).
For datasets where ℓ_interleaved is significantly larger than ℓ_triple (the majority of the datasets we looked at, including those used in the recent GAGE assembly algorithm evaluation [23]), MultiBridging is near optimal, thus allowing us to characterize the fundamental limits for these repeat statistics (Fig. 9). On the other hand, if ℓ_triple is close to or larger than ℓ_interleaved, there is a gap between the performance of MultiBridging and the lower bound (see for example Fig. 3). The reason for the gap is explained in Section 3.4. An interesting feature of the feasibility plots is that for typical repeat statistics exhibited by DNA data, the minimum coverage depth is characterized by a critical phenomenon: if the read length L is below L_crit = ℓ_interleaved + 1, reliable reconstruction of the DNA sequence is impossible no matter what the coverage depth is, but if the read length L is slightly above L_crit, then covering the sequence suffices, i.e. c̄ = c/c_LW = 1. The sharpness of the critical phenomenon is described by the size of the critical window, which refers to the range of L over which the transition from one regime to the other occurs. For the case when MultiBridging is near optimal, the width W of the window can be well approximated by Equation (1). For the hc19 dataset, the critical window size evaluates to about 19% of L_crit.
In Sections 2 and 3, we discuss the underlying analysis and algorithm design supporting the plots. The curves are all computed from formulas, which are validated by simulations in Section 4. We return in Section 5 to put our contributions in a broader perspective and to discuss extensions to the basic framework. All proofs can be found in the appendix.

Lower bounds
In this section we discuss lower bounds, due to coverage analysis and certain repeat patterns, on the required coverage depth and read length. The style of analysis here is continued in Section 3, in which we search for an assembly algorithm that performs close to the lower bounds.

Coverage bound
Lander and Waterman's coverage analysis [9] gives the well-known condition for the number of reads N_LW required to cover the entire DNA sequence with probability at least 1 − ε. In the regime L ≪ G, one may make the standard assumption that the starting locations of the N reads follow a Poisson process with rate λ = N/G, and the number N_LW is then to a very good approximation given by the solution to the equation N e^(−NL/G) = ε. The corresponding coverage depth is c_LW = N_LW L/G. This is our baseline coverage depth against which to compare the coverage depth of various algorithms. For each algorithm, we will plot the coverage depth required by that algorithm normalized by c_LW. Note that c̄ is also the ratio of the number of reads N required by an algorithm to N_LW. The requirement c̄ ≥ 1 is due to the lower bound on the number of reads obtained by the Lander-Waterman coverage condition.
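The coverage equation has no closed-form solution for N_LW, but since N e^(−NL/G) is decreasing in N once N > G/L, it is easy to solve numerically. A minimal sketch (function name ours), assuming the Poisson model stated above:

```python
import math

def lander_waterman_reads(G, L, eps):
    """Smallest N with N * exp(-N*L/G) <= eps: the approximate number
    of reads needed to cover a length-G sequence with probability
    1 - eps under the Poisson model. f(N) = N*exp(-N*L/G) is
    decreasing for N > G/L, so bisection applies."""
    lo = hi = G / L  # one-fold coverage as a starting point
    while hi * math.exp(-hi * L / G) > eps:
        hi *= 2
    for _ in range(100):
        mid = (lo + hi) / 2
        if mid * math.exp(-mid * L / G) > eps:
            lo = mid
        else:
            hi = mid
    return hi

N = lander_waterman_reads(G=1_000_000, L=100, eps=0.01)
print(round(N * 100 / 1_000_000, 2))  # coverage depth c_LW, roughly 16-17x
```

For a 1 Mb genome and 100 bp reads at 99% target coverage probability, the required depth comes out near 17x, illustrating why c_LW grows only logarithmically in 1/ε.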

Ukkonen's condition
A second constraint on reads arises from repeats.
A lower bound on the read length L follows from Ukkonen's condition [26]: if there are interleaved repeats or triple repeats in the sequence of length at least L − 1, then the likelihood of observing the reads is the same for more than one possible DNA sequence, and hence correct reconstruction is not possible. (Note that we assume 1 − ε > 1/2, so random guessing between equally likely sequences is not viable.) Fig. 4 shows an example with interleaved repeats: the two red subsequences form a repeat and the two orange subsequences form another repeat, and the likelihood of observing the reads under the two possible sequences (the green and magenta segments swapped) is the same.
We take a moment to carefully define the various types of repeats. Let s_t^ℓ denote the length-ℓ subsequence of the DNA sequence s starting at position t. A repeat of length ℓ is a subsequence appearing twice, at some positions t1, t2 (so s_t1^ℓ = s_t2^ℓ), that is maximal (i.e. s(t1 − 1) ≠ s(t2 − 1) and s(t1 + ℓ) ≠ s(t2 + ℓ)). Similarly, a triple repeat of length ℓ is a subsequence appearing three times, at positions t1, t2, t3, such that s_t1^ℓ = s_t2^ℓ = s_t3^ℓ, and such that neither s(t1 − 1) = s(t2 − 1) = s(t3 − 1) nor s(t1 + ℓ) = s(t2 + ℓ) = s(t3 + ℓ) holds. A copy is a single one of the instances of the subsequence's appearances. A pair of repeats refers to two repeats, each having two copies. A pair of repeats, one at positions t1, t3 with t1 < t3 and the second at positions t2, t4 with t2 < t4, is interleaved if t1 < t2 < t3 < t4 or t2 < t1 < t4 < t3 (Fig. 4). The length of a pair of interleaved repeats is defined to be the length of the shorter of the two repeats.
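A naive check for interleaved repeats on a toy string, under the definition above. This sketch works on exact ℓ-mers, ignores maximality, and only inspects the first two copies of each repeat, so it is an illustration rather than the statistic used in the pipeline:

```python
from collections import defaultdict

def has_interleaved_repeats(s, l):
    """True if s contains a pair of interleaved length-l repeats:
    one at positions t1 < t3 and another at t2 < t4 with
    t1 < t2 < t3 < t4 (naive sketch on exact l-mers)."""
    pos = defaultdict(list)
    for i in range(len(s) - l + 1):
        pos[s[i:i + l]].append(i)
    reps = [p for p in pos.values() if len(p) >= 2]
    for a in reps:
        for b in reps:
            if a is not b and a[0] < b[0] < a[1] < b[1]:
                return True
    return False

# "AA" at 0, 6 and "CC" at 3, 9 -> interleaved pattern R1 R2 R1 R2
print(has_interleaved_repeats("AATCCGAATCCG", 2))  # True
```

In "AATAACCTCC" the two repeats occur in the non-interleaved pattern R1 R1 R2 R2, so the same call returns False.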
Ukkonen's condition implies a lower bound on the read length: L ≥ L_crit = max{ℓ_interleaved, ℓ_triple} + 1, where ℓ_interleaved is the length of the longest pair of interleaved repeats in the DNA sequence and ℓ_triple is the length of the longest triple repeat. Ukkonen's condition says that for read lengths less than L_crit, reconstruction is impossible no matter what the coverage depth is. But it can be generalized to provide a lower bound on the coverage depth for read lengths greater than L_crit, through the important concept of bridging, as shown in Figure 5. We observe that in Ukkonen's interleaved or triple repeats, the actual length of the repeated subsequences is irrelevant; rather, to cause confusion it is enough that all the copies of the pertinent repeats are unbridged. This leads to the following theorem.
Theorem 1. Given a DNA sequence s and a set of reads, if there is a pair of interleaved repeats or a triple repeat whose copies are all unbridged, then there is another sequence s′ of the same length under which the likelihood of observing the reads is the same.
For brevity, we will call a repeat or a triple repeat bridged if at least one copy of the repeat is bridged, and a pair of interleaved repeats bridged if at least one of the two repeats is bridged. Thus, the above theorem says that a necessary condition for reconstruction is that all interleaved and triple repeats are bridged.
How does Theorem 1 imply a lower bound on the coverage depth? Focus on the longest pair of interleaved repeats and suppose the read length L is between the lengths of the shorter and the longer repeats. The probability that this pair is unbridged is (p^unbridged_ℓinterleaved)², where p^unbridged_ℓ = e^(−(N/G)(L−ℓ−1)) (3) is the probability that a single copy of a repeat of length ℓ is unbridged. Theorem 1 implies that the probability of making an error in the reconstruction is at least 1/2 if this event occurs. Hence, the requirement that P_error ≤ ε implies a lower bound on the number of reads N: N ≥ G ln(1/(2ε)) / (2(L − ℓ_interleaved − 1)). A similar lower bound can be derived using the longest triple repeat. A slightly tighter lower bound can be obtained by taking into consideration the bridging of all the interleaved and triple repeats, not only the longest one, resulting in the black curve in Fig. 2.
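This bound is easy to evaluate numerically. The sketch below assumes the Poisson-model bridging probability p^unbridged_ℓ = e^(−(N/G)(L−ℓ−1)); the genome length is an illustrative hc19-like value of our choosing, not a figure from the paper:

```python
import math

def min_reads_interleaved(G, L, l_int, eps):
    """Lower bound on N from the longest interleaved repeat pair:
    a copy of a length-l repeat is unbridged with probability
    exp(-(N/G)(L - l - 1)), so (1/2)*p_unbridged^2 <= eps gives
    N >= G*ln(1/(2*eps)) / (2*(L - l_int - 1))."""
    assert L > l_int + 1, "reads no longer than the repeat cannot bridge it"
    return G * math.log(1.0 / (2 * eps)) / (2 * (L - l_int - 1))

# illustrative values: ~59 Mb genome, longest interleaved repeat 2248 bp
N = min_reads_interleaved(G=59_000_000, L=2500, l_int=2248, eps=0.01)
print(int(N))
```

Note how sharply the bound blows up as L approaches ℓ_interleaved + 1: the denominator L − ℓ_interleaved − 1 goes to zero, producing the vertical asymptote in Fig. 2.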

Towards optimal assembly
We now begin our search for algorithms performing close to the lower bounds derived in the previous section. Algorithm assessment begins with obtaining deterministic sufficient conditions for success in terms of repeat-bridging. We then find the necessary N and L in order to satisfy these sufficient conditions with a target probability 1 − ε. The required coverage depth for each algorithm depends only on certain repeat statistics extracted from the DNA data, which may be thought of as sufficient statistics.

Greedy algorithm
The greedy algorithm, denoted Greedy, with pseudocode in Section C.1, is described as follows. Starting with the initial set of reads, the two fragments (i.e. subsequences) with maximum-length overlap are merged, and this operation is repeated until a single fragment remains.
Here the overlap of two fragments x, y is the longest suffix of x equal to a prefix of y, and merging two fragments results in a single longer fragment.
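The merge loop just described can be sketched directly. This is a quadratic-time toy (production assemblers use suffix structures for overlap detection), and it assumes duplicate reads are removed first:

```python
def overlap(x, y):
    """Length of the longest suffix of x equal to a prefix of y."""
    for k in range(min(len(x), len(y)), 0, -1):
        if x.endswith(y[:k]):
            return k
    return 0

def greedy_assemble(reads):
    """Toy Greedy: repeatedly merge the two fragments with the
    largest overlap until a single fragment remains."""
    frags = list(dict.fromkeys(reads))  # dedupe, keep order
    while len(frags) > 1:
        best = (-1, None, None)
        for i, x in enumerate(frags):
            for j, y in enumerate(frags):
                if i != j:
                    k = overlap(x, y)
                    if k > best[0]:
                        best = (k, i, j)
        k, i, j = best
        merged = frags[i] + frags[j][k:]
        frags = [f for t, f in enumerate(frags) if t not in (i, j)] + [merged]
    return frags[0]

print(greedy_assemble(["ATCGG", "CGGTA", "GTACC"]))  # ATCGGTACC
```

On this repeat-free toy input the largest-overlap-first rule recovers the original sequence regardless of read order, consistent with Theorem 2 below.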
Theorem 2. Greedy reconstructs the original sequence s if every repeat is bridged.
Theorem 2 allows us to determine the coverage depth required by Greedy: we must ensure that all repeats are bridged. By the union bound, P_error ≤ Σ_m a_m (p^unbridged_m)² (5), where p^unbridged_m is defined in (3) and a_m is the number of repeats of length m. Setting the right-hand side of (5) to ε ensures P_error ≤ ε and yields the performance curve of Greedy in Fig. 2. Note that the repeat statistics {a_m} are sufficient to compute this curve.
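The union bound for Greedy can be evaluated directly from the repeat statistics {a_m}. The sketch below uses the Poisson bridging probability and purely hypothetical statistics:

```python
import math

def greedy_error_bound(G, L, N, a):
    """Union bound on Greedy's failure probability:
    sum_m a[m] * p_unbridged(m)^2, where p_unbridged(m) =
    exp(-(N/G)(L-m-1)) for m < L-1 and 1 otherwise.
    a maps repeat length m -> number of repeats of that length."""
    total = 0.0
    for m, am in a.items():
        p = math.exp(-(N / G) * (L - m - 1)) if m < L - 1 else 1.0
        total += am * p * p
    return total

# hypothetical repeat statistics: 100 repeats of length 50, one of length 400
a = {50: 100, 400: 1}
print(greedy_error_bound(G=1_000_000, L=500, N=100_000, a=a))
```

As the example shows, the bound is dominated by the longest repeats: the single length-400 repeat contributes essentially all of the error probability here.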
Greedy requires L > ℓ_repeat + 1, whereas the lower bound has its asymptote at L = ℓ_interleaved + 1. In chromosome 19, for instance, there is a large difference between ℓ_interleaved = 2248 and ℓ_repeat = 4092, and in Fig. 2 we see a correspondingly large gap. Greedy is evidently sub-optimal in handling interleaved repeats. Its strength, however, is that once the reads are slightly longer than ℓ_repeat, coverage of the sequence is sufficient for correct reconstruction. Thus if ℓ_repeat ≈ ℓ_interleaved, then Greedy is close to optimal.

K-mer algorithms
The greedy algorithm fails when there are unbridged repeats, even if there are no unbridged interleaved repeats, and therefore requires a read length much longer than that required by Ukkonen's condition. As we will see, K-mer algorithms do not have this limitation.

Background
In the introduction we mention Sequencing by Hybridization (SBH), for which Ukkonen's condition was originally introduced. In the SBH setting, an optimal algorithm matching Ukkonen's condition is known, due to Pevzner [21].
Pevzner's algorithm is based on finding an appropriate cycle in a K-mer graph (also known as a de Bruijn graph) with K = L − 1 (see e.g. [1] for an overview). A K-mer graph is formed by first creating a node in the graph for each unique K-mer (length-K subsequence) in the set of reads, and then adding an edge with overlap K − 1 between any two nodes representing K-mers that are adjacent in a read, i.e. offset by a single nucleotide. Edges thus correspond to unique (K + 1)-mers in s, and paths correspond to longer subsequences obtained by merging the constituent nodes. There exists a cycle corresponding to the original sequence s, and reconstruction entails finding this cycle.
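A minimal construction of this graph from reads (dictionary-of-sets sketch; edge multiplicities and overlap weights are omitted):

```python
from collections import defaultdict

def kmer_graph(reads, K):
    """Build the K-mer (de Bruijn) graph: nodes are K-mers present in
    the reads; an edge joins two K-mers that are adjacent within some
    read, i.e. offset by a single base, so each edge is a (K+1)-mer."""
    edges = defaultdict(set)
    for r in reads:
        for i in range(len(r) - K):
            edges[r[i:i + K]].add(r[i + 1:i + K + 1])
    return edges

g = kmer_graph(["ATCGA", "TCGAT"], K=3)
print(sorted(g["TCG"]))  # ['CGA']
```

Note that the two overlapping reads contribute the same edge TCG→CGA only once, exactly as in the (K+1)-spectrum view discussed below.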
As is common, we will replace edges corresponding to an unambiguous path by a single node (cf. Fig. 6). Since the subsequences at some nodes are now longer than K, this is no longer a K-mer graph, and we call the more general graph a sequence graph. The simplified graph is called the condensed sequence graph. The condensed graph has the useful property that if the original sequence s is reconstructible, then s is determined by a unique Eulerian cycle: Theorem 3. Let G_0 be the K-mer graph constructed from the (K + 1)-spectrum S_{K+1} of s, and let G be the condensed sequence graph obtained from G_0. If Ukkonen's condition is satisfied, i.e. there are no triple or interleaved repeats of length at least K, then there is a unique Eulerian cycle C in G, and C corresponds to s.
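The Eulerian-cycle step can be sketched with Hierholzer's algorithm, here on a plain adjacency-list multigraph (the mapping from sequence-graph node labels back to bases is omitted):

```python
def eulerian_cycle(adj, start):
    """Hierholzer's algorithm on a multigraph given as
    {node: list of successors}; consumes edges while walking.
    Assumes an Eulerian cycle exists (in-degree == out-degree
    at every node and the graph is connected)."""
    adj = {u: list(vs) for u, vs in adj.items()}  # local, mutable copy
    stack, cycle = [start], []
    while stack:
        u = stack[-1]
        if adj.get(u):
            stack.append(adj[u].pop())
        else:
            cycle.append(stack.pop())
    return cycle[::-1]

# toy 2-node multigraph with the single cycle a -> b -> a
print(eulerian_cycle({"a": ["b"], "b": ["a"]}, "a"))  # ['a', 'b', 'a']
```

Theorem 3 is what licenses this step: when Ukkonen's condition holds, the Eulerian cycle is unique, so any correct traversal yields s.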
Theorem 3 characterizes, deterministically, the values of K for which reconstruction from the (K + 1)-spectrum is possible. We proceed with the application of the K-mer graph approach to shotgun sequencing data.

Basic K-mer algorithm
Starting with Idury and Waterman [6], and then Pevzner et al.'s [22] EULER algorithm, most current assembly algorithms for shotgun sequencing are based on the K-mer graph. Idury and Waterman [6] made the key observation that SBH with subsequences of length K + 1 can be emulated by shotgun sequencing if each read overlaps the subsequent read by at least K: the set of all (K + 1)-mers within the reads then equals the (K + 1)-spectrum S_{K+1}. The resulting algorithm DeBruijn, which consists of constructing the K-mer graph from the (K + 1)-spectrum observed in the reads, condensing the graph, and then identifying an Eulerian cycle, succeeds under the following sufficient conditions: (a) K > ℓ_interleaved, (b) K > ℓ_triple, and (c) each read overlaps the subsequent read by at least K.
The performance of DeBruijn is plotted in Fig. 2. DeBruijn significantly improves on Greedy by obtaining the correct first-order performance: given sufficiently many reads, the read length L may be decreased to max{ℓ_triple, ℓ_interleaved} + 1. Still, the number of reads required to approach this critical length is far above the lower bound. The following subsection pursues reducing K in order to reduce the required number of reads.

Improved K-mer algorithms
Algorithm DeBruijn ignores a lot of information contained in the reads, and indeed all of the K-mer based algorithms proposed by the sequencing community (including [6], [22], [24], [4], [10], [29]) use the read information to a greater extent than the naive DeBruijn algorithm. Better use of the read information, as described below in algorithms SimpleBridging and MultiBridging, will allow us to relax the condition K > max{ℓ_interleaved, ℓ_triple} for success of DeBruijn, which in turn reduces the high coverage depth required by Condition (c).
Existing algorithms use read information in a variety of distinct ways to resolve repeats. For instance, Pevzner et al. [22] observe that for graphs where each edge has multiplicity one, if one copy of a repeat is bridged, the repeat can be resolved through what they call a "detachment". The algorithm SimpleBridging described below is very similar, and resolves repeats with two copies if at least one copy is bridged.
Meanwhile, other algorithms are better suited to higher edge multiplicities due to higher-order repeats; IDBA (Iterative DeBruijn Assembler) [19] creates a series of K-mer graphs, each with larger K, and at each step uses not just the reads to identify adjacent K-mers, but also all the unbridged paths in the K-mer graph with smaller K. Although not stated explicitly in their paper, we observe here that if all copies of every repeat are bridged, then IDBA correctly reconstructs.
However, it is suboptimal to require that all copies of every repeat up to the maximal K be bridged. We introduce MultiBridging, which combines the aforementioned ideas to simultaneously allow for single-bridged double repeats, triple repeats in which all copies are bridged, and unbridged non-interleaved repeats.

SimpleBridging
SimpleBridging improves on DeBruijn by resolving bridged 2-repeats (i.e. repeats with exactly two copies in which at least one copy is bridged by a read). Condition (a) K > ℓ_interleaved for success of DeBruijn (ensuring that no interleaved repeats appear in the initial K-mer graph) is updated to require only that there be no unbridged interleaved repeats, which matches the lower bound. With this change, Condition (b) K > ℓ_triple forms the bottleneck for typical DNA sequences. Thus SimpleBridging is optimal with respect to interleaved repeats, but it is suboptimal with respect to triple repeats.
SimpleBridging deals with repeats by performing surgery on certain nodes in the sequence graph. In the sequence graph, a repeat corresponds to a node we call an X-node: a node with in-degree and out-degree each at least two (e.g. Fig. 7). A self-loop adds one each to the in-degree and out-degree. The cycle C(s) traverses each X-node at least twice, so X-nodes correspond to repeats in s. We call an X-node traversed exactly twice a 2-X-node; these nodes correspond to 2-repeats, and are said to be bridged if the corresponding repeat in s is bridged.
In the repeat resolution step of SimpleBridging (illustrated in Fig. 7), bridged 2-X-nodes are duplicated in the graph and incoming and outgoing edges are inferred using the bridging read, removing possible ambiguity. By the union bound, P_error ≤ Σ_{m,n} b_{m,n} (p^unbridged_m)² (p^unbridged_n)² (7), where b_{m,n} is the number of interleaved repeat pairs in which one repeat is of length m and the other is of length n. To ensure that condition (a) in the above theorem fails with probability no more than ε, the right-hand side of (7) is set to ε; this imposes a constraint on the coverage depth. Furthermore, conditions (b) and (c) imply that the normalized coverage depth satisfies c̄ ≥ 1/(1 − (ℓ_triple + 1)/L). These two constraints together yield the performance curve of SimpleBridging in Figure 2.

MultiBridging
We now turn to triple repeats. As previously observed, it can be challenging to resolve repeats with more than one copy [22], because an edge into the repeat may be paired with more than one outgoing edge. As discussed above, our approach here shares elements with IDBA [19]: we note that increasing the node length serves to resolve repeats. Unlike IDBA, we do not increase the node length globally.
As noted in the previous subsection, repeats correspond to nodes in the sequence graph we call X-nodes. Here the converse is false: not all repeats correspond to X-nodes. A repeat is said to be all-bridged if all of its copies are bridged, and an X-node is called all-bridged if the corresponding repeat is all-bridged. The requirement that triple repeats be all-bridged allows them to be resolved locally (Fig. 8). The X-node resolution procedure given in Step 4 of MultiBridging can be interpreted in the K-mer graph framework as increasing K locally so that repeats do not appear in the graph. In order to do this, we introduce the following notation for extending nodes: given an edge (v, q) with overlap weight a_{v,q}, let v→q denote v extended one base to the right along (v, q), i.e. v followed by the first base of q beyond the overlap (notation introduced in Sec. 2.2). Similarly, p→v denotes v extended one base to the left along (p, v), i.e. v preceded by the last base of p before the overlap. MultiBridging is described as follows.
Algorithm 1 MultiBridging. Input: reads R, parameter K. Output: sequence ŝ.
1. For each subsequence x of length K in a read, form a node with label x.
2. For each read, add edges between nodes representing adjacent K-mers in the read.
3. Condense the graph (cf. Fig. 6).
4. Bridging step (see Fig. 8): while there exists a bridged X-node v:
(i) For each edge (p_i, v) with weight a_{p_i,v}, create a new node u_i = p_i→v and an edge (p_i, u_i) with weight 1 + a_{p_i,v}. Similarly, for each edge (v, q_j), create a new node w_j = v→q_j and an edge (w_j, q_j).
(ii) If v has a self-loop (v, v) with weight a_{v,v}, add an edge between the two extensions of v along the self-loop, with weight a_{v,v} + 2.
(iii) Remove node v and all incident edges.
(iv) For each pair u_i, w_j adjacent in a read, add the edge (u_i, w_j). If exactly one u_i and exactly one w_j have no added edge, add the edge between them.
(v) Condense the graph.
5. Finishing step: find an Eulerian cycle in the graph and return the corresponding sequence.
A similar analysis as for SimpleBridging yields the performance curve of MultiBridging in Figure 2.
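The node-extension notation of Step 4(i) can be made concrete as follows (0-indexed Python strings; the exact indexing convention here is our assumption, chosen to match the overlap-weight description above):

```python
def extend_right(v, q, a_vq):
    """Step 4(i) sketch: v->q is v extended one base to the right along
    an edge (v, q) whose labels overlap in a_vq characters, i.e.
    v followed by the first base of q beyond the overlap."""
    return v + q[a_vq]

def extend_left(p, v, a_pv):
    """p->v: v extended one base to the left along (p, v), i.e. the
    last base of p before the overlap, prepended to v."""
    return p[len(p) - a_pv - 1] + v

# overlap 2: v = "TCG" and q = "CGAT" share "CG"
print(extend_right("TCG", "CGAT", 2))  # TCGA
print(extend_left("ATC", "TCG", 2))    # ATCG
```

Each extension lengthens a node label by a single base, which is exactly the "increase K locally" interpretation given above.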

Gap to lower bound
The only difference between the sufficient condition guaranteeing the success of MultiBridging and the necessary condition of the lower bound is the bridging condition on triple repeats: while MultiBridging requires bridging all three copies of a triple repeat, the necessary condition requires only bridging a single copy. When ℓ_triple is significantly smaller than ℓ_interleaved, the bridging requirement of interleaved repeats dominates that of triple repeats, and MultiBridging achieves very close to the lower bound. This occurs in hc19 and the majority of the datasets we looked at (see Fig. 9 and the plots in the supplementary material). A critical phenomenon occurs as L increases: for L < L_crit reconstruction is impossible; over a small critical window the bridging requirement of interleaved repeats (primarily the longest) dominates; and then for larger L, coverage suffices.
On the other hand, when ℓ_triple is comparable to or larger than ℓ_interleaved, MultiBridging has a gap in coverage depth to the lower bound (see for example Fig. 3). If we further assume that the longest triple repeat is dominant, then this gap can be calculated to be a factor of 3 log(3ε⁻¹)/log(ε⁻¹) ≈ 3.72 for ε = 10⁻². This gap occurs only within the critical window where the repeat-bridging constraint is active. Beyond the critical window, the coverage constraint dominates and MultiBridging is optimal. Further details are provided in the appendices.
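The quoted factor can be checked directly:

```python
import math

# gap factor between MultiBridging and the lower bound when the
# longest triple repeat dominates: 3 * log(3/eps) / log(1/eps)
eps = 1e-2
gap = 3 * math.log(3 / eps) / math.log(1 / eps)
print(round(gap, 2))  # 3.72
```

The factor of 3 reflects needing all three copies bridged rather than one; the logarithmic ratio accounts for the different failure-probability targets.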

Simulations and complexity
In order to verify the performance predictions, we implemented and ran the algorithms on simulated error-free reads from sequenced genomes. For each algorithm, we sampled (N, L) points predicted to give < 5% error, and recorded the number of times correct reconstruction was achieved out of 100 trials. Fig. 9 shows results for the three GAGE reference sequences.
We now estimate the run-time of MultiBridging. The algorithm has two phases: the K-mer graph formation step, and the repeat resolution step. The K-mer graph formation run-time can easily be bounded by O((L − K)NK), assuming O(K) look-up time for each of the (L − K)N K-mers observed in reads. This step is common to all K-mer graph based algorithms, so previous works to decrease the practical run-time or memory requirements are applicable.
The repeat resolution step depends on the repeat statistics and the choice of K. It can be loosely bounded as Σ_{ℓ=K}^{L} Σ_x L·d_x, where the second sum is over distinct maximal repeats x of length ℓ and d_x is the number of (not necessarily maximal) copies of the repeat x. The bound comes from the fact that each maximal repeat of length K < ℓ < L is resolved via exactly one bridged X-node, and each such resolution requires examining at most the L·d_x distinct reads that contain the repeat. We note that this quantity is at most L Σ_{ℓ=K}^{L} a_ℓ, and the latter is easily computable from our sufficient statistics.
For our data sets, with an appropriate choice of K, the bridging step is much simpler than the K-mer graph formation step: for R. sphaeroides we use K = 40 to get L Σ_{ℓ=K}^{L} a_ℓ = 412; in contrast, N > 22421 for the relevant range of L. Similarly, for hc14, using K = 300, L Σ_{ℓ=K}^{L} a_ℓ = 661 while N > 733550; for S. aureus, L Σ_{ℓ=K}^{L} a_ℓ = 558 while N > 8031.

Discussions and extensions
The notion of optimal shotgun assembly is not commonly discussed in the literature. One reason is that there is no universally agreed-upon metric of success. Another reason is that most of the optimization-based formulations of assembly have been shown to be NP-hard, including Shortest Common Superstring [3], [7], De Bruijn Superwalk [22], [12], and Minimum s-Walk on the string graph [14], [12]. Thus, it would seem that optimal assembly algorithms are out of the question from a computational perspective. What we show in this paper is that if the goal is complete reconstruction, then one can define a clear notion of optimality, and moreover there is a computationally efficient assembly algorithm (MultiBridging) that is near optimal for a wide range of DNA repeat statistics. So while the reconstruction problem may well be NP-hard, typical instances of the problem seem much easier than the worst case, a possibility already suggested by Nagarajan and Pop [17].
The MultiBridging algorithm is near optimal in the sense that, for a wide range of repeat statistics, it requires the minimum read length and minimum coverage depth to achieve complete reconstruction. However, since the repeat statistics of a genome to be sequenced are usually not known in advance, this minimum required read length and minimum required coverage depth may also not be known in advance. In this context, it would be useful for the MultiBridging algorithm to validate whether its assembly is correct. More generally, an interesting question is to seek algorithms which are not only optimal in their data requirements but also provide a measure of confidence in their assemblies.
How realistic is the goal of complete reconstruction given current-day sequencing technologies? The minimum read lengths L_crit required for complete reconstruction on the datasets we examined are typically on the order of 500-3000 base pairs (bp). This is substantially longer than the reads produced by Illumina, the currently dominant sequencing technology, whose reads are 100-200 bp long; however, other technologies produce longer reads. PacBio reads can be as long as several thousand base pairs, and as demonstrated by [8], the noise can be cleaned using Illumina reads to enable near-complete reconstruction. Thus our framework is already relevant to some of the current cutting-edge technologies. To make the framework more relevant to short-read technologies such as Illumina, an important direction is to incorporate mate-pairs into the read model, which can help resolve long repeats with short reads. Other extensions to the basic shotgun sequencing model include the following.

Heterogeneous read lengths: this occurs in technologies where the read length is random (e.g. PacBio) or when reads from multiple technologies are combined. Generalized Ukkonen's conditions and the sufficient conditions of MultiBridging extend verbatim to this case; only the computation of the bridging probability (3) has to be slightly modified.

Non-uniform read coverage: again, only the computation of the bridging probability has to be modified. One issue of interest is whether reads are sampled less frequently from long repeat regions; if so, our framework can quantify the resulting performance hit.

Double strand: DNA is double-stranded and consists of a length-G sequence u and its reverse complement ũ. Each read is sampled either from u or from ũ. This more realistic scenario can be mapped into our single-strand model by defining s as the length-2G concatenation of u and ũ, and transforming each read into itself and its reverse complement, so that there are 2N reads. Generalized Ukkonen's conditions hold verbatim for this problem, and MultiBridging can be applied, with the slight modification that instead of looking for a single Eulerian path it should look for two Eulerian paths, one for each component of the sequence graph after repeat resolution. An interesting aspect of this model is that, in addition to interleaved repeats on the single strand u, reverse-complement repeats on u also induce interleaved repeats on the sequence s.
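The double-strand reduction just described is simple to express in code. The following is a minimal illustrative sketch; the function names are ours, not from the paper's software:

```python
# Map the double-stranded model (u and its reverse complement) into the
# single-strand model: s is the length-2G concatenation, and each of the
# N reads is transformed into itself plus its reverse complement (2N reads).

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Reverse complement of a DNA string over the alphabet ACGT."""
    return seq.translate(COMPLEMENT)[::-1]

def to_single_strand_model(u: str, reads: list[str]) -> tuple[str, list[str]]:
    """Return the length-2G sequence s = u + u~ and the 2N transformed reads."""
    s = u + reverse_complement(u)
    expanded = []
    for r in reads:
        expanded.append(r)
        expanded.append(reverse_complement(r))
    return s, expanded
```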

B Lower bounds on coverage depth
The lower bounds are based on a generalization of Ukkonen's condition to shotgun sequencing, as described in Theorem 1. The proof of Theorem 1 follows by a straightforward modification of the argument in [26] and is omitted here.

Theorem 1. Given a DNA sequence s and a set of reads, if there is a pair of interleaved repeats or a triple repeat whose copies are all unbridged, then there is another sequence s′ of the same length under which the likelihood of observing the reads is the same.

B.1 Lower bound due to interleaved repeats
In this section we derive a necessary condition on N and L in order for the probability of correct reconstruction to be at least 1 − ε.

Recall that a pair of repeats, one at positions t_1, t_3 with t_1 < t_3 and the second at positions t_2, t_4 with t_2 < t_4, is interleaved if t_1 < t_2 < t_3 < t_4 or t_2 < t_1 < t_4 < t_3. From the DNA we may extract a (symmetric) matrix of interleaved repeat statistics b_mn, the number of pairs of interleaved repeats of lengths m and n.

We proceed by fixing both N and L and checking whether or not unbridged interleaved repeats occur with probability higher than ε. We break repeats into two categories: repeats of length at least L − 1 (these are always unbridged) and repeats of length less than L − 1 (these are sometimes unbridged). We assume that L > ℓ_interleaved + 1, or equivalently b_ij = 0 for all i, j ≥ L − 1, since otherwise there are (with certainty) unbridged interleaved repeats and Ukkonen's condition is violated.
First, we estimate the probability of error due to interleaved repeats of lengths i < L − 1 and j ≥ L − 1. The repeat of length j is too long to be bridged, so an error occurs if the repeat of length i is unbridged. For a repeat, as long as the two copies' locations are not too nearby⁴, each copy is bridged independently, and hence the probability that both copies of the repeat of length i are unbridged is e^{−2(N/G)(L−i−1)}. (Recall that a repeat is unbridged if both copies are unbridged.) A union bound estimate⁵ gives a probability of error

P(E) ≈ Σ_{m<L−1, n≥L−1} b_mn e^{−2(N/G)(L−m−1)} = γ_1 e^{−2(N/G)L}.

Requiring the error probability to be less than ε and solving for L gives the necessary condition

L ≥ (G/2N) log(γ_1/ε),  (9)

where γ_1 := Σ_{m<L−1, n≥L−1} b_mn e^{2(N/G)(m+1)} is a simple function of the interleaved repeat statistic b_mn.

We now estimate the probability of error due to interleaved repeat pairs in which both repeats are shorter than L − 1. In this case only one repeat of each interleaved pair must be bridged, so an error requires both repeats (all four copies) to be unbridged. Again a union bound estimate gives

P(E) ≈ Σ_{m,n<L−1} b_mn e^{−2(N/G)(L−m−1)} e^{−2(N/G)(L−n−1)} = γ_2 e^{−4(N/G)L}.

Requiring the error probability to be less than ε gives the necessary condition

L ≥ (G/4N) log(γ_2/ε),  (10)

where γ_2 := Σ_{m,n<L−1} b_mn e^{2(N/G)(m+n+2)} is, similarly to γ_1, computed from b_mn.
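The two conditions above can be checked numerically from the interleaved repeat statistics. The sketch below is our own illustration, not the paper's software; the encoding of b_mn as a dict keyed by pairs (m, n) with m ≤ n is an assumption. It returns the smallest L satisfying both necessary conditions at a given N.

```python
import math

def gamma1(b, N, G, L):
    """gamma_1: pairs whose shorter repeat has length m < L-1 and longer n >= L-1."""
    return sum(cnt * math.exp(2 * (N / G) * (m + 1))
               for (m, n), cnt in b.items() if m < L - 1 and n >= L - 1)

def gamma2(b, N, G, L):
    """gamma_2: pairs with both repeat lengths below L-1 (m <= n, so test n)."""
    return sum(cnt * math.exp(2 * (N / G) * (m + n + 2))
               for (m, n), cnt in b.items() if n < L - 1)

def min_read_length_interleaved(b, N, G, eps):
    """Smallest L with L >= (G/2N) log(gamma1/eps) and L >= (G/4N) log(gamma2/eps).
    Starts at L > l_interleaved + 1, as required by Ukkonen's condition."""
    L = 2 + max((m for (m, n) in b), default=0)  # m is the shorter length
    while True:
        g1, g2 = gamma1(b, N, G, L), gamma2(b, N, G, L)
        ok1 = g1 <= eps or L >= (G / (2 * N)) * math.log(g1 / eps)
        ok2 = g2 <= eps or L >= (G / (4 * N)) * math.log(g2 / eps)
        if ok1 and ok2:
            return L
        L += 1
```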

B.2 Lower bound due to triple repeats
We translate the generalized Ukkonen's condition prohibiting unbridged triple repeats into a condition on L and N. Let c_m denote the number of triple repeats of length m. Then a union bound estimate gives

P(E) ≈ Σ_m c_m e^{−3(N/G)(L−m−1)} = γ_3 e^{−3(N/G)L}.

Requiring P(E) ≤ ε and solving for L gives

L ≥ (G/3N) log(γ_3/ε),

where γ_3 := Σ_m c_m e^{3(N/G)(m+1)}.
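As an illustration (our own sketch, with names of our choosing), the triple-repeat condition can be evaluated the same way from the statistics c_m:

```python
import math

def min_read_length_triple(c, N, G, eps):
    """Smallest integer L with L >= (G/3N) log(gamma3/eps), where
    gamma3 = sum_m c_m exp(3(N/G)(m+1)) and c maps repeat length m -> count c_m."""
    gamma3 = sum(cnt * math.exp(3 * (N / G) * (m + 1)) for m, cnt in c.items())
    if gamma3 <= eps:
        return 1  # the condition is vacuous
    return max(1, math.ceil((G / (3 * N)) * math.log(gamma3 / eps)))
```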
Remark 7. As discussed here and in Section 2, if the DNA sequence is not covered by the reads, or if there are unbridged interleaved or triple repeats, then reconstruction is not possible. But there is another situation which must be ruled out. Without knowing its length a priori, it is impossible to know how many copies of the DNA sequence are actually present: if the sequence s to be assembled consists of multiple concatenated copies of a shorter sequence, rather than just one copy, the probability of observing any set of reads is the same. Since it is unlikely that a true DNA sequence consists of the same sequence repeated multiple times, we assume throughout the paper that this is not the case. Equivalently, if s does consist of multiple concatenated copies of a shorter sequence, we are content to reconstruct a single copy. Knowledge of the approximate length of s, if available, would then allow full reconstruction.

C Proofs for algorithms

C.1 Proof of Theorem 2 (Greedy)
The greedy algorithm's underlying data structure is the overlap graph, where each node represents a read and each directed edge (x, y) is labeled with the overlap ov(x, y) (defined as the length of the shared suffix/prefix) between the incident nodes' reads. For a node v, the in-degree [out-degree] is the number of edges in the graph directed towards [away from] v. The greedy algorithm merges reads in order of decreasing overlap.

Proof. We prove the contrapositive. Suppose Greedy makes its first error in merging reads r_i and r_j with overlap ov(r_i, r_j) = ℓ. Now, if r_j is the successor of r_i, then the error is due to incorrectly aligning the reads; the other case is that r_j is not the successor of r_i. In the first case, the subsequence s_{t_j}^ℓ is repeated at location t_i + L − ℓ, and no read bridges either repeat copy. In the second case, there is a repeat s_{t_j}^ℓ = s_{t_i+L−ℓ}^ℓ. If s_{t_i+L−ℓ}^ℓ is bridged by some read r_k, then r_i has overlap at least ℓ + 1 with r_k, implying that read r_i had already found its successor before stage ℓ (either r_k or some other read with even higher overlap). A similar argument shows that s_{t_j}^ℓ cannot be bridged; hence there is an unbridged repeat.
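The merge rule analyzed in the proof can be illustrated as follows. This is a simple quadratic-time sketch of greedy merging by decreasing overlap for a linear sequence, not the paper's overlap-graph implementation; all names are ours.

```python
def overlap(x: str, y: str) -> int:
    """Length of the longest proper suffix of x that is a prefix of y."""
    for k in range(min(len(x), len(y)) - 1, 0, -1):
        if x[-k:] == y[:k]:
            return k
    return 0

def greedy_assemble(reads: list[str]) -> str:
    """Repeatedly merge the pair of strings with the largest overlap."""
    strings = list(dict.fromkeys(reads))  # drop duplicate reads
    while len(strings) > 1:
        # pick the pair (i, j) maximizing the overlap of strings[i] into strings[j]
        _, i, j = max((overlap(x, y), i, j)
                      for i, x in enumerate(strings)
                      for j, y in enumerate(strings) if i != j)
        merged = strings[i] + strings[j][overlap(strings[i], strings[j]):]
        strings = [t for idx, t in enumerate(strings) if idx not in (i, j)]
        strings.append(merged)
    return strings[0]
```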

C.2.1 Background
We give some mathematical background leading to the proof of Theorem 3 (restated below).

Lemma 8. Fix an arbitrary K and form the K-mer graph from the (K+1)-spectrum S_{K+1}. The sequence s corresponds to a unique cycle C(s) traversing each edge at least once.

To prove the lemma, note that all (K+1)-mers in s correspond to edges, and adjacent (K+1)-mers in s are represented by adjacent edges. An induction argument shows that s corresponds to a cycle. The cycle traverses all the edges, since each edge represents a unique (K+1)-mer in s.
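The construction in Lemma 8 is straightforward to mirror in code. Below is a minimal sketch (function names are ours) building the K-mer graph from the (K+1)-spectrum of a circular sequence: each (K+1)-mer contributes an edge from its length-K prefix to its length-K suffix.

```python
from collections import defaultdict

def spectrum(s: str, k: int) -> list[str]:
    """All length-k subsequences of the circular sequence s, with multiplicity."""
    ext = s + s[:k - 1]  # wrap around to model a cycle
    return [ext[i:i + k] for i in range(len(s))]

def kmer_graph(spectrum_k1: list[str]) -> dict[str, list[str]]:
    """Adjacency lists of the K-mer graph built from a (K+1)-spectrum."""
    adj = defaultdict(list)
    for w in spectrum_k1:
        adj[w[:-1]].append(w[1:])  # edge: K-prefix -> K-suffix
    return dict(adj)
```

Walking the cycle corresponding to s traverses each edge once per occurrence of its (K+1)-mer, matching the multiplicities discussed next.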
In both SBH and shotgun sequencing, the number of times each edge e is traversed by C(s) (henceforth called the multiplicity of e) is unknown a priori, and finding this number is part of the reconstruction task. Repeated (K+1)-mers in s correspond to edges in the K-mer graph traversed more than once by C(s), i.e. having multiplicity greater than one. In order to estimate the multiplicities, previous works seek a solution to the so-called Chinese Postman Problem (CPP), in which the goal is to find a cycle of the shortest total length traversing every edge in the graph (see e.g. [20], [6], [22], [11]). It is not obvious under what conditions the CPP solution correctly assigns multiplicities in agreement with C(s). For our purposes, as we will see in Theorem 3, the multiplicity estimation problem can be sidestepped (thereby avoiding solving CPP) through a modification to the K-mer graph.

Ignoring the issue of edge multiplicities for a moment, Pevzner [21] showed for the SBH model that if the edge multiplicities are known, with multiple copies of each edge included according to the multiplicities, and moreover Ukkonen's condition is satisfied, then there is a unique Eulerian cycle in the K-mer graph and this Eulerian cycle corresponds to the original sequence. (An Eulerian cycle is a cycle traversing each edge exactly once.) Pevzner's algorithm is thus to find an Eulerian cycle and read off the corresponding sequence. Both steps can be done efficiently.

Lemma 9 (Pevzner [21]). In the SBH setting, if the edge multiplicities are known, then there is a unique Eulerian cycle in the K-mer graph with K = L − 1 if and only if there are no unbridged interleaved repeats or unbridged triple repeats.
Most practical algorithms (e.g. [6], [10], [29]) condense unambiguous paths (called unitigs by Myers [15] in a slightly different setting) for computational efficiency. The more significant benefit for us, as shown in Theorem 3, is that if Ukkonen's condition is satisfied, then condensing the graph obviates the need to estimate multiplicities. Condensing a K-mer graph results in a graph of the following type.

Definition 10 (Sequence graph). A sequence graph is a graph in which each node is labeled with a subsequence, and each edge (u, v) is labeled with an overlap a_uv such that the subsequences u and v overlap by a_uv (the overlap is not necessarily maximal). In other words, an edge label a_uv on e = (u, v) indicates that the a_uv-length suffix of u is equal to the a_uv-length prefix of v.

The sequence graph generalizes both the overlap graph used by Greedy in Section 3.1 (nodes correspond to reads, and edge overlaps are maximal overlaps) and the K-mer graphs discussed in this section (nodes correspond to K-mers, and edge overlaps are K − 1).
In order to speak concisely about concatenated sequences in the sequence graph, we extend the notation s_t^ℓ (denoting the length-ℓ subsequence of the DNA sequence s starting at position t) which was introduced in Section 2.2; we abuse notation slightly and write s_t^end to indicate the subsequence of s starting at position t and extending to the end of s.

We will perform two basic operations on the sequence graph. For an edge e = (u, v) with overlap a_uv, merging u and v along e produces the concatenation u_1^end v_{a_uv+1}^end. Contracting an edge e = (u, v) entails two steps (c.f. Fig. 6): first, merging u and v along e to form a new node w = u_1^end v_{a_uv+1}^end; second, edges to u are replaced with edges to w, and edges from v are replaced by edges from w. We will only contract edges (u, v) with d_out(u) = d_in(v) = 1. The condensed graph is defined next.

Definition 11 (Condensed sequence graph).
The condensed sequence graph replaces unambiguous paths by single nodes. Concretely, any edge e = (u, v) with d_out(u) = d_in(v) = 1 is contracted, and this is repeated until no candidate edges remain.

For a path P = v_1, v_2, ..., v_q in the original graph, the corresponding path in the condensed graph is obtained by contracting an edge (v_i, v_{i+1}) whenever it is contracted in the graph, replacing the node v_1 by w whenever an edge (u, v_1) is contracted to form w, and similarly for the final node v_q. It is impossible for an intermediate node v_i, 2 ≤ i < q, to be merged with a node outside of P, as this would violate the condition d_out(u) = d_in(v) = 1 for edge contraction in Defn. 11.
In the condensed sequence graph G obtained from a sequence s, nodes correspond to subsequences via their labels, and paths in G correspond to subsequences of s via merging the constituent nodes along the path. If the subsequence corresponding to a node v appears twice or more in s, we say that v corresponds to a repeat. Conversely, length-ℓ subsequences of s with ℓ ≥ K correspond to paths of length ℓ − K + 1 in the K-mer graph, and thus by the previous paragraph also to paths in the condensed graph G.
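The contraction rule of Defn. 11, specialized to a K-mer graph in which every edge overlap equals K − 1, can be sketched as follows. The edge-list representation and function name are our own illustrative choices:

```python
from collections import Counter

def condense(edges: list[tuple[str, str]], overlap: int) -> list[tuple[str, str]]:
    """Repeatedly contract an edge (u, v) with d_out(u) = d_in(v) = 1 and u != v,
    merging the labels into w = u + v[overlap:], until no such edge remains."""
    edges = list(edges)
    while True:
        out_deg = Counter(u for u, _ in edges)
        in_deg = Counter(v for _, v in edges)
        target = next(((u, v) for u, v in edges
                       if u != v and out_deg[u] == 1 and in_deg[v] == 1), None)
        if target is None:
            return edges
        u, v = target
        w = u + v[overlap:]  # merge labels along the contracted edge
        new_edges = []
        for a, b in edges:
            if (a, b) == (u, v):
                continue  # the contracted edge disappears
            a2 = w if a == v else a  # edges from v now leave w
            b2 = w if b == u else b  # edges into u now enter w
            new_edges.append((a2, b2))
        edges = new_edges
```

On the 2-mer graph of the circular sequence AACGT, for instance, condensing collapses the whole cycle into a single node with a self-loop.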
We record a few simple facts about the condensed sequence graph obtained from a K-mer graph.
Lemma 12. Let G_0 be the K-mer graph constructed from the (K+1)-spectrum of s and let C_0 = C_0(s) be the cycle corresponding to s. In the condensed graph G, let C be the cycle obtained from C_0 by contracting the same edges as those contracted in G_0.

1. Edges in G_0 can be contracted in any order, resulting in the same graph G, so the condensed graph is well-defined. Similarly, C is well-defined.

Proof. To begin, let C_0 be the cycle corresponding to s in the original K-mer graph G_0. We argue that every edge (u, v) traversed twice by C_0 in the K-mer graph G_0 has been contracted in the condensed graph G, and hence in C. Note that the cycle C_0 does not traverse any node three times in G_0, for this would imply the existence of a triple repeat of length K, violating the hypothesis of the Lemma. It follows that the node u cannot have two outgoing edges in G_0, as u would then be traversed three times; similarly, v cannot have two incoming edges. Thus d_out(u) = d_in(v) = 1 and, as prescribed in Defn. 11, the edge (u, v) has been contracted.

C.2.2 Proofs for SimpleBridging
Since bridging reads extend one base past either end of a repeat, it will be convenient to use the following notation for extending sequences. Given an X-node v with an incoming edge (p, v) and an outgoing edge (v, q), let

v_→q = v q_{a_vq+1}^1 ,  p_→v = p_{end−a_pv}^1 v .  (13)

Here v_→q denotes the subsequence v appended with the single next base in the merging of v and q, and p_→v denotes the subsequence v prepended with the single previous base in the merging of p and v. For example, if v = ATTC, p = TCAT, a_pv = 2, q = TTCGCC, and a_vq = 3, then v_→q = ATTCG, p_→v = CATTC, and p_→v_→q = CATTCG.
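The extension notation (13) in code form, reproducing the worked example above (a small sketch; the function names are ours):

```python
def extend_right(v: str, q: str, a_vq: int) -> str:
    """v_->q : v appended with the single next base in the merging of v and q."""
    return v + q[a_vq]  # q[a_vq] is position a_vq + 1 in 1-indexed notation

def extend_left(p: str, v: str, a_pv: int) -> str:
    """p->_v : v prepended with the single previous base in the merging of p and v."""
    return p[len(p) - a_pv - 1] + v
```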
The idea is that a bridging read is consistent with only one pair p_→v and v_→q, and thus allows matching up the edge (p, v) with the edge (v, q). This is recorded in the following lemma.

Lemma 13. Suppose C corresponds to a sequence in a condensed sequence graph G. If a read r bridges an X-node v, then there are unique edges (p, v) and (v, q) such that p_→v and v_→q are adjacent in r.
SimpleBridging is described as follows.
Algorithm 3 SimpleBridging. Input: reads R, parameter K. Output: sequence ŝ.

K-mer steps 1-3:
1. For each subsequence x of length K in a read, form a node with label x.
2. For each read, add edges between nodes representing adjacent K-mers in the read.
3. Condense the graph as described in Defn. 11.
4. Bridging step (see Fig. 7): While there exists an X-node v with d_in(v) = d_out(v) = 2 bridged by some read r:
  (i) Remove v and the edges incident to it. Add duplicate nodes v_1, v_2.
  (ii) Choose the unique p_i and q_j such that p_i→v and v_→q_j are adjacent in r, and add edges (p_i, v_1) and (v_1, q_j). Choose the unused p_i and q_j, and add edges (p_i, v_2) and (v_2, q_j).
  (iii) Condense the graph.
5. Finishing step: Find an Eulerian cycle in the graph and return the corresponding sequence.

C.2.3 Proofs for MultiBridging
In this subsection we recall Theorem 6, which states sufficient conditions for correct reconstruction, and derive the corresponding coverage depth and read length required to meet a target probability of correct reconstruction. The subsection concludes with a proof that the sufficient conditions are correct. The conditions of Theorem 6 are: (a) all interleaved repeats are bridged, (b) all triple repeats are all-bridged, and (c) the sequence is covered by the reads.

Remark 14. Unlike the previous K-mer algorithms, DeBruijn and SimpleBridging, MultiBridging requires no parameter K to be specified. Implicitly, MultiBridging uses K = 1, which makes the condition that reads overlap by K equivalent to coverage of the genome.
Figure 2 plots the performance of MultiBridging, obtained by solving for the relationship between G, N, L, and ε needed to satisfy the conditions of Theorem 6. We first perform the requisite calculations and then prove the theorem.

Condition (a) was already dealt with in (9) and (10), and Condition (c) amounts to the requirement that N/N_LW ≥ 1.
We turn to Condition (b), that all triple repeats are all-bridged. Let c_m denote the number of triple repeats of length m. A union bound estimate, over triple repeats, of the probability that some triple repeat fails to be all-bridged gives

P(E) ≈ Σ_m 3 c_m e^{−(N/G)(L−m−1)} = γ′_3 e^{−(N/G)L},

and requiring P(E) ≤ ε and solving for L yields

L ≥ (G/N) log(γ′_3/ε),  (15)

where γ′_3 := Σ_m 3 c_m e^{(N/G)(m+1)} is computed from the triple repeat statistics c_m.

In order to understand the cost of all-bridging triple repeats, compared to simply bridging one copy as required by our lower bound, it is instructive to study the effect of the single longest triple repeat. Setting c_{ℓ_triple} = 1 and c_m = 0 for m ≠ ℓ_triple makes γ′_3 = 3 e^{(N/G)(ℓ_triple+1)} in (15), and solving for the coverage gives (16). Bridging the longest triple repeat, as shown in Section B.2, requires (17). Solving for N in equations (17) and (16) and taking the ratio shows that if the longest triple repeat is dominant, then for L slightly larger than ℓ_triple, MultiBridging needs a coverage depth approximately 3.72 times higher than required by our lower bound. The remainder of this subsection is devoted to proving Theorem 6.
We use m_C(v) to denote the multiplicity (traversal count) that a cycle C assigns to a node v. The multiplicity m_C(v) is also equal to the number of times the subsequence v appears in the sequence corresponding to C. For an edge e, we similarly let m_C(e) be the number of times C traverses the edge. The following key lemma relates node multiplicities to the existence of X-nodes.

Lemma 15. Let C be a cycle in a condensed sequence graph G, where G itself is not a cycle, traversing every edge at least once. If v is a node with maximum multiplicity at least 2, i.e. m_C(v) = max_{u∈G} m_C(u) ≥ 2, then v is an X-node. As a consequence, if m_C(v) ≥ 3 for some v, i.e. C traverses some node at least three times, then m_C(u) ≥ 3 for some X-node u.

Proof. Let v be a node with maximum multiplicity m_C(v) = max_{u∈G} m_C(u). We will show that v is an X-node, i.e. d_out(v) ≥ 2 and d_in(v) ≥ 2.

We prove that d_out(v) ≥ 2 by supposing that d_out(v) = 1 and deriving a contradiction. Denote the outgoing edge from v by e = (v, u), where u is distinct from v, since otherwise G is a cycle. If d_in(u) ≥ 2, then u must be traversed more times than v, contradicting the maximality of m_C(v); and if d_in(u) = 1, then the existence of the edge e contradicts the fact that G is condensed. The argument showing that d_in(v) ≥ 2 is symmetric.
Proof of Theorem 6. We assume that all triple repeats are all-bridged, that there are no unbridged interleaved repeats, and that all reads overlap their successors by at least 1 base pair. We wish to show that MultiBridging returns the original sequence.

Consider the condensed sequence graph G_0 constructed in steps 1-3 of MultiBridging. Suppose all X-nodes that are either all-bridged or correspond to bridged 2-repeats have been resolved by repeated application of the procedure in step 4 of MultiBridging, resulting in a condensed sequence graph G. We claim that 1) s corresponds to a cycle C in G traversing every edge at least once, 2) C is Eulerian, and 3) C is the unique Eulerian cycle in G.
Proof of Claim 1. Let G_n be the graph after n resolution steps, and suppose that C_n is a cycle in G_n corresponding to the sequence s and traversing all edges. We will show that there exists a cycle C_{n+1} in G_{n+1} corresponding to s and traversing all edges, and that G_t = G for some finite t; by induction, there then exists a cycle C in G corresponding to s and traversing all edges. The base case n = 0 was shown in Lemma 8. Moving on to arbitrary n > 0, let v be an X-node in G_n labeled as in Fig. 6. The X-node resolution step is constructed precisely to preserve the existence of a cycle corresponding to s. Each traversal of v by the cycle C_n assigns an incoming edge (p_i, v) to an outgoing edge (v, q_j), and the resolution step correctly determines this pairing by the assumption on bridging reads.

Note that all X-nodes in the graph G_{n+1} continue to correspond to repeats in s. To see that the process terminates, observe that L(G_i) is strictly decreasing in i. Thus s corresponds to a cycle C in G traversing each edge at least once.
Proof of Claim 2. We next show that C is an Eulerian cycle. If G is itself a cycle, and s is not formed by concatenating multiple copies of a shorter subsequence (assumed not to be the case; see the discussion at the end of Section 2), then C traverses G exactly once and is an Eulerian cycle. Otherwise, if G is not a cycle, then we may apply Lemma 15 to see that any node v with m_C(v) ≥ 3 implies the existence of an X-node u with m_C(u) ≥ 3. Node u must be all-bridged, by hypothesis, which means that an additional X-node resolution step can be applied to G, a contradiction. Thus each node v in G has multiplicity m_C(v) ≤ 2.

We can now argue that no edge e = (u, v) is traversed twice by C in the condensed sequence graph G, as such an edge would have been contracted. Suppose m_C(e) ≥ 2. The node u cannot have two outgoing edges, as this would imply m_C(u) ≥ 3; similarly, v cannot have two incoming edges. Thus d_out(u) = d_in(v) = 1, but then by Defn. 11 the edge e = (u, v) would have been contracted.
Proof of Claim 3. It remains to show that there is a unique Eulerian cycle in G. All X-nodes in G must be unbridged 2-X-nodes (corresponding to 2-repeats in s), as all other X-nodes were assumed to be bridged and have thus been resolved in G.

We will map the sequence s to another sequence s′, allowing us to use the characterization of Lemma 9 for SBH with known multiplicities. Denote by G′ the graph obtained by relabeling each node in G with a single unique symbol (no matter the original node label length) and setting all edge overlaps to 0. Through the relabeling, C corresponds to a cycle C′ in G′; let s′ be the sequence corresponding to C′. Writing S′_2 for the 2-spectrum of s′, the graph G′ is by construction precisely the 1-mer graph created from S′_2, and there is a one-to-one correspondence between X-nodes in G′ and unbridged repeats in s′. Through the described mapping, every unbridged repeat in s′ maps to an unbridged repeat in s, with the order of repeats preserved.

There are multiple Eulerian cycles in G only if there are multiple Eulerian cycles in G′, since the graphs have the same topology, and by Lemma 9 the latter occurs only if there are unbridged interleaved repeats in s′, which by the correspondence in the previous paragraph implies the existence of unbridged interleaved repeats in s.

C.3 Truncation estimate for bridging repeats (Greedy and MultiBridging)
The repeat statistics a_m and c_m used in the algorithm performance curves are potentially overestimates. This is because a large repeat family (one with a large number of copies f) contributes (f choose 2) ≈ f²/2 to a_m and (f choose 3) ≈ f³/6 to c_m. We focus here on deriving an estimate of the N and L required for bridging all repeats with probability 1 − ε. This upper bound reduces the sensitivity to large families of short repeats. The analogous derivation for all-bridging all triple repeats is very similar and is omitted.

Suppose there are a_m repeats of length m. The probability that some repeat is unbridged is approximately, by the union bound estimate,

P(E) ≈ Σ_m a_m e^{−2λ(L−m)}.  (21)

Requiring P(E) ≤ ε and solving for L gives

L ≥ (G/2N) log(γ/ε),  (22)

where γ := Σ_m a_m e^{2(N/G)m}. Now, if a_m overcounts the number of repeats for small values of m, the bound in (22) might be loose. In order for each read to overlap the subsequent read by x nucleotides, with probability of failure ε, it suffices to take N as in the coverage condition (2) with L replaced by L − x. Thus, for any x < L, we may replace (22) by

L ≥ (G/2N) log(γ(x)/ε),  (24)

where γ(x) := Σ_{m>x} a_m e^{2(N/G)m}, and obtain a looser bound.
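The truncated estimate can be evaluated as follows (an illustrative sketch with our own names; a maps repeat length m to the count a_m):

```python
import math

def read_length_for_bridging(a, N, G, eps, x=0):
    """Read length L from the truncated union-bound estimate (24): only repeats
    of length m > x enter gamma(x) = sum_{m>x} a_m exp(2(N/G)m); repeats of
    length <= x are assumed handled by requiring successive reads to overlap by x."""
    lam = N / G
    gamma_x = sum(cnt * math.exp(2 * lam * m) for m, cnt in a.items() if m > x)
    if gamma_x <= eps:
        return x + 1
    return max(x + 1, math.ceil((G / (2 * N)) * math.log(gamma_x / eps)))
```

Truncating at x = 50 in the example below discounts a large family of length-10 repeats and substantially lowers the required L.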

D Critical window calculations

D.1 Window size if ℓ_interleaved > ℓ_triple
We focus here on the bound due to interleaved repeats (rather than triple repeats, treated subsequently), and furthermore assume that the effect of the single largest interleaved repeat is dominant. In this case ℓ_interleaved = L_crit − 1 is the length of the shorter of the pair of interleaved repeats; let ℓ_1 be the length of the longer of the two. For L_crit < L ≤ ℓ_1 + 1, we are in the setting of (9) but with a redefined γ_1 = e^{2(N/G)(L_crit−1)}. Substituting this γ_1 into (9) and solving for N gives the constraint (26). Let L* be the value of L at which the curve described by constraint (26) intersects the Lander-Waterman coverage value, i.e. N_repeat(L*) = N_LW(L*) =: N*. This is the minimum read length for which coverage of the sequence suffices for reconstruction. We now solve for L*/L_crit. First, we write the Lander-Waterman equation (2) at N = N* as (27); setting equal the right-hand sides of (27) and (26) at L = L* gives an equation for L*. A bit of algebra yields the bound (28) on a quantity x determined by L*/L_crit. Since x ≤ 1/2, equation (28) implies L* ≤ 2L_crit. From (33) we see that the relative sizes of log ε^{−1} and log(G/L_crit) determine the size of the critical window. If in the previous example ε = 10^{−5}, say, then L*/L_crit increases to 1.3. As ε tends to zero, r approaches zero as well and L*/L_crit → 2.

D.2 Window size if ℓ_triple > ℓ_interleaved
We now suppose that the single longest triple repeat dominates the lower bound, and estimate the size of the critical window. In this case ℓ_triple = L_crit − 1 is the length of the longest triple repeat. Since we do not have matching lower and upper bounds for triple repeats, we compute the critical window size separately for each. We start with the lower bound. For L > L_crit, the minimum value of N required in order to bridge the longest triple repeat is given by (18), repeated here as (35). As in the interleaved repeat case considered earlier, we let L* be the value of L at which the curve described by constraint (35) intersects the Lander-Waterman coverage value, i.e. N_triple(L*) = N_LW(L*) =: N*. This is the minimum read length for which coverage of the sequence suffices for reconstruction.

Changing ε to 10^{−5} makes L*/L_crit ≈ 1.17, and as ε (and hence also r) tends to zero, L*/L_crit → 3/2. The analogous computation of L*/L_crit for the upper bound, as given by N_all3 in (18), yields a larger value for the example with G ∼ 10^8, L_crit ∼ 1000, and ε = 5%: the critical window size of the upper bound is about twice that of the lower bound for typical values of G and L_crit with moderate ε. But as ε → 0, we see from (37) that L*/L_crit → ∞, markedly different from the L*/L_crit → 3/2 observed for the lower bound.

Figure 1: For hc19, a log plot of the number of repeats as a function of the repeat length ℓ. The red line is what would have been predicted by an i.i.d. fit.

Figure 5: A subsequence s_t^ℓ is bridged if and only if there exists at least one read which covers at least one base on both sides of the subsequence, i.e. the read arrives in the preceding interval of length L − ℓ − 1.

Figure 6: Contracting an edge by merging the incident nodes. Repeating this operation results in the condensed graph.

Theorem 4. DeBruijn with parameter choice K reconstructs the original sequence s if: (a) K > ℓ_interleaved, (b) K > ℓ_triple, and (c) adjacent reads overlap by at least K.

Lander and Waterman's coverage analysis applies also to Condition (c) of Theorem 4, yielding a normalized coverage depth requirement c = 1/(1 − K/L). The larger the overlap K, the higher the required coverage depth. Conditions (a) and (b) say that the smallest K one can choose is K = max(ℓ_interleaved, ℓ_triple) + 1.

Figure 7: An example of the bridging step in SimpleBridging.

Theorem 5. SimpleBridging with parameter choice K reconstructs the original sequence s if: (a) all interleaved repeats are bridged, (b) K > ℓ_triple, and (c) adjacent reads overlap by at least K.

Figure 8: MultiBridging resolves an X-node with label ATTGCAA corresponding to a triple repeat.

Theorem 6. The algorithm MultiBridging reconstructs the sequence s if: (a) all interleaved repeats are bridged, (b) all triple repeats are all-bridged, and (c) the sequence is covered by the reads.

Figure 9: Simulation results for each of the GAGE reference genomes. Each simulated (N, L) point is marked with the number of correct reconstructions (e.g. 93, 98, 95) out of 100 simulated read sets. All four algorithms (Greedy, DeBruijn, SimpleBridging, and MultiBridging) were run on S. aureus, R. sphaeroides, and hc14. Note that MultiBridging is very close to the lower bound on all three datasets.

Continuation of the calculation in Section D.1: equation (28) implies L* ≤ 2L_crit, and combined with the obvious inequality L* ≥ L_crit, we have L_crit ≤ L* ≤ 2L_crit. Thus

N_LW(2L_crit) ≤ N* ≤ N_LW(L_crit),  (30)

and applying the Lander-Waterman fixed-point equation (2) yet again gives

(G/2L_crit) log N_LW(2L_crit) ≤ N* ≤ (G/L_crit) log N_LW(L_crit).  (31)

Writing this out bounds x from both sides in terms of log ε^{−1}, log(G/L_crit), and log log N_LW(·). For G ∼ 10^8, L_crit ∼ 1000, and ε = 5%, we get log(G/L_crit) ≈ 13.8 and log ε^{−1} ≈ 3.0, so r ≈ 4.

Lemma 12 (continued):
2. The cycle C in G corresponds to s and is the unique such cycle.
3. The cycle C in G traverses each edge at least once.

Theorem 3 (restated). Let S_{K+1} be the (K+1)-spectrum of s, let G_0 be the K-mer graph constructed from S_{K+1}, and let G be the condensed sequence graph obtained from G_0. If Ukkonen's condition is satisfied, i.e. there are no triple repeats or interleaved repeats of length at least K, then there is a unique Eulerian cycle C in G, and C corresponds to s.

Proof. We will show that if Ukkonen's condition is satisfied, the cycle C = C(s) in G corresponding to s (constructed in Lemma 12) traverses each edge exactly once in the condensed K-mer graph, i.e. C is Eulerian. Pevzner's [21] arguments show that if there are multiple Eulerian cycles then Ukkonen's condition is violated, so it is sufficient to prove that C is Eulerian. As noted in Lemma 12, C traverses each edge at least once, and thus it remains only to show that C traverses each edge at most once.