# ParticleCall: A particle filter for base calling in next-generation sequencing systems

Xiaohu Shen^{1} and Haris Vikalo^{1}

*BMC Bioinformatics* 2012, **13**:160

**DOI: **10.1186/1471-2105-13-160

© Shen and Vikalo; licensee BioMed Central Ltd. 2012

**Received: **14 March 2012

**Accepted: **9 July 2012

**Published: **9 July 2012

## Abstract

### Background

Next-generation sequencing systems are capable of rapid and cost-effective DNA sequencing, thus enabling routine sequencing tasks and taking us one step closer to personalized medicine. Accuracy and lengths of their reads, however, are yet to surpass those provided by the conventional Sanger sequencing method. This motivates the search for computationally efficient algorithms capable of reliable and accurate detection of the order of nucleotides in short DNA fragments from the acquired data.

### Results

In this paper, we consider Illumina’s sequencing-by-synthesis platform which relies on reversible terminator chemistry and describe the acquired signal by reformulating its mathematical model as a Hidden Markov Model. Relying on this model and sequential Monte Carlo methods, we develop a parameter estimation and base calling scheme called ParticleCall. ParticleCall is tested on a data set obtained by sequencing phiX174 bacteriophage using Illumina’s Genome Analyzer II. The results show that the developed base calling scheme is significantly more computationally efficient than the best performing unsupervised method currently available, while achieving the same accuracy.

### Conclusions

The proposed ParticleCall provides more accurate calls than the Illumina’s base calling algorithm, Bustard. At the same time, ParticleCall is significantly more computationally efficient than other recent schemes with similar performance, rendering it more feasible for high-throughput sequencing data analysis. Improvement of base calling accuracy will have immediate beneficial effects on the performance of downstream applications such as SNP and genotype calling.

ParticleCall is freely available at https://sourceforge.net/projects/particlecall.

## Background

Advancements in next-generation sequencing technologies have enabled inexpensive and rapid generation of vast amounts of sequencing data [1–3]. At the same time, high-throughput sequencing technologies present us with the challenge of processing and analyzing the large data sets that they provide. A fundamental computational task in next-generation sequencing systems is determining the order of nucleotides from the acquired measurements, typically referred to as *base calling*. The accuracy of base calling is of essential importance for various downstream applications including sequence assembly, SNP calling, and genotype calling [4]. Moreover, improving base calling accuracy may enable achieving the desired performance of downstream applications with smaller sequencing coverage, which translates to a reduction in the sequencing cost.

A widely used sequencing-by-synthesis platform, commercialized by Illumina, relies on reversible terminator chemistry. Illumina’s sequencing platforms are supported by a commercial base-calling algorithm called Bustard. While Bustard is computationally very efficient, its base-calling error rates can be significantly improved upon by various computationally more demanding schemes [5], including those presented in [6–9]. Among the proposed methods, the BayesCall algorithm [8] has been shown to significantly outperform Bustard in terms of the achievable base calling error rates. By relying on a full parametric model of the acquired signal, BayesCall builds a Bayesian inference framework capable of providing valuable probabilistic information that can be used in downstream applications. However, its performance gains come at a high computational cost. A modified version of the BayesCall algorithm named naiveBayesCall [9] performs base calling much more efficiently, but its accuracy deteriorates (albeit remains better than Bustard’s). Both BayesCall and naiveBayesCall rely on an expectation-maximization (EM) framework that employs a Markov chain Monte Carlo (MCMC) sampling strategy to estimate the parameters of the statistical model describing the signal acquisition process. This parameter estimation step turns out to be very time-consuming, limiting the practical feasibility of the proposed schemes. Highly accurate yet practically feasible parameter estimation and base calling remain a challenge that needs to be addressed.

In this paper, we propose a Hidden Markov Model (HMM) representation of the signal acquired by Illumina’s sequencing-by-synthesis platforms and develop a particle filtering (i.e., sequential Monte Carlo) base-calling scheme that we refer to as ParticleCall. When relying on the BayesCall’s Markov Chain Monte Carlo implementation of the EM algorithm (MCEM) to estimate system parameters, ParticleCall achieves the same error rate performance as BayesCall while reducing the time needed for base calling by a factor of 3. To improve the speed of parameter estimation, we develop a particle filter implementation of the EM algorithm (PFEM). PFEM significantly reduces parameter estimation time while leading to a very minor deterioration of the accuracy of base calling. Finally, we demonstrate that ParticleCall has the best discrimination ability among all of the considered base calling schemes.

## Methods

In this section, we first review the data acquisition process and the basic mathematical model of the Illumina’s sequencing-by-synthesis platform. Then we introduce a Hidden Markov Model (HMM) representation of the acquired signals. Relying on this HMM representation and particle filtering (i.e., sequential Monte Carlo) techniques, we develop a novel base calling and parameter estimation scheme and discuss important practical aspects of the proposed method.

### Illumina sequencing platform

A sequencing task on the Illumina’s platform is preceded by the preparation of a library of single-stranded short templates created by performing random fragmentation of the target DNA sample. Each single-stranded fragment in the library is placed on a glass surface (i.e., the flow cell [10]) and subjected to bridge amplification in order to create a cluster of identical copies of DNA templates [11]. The flow cell contains eight *lanes*, each divided into a hundred nonoverlapping *tiles*. The order of nucleotides in a DNA template is identified by synthesizing its complementary strand while relying on reversible terminator chemistry [3]. Ideally, in every sequencing cycle, a single fluorescently labeled nucleotide is incorporated into the complementary strand on each copy of the template in a cluster. The incorporated nucleotide is a Watson-Crick complement of the first unpaired base of the template. In reversible terminator chemistry, four distinct fluorescent tags are used to label the four bases, and are detected by CCD imaging technology. The acquired images are processed in order to obtain intensity signals indicating the type of nucleotide incorporated in each cycle. These raw signal intensities are then analyzed by a base-calling algorithm to infer the order of nucleotides in each of the templates.

Quality of the acquired raw signals is adversely affected by the imperfections in the underlying sequencing-by-synthesis and signal acquisition processes. The imperfections are manifested as various sources of uncertainties. For instance, a small fraction of the strands being synthesized may fail to incorporate a base, or they may incorporate multiple bases in a single test cycle. These effects are referred to as phasing and pre-phasing, respectively, and they result in an incoherent addition of the signals generated by the synthesis of the complementary strands on the copies of the template. Other sources of uncertainty are due to cross-talk and delay effects in the optical detection process, the residual effects that are readily observed between subsequent test cycles, signal decay, and measurement noise.

### Overview of the mathematical model

To describe the signal acquired by the Illumina’s sequencing-by-synthesis platform, a parametric model was proposed in [8]. Basic components of the model are overviewed below.

A length-*L* DNA template sequence is represented by a 4×*L* matrix *S*, where the *i*-th column of *S*, ${\mathbf{s}}_{i}$, is considered to be a randomly generated unit vector with a single non-zero entry indicating the type of the *i*-th base in the sequence. We follow the convention where the first component of ${\mathbf{s}}_{i}$ corresponds to the base A, the second to C, the third to G, and the fourth to T, and denote the corresponding unit vectors by ${\mathbf{e}}_{A}$, ${\mathbf{e}}_{C}$, ${\mathbf{e}}_{G}$, ${\mathbf{e}}_{T}$. The goal of base calling is to infer the unknown *S* from the signals obtained by optically detecting nucleotides incorporated during the sequencing-by-synthesis process.
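In code, this one-hot representation is immediate; a minimal Python sketch (the helper name `encode_template` is ours, not from the paper):

```python
import numpy as np

# Unit vectors e_A, e_C, e_G, e_T are the columns of the 4x4 identity matrix.
BASE_INDEX = {"A": 0, "C": 1, "G": 2, "T": 3}

def encode_template(seq: str) -> np.ndarray:
    """Return the 4 x L matrix S whose i-th column is the unit vector
    indicating the type of the i-th base of the template sequence."""
    S = np.zeros((4, len(seq)))
    for i, base in enumerate(seq):
        S[BASE_INDEX[base], i] = 1.0
    return S

S = encode_template("ACGT")  # each column has exactly one non-zero entry
```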

Let *p* denote the average fraction of strands that fail to extend in a test cycle; phasing is modeled as a Bernoulli random variable with probability *p*. Let *q* denote the average fraction of strands which extend by more than one base in a single test cycle; pre-phasing is modeled as a Bernoulli random variable with probability *q*. The length of the synthesized strand changes from *i* to *j* with probability

$${P}_{ij}=\begin{cases}p, & j=i,\\ 1-p-q, & j=i+1,\\ q, & j=i+2,\\ 0, & \text{otherwise.}\end{cases}$$

Let *P* denote the (*L* + 1)×(*L* + 1) transition matrix with entries ${P}_{ij}$ defined above, 1 ≤ *i*, *j* ≤ *L* + 1. The signal generated over *L* cycles of the synthesis process is affected by phasing and pre-phasing and can be expressed as *X* = *SH*, where *H* = (${H}_{i,j}$) is an *L*×*L* matrix with entries ${H}_{i,j}={\left[{P}^{j}\right]}_{1(i+1)}$, the probability that a synthesized strand is of length *i* after *j* cycles. Here ${P}^{j}$ denotes the *j*-th power of the matrix *P*. The decay in signal intensities over cycles (caused by DNA loss due to primer-template melting, digestion by enzymatic impurities, DNA dissociation, misincorporation, etc.) is modeled by the per-cluster density random parameter ${\lambda}_{t}$, which evolves as

$${\lambda}_{t}\sim \mathcal{N}\left((1-{d}_{t}){\lambda}_{t-1},\;{\sigma}_{t}^{2}{\lambda}_{t-1}^{2}\right),\qquad (1)$$

where ${d}_{t}$ is the per-cluster density decay parameter within [0,1] and ${\sigma}_{t}$ controls the variability of the decay. We represent the *t*-th column of *H* as ${\mathbf{h}}_{t}$ and the *t*-th column of *X* as ${\mathbf{x}}_{t}$. Incorporating the decay into the model, the signal generated in cycle *t* is expressed as

$${\mathbf{x}}_{t}={\lambda}_{t}S{\mathbf{h}}_{t}.\qquad (2)$$

The signal intensities acquired in cycle *t* are given by

$${\mathbf{y}}_{t}={K}_{t}{\mathbf{x}}_{t}+{\eta}_{t},\qquad (3)$$

where ${K}_{t}$ denotes the 4×4 crosstalk matrix describing overlap of the emission spectra of the four fluorescent tags, and the noise terms ${\eta}_{t}^{A},{\eta}_{t}^{C},{\eta}_{t}^{G},{\eta}_{t}^{T}$ associated with the four channels are independent, identically distributed (i.i.d.) 4×1 Gaussian random vectors with zero mean and a common 4×4 covariance matrix ${\Sigma}_{t}$.
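The phasing profile *H* can be computed directly from *p* and *q*; a sketch under the assumption that in each cycle a strand stays in place with probability *p*, extends by one base with probability 1 − *p* − *q*, and by two bases with probability *q* (function names and the boundary clamping at length *L* are ours):

```python
import numpy as np

def phasing_matrix(p: float, q: float, L: int) -> np.ndarray:
    """(L+1) x (L+1) transition matrix over synthesized-strand lengths:
    stay with prob. p, extend by one with prob. 1-p-q, by two with prob. q.
    Extensions beyond length L are clamped to L."""
    P = np.zeros((L + 1, L + 1))
    for i in range(L + 1):
        P[i, i] += p
        P[i, min(i + 1, L)] += 1 - p - q
        P[i, min(i + 2, L)] += q
    return P

def phasing_profile(p: float, q: float, L: int) -> np.ndarray:
    """L x L matrix H: H[i, j] is the probability that a strand starting at
    length 0 has length i+1 after j+1 cycles (0-based version of H_{i,j})."""
    H = np.zeros((L, L))
    P = phasing_matrix(p, q, L)
    Pj = np.eye(L + 1)
    for j in range(L):
        Pj = Pj @ P            # P raised to the power j+1
        H[:, j] = Pj[0, 1:]    # first row, lengths 1..L
    return H

H = phasing_profile(p=1e-3, q=3e-3, L=10)
# With small p and q, H is strongly diagonally dominant.
```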

Due to the small values of *p* and *q*, the components of the vector ${\mathbf{h}}_{t}$ around its *t*-th entry are significantly greater than the remaining ones. This observation can be used to simplify expressions (2) and (3). In particular, let ${\mathbf{h}}_{t}^{w}$ denote the vector obtained by windowing ${\mathbf{h}}_{t}$ around its *t*-th entry, i.e., by setting the small components of ${\mathbf{h}}_{t}$ to 0. In general, we consider the *l* + *r* + 1 dominant components of ${\mathbf{h}}_{t}$ centered at position *t*, namely ${H}_{t-l,t},{H}_{t-l+1,t},\dots ,{H}_{t,t},\dots ,{H}_{t+r-1,t},{H}_{t+r,t}$, and then expression (2) becomes

$${\mathbf{x}}_{t}\approx {\mathbf{x}}_{t}^{w}={\lambda}_{t}S{\mathbf{h}}_{t}^{w}.$$

Moreover, the signal acquired in cycle *t* is empirically observed to contain a residual effect from the previous cycle. The residual effect is modeled by adding ${\alpha}_{t}(1-{d}_{t}){\mathbf{y}}_{t-1}$ to ${\mathbf{y}}_{t}$, where the unknown parameter ${\alpha}_{t}\in (0,1)$. Therefore, the model can be summarized as

$${\lambda}_{t}\sim \mathcal{N}\left((1-{d}_{t}){\lambda}_{t-1},\;{\sigma}_{t}^{2}{\lambda}_{t-1}^{2}\right),$$

$${\mathbf{y}}_{t}\sim \mathcal{N}\left({\alpha}_{t}(1-{d}_{t}){\mathbf{y}}_{t-1}+{\lambda}_{t}{K}_{t}S{\mathbf{h}}_{t}^{w},\;{\parallel {\lambda}_{t}S{\mathbf{h}}_{t}^{w}\parallel}_{2}^{2}{\Sigma}_{t}\right),$$

where ∥·∥₂ denotes the *l*₂-norm of its argument, and where ${\mathbf{y}}_{0}=\mathbf{0}$, ${\lambda}_{0}=1$.
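For intuition, the generative model of this section can be simulated end to end; a sketch under our reading of the model (density decay, phasing, crosstalk, residual carry-over, additive Gaussian noise), with illustrative parameter values that are not estimates from the paper:

```python
import numpy as np

def simulate_reads(S, H, K, d=0.02, sigma=0.05, alpha=0.3, noise=0.01, rng=None):
    """Simulate 4-channel intensities y_1..y_L for one cluster from the
    template matrix S (4 x L), phasing profile H (L x L), and crosstalk K."""
    rng = rng or np.random.default_rng(0)
    L = S.shape[1]
    Y = np.zeros((4, L))
    lam, y_prev = 1.0, np.zeros(4)
    for t in range(L):
        # per-cluster density decay
        lam = max((1 - d) * lam + sigma * lam * rng.standard_normal(), 0.0)
        x = lam * S @ H[:, t]                  # phased, decayed signal x_t
        y = alpha * (1 - d) * y_prev + K @ x   # residual effect + crosstalk
        y += noise * rng.standard_normal(4)    # additive measurement noise
        Y[:, t], y_prev = y, y
    return Y

S = np.eye(4)                      # toy template "ACGT" in one-hot form
H = 0.99 * np.eye(4)               # near-ideal phasing for illustration
K = np.eye(4) + 0.05               # mild crosstalk between all channels
Y = simulate_reads(S, H, K)
```

Even with decay, residual carry-over, and noise, the strongest channel in each cycle still identifies the incorporated base, which is why naive per-cycle calling works at all and why the model-based corrections matter mainly for later cycles.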

### Hidden Markov Model of DNA base-calling

In this section, we reformulate the statistical description of the signal acquired by the Illumina’s sequencing-by-synthesis platform as a Hidden Markov Model (HMM) [12]. HMMs comprise a family of probabilistic graphical models which describe a series of observations by a “hidden” stochastic process and are generally suitable for representing time series data. Sequencing data obtained from the Illumina’s platform is a set of time-series intensities ${\mathbf{y}}_{1:L}$, motivating the HMM representation. HMMs provide a convenient framework for state and parameter estimation, which we exploit to develop a particle filter base-calling scheme in the next section.

We first decouple ${\mathbf{y}}_{t-1}$ and ${\mathbf{y}}_{t}$ by defining ${\mathbf{y}}_{t}^{\prime\prime}={\mathbf{y}}_{t}-{\alpha}_{t}(1-{d}_{t}){\mathbf{y}}_{t-1}$, *t* = 1, 2, …, *L*. Therefore, we can write ${\mathbf{y}}_{t}^{\prime\prime}$ in terms of the windowed signals derived from ${\mathbf{x}}_{1:L}$ as

$${\mathbf{y}}_{t}^{\prime\prime}\sim \mathcal{N}\left({K}_{t}{\mathbf{x}}_{t}^{w},\;{\parallel {\mathbf{x}}_{t}^{w}\parallel}_{2}^{2}{\Sigma}_{t}\right).\qquad (7)$$

Moreover, let ${S}_{t}^{w}$ denote the 4×(*l* + *r* + 1) windowed submatrix of *S*, i.e.,

$${S}_{t}^{w}=\left[{\mathbf{s}}_{t-l},\dots ,{\mathbf{s}}_{t},\dots ,{\mathbf{s}}_{t+r}\right].$$

Since ${\mathbf{x}}_{t}^{w}={\lambda}_{t}S{\mathbf{h}}_{t}^{w}={\lambda}_{t}\sum _{i=t-l}^{t+r}{\mathbf{s}}_{i}{H}_{i,t}$, it is clear that ${\mathbf{y}}_{t}^{\prime\prime}$ depends on ${\lambda}_{t}$ and ${S}_{t}^{w}$. Therefore, we define the state of the HMM to be the combination of ${\lambda}_{t}$ and ${S}_{t}^{w}$ – the per-cluster density at cycle *t* and the collection of (*l* + *r* + 1) bases around (and including) the base in position *t*, respectively.

Since the transitions of ${S}_{t}^{w}$ and ${\lambda}_{t}$ are independent, the transition probability is

$$f\left({S}_{t}^{w},{\lambda}_{t}|{S}_{t-1}^{w},{\lambda}_{t-1}\right)={f}_{1}\left({S}_{t}^{w}|{S}_{t-1}^{w}\right){f}_{2}\left({\lambda}_{t}|{\lambda}_{t-1}\right).\qquad (8)$$

The second factor, ${f}_{2}\left({\lambda}_{t}|{\lambda}_{t-1}\right)$, is known from the density decay model (1), while the first factor, ${f}_{1}\left({S}_{t}^{w}|{S}_{t-1}^{w}\right)$, is determined by the transitions of the *l* + *r* + 1 column vectors of ${S}_{t}^{w}$. Note that for *k* = 2, 3, …, *l* + *r* + 1, the column vectors ${\mathbf{s}}_{t-1,k}^{w}$ in ${S}_{t-1}^{w}$ and the column vectors ${\mathbf{s}}_{t,k-1}^{w}$ in ${S}_{t}^{w}$ actually represent the same base. Therefore, the transition model between them can be represented by a *δ*-function as

$$p\left({\mathbf{s}}_{t,k-1}^{w}|{\mathbf{s}}_{t-1,k}^{w}\right)=\delta \left({\mathbf{s}}_{t,k-1}^{w}-{\mathbf{s}}_{t-1,k}^{w}\right).$$

Let *U*({${\mathbf{e}}_{A}$, ${\mathbf{e}}_{C}$, ${\mathbf{e}}_{G}$, ${\mathbf{e}}_{T}$}) denote a uniform distribution on the support set of unit vectors {${\mathbf{e}}_{A}$, ${\mathbf{e}}_{C}$, ${\mathbf{e}}_{G}$, ${\mathbf{e}}_{T}$}. We assume no correlation between consecutive bases of the template sequence, i.e., ${\mathbf{s}}_{t,l+r+1}^{w}$ is generated from *U*({${\mathbf{e}}_{A}$, ${\mathbf{e}}_{C}$, ${\mathbf{e}}_{G}$, ${\mathbf{e}}_{T}$}). Therefore, ${f}_{1}\left({S}_{t}^{w}|{S}_{t-1}^{w}\right)$ can be written as

$${f}_{1}\left({S}_{t}^{w}|{S}_{t-1}^{w}\right)=u\left({\mathbf{s}}_{t,l+r+1}^{w}\right)\prod _{k=2}^{l+r+1}\delta \left({\mathbf{s}}_{t,k-1}^{w}-{\mathbf{s}}_{t-1,k}^{w}\right),$$

where *u*(·) ∼ *U*({${\mathbf{e}}_{A}$, ${\mathbf{e}}_{C}$, ${\mathbf{e}}_{G}$, ${\mathbf{e}}_{T}$}). With this, all the components of the HMM are specified.
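Sampling from this transition is cheap: shift the window by one position so that shared bases carry over deterministically, append a fresh uniform base, and propagate *λ* through the decay model. A sketch, with the window stored as a list of base indices and illustrative decay parameters:

```python
import random

BASES = [0, 1, 2, 3]  # indices of e_A, e_C, e_G, e_T

def sample_transition(window, lam, d=0.02, sigma=0.05, rng=None):
    """Sample (S_t^w, lambda_t) given (S_{t-1}^w, lambda_{t-1}): columns
    2..l+r+1 of the old window become columns 1..l+r of the new one
    (delta-function transitions); the last column is a fresh uniform base."""
    rng = rng or random.Random(0)
    new_window = window[1:] + [rng.choice(BASES)]            # f1: shift + uniform base
    new_lam = (1 - d) * lam + sigma * lam * rng.gauss(0, 1)  # f2: density decay
    return new_window, new_lam

w0 = [0, 2, 3]          # l = r = 1: three bases around position t
w1, lam1 = sample_transition(w0, 1.0)
```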

### ParticleCall base-calling algorithm

The goal of base calling is to determine the order of nucleotides in a template from the acquired signal ${\mathbf{y}}_{1:L}$. This can be rephrased as the problem of inferring the most likely sequence of states $({S}_{t}^{w},{\lambda}_{t})$ of the HMM in (7)-(8) from the observed sequence ${\mathbf{y}}_{1:t}^{\prime\prime}$ (clearly, ${\mathbf{s}}_{1:L}$ follows directly from the sequence of ${S}_{t}^{w}$). We assume that the parameters *Λ* = {*p*, *q*, ${d}_{1:L}$, ${\alpha}_{1:L}$, ${\sigma}_{1:L}$, ${K}_{1:L}$, ${\Sigma}_{1:L}$} are common for all clusters within a tile, and that they are provided by a parameter estimation step discussed in the following section. In this section, we introduce a novel base calling algorithm, ParticleCall, which relies on particle filtering techniques to sequentially infer $({S}_{t}^{w},{\lambda}_{t})$ and, therefore, recover the matrix *S*.

To infer *S*, we compute the posterior probabilities $p({\mathbf{s}}_{t}|{\mathbf{y}}_{1:t}^{\prime\prime})$, *t* = 1, 2, …, *L*, and find the maximum a posteriori (MAP) estimates of ${\mathbf{s}}_{t}$ by solving

$${\widehat{\mathbf{s}}}_{t}=\arg \max_{{\mathbf{s}}_{t}}\;p\left({\mathbf{s}}_{t}|{\mathbf{y}}_{1:t}^{\prime\prime}\right).\qquad (9)$$

Our algorithm relies on a sequential importance sampling/resampling (SISR) particle filter scheme [14] to calculate $p({S}_{t}^{w},{\lambda}_{t}|{\mathbf{y}}_{1:t}^{\prime\prime})$. Different choices and approximations of the proposal density are considered in [15–17]; we directly use the transition (8) as the proposal density. Sequential importance sampling suffers from degeneracy: the variance of the importance weights increases over time. To address the degeneracy problem, a resampling step is introduced in order to eliminate samples which have small normalized importance weights. Common resampling methods include multinomial resampling [14], residual resampling [18] and systematic resampling [19, 20]. We measure degeneracy of the algorithm using the effective sample size ${K}_{\text{eff}}$ and, for the sake of simplicity, employ the multinomial resampling strategy. If we denote the number of particles by ${N}_{p}$ and the associated weights by ${w}_{t}^{\left(i\right)}$, then ${K}_{\text{eff}}={\left(\sum _{i=1}^{{N}_{p}}{\left({w}_{t}^{\left(i\right)}\right)}^{2}\right)}^{-1}$, and the resampling step is used when ${K}_{\text{eff}}$ falls below a fixed threshold ${N}_{\text{threshold}}$. A threshold of size *O*(${N}_{p}$) is typically sufficient [14]; in our implementation, we set ${N}_{\text{threshold}}={N}_{p}/2$.
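The effective-sample-size test and the multinomial resampling step can be sketched as follows (function names are ours):

```python
import numpy as np

def effective_sample_size(weights):
    """K_eff = 1 / sum_i w_i^2 for normalized importance weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

def multinomial_resample(particles, weights, rng=None):
    """Draw N_p particles with replacement, with probabilities proportional
    to their weights, and reset all weights to 1/N_p."""
    rng = rng or np.random.default_rng(0)
    n = len(particles)
    idx = rng.choice(n, size=n, p=np.asarray(weights) / np.sum(weights))
    return [particles[i] for i in idx], np.full(n, 1.0 / n)

w = [0.7, 0.1, 0.1, 0.1]
k_eff = effective_sample_size(w)      # 1 / 0.52, i.e. about 1.92 effective particles
if k_eff <= len(w) / 2:               # N_threshold = N_p / 2
    particles, w = multinomial_resample(["a", "b", "c", "d"], w)
```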

We omit further details for brevity and formalize the ParticleCall algorithm below.

#### Algorithm 1

1. Initialization:

 1.1 Initialize particles: **for** *i* = 1 → ${N}_{p}$ **do**

 Sample each column of the submatrix ${S}_{1}^{w,\left(i\right)}$ from *U*({${\mathbf{e}}_{A}$, ${\mathbf{e}}_{C}$, ${\mathbf{e}}_{G}$, ${\mathbf{e}}_{T}$}); sample ${\lambda}_{1}^{\left(i\right)}$ from a Gaussian distribution with mean 1 and variance calculated using Bustard’s estimates of *λ* in the first 10 test cycles. **end for**

2. Run iteration *t* (*t* ≥ 2):

 2.1 Sampling: **for** *i* = 1 → ${N}_{p}$ **do**

 Sample ${S}_{t}^{w,\left(i\right)},{\lambda}_{t}^{\left(i\right)}\sim f(\cdot ,\cdot |{S}_{t-1}^{w,\left(i\right)},{\lambda}_{t-1}^{\left(i\right)})$ according to (8). **end for**

 2.2 Update the importance weight ${w}_{t}^{\left(i\right)}$ of each particle using the observation density (7).

 2.3 Normalize the weights. Calculate the posterior probability of ${\mathbf{s}}_{t}$ and obtain the estimate ${\widehat{\mathbf{s}}}_{t}$.

 2.4 Resampling: **if** ${K}_{\text{eff}}={\left(\sum _{i=1}^{{N}_{p}}{\left({w}_{t}^{\left(i\right)}\right)}^{2}\right)}^{-1}\le {N}_{\text{threshold}}$ **then**

 Draw ${N}_{p}$ samples $\{{\bar{S}}_{t}^{w,\left(j\right)},{\bar{\lambda}}_{t}^{\left(j\right)},j=1,\dots ,{N}_{p}\}$ from $\{{S}_{t}^{w,\left(i\right)},{\lambda}_{t}^{\left(i\right)},i=1,\dots ,{N}_{p}\}$ with probabilities proportional to $\{{w}_{t}^{\left(i\right)},i=1,\dots ,{N}_{p}\}$. Assign equal weight to each particle, ${\bar{w}}_{t}^{\left(i\right)}=1/{N}_{p}$. **end if**

Since ${S}_{t}^{w}$ in the HMM states are discrete with a finite alphabet, and the transitions of ${S}_{t}^{w}$ and ${\lambda}_{t}$ are independent according to (8), it is possible to Rao-Blackwellize the ParticleCall algorithm. Rao-Blackwellization marginalizes part of the states in the particle filter, hence reducing the number of needed particles ${N}_{p}$ [16]. We marginalize the discrete states ${S}_{t}^{w}$ and reduce the hidden process to ${\lambda}_{t}$, while relying on the particle filter to calculate $p({\lambda}_{1:t}|{\mathbf{y}}_{1:t}^{\prime\prime})$.

Since $p({\lambda}_{1:t}|{\mathbf{y}}_{1:t}^{\prime\prime})\propto p({\mathbf{y}}_{t}^{\prime\prime}|{\mathbf{y}}_{1:t-1}^{\prime\prime},{\lambda}_{1:t})p({\lambda}_{t}|{\lambda}_{t-1}^{\left(i\right)})$, where ${\lambda}_{t-1}^{\left(i\right)}$ is a sample from $p({\lambda}_{1:t-1}|{\mathbf{y}}_{1:t-1}^{\prime\prime})$, we can state the Rao-Blackwellized ParticleCall algorithm as below.

#### Algorithm 2

Rao-Blackwellized ParticleCall algorithm

1. Initialization:

 1.1 Initialize particles: **for** *i* = 1 → ${N}_{p}$ **do**

 Sample ${\lambda}_{1}^{\left(i\right)}$ from a Gaussian distribution with mean 1 and variance calculated using Bustard’s estimates of *λ* in the first 10 test cycles. **end for**

 1.2 Compute and normalize weights for each particle according to ${w}_{1}^{\left(i\right)}\propto g({\mathbf{y}}_{1}^{\prime\prime}|{\lambda}_{1}^{\left(i\right)})\propto \sum _{{S}_{1}^{w}}g({\mathbf{y}}_{1}^{\prime\prime}|{S}_{1}^{w},{\lambda}_{1}^{\left(i\right)})$.

 1.3 Calculate the discrete distribution $p({S}_{1}^{w}|{\mathbf{y}}_{1},{\lambda}_{1}^{\left(i\right)})$ for each *i*.

2. Run iteration *t* (*t* ≥ 2):

 2.1 Sampling: **for** *i* = 1 → ${N}_{p}$ **do**

 Sample ${\lambda}_{t}^{\left(i\right)}\sim f(\cdot |{\lambda}_{t-1}^{\left(i\right)})$. **end for**

 2.2 Update the importance weight ${w}_{t}^{\left(i\right)}\propto {w}_{t-1}^{\left(i\right)}g({\mathbf{y}}_{t}^{\prime\prime}|{\mathbf{y}}_{1:t-1}^{\prime\prime},{\lambda}_{1:t}^{\left(i\right)})$ and normalize the weights.

 2.3 Resample if ${K}_{\text{eff}}\le {N}_{\text{threshold}}$.

 2.4 Update the discrete distribution: **for** *i* = 1 → ${N}_{p}$ **do**

 Update $p({S}_{t}^{w}|{\mathbf{y}}_{1:t}^{\prime\prime},{\lambda}_{1:t}^{\left(i\right)})$ using $p({S}_{t-1}^{w}|{\mathbf{y}}_{1:t-1}^{\prime\prime},{\lambda}_{1:t-1}^{\left(i\right)})$ and ${\lambda}_{t}^{\left(i\right)}$ according to

$$p\left({S}_{t}^{w}|{\mathbf{y}}_{1:t}^{\prime\prime},{\lambda}_{1:t}^{\left(i\right)}\right)\propto p\left({\mathbf{y}}_{t}^{\prime\prime}|{S}_{t}^{w},{\lambda}_{t}^{\left(i\right)}\right)\sum _{{S}_{t-1}^{w}}p\left({S}_{t}^{w}|{S}_{t-1}^{w}\right)p\left({S}_{t-1}^{w}|{\mathbf{y}}_{1:t-1}^{\prime\prime},{\lambda}_{1:t-1}^{\left(i\right)}\right),$$

 **end for**

where $p({\mathbf{y}}_{t}^{\prime\prime}|{S}_{t}^{w},{\lambda}_{t}^{\left(i\right)})$ is the observation density, $p({S}_{t-1}^{w}|{\mathbf{y}}_{1:t-1}^{\prime\prime},{\lambda}_{1:t}^{\left(i\right)})=p({S}_{t-1}^{w}|{\mathbf{y}}_{1:t-1}^{\prime\prime},{\lambda}_{1:t-1}^{\left(i\right)})$ due to the independence of the state transitions, and $p({S}_{t}^{w}|{S}_{t-1}^{w},{\mathbf{y}}_{1:t-1}^{\prime\prime},{\lambda}_{1:t}^{\left(i\right)})=p({S}_{t}^{w}|{S}_{t-1}^{w})$ due to the Markov property and the independence of the state transitions.
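Step 2.4 is a standard discrete forward update over the $4^{l+r+1}$ candidate windows; a sketch in which the observation density is an assumed callback (the toy `obs_density` passed below is purely illustrative):

```python
import itertools

WINDOW_LEN = 3                                                  # l = r = 1
STATES = list(itertools.product(range(4), repeat=WINDOW_LEN))   # 64 windows

def forward_update(prev_dist, obs_density):
    """One Rao-Blackwellized update of p(S_t^w | y''_{1:t}, lambda_{1:t}):
    propagate through the shift-and-append transition (predecessors share the
    overlapping bases; the appended base is uniform over 4), weight by the
    observation density, and renormalize.

    prev_dist: dict mapping window tuple -> probability at cycle t-1
    obs_density: callable window -> g(y''_t | S_t^w, lambda_t)  (assumed)
    """
    dist = {}
    for s in STATES:
        pred = sum(prev_dist[p] for p in STATES if p[1:] == s[:-1]) / 4.0
        dist[s] = obs_density(s) * pred
    z = sum(dist.values())
    return {s: v / z for s, v in dist.items()}

uniform = {s: 1.0 / len(STATES) for s in STATES}
post = forward_update(uniform, lambda s: 1.0 + s[1])  # toy observation density
```

The 64-term sum inside every update is exactly the extra per-particle cost that, as reported in the Results section, makes the Rao-Blackwellized variant slower per particle than the original filter.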

### Parameter estimation

To determine the set of parameters *Λ* needed to run the proposed ParticleCall base calling algorithm, one could rely on the MCMC implementation of the EM algorithm (MCEM) proposed in [8]. In the section Results and discussion, we demonstrate the performance of the ParticleCall algorithm that relies on the MCEM parameter estimation scheme. Note, however, that the MCMC sampling strategy employed by MCEM requires a lengthy burn-in period and a very large sample size to perform the expectation step. Therefore, the MCEM parameter estimation scheme is computationally rather intensive and requires significant computational resources if it is to be used for processing large sequencing data sets. As an alternative, we develop an EM parameter estimation scheme which relies on the proposed HMM and uses samples generated by a particle filter to evaluate the expectation of the likelihood function. We refer to this algorithm as the particle filter EM (PFEM). The speed and accuracy of the proposed scheme make it practical for use in next-generation sequencing platforms.

#### Assumptions on parameters

Recall that the set of parameters needed to run ParticleCall is *Λ* = {*p*, *q*, ${d}_{1:L}$, ${\alpha}_{1:L}$, ${\sigma}_{1:L}$, ${K}_{1:L}$, ${\Sigma}_{1:L}$}. The phasing and pre-phasing parameters *p* and *q* are assumed to be the same for each sequencing lane and are estimated using the same procedure as Bustard (see, e.g., [8]). The remaining parameters are assumed to be cycle-dependent and need to be estimated for each tile. The cycle-dependency assumption on the parameters can lead to a substantial improvement in the base-calling accuracy [5]. In order to avoid over-fitting, we assume that the parameters remain constant within a short window of cycles and then change to a different set of values. To track the changes in the parameters, we first divide the total read length *L* into several non-overlapping windows and then perform the parameter estimation window by window. To further reduce the number of parameters and improve the estimation efficiency, we assume that the parameters ${d}_{1:L}$ and ${\sigma}_{1:L}$ are uniformly distributed over an interval and incorporate them into the hidden states of the HMM. Therefore, only the means and variances of these parameters, i.e., ${d}_{\text{mean}}$, ${d}_{\text{var}}$, ${\sigma}_{\text{mean}}$, and ${\sigma}_{\text{var}}$, need to be estimated. Computational results demonstrate that these two assumptions do not affect the accuracy of base-calling.

#### Particle filter EM algorithm

To initialize the parameters, we adopt a simplified model in which the effects of phasing and pre-phasing are neglected and the signal generated in cycle *t* is approximated as

$${\mathbf{x}}_{t}\approx {\lambda}_{t}{H}_{t,t}{\mathbf{s}}_{t}.\qquad (10)$$

Replacing (2) by (10) leads to a simplified model that allows for straightforward base calling and inference of the parameters by means of linear regression. We use these values to obtain the estimates of ${d}_{\text{mean}}$, ${d}_{\text{var}}$, ${\sigma}_{\text{mean}}$, and ${\sigma}_{\text{var}}$, and to initialize the remaining parameters *α*, *K*, *Σ* in the particle filter EM parameter estimation procedure.

Parameter estimation is performed on *n* reads randomly chosen from a tile (in our experiments, we use *n* = 200). Assume the window length is *w*, and denote the window index by *m*. The particle filter EM (PFEM) algorithm finds parameters for one window and then uses these values to initialize the search for parameters in the next window. We illustrate the procedure for the first window (the same procedure is repeated in the following windows). Let ${\Lambda}_{1}^{i}=\{{\alpha}^{i},{K}^{i},{\Sigma}^{i}\}$ denote the set of parameters for window 1 in the *i*-th iteration of the EM scheme. The estimate of ${\Lambda}_{1}^{i}$ is given by

$${\Lambda}_{1}^{i}=\arg \max_{\Lambda}\;{L}_{1}\left(\Lambda \right),\qquad (11)$$

$${L}_{1}\left(\Lambda \right)=\mathbb{E}\left[\log P({\mathbf{y}}_{1:w},{\mathbf{s}}_{1:w},{\lambda}_{1:w}|\Lambda )\right],\qquad (12)$$

where the expectation is taken with respect to $P({\mathbf{s}}_{1:w},{\lambda}_{1:w}|{\mathbf{y}}_{1:w},{\Lambda}_{1}^{i-1})$. We rely on an SISR particle filtering scheme to generate equally weighted sample trajectories from $P({\mathbf{s}}_{1:w},{\lambda}_{1:w}|{\mathbf{y}}_{1:w},{\Lambda}_{1}^{i-1})$. Based on (7) and (8), we calculate $\log P({\mathbf{y}}_{1:w},{\mathbf{s}}_{1:w},{\lambda}_{1:w}|{\Lambda}_{1}^{i-1})$ for these samples and compute their average to approximate the expectation in (12). The maximization (11) is performed by solving the equations obtained after taking gradients of ${L}_{1}$ over the parameters and setting them to 0. In our experiments, the PFEM parameter estimation scheme performs 30 EM iterations and uses 600 samples from the particle filter for each window.
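Structurally, each PFEM window iteration alternates a sampling-based expectation step with a closed-form maximization step; a toy sketch with a scalar Gaussian stand-in for the actual likelihood (nothing here reproduces the paper's model; iteration and sample counts mirror the 30/600 settings above):

```python
import numpy as np

def pfem_window(y, n_iter=30, n_samples=600, rng=None):
    """Toy PFEM-style loop. E-step: draw samples of the hidden state from the
    current model (a stand-in for the SISR particle filter). M-step: solve the
    gradient-zero condition of the averaged complete-data log-likelihood; for
    this toy model that reduces to averaging the data and sample means."""
    rng = rng or np.random.default_rng(0)
    theta = 0.0                                        # initial parameter guess
    for _ in range(n_iter):
        lam = theta + rng.standard_normal(n_samples)   # E-step samples
        theta = 0.5 * (np.mean(y) + np.mean(lam))      # closed-form M-step
    return theta

theta_hat = pfem_window(np.array([1.0, 1.2, 0.8, 1.1, 0.9]))
```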

## Results and discussion

### Performance of ParticleCall

The performance of ParticleCall depends on the choice of the window parameters *l* and *r*, the number of particles ${N}_{p}$, and the parameter estimation window length *w*. We ran ParticleCall with *l* = *r* ∈ {1,2,4}. Increasing *l* and *r* beyond *l* = *r* = 1 did not affect the performance while it significantly slowed down the algorithm. This is due to the small values of the phasing and pre-phasing probabilities, which are estimated to be *p* = 3.54×10⁻⁸ and *q* = 0.00335. Therefore, in the remainder of the paper, we set *l* = *r* = 1. The accuracy of base-calling for different ${N}_{p}$ is shown in Table 1. As seen there, for the original ParticleCall algorithm, ${N}_{p}$ = 800 leads to high performance with reasonable speed. Rao-Blackwellized ParticleCall can achieve the same accuracy with fewer particles (in particular, ${N}_{p}$ = 300); however, its effective running time is 3 times that of the original ParticleCall with the same performance. This is because each Rao-Blackwellization update requires evaluating a sum over all possible ${S}_{t}^{w}$ (4³ = 64 for our choice *l* = *r* = 1), resulting in a fairly large number of basic operations needed to calculate the exact distribution of the discrete variables. Therefore, for further performance comparisons, we rely on the original ParticleCall algorithm (formalized as Algorithm 1). Table 2 shows the ParticleCall base-calling error rate and parameter estimation times for different window lengths *w*. In the remainder of the paper, we set *w* = 5 as it leads to desirable performance/speed characteristics of the algorithm.

**Comparison of ParticleCall with different ${N}_{p}$**

| Method | ${N}_{p}$ | error rate | base-calling time (min) |
|---|---|---|---|
| ParticleCall (via MCEM) | 400 | 0.0126 | 46 |
| | 800 | 0.0124 | 88 |
| | 1200 | 0.0124 | 130 |
| ParticleCall (via PFEM) | 400 | 0.0128 | 46 |
| | 800 | 0.0125 | 91 |
| | 1200 | 0.0125 | 133 |
| Rao-Blackwellized ParticleCall (via MCEM) | 100 | 0.0128 | 103 |
| | 200 | 0.0125 | 190 |
| | 300 | 0.0124 | 287 |
| | 400 | 0.0124 | 386 |

**ParticleCall parameter estimation**

| Window length | base-calling error rate | parameter estimation time (min) |
|---|---|---|
| 4 | 0.0125 | 50 |
| 5 | 0.0125 | 39 |
| 6 | 0.0127 | 29 |
| 7 | 0.0130 | 25 |

### Performance comparison of different algorithms

**Comparison of error rates and speed**

| Method | error rate | base-calling time (min) | parameter estimation time (min) |
|---|---|---|---|
| Bustard | 0.0152 | 2 (total) | |
| Rolexa | 0.0170 | 35 (total) | |
| naiveBayesCall | 0.0132 | 21 | 1139 |
| BayesCall | 0.0124 | 231 | 1139 |
| ParticleCall (via MCEM) | 0.0124 | 88 | 1139 |
| ParticleCall (via PFEM) | 0.0125 | 91 | 39 |

### Quality scores

The usefulness of base calling algorithms is enhanced if they provide *phred* quality scores, which were originally developed to assess the quality of the conventional Sanger sequencing and automate large-scale sequencing projects. Phred scores are also often provided by the algorithms used for base calling in next-generation sequencing platforms. Formally, the phred score for a base ${\widehat{\mathbf{s}}}_{t}$ called in cycle *t* is defined as

$${Q}_{t}=-10\,{\log}_{10}P\left({\widehat{\mathbf{s}}}_{t}\ne {\mathbf{s}}_{t}\right).$$

The discrimination ability *D*(*ε*) at error tolerance *ε* is defined as the ratio of the correctly called bases having $P({\widehat{\mathbf{s}}}_{t}\ne {\mathbf{s}}_{t})<\epsilon$ (i.e., a quality score higher than $-10\,{\log}_{10}\left(\epsilon \right)$) to all called bases. Figure 3 compares the discrimination ability of ParticleCall, BayesCall, naiveBayesCall and Bustard. It shows that for a reasonable error tolerance *ε*, ParticleCall with parameters obtained through MCEM has better discrimination ability than BayesCall, naiveBayesCall and Bustard, while ParticleCall with parameters obtained through PFEM has discrimination ability close to naiveBayesCall and better than the other algorithms. In other words, when a small cutoff error tolerance *ε* is set and all the bases with quality scores below $-10\,{\log}_{10}\left(\epsilon \right)$ are considered invalid, ParticleCall provides the most accurate results among the considered base-calling schemes.
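Both quantities are simple to compute from per-base error probabilities; a minimal sketch (the call data below are made up for illustration):

```python
import math

def phred(p_error: float) -> float:
    """Phred quality score: Q = -10 * log10(P(called base is wrong))."""
    return -10.0 * math.log10(p_error)

def discrimination(calls, eps):
    """D(eps): fraction of all called bases that are both correct and have
    estimated error probability below eps, i.e. quality above -10*log10(eps).
    calls: iterable of (is_correct, p_error) pairs."""
    calls = list(calls)
    hits = sum(1 for ok, p in calls if ok and p < eps)
    return hits / len(calls)

# Four called bases: three correct, with varying confidence.
calls = [(True, 1e-4), (True, 1e-2), (False, 1e-4), (True, 1e-5)]
d = discrimination(calls, eps=1e-3)  # only the two confident correct calls count
```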

### Effects of improved base-calling accuracy on de novo sequence assembly

In *de novo* assembly, the target is reconstructed without consulting any reference [21, 22]. Performance of assembly algorithms highly depends on the accuracy of base calling. To demonstrate the effects of base-calling accuracy on assembly, we apply the Velvet assembly algorithm [22] on reads provided by Bustard, Rolexa, naiveBayesCall, BayesCall, and ParticleCall. In particular, we randomly subsample the set of reads provided by each of the base calling algorithms to emulate 5X, 10X, 15X, and 20X coverage. Then we run Velvet on each of the subsets, and evaluate commonly used metrics that quantify the quality of the assembly procedure. Specifically, we evaluate the maximum contig length and the N50 contig length. The described procedure is repeated 200 times to obtain average values of these two quality metrics. The results are shown in Table 4. As can be seen there, ParticleCall provides the largest N50 and maximum contig length among all of the considered base calling schemes, for all of the considered coverages.

**de novo assembly results**

| Coverage | Metric | Bustard | Rolexa | naiveBayesCall | BayesCall | ParticleCall (MCEM) | ParticleCall (PFEM) |
|---|---|---|---|---|---|---|---|
| 5X | N50 | 271 | 259 | 278 | 292 | 299 | 289 |
| 5X | Max | 607 | 565 | 604 | 629 | 637 | 632 |
| 10X | N50 | 1169 | 971 | 1180 | 1269 | 1316 | 1341 |
| 10X | Max | 1750 | 1557 | 1731 | 1831 | 1900 | 1865 |
| 15X | N50 | 3624 | 2885 | 3726 | 3466 | 3742 | 3697 |
| 15X | Max | 3823 | 3170 | 3908 | 3741 | 3935 | 3918 |
| 20X | N50 | 4694 | 4529 | 4756 | 4827 | 5102 | 4795 |
| 20X | Max | 4744 | 4614 | 4816 | 4875 | 5116 | 5039 |
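The two assembly metrics reported in Table 4 are standard; a minimal sketch of the N50 computation (the contig lengths below are made up):

```python
def n50(contig_lengths):
    """N50 contig length: the largest L such that contigs of length >= L
    together cover at least half of the total assembled bases."""
    total = sum(contig_lengths)
    covered = 0
    for length in sorted(contig_lengths, reverse=True):
        covered += length
        if 2 * covered >= total:
            return length
    return 0

lengths = [400, 300, 200, 100]  # hypothetical contig lengths, total 1000
assembly_n50 = n50(lengths)     # 400 + 300 >= 500, so N50 = 300
max_contig = max(lengths)
```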

## Conclusions

In this paper we presented ParticleCall, a particle filtering algorithm for base calling in the Illumina’s sequencing-by-synthesis platform. The algorithm is developed by relying on an HMM representation of the sequencing process. Experimental results demonstrate that the ParticleCall base calling algorithm is more accurate than Bustard, Rolexa, and naiveBayesCall. It is as accurate as BayesCall while being significantly faster. Quality score analysis of the reads indicates that ParticleCall has better discrimination ability than BayesCall, naiveBayesCall and Bustard. Moreover, a novel particle filter EM (PFEM) parameter estimation scheme, much faster than the existing Monte Carlo implementation of the EM algorithm, was proposed. When relying on the PFEM scheme, ParticleCall has near-optimal performance while needing much shorter total parameter estimation and base calling time.

## Authors’ contributions

Algorithms and experiments were designed by Xiaohu Shen (XS) and Haris Vikalo (HV). Algorithm code was implemented and tested by XS. The manuscript was written by XS and HV. Both authors read and approved the final manuscript.

## Declarations

### Acknowledgements

This work was funded by the National Institutes of Health under grant 1R21HG006171-01.

## References

- Shendure J, Ji H: Next-generation DNA sequencing. Nat Biotechnol. 2008, 26: 1135. doi:10.1038/nbt1486
- Metzker M: Emerging technologies in DNA sequencing. Genome Research. 2005, 15: 1767.
- Bentley D: Whole-genome re-sequencing. Curr Opin Genet Dev. 2006, 16: 545. doi:10.1016/j.gde.2006.10.009
- Nielsen R, Paul JS, Albrechtsen A, Song YS: Genotype and SNP calling from next-generation sequencing data. Nature Reviews Genetics. 2011, 12: 443. doi:10.1038/nrg2986
- Ledergerber C, Dessimoz C: Base-calling for next-generation sequencing platforms. Briefings in Bioinformatics. 2011, 12: 489. doi:10.1093/bib/bbq077
- Rougemont J, Amzallag A, Iseli C, Farinelli L, Xenarios I, Naef F: Probabilistic base calling of Solexa sequencing data. BMC Bioinformatics. 2008, 9: 431. doi:10.1186/1471-2105-9-431
- Erlich Y, Mitra P, Delabastide M, McCombie W, Hannon G: Alta-Cyclic: a self-optimizing base caller for next-generation sequencing. Nat Methods. 2008, 5: 679. doi:10.1038/nmeth.1230
- Kao W, Stevens K, Song Y: BayesCall: A model-based base-calling algorithm for high-throughput short-read sequencing. Genome Research. 2009, 19: 1884. doi:10.1101/gr.095299.109
- Kao W, Stevens K, Song Y: naiveBayesCall: an efficient model-based base-calling algorithm for high-throughput sequencing. Journal of Computational Biology. 2011, 18: 365. doi:10.1089/cmb.2010.0247
- Fedurco M, Romieu A, Williams S, et al: BTA, a novel reagent for DNA attachment on glass and efficient generation of solid-phase amplified DNA colonies. Nucleic Acids Res. 2006, 34 (3): e22. doi:10.1093/nar/gnj023
- Turcatti G, Romieu A, Fedurco M, et al: A new class of cleavable fluorescent nucleotides: synthesis and optimization as reversible terminators for DNA sequencing by synthesis. Nucleic Acids Res. 2008, 36 (4): e25.
- Eddy S: Hidden Markov models. Current Opinion in Structural Biology. 1996, 6 (3): 361. doi:10.1016/S0959-440X(96)80056-X
- Doucet A, Wang X: Monte Carlo methods for signal processing: A review in the statistical signal processing context. IEEE Signal Processing Magazine. 2005, 22: 152.
- Cappé O, Moulines E, Rydén T: Inference in Hidden Markov Models. 2005, Springer, New York
- Pitt M, Shephard N: Filtering via simulation: Auxiliary particle filters. Journal of the American Statistical Association. 1999, 94: 590. doi:10.1080/01621459.1999.10474153
- Doucet A, Godsill S, Andrieu C: On sequential Monte Carlo sampling methods for Bayesian filtering. Statistics and Computing. 2000, 10 (3): 197. doi:10.1023/A:1008935410038
- Kim S, Shephard N, Chib S: Stochastic volatility: likelihood inference and comparison with ARCH models. The Review of Economic Studies. 1998, 65 (3): 361. doi:10.1111/1467-937X.00050
- Liu J, Chen R: Sequential Monte Carlo methods for dynamic systems. Journal of the American Statistical Association. 1998, 93: 1032. doi:10.1080/01621459.1998.10473765
- Kitagawa G: Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. Journal of Computational and Graphical Statistics. 1996, 5: 1.
- Carpenter J, Clifford P, Fearnhead P: Improved particle filter for nonlinear problems. IEE Proceedings - Radar, Sonar and Navigation. 1999, 146: 2-7.
- Butler J, MacCallum I, Kleber M, Shlyakhter I, Belmonte M, Lander E, Nusbaum C, Jaffe D: ALLPATHS: de novo assembly of whole-genome shotgun microreads. Genome Research. 2008, 18 (5): 810. doi:10.1101/gr.7337908
- Zerbino D, Birney E: Velvet: algorithms for de novo short read assembly using de Bruijn graphs. Genome Research. 2008, 18 (5): 821. doi:10.1101/gr.074492.107

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.