 Methodology article
ParticleCall: A particle filter for base calling in next-generation sequencing systems
BMC Bioinformatics volume 13, Article number: 160 (2012)
Abstract
Background
Next-generation sequencing systems are capable of rapid and cost-effective DNA sequencing, thus enabling routine sequencing tasks and taking us one step closer to personalized medicine. Accuracy and lengths of their reads, however, are yet to surpass those provided by the conventional Sanger sequencing method. This motivates the search for computationally efficient algorithms capable of reliable and accurate detection of the order of nucleotides in short DNA fragments from the acquired data.
Results
In this paper, we consider Illumina's sequencing-by-synthesis platform, which relies on reversible terminator chemistry, and describe the acquired signal by reformulating its mathematical model as a Hidden Markov Model. Relying on this model and sequential Monte Carlo methods, we develop a parameter estimation and base calling scheme called ParticleCall. ParticleCall is tested on a data set obtained by sequencing phiX174 bacteriophage using Illumina's Genome Analyzer II. The results show that the developed base calling scheme is significantly more computationally efficient than the best performing unsupervised method currently available, while achieving the same accuracy.
Conclusions
The proposed ParticleCall provides more accurate calls than Illumina's base calling algorithm, Bustard. At the same time, ParticleCall is significantly more computationally efficient than other recent schemes with similar performance, rendering it more feasible for high-throughput sequencing data analysis. Improvement of base calling accuracy will have immediate beneficial effects on the performance of downstream applications such as SNP and genotype calling.
ParticleCall is freely available at https://sourceforge.net/projects/particlecall.
Background
The advancements of next-generation sequencing technologies have enabled inexpensive and rapid generation of vast amounts of sequencing data [1–3]. At the same time, high-throughput sequencing technologies present us with the challenge of processing and analyzing the large data sets that they provide. A fundamental computational challenge encountered in next-generation sequencing systems is that of determining the order of nucleotides from the acquired measurements, a task typically referred to as base calling. The accuracy of base calling is of essential importance for various downstream applications including sequence assembly, SNP calling, and genotype calling [4]. Moreover, improving base calling accuracy may enable achieving the desired performance of downstream applications with smaller sequencing coverage, which translates to a reduction in sequencing cost.
A widely used sequencing-by-synthesis platform, commercialized by Illumina, relies on reversible terminator chemistry. Illumina's sequencing platforms are supported by a commercial base-calling algorithm called Bustard. While Bustard is computationally very efficient, its base-calling error rates can be significantly improved upon by various computationally more demanding schemes [5], including those presented in [6–9]. Among the proposed methods, the BayesCall algorithm [8] has been shown to significantly outperform Bustard in terms of the achievable base calling error rates. By relying on a full parametric model of the acquired signal, BayesCall builds a Bayesian inference framework capable of providing valuable probabilistic information that can be used in downstream applications. However, its performance gains come at a high computational cost. A modified version of the BayesCall algorithm named naiveBayesCall [9] performs base calling in a much more efficient way, but its accuracy deteriorates (albeit it remains better than Bustard's). Both BayesCall and naiveBayesCall rely on an expectation-maximization (EM) framework that employs a Markov chain Monte Carlo (MCMC) sampling strategy to estimate the parameters of the statistical model describing the signal acquisition process. This parameter estimation step turns out to be very time-consuming, limiting the practical feasibility of the proposed schemes. Highly accurate yet practically feasible parameter estimation and base calling remain a challenge that needs to be addressed.
In this paper, we propose a Hidden Markov Model (HMM) representation of the signal acquired by Illumina's sequencing-by-synthesis platforms and develop a particle filtering (i.e., sequential Monte Carlo) base-calling scheme that we refer to as ParticleCall. When relying on BayesCall's Markov chain Monte Carlo implementation of the EM algorithm (MCEM) to estimate system parameters, ParticleCall achieves the same error rate performance as BayesCall while reducing the time needed for base calling by a factor of 3. To improve the speed of parameter estimation, we develop a particle filter implementation of the EM algorithm (PFEM). PFEM significantly reduces parameter estimation time while leading to only a minor deterioration of base calling accuracy. Finally, we demonstrate that ParticleCall has the best discrimination ability among all of the considered base calling schemes.
Methods
In this section, we first review the data acquisition process and the basic mathematical model of Illumina's sequencing-by-synthesis platform. Then we introduce a Hidden Markov Model (HMM) representation of the acquired signals. Relying on the HMM and particle filtering (i.e., sequential Monte Carlo) techniques, we develop a novel base calling and parameter estimation scheme and discuss some important practical aspects of the proposed method.
Illumina sequencing platform
A sequencing task on Illumina's platform is preceded by the preparation of a library of single-stranded short templates created by performing random fragmentation of the target DNA sample. Each single-stranded fragment in the library is placed on a glass surface (i.e., the flow cell [10]) and subjected to bridge amplification in order to create a cluster of identical copies of the DNA template [11]. The flow cell contains eight lanes, each divided into a hundred non-overlapping tiles. The order of nucleotides in a DNA template is identified by synthesizing its complementary strand while relying on reversible terminator chemistry [3]. Ideally, in every sequencing cycle, a single fluorescently labeled nucleotide is incorporated into the complementary strand on each copy of the template in a cluster. The incorporated nucleotide is a Watson-Crick complement of the first unpaired base of the template. In reversible terminator chemistry, four distinct fluorescent tags are used to label the four bases; they are detected by CCD imaging technology. The acquired images are processed in order to obtain intensity signals indicating the type of nucleotide incorporated in each cycle. These raw signal intensities are then analyzed by a base-calling algorithm to infer the order of nucleotides in each of the templates.
The quality of the acquired raw signals is adversely affected by imperfections in the underlying sequencing-by-synthesis and signal acquisition processes. The imperfections manifest as various sources of uncertainty. For instance, a small fraction of the strands being synthesized may fail to incorporate a base, or may incorporate multiple bases, in a single test cycle. These effects are referred to as phasing and pre-phasing, respectively, and they result in an incoherent addition of the signals generated by the synthesis of the complementary strands on the copies of the template. Other sources of uncertainty are due to crosstalk and delay effects in the optical detection process, residual effects observed between subsequent test cycles, signal decay, and measurement noise.
Overview of the mathematical model
To describe the signal acquired by Illumina's sequencing-by-synthesis platform, a parametric model was proposed in [8]. The basic components of the model are reviewed below.
A length-L DNA template sequence is represented by a 4×L matrix S, where the i-th column of S, s_i, is considered to be a randomly generated unit vector with a single nonzero entry indicating the type of the i-th base in the sequence. We follow the convention where the first component of the vector s_i corresponds to the base A, the second to C, the third to G, and the fourth to T, and denote the corresponding unit vectors by e_A, e_C, e_G, e_T. The goal of base calling is to infer the unknown S from the signals obtained by optically detecting the nucleotides incorporated during the sequencing-by-synthesis process.
Let p denote the average fraction of strands that fail to extend in a test cycle; phasing is modeled as a Bernoulli random variable with probability p. Let q denote the average fraction of strands which extend by more than one base in a single test cycle; pre-phasing is modeled as a Bernoulli random variable with probability q. The length of the synthesized strand changes from i to j with probability

$$P_{ij} = \begin{cases} p, & j = i,\\ 1 - p - q, & j = i + 1,\\ q, & j = i + 2,\\ 0, & \text{otherwise.} \end{cases}$$
Let P denote the (L + 1)×(L + 1) transition matrix with entries P_{ij} defined above, 1 ≤ i, j ≤ L + 1. The signal generated over L cycles of the synthesis process is affected by phasing and pre-phasing and can be expressed as X = SH, where H = (H_{i,j}) is an L×L matrix with entries H_{i,j} = [P^j]_{1,(i+1)}, the probability that a synthesized strand is of length i after j cycles. Here P^j denotes the j-th power of the matrix P. The decay in signal intensities over cycles (caused by DNA loss due to primer-template melting, digestion by enzymatic impurities, DNA dissociation, misincorporation, etc.) is modeled by the per-cluster density random parameter λ_t,

$$\lambda_t = (1 - d_t)\,\lambda_{t-1} + \epsilon_t, \qquad (1)$$
where ε_t ∼ N(0, σ_t²) is a one-dimensional Gaussian random variable and d_t ∈ [0, 1] is the per-cluster density decay parameter. We denote the t-th column of H by h_t and the t-th column of X by x_t. Incorporating the decay into the model, the signal generated in cycle t is expressed as

$$\mathbf{x}_t = \lambda_t S \mathbf{h}_t, \qquad (2)$$
where x_t = [x_t^A x_t^C x_t^G x_t^T]′ is the vector of signals generated in each of the four optical channels. Assuming Gaussian observation noise, the measured intensities at cycle t are given by

$$\mathbf{y}_t = K_t \mathbf{x}_t + \lVert \mathbf{x}_t \rVert_2\, \boldsymbol{\eta}_t, \qquad (3)$$
where K_t denotes the 4×4 crosstalk matrix describing the overlap of the emission spectra of the four fluorescent tags, and η_t = [η_t^A η_t^C η_t^G η_t^T]′, t = 1, …, L, are independent, identically distributed (i.i.d.) 4×1 Gaussian random vectors with zero mean and a common 4×4 covariance matrix Σ_t.
Note that, due to the typically small values of p and q, the components of the vector h_t around its t-th entry are significantly greater than the remaining ones. This observation can be used to simplify the expressions (2) and (3). In particular, let h_t^w denote the vector obtained by windowing h_t around its t-th entry, i.e., by setting the small components of h_t to 0. In general, we consider the l + r + 1 dominant components of h_t centered at position t, H_{t−l,t}, H_{t−l+1,t}, …, H_{t,t}, …, H_{t+r−1,t}, H_{t+r,t}, and expression (2) becomes

$$\mathbf{x}_t \approx \mathbf{x}_t^w = \lambda_t S \mathbf{h}_t^w. \qquad (4)$$
Finally, note that the signal measured in cycle t is empirically observed to contain a residual effect from the previous cycle. The residual effect is modeled by adding α_t(1 − d_t) y_{t−1} to y_t, where the unknown parameter α_t ∈ (0, 1). Therefore, the model can be summarized as

$$\mathbf{y}_t = \alpha_t (1 - d_t)\, \mathbf{y}_{t-1} + K_t \mathbf{x}_t^w + \lVert \mathbf{x}_t^w \rVert_2\, \boldsymbol{\eta}_t, \qquad (5)$$
where ∥·∥₂ denotes the ℓ₂ norm of its argument, and where y_0 = 0, λ_0 = 1.
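As a concrete illustration of the model above, the following pure-Python sketch builds the transition matrix P, the phasing matrix H, and simulates reads from the summarized model (5). It is a minimal sketch under simplifying assumptions of our own: cycle-independent d, α, σ, a caller-supplied crosstalk matrix K, a scalar (isotropic) noise level in place of the full covariance Σ_t, and transition mass past the maximum strand length folded into the last state (boundary handling not spelled out in the text).

```python
import math
import random

BASES = ('A', 'C', 'G', 'T')
E = {b: [1.0 if b == c else 0.0 for c in BASES] for b in BASES}  # unit vectors e_A..e_T

def transition_matrix(L, p, q):
    """(L+1)x(L+1) strand-length transition matrix: a strand stays at its
    current length with probability p (phasing), extends by one base with
    probability 1-p-q, and by two with probability q (pre-phasing).  Mass
    that would run past the maximum length is folded into the last state
    (our boundary-handling assumption)."""
    n = L + 1
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        P[i][i] += p
        P[i][min(i + 1, n - 1)] += 1.0 - p - q
        P[i][min(i + 2, n - 1)] += q
    return P

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def phasing_matrix(L, p, q):
    """H with H[i][j] (0-based) equal to [P^(j+1)] in row 1, column i+2 in
    the paper's 1-based notation: the probability that a synthesized strand
    has length i+1 after cycle j+1."""
    P = transition_matrix(L, p, q)
    Pj = [row[:] for row in P]          # P^1
    H = [[0.0] * L for _ in range(L)]
    for j in range(L):
        for i in range(L):
            H[i][j] = Pj[0][i + 1]
        Pj = matmul(Pj, P)
    return H

def simulate_reads(seq, p, q, d, alpha, sigma, K, noise_sd, rng):
    """Generate y_1..y_L from the summarized model (5), with the
    simplifications stated in the lead-in."""
    L = len(seq)
    H = phasing_matrix(L, p, q)
    lam, y_prev, ys = 1.0, [0.0] * 4, []
    for t in range(L):
        lam = (1.0 - d) * lam + rng.gauss(0.0, sigma)       # decay model (1)
        x = [0.0] * 4                                       # x_t = lam * S h_t
        for i, base in enumerate(seq):
            for c in range(4):
                x[c] += lam * E[base][c] * H[i][t]
        Kx = [sum(K[r][c] * x[c] for c in range(4)) for r in range(4)]
        nrm = math.sqrt(sum(v * v for v in x))
        y = [alpha * (1.0 - d) * y_prev[r] + Kx[r] + nrm * rng.gauss(0.0, noise_sd)
             for r in range(4)]
        ys.append(y)
        y_prev = y
    return ys
```

With p = q = 0, no decay and no noise, each y_t reduces to the unit vector of the t-th base, which is a useful sanity check on the indexing of H.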
Hidden Markov Model of DNA base calling
In this section, we reformulate the statistical description of the signal acquired by Illumina's sequencing-by-synthesis platform as a Hidden Markov Model (HMM) [12]. HMMs comprise a family of probabilistic graphical models which describe a series of observations by a "hidden" stochastic process and are generally suitable for representing time-series data. Sequencing data obtained from Illumina's platform is a set of time-series intensities y_{1:L}, motivating the HMM representation. HMMs provide a convenient framework for state and parameter estimation, which we exploit to develop a particle filter base-calling scheme in the next section.
For the sake of convenience, we remove the dependency between subsequent observations y_{t−1} and y_t by defining y″_t = y_t − α_t(1 − d_t) y_{t−1}, t = 1, 2, …, L. Therefore, we can write

$$\mathbf{y}''_t = K_t \mathbf{x}_t^w + \lVert \mathbf{x}_t^w \rVert_2\, \boldsymbol{\eta}_t. \qquad (6)$$
The components of y″_{1:L} are the observations of our HMM, and they depend on the underlying signals x_{1:L}. Moreover, let S_t^w denote the 4×(l + r + 1) windowed submatrix of S, i.e.,

$$S_t^w = \left[\mathbf{s}_{t-l}\; \mathbf{s}_{t-l+1}\; \cdots\; \mathbf{s}_{t+r}\right].$$
Since x_t^w = λ_t S h_t^w = λ_t Σ_{i=t−l}^{t+r} s_i H_{i,t}, it is clear that y″_t depends on λ_t and S_t^w. Therefore, we define the state of the HMM to be the pair (S_t^w, λ_t): the per-cluster density at cycle t and the collection of l + r + 1 bases around (and including) the base in position t, respectively.
The proposed HMM representation is illustrated in Figure 1. The observation dynamics that characterize the relationship between y″_t and the hidden state (S_t^w, λ_t) are given by the distribution g(y″_t | S_t^w, λ_t). It is straightforward to show from (5) that

$$g\left(\mathbf{y}''_t \mid S_t^w, \lambda_t\right) = \mathcal{N}\left(\mathbf{y}''_t;\; K_t \mathbf{x}_t^w,\; \lVert \mathbf{x}_t^w \rVert_2^2\, \Sigma_t\right), \qquad \mathbf{x}_t^w = \lambda_t \sum_{i=t-l}^{t+r} \mathbf{s}_i H_{i,t}. \qquad (7)$$
On the other hand, the state transition dynamics are described by the transition probability between subsequent states, (S^w_{t−1}, λ_{t−1}) and (S_t^w, λ_t). Since the transitions of S_t^w and λ_t are independent, the transition probability factors as

$$f\left(S_t^w, \lambda_t \mid S_{t-1}^w, \lambda_{t-1}\right) = f_1\left(S_t^w \mid S_{t-1}^w\right) f_2\left(\lambda_t \mid \lambda_{t-1}\right). \qquad (8)$$
The second term on the right-hand side of (8), f_2(λ_t | λ_{t−1}), is known from the density decay model (1),

$$f_2\left(\lambda_t \mid \lambda_{t-1}\right) = \mathcal{N}\left(\lambda_t;\; (1 - d_t)\lambda_{t-1},\; \sigma_t^2\right).$$
For notational convenience, we use s^w_{t,1}, …, s^w_{t,l+r+1} to denote the l + r + 1 column vectors of S_t^w. Note that for k = 2, 3, …, l + r + 1, the column vector s^w_{t−1,k} in S^w_{t−1} and the column vector s^w_{t,k−1} in S_t^w represent the same base. Therefore, the transition between them can be represented by a δ function,

$$f\left(\mathbf{s}^w_{t,k-1} \mid \mathbf{s}^w_{t-1,k}\right) = \delta\left(\mathbf{s}^w_{t,k-1} - \mathbf{s}^w_{t-1,k}\right), \qquad k = 2, \dots, l + r + 1.$$
Let U({e_A, e_C, e_G, e_T}) denote the uniform distribution on the support set of unit vectors {e_A, e_C, e_G, e_T}. We assume no correlation between consecutive bases of the template sequence, i.e., s^w_{t,l+r+1} is generated from U({e_A, e_C, e_G, e_T}). Therefore, f_1(S_t^w | S^w_{t−1}) can be written as

$$f_1\left(S_t^w \mid S_{t-1}^w\right) = u\left(\mathbf{s}^w_{t,l+r+1}\right) \prod_{k=2}^{l+r+1} \delta\left(\mathbf{s}^w_{t,k-1} - \mathbf{s}^w_{t-1,k}\right),$$
where u(·) denotes the probability mass function of U({e_A, e_C, e_G, e_T}). With this, all components of the HMM are specified.
ParticleCall basecalling algorithm
The goal of base calling is to determine the order of nucleotides in a template from the acquired signals y_{1:L}. This can be rephrased as the problem of inferring the most likely sequence of states (S_t^w, λ_t) of the HMM in (7)-(8) from the observed sequence y″_{1:L} (clearly, s_{1:L} follows directly from the sequence of windows S_t^w). We assume that the parameters Λ = {p, q, d_{1:L}, α_{1:L}, σ_{1:L}, K_{1:L}, Σ_{1:L}} are common to all clusters within a tile, and that they are provided by a parameter estimation step discussed in the following section. In this section, we introduce a novel base calling algorithm, ParticleCall, which relies on particle filtering techniques to sequentially infer (S_t^w, λ_t) and, therefore, recover the matrix S.
In general, particle filtering (i.e., sequential Monte Carlo) methods generate a set of particles with associated weights to estimate the posterior distribution of unknown variables given the acquired measurements [13]. In the proposed HMM framework, we sequentially calculate the posterior distributions of the columns of S, p(s_t | y″_{1:t}), t = 1, 2, …, L, and find the maximum a posteriori (MAP) estimates of s_t by solving

$$\widehat{\mathbf{s}}_t = \arg\max_{\mathbf{s}_t \in \{\mathbf{e}_A, \mathbf{e}_C, \mathbf{e}_G, \mathbf{e}_T\}} p\left(\mathbf{s}_t \mid \mathbf{y}''_{1:t}\right).$$
Our algorithm relies on a sequential importance sampling/resampling (SISR) particle filter scheme [14] to calculate p(S_t^w, λ_t | y″_{1:t}). Different choices and approximations of proposal densities are considered in [15–17]; we directly use the transition density (8) as the proposal. Sequential importance sampling suffers from degeneracy: the variance of the importance weights increases over time. To address the degeneracy problem, a resampling step is introduced in order to eliminate samples which have small normalized importance weights. Common resampling methods include multinomial resampling [14], residual resampling [18], and systematic resampling [19, 20]. We measure the degeneracy of the algorithm using the effective sample size K_eff and, for the sake of simplicity, employ the multinomial resampling strategy. Denoting the number of particles by N_p and the normalized weight of particle i at cycle t by w_t^{(i)}, the effective sample size is

$$K_{\text{eff}} = \left( \sum_{i=1}^{N_p} \left( w_t^{(i)} \right)^2 \right)^{-1},$$

and the resampling step is used when K_eff falls below a fixed threshold N_threshold; a threshold of size O(N_p) is typically sufficient [14]. In our implementation, we set N_threshold = N_p/2.
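The effective-sample-size test and the multinomial resampling step described above can be sketched in a few lines of Python (`random.choices` performs the draws with probabilities proportional to the weights):

```python
import random

def effective_sample_size(weights):
    """K_eff = 1 / sum_i (w_i)^2 for normalized importance weights."""
    return 1.0 / sum(w * w for w in weights)

def multinomial_resample(particles, weights, rng):
    """Multinomial resampling: draw N_p particles with replacement with
    probabilities proportional to the weights, then reset all weights
    to the uniform value 1/N_p."""
    n = len(particles)
    survivors = rng.choices(particles, weights=weights, k=n)
    return survivors, [1.0 / n] * n
```

Uniform weights give K_eff = N_p (no degeneracy), while a single dominant weight drives K_eff toward 1, triggering the resampling step once K_eff ≤ N_p/2.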
We omit further details for brevity and formalize the ParticleCall algorithm below.
Algorithm 1
ParticleCall basecalling algorithm

1. Initialization:

1.1 Initialize particles: for i = 1 → N_p do

Sample each column of the submatrix S_1^{w,(i)} from U({e_A, e_C, e_G, e_T}); sample λ_1^{(i)} from a Gaussian distribution with mean 1 and variance calculated using Bustard's estimates of λ in the first 10 test cycles.

end for
1.2 Compute and normalize the weight of each particle according to w_1^{(i)} ∝ g(y″_1 | S_1^{w,(i)}, λ_1^{(i)}), as in (7).

2. Run iteration t (t ≥ 2):

2.1 Sampling: for i = 1 → N_p do

Sample (S_t^{w,(i)}, λ_t^{(i)}) ∼ f(·, · | S_{t−1}^{w,(i)}, λ_{t−1}^{(i)}) according to (8).

end for
2.2 Update the importance weights,

$$w_t^{(i)} \propto w_{t-1}^{(i)}\, g\left(\mathbf{y}''_t \mid S_t^{w,(i)}, \lambda_t^{(i)}\right).$$
2.3 Normalize the weights. Calculate the posterior probability of s_t and obtain the estimate ŝ_t.
2.4 Resampling: if K_eff = (Σ_{i=1}^{N_p} (w_t^{(i)})²)^{−1} ≤ N_threshold then

Draw N_p samples {S̄_t^{w,(j)}, λ̄_t^{(j)}, j = 1, …, N_p} from {S_t^{w,(i)}, λ_t^{(i)}, i = 1, …, N_p} with probabilities proportional to {w_t^{(i)}, i = 1, …, N_p}, and assign equal weight to each particle, w̄_t^{(i)} = 1/N_p.

end if
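Algorithm 1 can be sketched in pure Python as follows. The observation model below is a toy stand-in for g in (7): identity crosstalk, isotropic noise of standard deviation `sd`, and a fixed spread of 0.1 for the initial per-cluster densities are all our assumptions for illustration, not the paper's estimated parameters; the sketch is meant to show the SISR mechanics for l = r = 1.

```python
import math
import random

BASES = ('A', 'C', 'G', 'T')
E = {b: [1.0 if b == c else 0.0 for c in BASES] for b in BASES}

def gauss_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def likelihood(y, win, lam, h, sd):
    """Toy stand-in for g(y''_t | S_t^w, lambda_t): identity crosstalk and
    isotropic noise sd; h holds the three dominant phasing weights
    (H_{t-1,t}, H_{t,t}, H_{t+1,t})."""
    mu = [lam * sum(h[k] * E[win[k]][c] for k in range(3)) for c in range(4)]
    out = 1.0
    for c in range(4):
        out *= gauss_pdf(y[c], mu[c], sd)
    return out

def particle_call(ys, n_particles, d, sigma, h, sd, rng):
    """SISR sketch of Algorithm 1 (l = r = 1): propagate by the transition
    (8), weight by the observation density, call the MAP middle base, and
    resample when K_eff drops below N_p / 2."""
    parts = [[[rng.choice(BASES) for _ in range(3)], 1.0 + rng.gauss(0.0, 0.1)]
             for _ in range(n_particles)]
    w = [1.0 / n_particles] * n_particles
    calls = []
    for t, y in enumerate(ys):
        if t > 0:                                   # step 2.1: propagate
            for part in parts:
                part[0] = part[0][1:] + [rng.choice(BASES)]   # shift window
                part[1] = (1.0 - d) * part[1] + rng.gauss(0.0, sigma)
        # steps 2.2-2.3: weight update, normalization, MAP call
        w = [wi * likelihood(y, win, lam, h, sd) for wi, (win, lam) in zip(w, parts)]
        tot = sum(w)
        w = [wi / tot for wi in w]
        post = {b: 0.0 for b in BASES}              # posterior of current base
        for wi, (win, _) in zip(w, parts):
            post[win[1]] += wi
        calls.append(max(post, key=post.get))
        # step 2.4: multinomial resampling on the K_eff test
        if 1.0 / sum(wi * wi for wi in w) <= n_particles / 2:
            idx = rng.choices(range(n_particles), weights=w, k=n_particles)
            parts = [[parts[i][0][:], parts[i][1]] for i in idx]
            w = [1.0 / n_particles] * n_particles
    return calls
```

On noise-free synthetic intensities with h = (0, 1, 0), the sketch recovers the template exactly, which exercises every step of the algorithm.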
Since the states S_t^w of the HMM are discrete with a finite alphabet, and the transitions of S_t^w and λ_t are independent according to (8), it is possible to Rao-Blackwellize the ParticleCall algorithm. Rao-Blackwellization is used to marginalize part of the states in the particle filter, hence reducing the number of needed particles N_p [16]. We marginalize the discrete states S_t^w and reduce the hidden process to λ_t, while relying on the particle filter to calculate p(λ_{1:t} | y″_{1:t}).
The original posterior distribution of the states can be expressed as

$$p\left(S_t^w, \lambda_{1:t} \mid \mathbf{y}''_{1:t}\right) = p\left(S_t^w \mid \mathbf{y}''_{1:t}, \lambda_{1:t}\right) p\left(\lambda_{1:t} \mid \mathbf{y}''_{1:t}\right). \qquad (9)$$
Since p(λ_{1:t} | y″_{1:t}) ∝ p(y″_t | y″_{1:t−1}, λ_{1:t}) p(λ_t | λ_{t−1}^{(i)}), where λ_{t−1}^{(i)} is a sample from p(λ_{1:t−1} | y″_{1:t−1}), we can state the Rao-Blackwellized ParticleCall algorithm as below.
Algorithm 2
Rao-Blackwellized ParticleCall algorithm
1. Initialization:

1.1 Initialize particles: for i = 1 → N_p do

Sample λ_1^{(i)} from a Gaussian distribution with mean 1 and variance calculated using Bustard's estimates of λ in the first 10 test cycles.

end for

1.2 Compute and normalize the weight of each particle according to w_1^{(i)} ∝ g(y″_1 | λ_1^{(i)}) ∝ Σ_{S_1^w} g(y″_1 | S_1^w, λ_1^{(i)}).

1.3 Calculate the discrete distribution p(S_1^w | y″_1, λ_1^{(i)}) for each i.
2. Run iteration t (t ≥ 2):

2.1 Sampling: for i = 1 → N_p do

Sample λ_t^{(i)} ∼ f_2(· | λ_{t−1}^{(i)}).

end for
2.2 Update the importance weights, w_t^{(i)} ∝ w_{t−1}^{(i)} g(y″_t | y″_{1:t−1}, λ_{1:t}^{(i)}), and normalize them.
2.3 Resample if K_eff ≤ N_threshold.
2.4 Update p(S_t^w | y″_{1:t}, λ_{1:t}^{(i)}): for i = 1 → N_p do

Update p(S_t^w | y″_{1:t}, λ_{1:t}^{(i)}) using p(S^w_{t−1} | y″_{1:t−1}, λ^{(i)}_{1:t−1}) and λ_t^{(i)}.

end for
In step 2.2 of Algorithm 2, the quantity g(y″_t | y″_{1:t−1}, λ^{(i)}_{1:t}) can be obtained by marginalizing over the discrete states S_t^w and S^w_{t−1},

$$g\left(\mathbf{y}''_t \mid \mathbf{y}''_{1:t-1}, \lambda^{(i)}_{1:t}\right) = \sum_{S_t^w} \sum_{S_{t-1}^w} p\left(\mathbf{y}''_t \mid S_t^w, \lambda_t^{(i)}\right) p\left(S_t^w \mid S_{t-1}^w\right) p\left(S_{t-1}^w \mid \mathbf{y}''_{1:t-1}, \lambda^{(i)}_{1:t-1}\right),$$
where p(y″_t | S_t^w, λ_t^{(i)}) is the observation density, p(S^w_{t−1} | y″_{1:t−1}, λ^{(i)}_{1:t}) = p(S^w_{t−1} | y″_{1:t−1}, λ^{(i)}_{1:t−1}) due to the independence of the state transitions, and p(S_t^w | S^w_{t−1}, y″_{1:t−1}, λ^{(i)}_{1:t}) = p(S_t^w | S^w_{t−1}) due to the Markov property and the independence of the state transitions.
In step 2.4 of Algorithm 2, the update equation is obtained as

$$p\left(S_t^w \mid \mathbf{y}''_{1:t}, \lambda^{(i)}_{1:t}\right) \propto p\left(\mathbf{y}''_t \mid S_t^w, \lambda_t^{(i)}\right) \sum_{S_{t-1}^w} p\left(S_t^w \mid S_{t-1}^w\right) p\left(S_{t-1}^w \mid \mathbf{y}''_{1:t-1}, \lambda^{(i)}_{1:t-1}\right).$$
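For l = r = 1 the discrete state space has 4³ = 64 windows, so the step-2.4 update can be carried out exactly. A minimal sketch of one such update, where the observation likelihoods `obs_lik` for a given λ_t^{(i)} are assumed to be supplied by the caller:

```python
import itertools

BASES = ('A', 'C', 'G', 'T')
WINDOWS = list(itertools.product(BASES, repeat=3))   # all 4^3 states for l = r = 1

def rb_update(prev_post, obs_lik):
    """One discrete update: new posterior over S_t^w proportional to
    p(y''_t | S_t^w, lambda_t) times the sum over S_{t-1}^w of
    p(S_t^w | S_{t-1}^w) p(S_{t-1}^w | ...).  With l = r = 1 the transition
    is nonzero only when the first two bases of S_t^w equal the last two
    bases of S_{t-1}^w, and the incoming base carries probability 1/4.
    prev_post and obs_lik map each window tuple to a probability/likelihood."""
    post = {}
    for win in WINDOWS:
        pred = 0.25 * sum(prev_post[(b,) + win[:2]] for b in BASES)
        post[win] = obs_lik[win] * pred
    z = sum(post.values())
    return {win: v / z for win, v in post.items()}
```

Because the transition structure rules out all but 4 predecessors per window, the exact update costs 4 × 64 products per particle per cycle, which is the overhead that makes the Rao-Blackwellized variant slower in wall-clock time despite needing fewer particles.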
Parameter estimation
To determine the set of parameters Λ needed to run the proposed ParticleCall base calling algorithm, one could rely on the MCMC implementation of the EM algorithm (MCEM) proposed in [8]. In the Results and discussion section, we demonstrate the performance of the ParticleCall algorithm relying on the MCEM parameter estimation scheme. Note, however, that the MCMC sampling strategy employed by MCEM requires a lengthy burn-in period and a very large sample size to perform the expectation step. Therefore, the MCEM parameter estimation scheme is computationally rather intensive and requires significant computational resources if it is to be used for processing large sequencing data sets. As an alternative, we develop an EM parameter estimation scheme which relies on the proposed HMM and uses samples generated by a particle filter to evaluate the expectation of the likelihood function. We refer to this algorithm as the particle filter EM (PFEM). The speed and accuracy of the proposed scheme make it practical for use in next-generation sequencing platforms.
Assumptions on parameters
Recall that the set of parameters needed to run ParticleCall is Λ = {p, q, d_{1:L}, α_{1:L}, σ_{1:L}, K_{1:L}, Σ_{1:L}}. The phasing and pre-phasing parameters p and q are assumed to be the same for each sequencing lane and are estimated using the same procedure as Bustard (see, e.g., [8]). The remaining parameters are assumed to be cycle-dependent and need to be estimated for each tile. The cycle-dependency assumption on the parameters can lead to a substantial improvement in base-calling accuracy [5]. In order to avoid overfitting, we assume that the parameters remain constant within a short window of cycles and then change to a different set of values. To track the changes in the parameters, we first divide the total read length L into several non-overlapping windows and then perform parameter estimation window-by-window. To further reduce the number of parameters and improve the estimation efficiency, we assume that the parameters d_{1:L} and σ_{1:L} are uniformly distributed over an interval and incorporate them into the hidden states of the HMM. Therefore, only the means and variances of these parameters, i.e., d_mean, d_var, σ_mean, and σ_var, need to be estimated. Computational results demonstrate that these two assumptions do not affect the accuracy of base calling.
Particle filter EM algorithm
In the early sequencing cycles, the effects of phasing and pre-phasing are relatively small. Therefore, we may ignore phasing and pre-phasing to facilitate straightforward computation of the initial estimates of the remaining parameters. In particular, since h_t then has essentially all of its mass at position t, the signal generated in an early cycle t is approximated as

$$\mathbf{x}_t \approx \lambda_t \mathbf{s}_t. \qquad (10)$$
Replacing (2) by (10) leads to a simplified model that allows for straightforward base calling and inference of the parameters by means of linear regression. We use these values to obtain the estimates of d_mean, d_var, σ_mean, and σ_var, and to initialize the remaining parameters α, K, and Σ in the particle filter EM parameter estimation procedure.
The parameter estimation is performed window-by-window and is conducted using n reads randomly chosen from a tile (in our experiments, we use n = 200). Assume the window length is w, and denote the window index by m. The particle filter EM (PFEM) algorithm finds parameters for one window and then uses these values to initialize the search for parameters in the next window. We illustrate the procedure for the first window (the same procedure is repeated in the following windows). Let Λ_1^i = {α^i, K^i, Σ^i} denote the set of parameters for window 1 in the i-th iteration of the EM scheme. The estimate of Λ_1^i is given by

$$\Lambda_1^i = \arg\max_{\Lambda} L_1(\Lambda), \qquad (11)$$

where L_1(Λ) = Σ_{j=1}^n L_{1,j}(Λ) is the sum of the expected log-likelihood functions over the reads in the training set. The expected log-likelihood function for the j-th read is obtained as

$$L_{1,j}(\Lambda) = \mathbb{E}\left[\log P\left(\mathbf{y}_{1:w}, \mathbf{s}_{1:w}, \lambda_{1:w} \mid \Lambda\right)\right], \qquad (12)$$

where the expectation is taken with respect to P(s_{1:w}, λ_{1:w} | y_{1:w}, Λ_1^{i−1}). We rely on an SISR particle filtering scheme to generate equally weighted sample trajectories from P(s_{1:w}, λ_{1:w} | y_{1:w}, Λ_1^{i−1}). Based on (7) and (8), we calculate log P(y_{1:w}, s_{1:w}, λ_{1:w} | Λ) for these samples and compute their average to approximate the expectation in (12). The maximization (11) is performed by solving the equations obtained after taking the gradients of L_1(Λ) over the parameters and setting them to 0. In our experiments, the PFEM parameter estimation scheme performs 30 EM iterations and uses 600 samples from the particle filter for each window.
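The overall E/M loop for a single window can be sketched as a skeleton in which the particle filter and the closed-form M-step are supplied as callables; both are placeholders here, standing in for the model-specific SISR sampler and the gradient-derived maximization, not for the paper's actual derivations.

```python
def pfem_window(reads, init_params, n_iters, n_samples, sample_posterior, maximize):
    """Skeleton of PFEM for one window.  sample_posterior(y, params, n)
    stands for the SISR particle filter returning n equally weighted hidden
    trajectories given the current parameters; maximize implements the
    M-step, i.e., re-estimating the parameters from the sampled
    trajectories.  Both callables are hypothetical placeholders."""
    params = init_params
    for _ in range(n_iters):
        # E-step: Monte Carlo approximation via particle-filter trajectories,
        # one batch of n_samples trajectories per training read
        samples = [sample_posterior(y, params, n_samples) for y in reads]
        # M-step: maximize the averaged complete-data log-likelihood
        params = maximize(samples, reads)
    return params
```

Windows are then chained: the parameters returned for window m initialize the call for window m + 1, matching the window-by-window procedure described above.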
Results and discussion
The proposed method is evaluated on a data set obtained by sequencing phiX174 bacteriophage using Illumina's Genome Analyzer II with a read length of 76 cycles. This is a short genome with a known sequence, which enables reliable performance comparison of different base-calling techniques. We tested ParticleCall and several other algorithms on a tile containing 77337 reads, and present the results here. All code is written in C, and the tests are run on a desktop with a 4-core 3 GHz Intel Core i7 processor.
Performance of ParticleCall
The base calling error rates are computed by aligning the reads to the reference genome and evaluating the frequency of mismatches. Reads that could not be aligned to the reference with at least 70% matches are discarded. Note that the error rates and speed of the proposed ParticleCall algorithm and the parameter estimation scheme are affected by the parameters l and r, the particle number N_p, and the parameter estimation window length w. We ran ParticleCall with l = r ∈ {1, 2, 4}. Increasing l and r beyond l = r = 1 did not affect the performance while it significantly slowed down the algorithm. This is due to the small values of the phasing and pre-phasing probabilities, which are estimated to be p = 3.54×10^{−8} and q = 0.00335. Therefore, in the remainder of the paper, we set l = r = 1. The accuracy of base calling for different N_p is shown in Table 1. As seen there, for the original ParticleCall algorithm, N_p = 800 leads to high performance with reasonable speed. Rao-Blackwellized ParticleCall can achieve the same accuracy with fewer particles (in particular, N_p = 300); however, its effective running time is 3 times that of the original ParticleCall with the same performance. This is because the Rao-Blackwellization steps require evaluating a sum over all possible S_t^w (4³ = 64 for our choice l = r = 1), resulting in a fairly large number of basic operations needed to calculate the exact distribution of the discrete variables. Therefore, for further performance comparisons, we rely on the original ParticleCall algorithm (formalized as Algorithm 1). Table 2 shows the ParticleCall base calling error rates and parameter estimation times for different window lengths w. In the remainder of the paper, we set w = 5 as it leads to desirable performance/speed characteristics of the algorithm.
Performance comparison of different algorithms
The error rates and speed of the proposed ParticleCall algorithm are compared with those of BayesCall, naiveBayesCall, Rolexa, and Bustard. We run ParticleCall both with parameters provided by the computationally intensive MCEM algorithm and with those inferred by the PFEM parameter estimation scheme proposed in this paper. The results are reported in Table 3. Note that Rolexa generally outputs the so-called IUPAC codes, unlike all the other considered algorithms, which provide sequences of the nucleotides A, C, G, and T. To allow a comparison, we force Rolexa to output sequences of nucleotides as well. The comparison of per-cycle error rates is shown in Figure 2. It can be seen from Table 3 and Figure 2 that ParticleCall, BayesCall, and naiveBayesCall all have improved base-calling accuracy compared to Bustard. BayesCall is highly accurate but relatively slow: it requires approximately 4 hours to complete base calling for one tile of the data. naiveBayesCall significantly improves base-calling speed over BayesCall, but it does so at the expense of incurring a higher error rate. Our ParticleCall base-calling algorithm has the same accuracy as BayesCall, while being roughly 3 times faster. Figure 2 shows that both ParticleCall and BayesCall are more accurate than naiveBayesCall in the early cycles and improve over Bustard in all cycles. Note that Bustard outperforms Rolexa, which is consistent with the results in [5]. Moreover, we see from Table 3 that performing parameter estimation via the MCEM algorithm proposed in [8] requires 19 hours, while the particle filter implementation of the EM estimation scheme proposed in this paper takes only 39 minutes. As evident from Table 3, running ParticleCall with parameters obtained by the PFEM scheme leads to only a minor performance degradation compared to running it with parameters obtained by the MCEM algorithm.
Running ParticleCall base calling along with the PFEM parameter estimation scheme takes about 2 hours per tile, which is 9 times faster than the total time required by the less accurate naiveBayesCall.
Quality scores
Quality scores are used to characterize confidence in the outcome of the base-calling procedure. They are computed as part of the analysis of the acquired raw data and may be used to filter out reads of suspect quality, or to shorten the reads if the quality scores of individual bases fall below certain thresholds. They can also provide confidence information for downstream analysis, including sequence assembly and SNP and genotype calling. Frequently used are the so-called phred quality scores, which were originally developed to assess the quality of conventional Sanger sequencing and to automate large-scale sequencing projects. Phred scores are also often provided by the algorithms used for base calling in next-generation sequencing platforms. Formally, the phred score for a base ŝ_t called in cycle t is defined as

$$Q_{\mathrm{phred}}\left(\widehat{\mathbf{s}}_t\right) = -10 \log_{10} P\left(\widehat{\mathbf{s}}_t \neq \mathbf{s}_t\right).$$
Essentially, ${Q}_{\mathrm{phred}}\left({\widehat{\mathbf{s}}}_{t}\right)$ is the scaled logarithm of the error probability; higher quality scores imply a smaller probability of a base-calling error. For the proposed ParticleCall algorithm, the probability of correctly calling a base can be obtained from the posterior probability of the called base as $P\left({\widehat{\mathbf{s}}}_{t}={\mathbf{s}}_{t}\right)=P\left({\mathbf{s}}_{t}={\widehat{\mathbf{s}}}_{t}\mid \mathbf{Y}\right)$, where $\mathbf{Y}$ denotes the acquired signal.
Quality scores can be used to compare the discrimination ability of different algorithms. The discrimination score D(ε) at error tolerance ε is defined as the ratio of the number of correctly called bases having $P\left({\widehat{\mathbf{s}}}_{t}\ne {\mathbf{s}}_{t}\right)<\epsilon$ (i.e., a quality score higher than $-10\,{\mathrm{log}}_{10}\left(\epsilon \right)$) to the number of all called bases. Figure 3 compares the discrimination ability of ParticleCall, BayesCall, naiveBayesCall, and Bustard. It shows that for a reasonable error tolerance ε, ParticleCall with parameters obtained through MCEM has better discrimination ability than BayesCall, naiveBayesCall, and Bustard, while ParticleCall with parameters obtained through PFEM has discrimination ability close to that of naiveBayesCall and better than the remaining algorithms. In other words, when a small cut-off error tolerance ε is set and all the bases with quality scores below the corresponding threshold are considered invalid, ParticleCall provides the most accurate results among the considered base-calling schemes.
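The two quantities above can be sketched in a few lines of code. This is a minimal illustration of the definitions only, not ParticleCall's implementation; the function names and toy numbers are hypothetical:

```python
import math

def phred_score(p_error: float) -> float:
    """Convert a base-calling error probability into a phred quality score."""
    return -10.0 * math.log10(p_error)

def discrimination(correct_flags, error_probs, eps):
    """Discrimination score D(eps): fraction of all called bases that are
    correct AND have estimated error probability below the tolerance eps,
    i.e. quality score above -10*log10(eps)."""
    hits = sum(1 for ok, p in zip(correct_flags, error_probs) if ok and p < eps)
    return hits / len(error_probs)

# toy example: 4 called bases with known correctness and error estimates
flags = [True, True, False, True]
probs = [1e-4, 1e-2, 0.3, 1e-3]
print(phred_score(1e-3))                   # -> 30.0
print(discrimination(flags, probs, 1e-2))  # bases 1 and 4 qualify -> 0.5
```

Note that the second (correct) base is excluded at tolerance 1e-2 because its error probability does not fall strictly below the tolerance.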
Effects of improved base-calling accuracy on de novo sequence assembly
In shotgun sequencing, a long target sequence is oversampled by a library of randomly fragmented copies of the target, and the overlaps between short reads obtained by a high-throughput platform are used to assemble the target. In de novo assembly, the target is reconstructed without consulting any reference [21, 22]. Performance of assembly algorithms highly depends on the accuracy of base calling. To demonstrate the effects of base-calling accuracy on assembly, we apply the Velvet assembly algorithm [22] on reads provided by Bustard, Rolexa, naiveBayesCall, BayesCall, and ParticleCall. In particular, we randomly subsample the set of reads provided by each of the base calling algorithms to emulate 5X, 10X, 15X, and 20X coverage. Then we run Velvet on each of the subsets, and evaluate commonly used metrics that quantify the quality of the assembly procedure. Specifically, we evaluate the maximum contig length and the N50 contig length. The described procedure is repeated 200 times to obtain average values of these two quality metrics. The results are shown in Table 4. As can be seen there, ParticleCall provides the largest N50 and maximum contig length among all of the considered base calling schemes, for all of the considered coverages.
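The two assembly metrics reported in Table 4 are standard and easy to compute from a list of contig lengths. The sketch below is illustrative (the `n50` helper and toy contig set are not part of Velvet or ParticleCall):

```python
def n50(contig_lengths):
    """N50 contig length: the length L such that contigs of length >= L
    together cover at least half of the total assembled bases."""
    total = sum(contig_lengths)
    acc = 0
    for length in sorted(contig_lengths, reverse=True):
        acc += length
        if 2 * acc >= total:
            return length
    return 0

contigs = [100, 200, 300, 400, 500]  # total 1500 bases; half is 750
print(max(contigs))  # maximum contig length -> 500
print(n50(contigs))  # 500 + 400 = 900 >= 750  -> 400
```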
Conclusions
In this paper we presented ParticleCall, a particle filtering algorithm for base calling in the Illumina's sequencing-by-synthesis platform. The algorithm is developed by relying on an HMM representation of the sequencing process. Experimental results demonstrate that the ParticleCall base calling algorithm is more accurate than Bustard, Rolexa, and naiveBayesCall. It is as accurate as BayesCall while being significantly faster. Quality score analysis of the reads indicates that ParticleCall has better discrimination ability than BayesCall, naiveBayesCall, and Bustard. Moreover, a novel particle filter EM (PFEM) parameter estimation scheme, much faster than the existing Monte Carlo implementation of the EM algorithm, was proposed. When relying on the PFEM scheme, ParticleCall has near-optimal performance while needing much shorter total parameter estimation and base calling time.
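As a rough illustration of the particle filtering idea underlying ParticleCall (not the paper's sequencing model, whose state space and parameters are more involved), a minimal bootstrap particle filter for a toy linear-Gaussian state-space model might look as follows; all names and parameters here are hypothetical:

```python
import math
import random

def bootstrap_particle_filter(observations, n_particles=500, a=0.9, q=1.0, r=1.0):
    """Bootstrap particle filter for the toy state-space model
        x_t = a * x_{t-1} + N(0, q),    y_t = x_t + N(0, r).
    Returns the filtered state means E[x_t | y_1..y_t]."""
    # initialize particles from a standard normal prior
    particles = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in observations:
        # propagate each particle through the state equation
        particles = [a * x + random.gauss(0.0, math.sqrt(q)) for x in particles]
        # weight particles by the Gaussian observation likelihood
        weights = [math.exp(-((y - x) ** 2) / (2.0 * r)) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # weighted posterior mean at this step
        means.append(sum(w * x for w, x in zip(weights, particles)))
        # multinomial resampling to avoid weight degeneracy
        particles = random.choices(particles, weights=weights, k=n_particles)
    return means

print(bootstrap_particle_filter([0.5, 1.0, 1.2, 0.8]))  # one filtered mean per cycle
```

In ParticleCall the hidden state is the nucleotide incorporated in each sequencing cycle (together with continuous nuisance variables), and the propagate/weight/resample loop above is what makes the per-read cost linear in the read length.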
Authors’ contributions
Algorithms and experiments were designed by Xiaohu Shen (XS) and Haris Vikalo (HV). Algorithm code was implemented and tested by XS. The manuscript was written by XS and HV. Both authors read and approved the final manuscript.
References
 1.
Shendure J, Ji H: Next-generation DNA sequencing. Nat Biotechnol. 2008, 26: 1135. doi:10.1038/nbt1486.
 2.
Metzker M: Emerging technologies in DNA sequencing. Genome Research. 2005, 15: 1767.
 3.
Bentley D: Whole-genome re-sequencing. Curr Opin Genet Dev. 2006, 16: 545. doi:10.1016/j.gde.2006.10.009.
 4.
Nielsen R, Paul JS, Albrechtsen A, Song YS: Genotype and SNP calling from next-generation sequencing data. Nat Rev Genet. 2011, 12: 443. doi:10.1038/nrg2986.
 5.
Ledergerber C, Dessimoz C: Base-calling for next-generation sequencing platforms. Briefings in Bioinformatics. 2011, 12: 489. doi:10.1093/bib/bbq077.
 6.
Rougemont J, Amzallag A, Iseli C, Farinelli L, Xenarios I, Naef F: Probabilistic base calling of Solexa sequencing data. BMC Bioinformatics. 2008, 9: 431. doi:10.1186/1471-2105-9-431.
 7.
Erlich Y, Mitra P, Delabastide M, McCombie W, Hannon G: Alta-Cyclic: a self-optimizing base caller for next-generation sequencing. Nat Methods. 2008, 5: 679. doi:10.1038/nmeth.1230.
 8.
Kao W, Stevens K, Song Y: BayesCall: A model-based base-calling algorithm for high-throughput short-read sequencing. Genome Research. 2009, 19: 1884. doi:10.1101/gr.095299.109.
 9.
Kao W, Stevens K, Song Y: naiveBayesCall: an efficient model-based base-calling algorithm for high-throughput sequencing. Journal of Computational Biology. 2011, 18: 365. doi:10.1089/cmb.2010.0247.
 10.
Fedurco M, Romieu A, Williams S, et al: BTA, a novel reagent for DNA attachment on glass and efficient generation of solid-phase amplified DNA colonies. Nucleic Acids Res. 2006, 34 (3): e22. doi:10.1093/nar/gnj023.
 11.
Turcatti G, Romieu A, Fedurco M, et al: A new class of cleavable fluorescent nucleotides: synthesis and optimization as reversible terminators for DNA sequencing by synthesis. Nucleic Acids Res. 2008, 36 (4): e25.
 12.
Eddy S: Hidden Markov models. Current Opinion in Structural Biology. 1996, 6 (3): 361. doi:10.1016/S0959-440X(96)80056-X.
 13.
Doucet A, Wang X: Monte Carlo methods for signal processing: A review in the statistical signal processing context. IEEE Signal Processing Magazine. 2005, 22: 152.
 14.
Cappé O, Moulines E, Rydén T: Inference in Hidden Markov Models. 2005, Springer-Verlag, New York.
 15.
Pitt M, Shephard N: Filtering via simulation: Auxiliary particle filters. Journal of the American Statistical Association. 1999, 94: 590. doi:10.1080/01621459.1999.10474153.
 16.
Doucet A, Godsill S, Andrieu C: On sequential Monte Carlo sampling methods for Bayesian filtering. Statistics and Computing. 2000, 10 (3): 197. doi:10.1023/A:1008935410038.
 17.
Kim S, Shephard N, Chib S: Stochastic volatility: likelihood inference and comparison with ARCH models. The Review of Economic Studies. 1998, 65 (3): 361. doi:10.1111/1467-937X.00050.
 18.
Liu J, Chen R: Sequential Monte Carlo methods for dynamic systems. Journal of the American Statistical Association. 1998, 93: 1032. doi:10.1080/01621459.1998.10473765.
 19.
Kitagawa G: Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. Journal of Computational and Graphical Statistics. 1996, 5: 1.
 20.
Carpenter J, Clifford P, Fearnhead P: Improved particle filter for nonlinear problems. IEE Proceedings - Radar, Sonar and Navigation. 1999, 146 (1): 2-7.
 21.
Butler J, MacCallum I, Kleber M, Shlyakhter I, Belmonte M, Lander E, Nusbaum C, Jaffe D: ALLPATHS: de novo assembly of whole-genome shotgun microreads. Genome Research. 2008, 18 (5): 810. doi:10.1101/gr.7337908.
 22.
Zerbino D, Birney E: Velvet: algorithms for de novo short read assembly using de Bruijn graphs. Genome Research. 2008, 18 (5): 821. doi:10.1101/gr.074492.107.
Acknowledgements
This work was funded by the National Institutes of Health under grant 1R21HG006171-01.
Additional information
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Shen, X., Vikalo, H. ParticleCall: A particle filter for base calling in next-generation sequencing systems. BMC Bioinformatics 13, 160 (2012). https://doi.org/10.1186/1471-2105-13-160
Keywords
 Hidden Markov Model
 Particle Filter
 Parameter Estimation Scheme
 MCEM Algorithm