 Methodology article
 Open Access
Implementing EM and Viterbi algorithms for Hidden Markov Model in linear memory
Alexander Churbanov^{1} and
Stephen Winters-Hilt^{1, 2}
https://doi.org/10.1186/1471-2105-9-224
© Churbanov and Winters-Hilt; licensee BioMed Central Ltd. 2008
 Received: 25 August 2007
 Accepted: 30 April 2008
 Published: 30 April 2008
Abstract
Background
The Baum-Welch learning procedure for Hidden Markov Models (HMMs) provides a powerful tool for tailoring HMM topologies to data for use in knowledge discovery and clustering. A linear memory procedure recently proposed by Miklós, I. and Meyer, I.M. describes a memory-sparse version of the Baum-Welch algorithm with modifications to the original probabilistic table topologies to make memory use independent of sequence length (and linearly dependent on state number). The original description of the technique has some errors that we amend. We then compare the corrected implementation on a variety of data sets with conventional and checkpointing implementations.
Results
We provide a correct recurrence relation for the emission parameter estimate and extend it to parameter estimates of the Normal distribution. To accelerate estimation of the prior state probabilities, and decrease memory use, we reverse the originally proposed forward sweep. We describe different scaling strategies necessary in all real implementations of the algorithm to prevent underflow. In this paper we also describe our approach to a linear memory implementation of the Viterbi decoding algorithm (with linearity in the sequence length, while memory use is approximately independent of state number). We demonstrate the use of the linear memory implementation on an extended Duration Hidden Markov Model (DHMM) and on an HMM with a spike detection topology. Comparing the various implementations of the Baum-Welch procedure, we find that the checkpointing algorithm produces the best overall tradeoff between memory use and speed. In cases where sequence length is very large (for Baum-Welch), or state number is very large (for Viterbi), the linear memory methods outlined may offer some utility.
Conclusion
Our performance-optimized Java implementations of the Baum-Welch algorithm are available at http://logos.cs.uno.edu/~achurban. The described method and implementations will aid sequence alignment, gene structure prediction, HMM profile training, nanopore ionic flow blockade analysis and many other domains that require efficient HMM training with EM.
Keywords
 Expectation Maximization
 Viterbi Algorithm
 State Path
 Forward Probability
 Dynamic Programming Table
Background
Hidden Markov Models (HMMs) are a widely accepted modeling tool [1] used in various domains, such as speech recognition [2] and bioinformatics [3]. An HMM can be described as a stochastic finite state machine where each transition between hidden states ends with a symbol emission. The HMM can be represented as a directed graph with N states where each state can emit either a discrete character or a continuous value drawn from a Probability Density Function (PDF).
We are interested in a distributed HMM analysis of the channel current blockade signal caused by a single DNA hairpin molecule held in a nanopore detector [4, 5]. The molecules examined frequently produce toggles with stationary statistical profiles for thousands of milliseconds. With samples acquired every 20 μs, processing even a modest blockade signal of 200 ms duration (10,000 sample points) becomes problematic, mostly because of the size of the dynamic programming tables required in the conventional implementations of the HMM Baum-Welch and Viterbi decoding algorithms. Since we also model durations [6] and spike phenomena [7] by increasing the number of HMM states, conventional HMM implementations become prohibitively expensive in terms of memory use.
The Baum-Welch algorithm is an Expectation Maximization (EM) algorithm invented by Leonard E. Baum and Lloyd R. Welch, and first appears in [8]. A later refinement, Hirschberg's algorithm applied to an HMM [9], reduces the memory footprint by recursively halving the pairwise-alignment dynamic programming table for sequences of comparable size. In our application domain, the length of the observed emission sequence (in the case of nanopore ionic flow blockade analysis or gene structure prediction) is prohibitively long compared to the number of HMM states. Further, Baum-Welch training sums over all state paths rather than tracking only the most likely one, making this strategy less than optimal.
The checkpointing algorithm [10–12] implements the Baum-Welch algorithm in $O(\sqrt{T}N)$ memory and O(TNQ_{max} + T(Q + E)) processor time, where T is the length of the observed sequence, Q_{max} is the maximum HMM node out-degree, E is the number of free emission parameters and Q is the number of free transition parameters. It divides the input sequence into $\sqrt{T}$ blocks of $\sqrt{T}$ symbols each and, during the forward pass, retains the first column of the forward probability table for each block. When the reverse sweep starts, the forward values for each block are sequentially re-evaluated, beginning with their corresponding checkpoints, to update the parameter estimates.
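For illustration, the following Java sketch shows the checkpointing bookkeeping during the forward pass; the Hmm helper type and its methods are hypothetical placeholders standing in for the standard forward recursion of Table 2, not part of our released implementation.

    // Forward pass with checkpointing: only the first forward column of each of the
    // ~sqrt(T) blocks is retained, giving O(sqrt(T)*N) memory. During the backward
    // sweep each block's remaining columns are recomputed from its checkpoint.
    static double[][] forwardWithCheckpoints(double[] obs, Hmm hmm) {
        int T = obs.length;
        int blockSize = (int) Math.ceil(Math.sqrt(T));
        int numBlocks = (int) Math.ceil((double) T / blockSize);
        double[][] checkpoints = new double[numBlocks][];
        double[] alpha = hmm.initialForwardColumn(obs[0]);        // alpha_1(i) = pi_i * b_i(o_1)
        for (int t = 0; t < T; t++) {
            if (t % blockSize == 0) {
                checkpoints[t / blockSize] = alpha.clone();       // first column of this block
            }
            if (t + 1 < T) {
                alpha = hmm.nextForwardColumn(alpha, obs[t + 1]); // standard forward recursion
            }
        }
        return checkpoints;  // re-expanded one block at a time during the reverse sweep
    }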
Table 1. The computational expense of different algorithm implementations running on an HMM.

Viterbi
  Canonical:      time O(TNQ_{max}),               space O(TN)
  Checkpointing:  time O(TNQ_{max}),               space $O(\sqrt{T}N + T)$
  Linear:         time O(TNQ_{max}),               space O(T)

Baum-Welch
  Canonical:      time O(TNQ_{max} + T(Q + E)),    space O(TN)
  Checkpointing:  time O(TNQ_{max} + T(Q + E)),    space $O(\sqrt{T}N)$
  Linear:         time O(TNQ_{max}(Q + ED)),       space O(N(Q + ED))
Methods and Results
HMM definition, EM learning and Viterbi decoding
The following parameters describe the conventional HMM implementation according to [14]:

• A set of states S = {S_{1},..., S_{N}}, with q_{t} the state visited at time t,
• A set of PDFs B = {b_{1}(o),..., b_{N}(o)} describing the emission probabilities b_{j}(o_{t}) = p(o_{t} | q_{t} = S_{j}) for 1 ≤ j ≤ N, where o_{t} is the observation at timepoint t from the sequence of observations O = {o_{1},..., o_{T}},
• The state-transition probability matrix A = {a_{i,j}} for 1 ≤ i, j ≤ N, where a_{i,j} = p(q_{t+1} = S_{j} | q_{t} = S_{i}),
• The initial state distribution vector Π = {π_{1},..., π_{N}}.
Table 2. The Viterbi decoding, forward and backward procedures.

Forward procedure
• α_{t}(i) ≡ p(o_{1},..., o_{t}, q_{t} = S_{i} | λ),
• Initially α_{1}(i) = π_{i} b_{i}(o_{1}) for 1 ≤ i ≤ N,
• $\alpha_t(j) = \left[\sum_{i=1}^{N} \alpha_{t-1}(i)\,a_{i,j}\right] b_j(o_t)$ for t = 2, 3,..., T and 1 ≤ j ≤ N,
• Finally $p(O \mid \lambda) = \sum_{i=1}^{N} \alpha_T(i)$ is the sequence likelihood.

Backward procedure
• β_{t}(i) ≡ p(o_{t+1},..., o_{T} | q_{t} = S_{i}, λ),
• Initially β_{T}(i) = 1 for 1 ≤ i ≤ N,
• $\beta_t(i) = \sum_{j=1}^{N} a_{i,j}\,b_j(o_{t+1})\,\beta_{t+1}(j)$ for t = T − 1,..., 1 and 1 ≤ i ≤ N,
• Finally $p(O \mid \lambda) = \sum_{i=1}^{N} \pi_i\,b_i(o_1)\,\beta_1(i)$.

Viterbi algorithm
• Initially δ_{1}(i) = π_{i} b_{i}(o_{1}) and ψ_{1}(i) = 0 for 1 ≤ i ≤ N,
• $\delta_t(j) = \max_{1 \le i \le N}[\delta_{t-1}(i)\,a_{i,j}]\,b_j(o_t)$ and $\psi_t(j) = \arg\max_{1 \le i \le N}[\delta_{t-1}(i)\,a_{i,j}]$ for t = 2,..., T and 1 ≤ j ≤ N,
• Finally $q_T^{\ast} = \arg\max_{1 \le i \le N}[\delta_T(i)]$ and $q_t^{\ast} = \psi_{t+1}(q_{t+1}^{\ast})$ for t = T − 1,..., 1, giving the optimal path $Q^{\ast} = \{q_1^{\ast},..., q_T^{\ast}\}$.
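As a concrete reference for the recurrences in Table 2, here is a minimal Java sketch of the forward procedure for discrete emissions; the variable names are ours, and no scaling is applied, so it is only suitable for short sequences.

    // pi[i]   : initial state probabilities
    // a[i][j] : transition probabilities
    // b[i][k] : probability that state i emits symbol k
    // obs[t]  : observed symbol indices
    static double sequenceLikelihood(double[] pi, double[][] a, double[][] b, int[] obs) {
        int N = pi.length, T = obs.length;
        double[] alpha = new double[N];
        for (int i = 0; i < N; i++) alpha[i] = pi[i] * b[i][obs[0]];   // alpha_1(i)
        for (int t = 1; t < T; t++) {
            double[] next = new double[N];
            for (int j = 0; j < N; j++) {
                double sum = 0.0;
                for (int i = 0; i < N; i++) sum += alpha[i] * a[i][j]; // sum_i alpha_{t-1}(i) a_{i,j}
                next[j] = sum * b[j][obs[t]];                          // times b_j(o_t)
            }
            alpha = next;                                              // only two columns are kept
        }
        double p = 0.0;
        for (int i = 0; i < N; i++) p += alpha[i];                     // p(O | lambda)
        return p;
    }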
Table 3. The maximization step in HMM learning.

Initial probability estimate
• $\hat{\pi}_i = \gamma_1(i)$ for 1 ≤ i ≤ N.

Transition probability estimate
• $\hat{a}_{i,j} = \frac{\sum_{t=1}^{T-1} \xi_t(i,j)}{\sum_{t=1}^{T-1} \gamma_t(i)}$ for 1 ≤ i, j ≤ N.

Emission parameter estimates
• Gaussian emission: $\hat{b}_j(o) \to \mu = \frac{\sum_{t=1}^{T} o_t\,\gamma_t(j)}{\sum_{t=1}^{T} \gamma_t(j)}$ and $\hat{b}_j(o) \to \sigma^2 = \frac{\sum_{t=1}^{T} (o_t - \hat{\mu}_j)^2\,\gamma_t(j)}{\sum_{t=1}^{T} \gamma_t(j)}$ for 1 ≤ j ≤ N,
• Discrete emission: $\hat{b}_j(k) = \frac{\sum_{t=1}^{T} \delta(o_t = v_k)\,\gamma_t(j)}{\sum_{t=1}^{T} \gamma_t(j)}$ for 1 ≤ j ≤ N and 1 ≤ k ≤ K, where v_1,..., v_K is the set of possible discrete observations.
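The posterior quantities γ_{t}(i) and ξ_{t}(i, j) used above follow the standard definitions of [14]:

$\gamma_t(i) \equiv p(q_t = S_i \mid O, \lambda) = \frac{\alpha_t(i)\,\beta_t(i)}{p(O \mid \lambda)}, \qquad \xi_t(i,j) \equiv p(q_t = S_i, q_{t+1} = S_j \mid O, \lambda) = \frac{\alpha_t(i)\,a_{i,j}\,b_j(o_{t+1})\,\beta_{t+1}(j)}{p(O \mid \lambda)}.$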
Forward sweep strategy explained
p(Making transition 1 → 2 at time 1–2) = α_{1}(1) × a_{1,2} × b_{2}(o_{2}) × β_{2}(2),
p(Making transition 1 → 2 at time 2–3) = α_{2}(1) × a_{1,2} × b_{2}(o_{3}) × β_{3}(2).
In particular, to estimate the probability of transition 1 → 2 at time interval 1–2 we calculate the sum of probabilities of all possible paths that lead to state 1 at timepoint 1 (forward probability α_{1}(1)). Then we multiply this probability of being in state 1 by the transition a_{1,2} and emission b_{2}(o_{2}) probabilities.
Further multiplication by the sum of probabilities of all possible paths from state 2 at timepoint 2 to the end of the emission sequence (the backward probability β_{2}(2)) gives the expected use of this transition. The sum of these estimates at timepoints 1–2 and 2–3 is equivalent to the transition probability estimate in Table 3 (prior to normalization).
Here out(S_{i}) denotes the set of states reachable from S_{i} by a single transition.
According to [13], e_{i}(γ, t, m) is the weighted sum of probabilities of all possible state paths that emit the subsequence o_{1},..., o_{t} and finish in state m, for which state i emits observation γ at least once, where the weight of each state path is the number of γ emissions that it makes from state i.
The following algorithm updates parameters for the set of discrete symbol probability distributions.
Initialization step: e_{i}(γ, 1, m) = α_{1}(m) δ(i = m) δ(γ = o_{1}). After initialization we make the recurrence steps, where we correct the emission recurrence presented in [13] [see Additional File 1]:

$e_i(\gamma, t, m) = \alpha_t(m)\,\delta(i = m)\,\delta(\gamma = o_t) + \left[\sum_{n=1}^{N} e_i(\gamma, t-1, n)\,a_{n,m}\right] b_m(o_t)$ for t = 2,..., T and 1 ≤ m ≤ N.
The algorithm for the discrete emission parameter estimates B = {b_{1}(o),..., b_{N}(o)} takes O(NED) memory and O(TNEDQ_{max}) time. As discussed [see Subsection HMM definition, EM learning and Viterbi decoding], the forward sweep takes O(TNQ_{max}) time, where only the values of α_{t-1}(i) for 1 ≤ i ≤ N are needed to evaluate α_{t}(i), thus reducing the memory requirement of the forward algorithm to O(N). Computing e_{i}(γ, t, m) requires the O(NED) previous probabilities e_{i}(γ, t − 1, m) for 1 ≤ m ≤ N, 1 ≤ i ≤ E, 1 ≤ γ ≤ D. Recurrently updating each e_{i}(γ, t, m) element takes O(Q_{max}) summations, for a total of O(TNEDQ_{max}) time.
Theorem 1 e_{ i }(γ, t, m) is the weighted sum of probabilities of all possible state paths that emit subsequence o_{1},..., o_{ t }and finish in state m, for which state i emits observation γ at least once.
Proof
Initialization. The only way a path can, by timepoint 1, emit symbol γ from state i at least once and finish in state m is with probability α_{1}(m), in the case i = m and γ = o_{1}. Therefore the initialization conditions satisfy the definition of e_{i}(γ, t, m).
Induction. Suppose e_{i}(γ, t − 1, m) represents the correct weighted sum of probabilities of all possible state paths that emit the subsequence o_{1},..., o_{t−1} and finish in state m, for which state i emits observation γ at least once. We need to prove that the same holds for timepoint t. Following equation (1), in recurrence part (5) we score the evidence of symbol o_{t} being emitted from state i at timepoint t, which is then propagated down the trellis by recurrence part (6). The probability of such an event is α_{t}(m), i.e. the sum of probabilities of all possible state paths finishing in state m at timepoint t, under the conditions i = m and γ = o_{t}. The second part of the recurrence (6) extends the weighted paths that already carry evidence of γ emissions from state i at previous timepoints 1,..., t − 1 and finish in state n further down the trellis through the available transitions a_{n,m}. Thus the definition of e_{i}(γ, t, m) holds for timepoint t.
At the end of the recurrence we marginalize the final state m out of e_{i}(γ, T, m) to obtain the weighted sum over all state paths of the γ observations made in state i at the various timepoints, which is equivalent to the numerator of the discrete emission parameter estimate in Table 3, i.e. a weighted sum of all possible paths that score emission evidence at certain timepoints. By normalizing these scores we estimate the emission parameters.
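To make the recurrence concrete, the following Java sketch performs one timepoint of the e_{i}(γ, t, m) update as we read it from Theorem 1; the array and method names are ours, not part of the released implementation.

    // ePrev[i][g][m] holds e_i(g, t-1, m); alpha[m] holds alpha_t(m); oT is o_t.
    // Recurrence: e_i(g, t, m) = [ sum_n e_i(g, t-1, n) a[n][m] ] b[m][oT]
    //                            + alpha_t(m) * delta(i == m) * delta(g == oT)
    static double[][][] updateEmissionCounts(double[][][] ePrev, double[] alpha,
                                             double[][] a, double[][] b, int oT) {
        int E = ePrev.length, D = ePrev[0].length, N = ePrev[0][0].length;
        double[][][] eNext = new double[E][D][N];
        for (int i = 0; i < E; i++) {
            for (int g = 0; g < D; g++) {
                for (int m = 0; m < N; m++) {
                    double sum = 0.0;
                    for (int n = 0; n < N; n++) sum += ePrev[i][g][n] * a[n][m]; // propagate old evidence
                    eNext[i][g][m] = sum * b[m][oT];
                    if (i == m && g == oT) eNext[i][g][m] += alpha[m];           // new evidence at time t
                }
            }
        }
        return eNext;  // O(NED) memory; the inner sum is O(N) here, O(Q_max) with a sparse transition list
    }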
The forward sweep strategy was originally formulated in [13] for HMMs with silent Start/End states, which automatically handles the prior state probability estimates as standard transitions connecting Start to the other non-silent states. The prior transition estimates a_{Start,i} are then naturally involved in the recurrent updates of t_{i,j}(t, m), which takes an additional O(N^{2}) memory if all N non-silent states have non-zero priors, with time cost O(TN^{2}Q_{max}). To compute the prior estimates in the conventional HMM formulation we would need the backward probability at timepoint 1, which requires calculation of the entire backward table. Therefore, in the next section we present a linear memory Baum-Welch modification built around a backward sweep with scaling, which only involves calculation of α_{1}(i) for 1 ≤ i ≤ N to estimate the priors in O(N) time and O(N) memory.
Linear memory Baum-Welch using a backward sweep with scaling
The objective of the algorithm presented in this section is equivalent to that discussed previously [see Section Forward sweep strategy explained] based on forward probabilities of state occupation. However, by using the backward probabilities of state occupation we are able to estimate initial HMM state probabilities much more quickly. In the description that follows we introduce a new set of probabilities:
E_{ i }(γ, t, m) – the weighted sum of probabilities of all possible state paths that emit subsequence o_{ t },..., o_{ T }and finish in state m, for which state i emits observation γ at least once, where the weight of each state path is the number of γ emissions that it makes from state i.
T_{i, j}(t, m) – the weighted sum of probabilities of all possible state paths that emit subsequence o_{ t },..., o_{ T }and finish in state m, taking i → j transition at least once, where the weight of each state path is the number of i → j transitions that it takes.
All calculations are based on the backward probability β_{t}(i), but machine precision is insufficient to represent these values directly for long emission sequences. Therefore we scale the backward probability during the recursion.
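As an illustration, the following Java sketch performs one step of a scaled backward recursion in which each column is renormalized by its own sum; this particular choice of d_{t} is an assumption made for the sketch and need not match the scaling used in Figure 2.

    // betaTildeNext[j] holds the scaled backward value at timepoint t+1; oNext is o_{t+1}.
    // betaBar_t(i)   = sum_j a[i][j] * b[j][oNext] * betaTilde_{t+1}(j)
    // d_t            = 1 / sum_i betaBar_t(i)
    // betaTilde_t(i) = d_t * betaBar_t(i)
    static double[] scaledBackwardStep(double[] betaTildeNext, double[][] a, double[][] b, int oNext) {
        int N = betaTildeNext.length;
        double[] beta = new double[N];
        double norm = 0.0;
        for (int i = 0; i < N; i++) {
            double s = 0.0;
            for (int j = 0; j < N; j++) s += a[i][j] * b[j][oNext] * betaTildeNext[j];
            beta[i] = s;
            norm += s;
        }
        double d = 1.0 / norm;                     // scaling coefficient d_t
        for (int i = 0; i < N; i++) beta[i] *= d;  // betaTilde_t(i) = d_t * betaBar_t(i)
        return beta;
    }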
The scoring functions for discrete and continuous emissions.

Discrete emission
• Score(o_{t}, γ, i): return δ(o_{t} = γ).

Continuous Gaussian emission
• Score(o_{t}, γ, i):
  if (γ = 1) return o_{t},
  if (γ = 2) return [o_{t} − (b_{i}(o) → μ)]^{2},
  if (γ = 3) return 1.
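These scoring functions can be written as two small Java helpers (the names are ours); for Gaussian emissions the index γ selects which sufficient statistic is being accumulated, and the current mean estimate is passed in for the variance term.

    // Discrete emission: delta(o_t = gamma).
    static double scoreDiscrete(int oT, int gamma) {
        return (oT == gamma) ? 1.0 : 0.0;
    }

    // Continuous Gaussian emission: gamma = 1 accumulates the mean numerator,
    // gamma = 2 the variance numerator, gamma = 3 the occupancy normalizer.
    static double scoreGaussian(double oT, int gamma, double currentMean) {
        if (gamma == 1) return oT;
        if (gamma == 2) { double dev = oT - currentMean; return dev * dev; }
        return 1.0;
    }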
Parameter updates
Viterbi decoding in linear memory
In this section we describe results obtained with a "linear memory" modification of the Viterbi algorithm, which was originally introduced by Andrew J. Viterbi [18]. As mentioned previously, the Viterbi algorithm is a dynamic programming algorithm for finding the most likely sequence of hidden states, called the "Viterbi path", corresponding to the sequence of observed events in the context of an HMM. The Viterbi checkpointing implementation introduced in [11] divides the input sequence into $\sqrt{T}$ blocks of $\sqrt{T}$ symbols each and, during the first Viterbi pass, keeps only the first column of the δ table for each block. Reconstruction of the most probable state path starts with the last block, where the last checkpointed column is used to initialize recovery of the last $\sqrt{T}$ states of the most likely state path, and so on, until the entire decoding is reconstructed. The algorithm requires memory proportional to $O(N\sqrt{T}+T)$ and takes O(TNQ_{max}) computational time, roughly twice that of the conventional implementation because of the multiple sweeps. The additional computation is easily justified by the lower memory use, which in practice leads to a substantial speedup.
Only two columns of the δ array, which stores the maximum probability scores for each state at a given timepoint, are needed for the algorithm to run (see the recurrences in Table 2). We replace the array needed to store the dynamic programming backtrack pointers with a linked list. Our approximately linear memory implementation follows from the observation that the backtrack paths typically converge to the most likely state path and travel together to the beginning of the decoding table. Although this approximate linearity depends on the model structure and the emission sequence being decoded, and is not guaranteed, the behavior is typical for the HMM topologies we use. The possibility of using O(N log(T)) space (by writing to disk the portion of the most likely path before the coalescence point, i.e. the first state on the backtrack path where only a single candidate remains for the initial segment of the most probable state path) has only been rigorously proven for a simple symmetric two-state HMM [19].
Our particular implementation relies on the Java Garbage Collector (GC), which periodically deletes the linked list nodes allocated on the heap that are no longer referenced by the active program, as shaded in light gray in Figures 5 and 7. On multi-core machines the GC can run concurrently with the main computational thread and so does not obstruct execution of the method. In lower-level languages (such as C/C++), "smart pointers" could be used to deallocate unused links when their reference count drops to zero, which is in some ways even more efficient than Java's garbage collection.
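A minimal Java sketch of this linked-list backtracking is given below; the class and variable names are ours, they would live inside a decoder class, and log-space arithmetic (omitted here for brevity) is needed in practice to avoid underflow on long sequences.

    // Each surviving candidate keeps a reference to its predecessor; nodes that lose
    // all referents become unreachable and are reclaimed by the garbage collector.
    static final class BacktrackNode {
        final int state;
        final BacktrackNode prev;
        BacktrackNode(int state, BacktrackNode prev) { this.state = state; this.prev = prev; }
    }

    static int[] viterbiLinkedList(double[] pi, double[][] a, double[][] b, int[] obs) {
        int N = pi.length, T = obs.length;
        double[] delta = new double[N];                 // only two delta columns are ever kept
        BacktrackNode[] back = new BacktrackNode[N];
        for (int i = 0; i < N; i++) {
            delta[i] = pi[i] * b[i][obs[0]];
            back[i] = new BacktrackNode(i, null);
        }
        for (int t = 1; t < T; t++) {
            double[] nextDelta = new double[N];
            BacktrackNode[] nextBack = new BacktrackNode[N];
            for (int j = 0; j < N; j++) {
                int bestI = 0;
                double best = delta[0] * a[0][j];
                for (int i = 1; i < N; i++) {
                    double v = delta[i] * a[i][j];
                    if (v > best) { best = v; bestI = i; }
                }
                nextDelta[j] = best * b[j][obs[t]];
                nextBack[j] = new BacktrackNode(j, back[bestI]);  // extend the surviving path
            }
            delta = nextDelta;
            back = nextBack;   // unreferenced nodes become garbage-collectable
        }
        int bestEnd = 0;
        for (int i = 1; i < N; i++) if (delta[i] > delta[bestEnd]) bestEnd = i;
        int[] path = new int[T];
        BacktrackNode node = back[bestEnd];
        for (int t = T - 1; t >= 0; t--) { path[t] = node.state; node = node.prev; }
        return path;
    }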
Computational performance
We conducted experiments on the HMM topologies mentioned above [see Section Viterbi decoding in linear memory] with both artificial and real data sets, and evaluated the performance of the various implementations of the Viterbi and EM learning algorithms. We describe the performance of the Java Virtual Machine (JVM) after the HotSpot Java byte code optimizer burns in, i.e. after it optimizes both memory use and execution time within the EM cycles. The linear memory, checkpointing and conventional algorithm implementations are thereby streamlined to avoid an unfair comparison due to obvious performance bottlenecks.
In experiments with EM learning on a spike detection HMM topology, shown in Figure 6, we systematically varied the ionic flow duration from 1,000 ms to 64,000 ms. Although in Subfigures 8(d)–8(f) the time per cycle of the linear memory implementation is not as inflated in this situation, it is still many times higher than for the conventional and checkpointing algorithms. Note that the conventional and checkpointing algorithms spend almost identical time per cycle. The conventional implementation still takes the largest amount of memory, and once again checkpointing takes less memory than the linear memory implementation for long signal durations. Garbage collection for the conventional implementation starts to consume a substantial fraction of the CPU time at the maximum signal duration, which argues against using the conventional implementation for the analysis of long signals.
Theoretically, for the linear memory algorithm to run faster than the checkpointing alternative, the condition 2TNQ_{max} + T(Q + E) > TNQ_{max}(Q + ED) should hold, which reduces approximately to 2 > Q + ED, a condition unrealistic for any practical model. Thus, the linear memory algorithm will always run slower than checkpointing. The memory condition for the linear memory EM implementation to run in less space is 2N$\sqrt{T}$ > N(Q + ED), which reduces to 2$\sqrt{T}$ > Q + ED, quite realistic for sufficiently large values of T. The memory advantage is the key attribute, since efforts are underway (Z. Jiang and S. Winters-Hilt) to perform segmented linear HMM processing on a distributed platform, where the speed deficiency is offset by an M-fold speedup when using M nodes.
In both test scenarios shown in Figure 8 we see that the conventional implementation of Baum-Welch aggressively claims very large amounts of heap space, even for modestly sized problems (some implementations, such as the JAHMM package [23], allocate the probabilistic table ξ with size N^{2} × T, although we reduce this to N × T through progressive summation of the forward-backward tables), whereas both modified EM algorithm implementations have a very compact memory signature. An HMM implementation based on the forward sweep strategy with silent Start/End states runs slower and takes more memory than the proposed backward sweep strategy in the case of the DHMM topology. This is because the prior probabilities of the states are estimated as regular transitions from the Start state, substantially increasing the t_{i,j}(t, m) table size and the time required for a recurrence step.
Memory use for Viterbi decoding on the spike topology with loop sizes 6 and 11.

Ionic flow samples    Ratio of number of referenced links to sequence size
819                   1.1173
10,319                1.0084
26,233                1.0042
51,233                1.0017
101,233               1.0015
151,232               1.0007
Memory use for Viterbi decoding on an explicit DHMM with D = 60 and two aggregate states.

Observation sequence size    Ratio of number of referenced links to sequence size
1,000                        2.565
10,000                       1.134
50,000                       1.032
100,000                      1.017
200,000                      1.007
Discussion and Conclusion
We have discussed implementations of the Baum-Welch and Viterbi algorithms in linear memory. We have successfully used these implementations in the analysis of nanopore blockade signals of very long duration (more than 3,000 ms), where the main limitation becomes processing time rather than memory overflow. We are currently working on efficient distributed implementations of the linear memory algorithms to facilitate quicker, potentially "real-time", applications.
In both of the test scenarios considered, the linear memory modification of the EM algorithm takes significantly more time to execute compared to conventional and checkpointing implementations. Despite being the fastest in many realistic scenarios, the conventional implementation of the EM learning algorithm suffers from substantial memory use. The checkpointing algorithm alleviates both of these extremes, sometimes running even faster than the conventional algorithm due to lower memory management overhead. The checkpointing algorithm seems to provide an excellent tradeoff between memory use and speed. We are trying to understand if the running time of our linear memory EM algorithm implementation can be constrained in a way similar to the checkpointing algorithm. A demo program featuring the canonical, checkpointing and linear memory implementations of the EM learning and the Viterbi decoding algorithms is available on our web site (see Availability & requirements section below).
Availability & requirements
University of New Orleans bioinformatics group: http://logos.cs.uno.edu/~achurban
Appendices
Appendix A – Proofs of scaling relationships
The scaling steps in Figure 2 need additional rigorous justification. Our proofs are partially inspired by recurrences presented in [24] with further clarifications.
Theorem 2 $\tilde{\beta}_t(m) = d_t\,\overline{\beta}_t(m)$
Here we observe the useful relationships $D_t = d_t D_{t+1}$ and $\overline{\beta}_t(m) = D_{t+1}\,\beta_t(m)$, which are necessary in the follow-up proofs. □
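Although Figure 2 is not reproduced here, under the natural reading, with d_{t} the per-column scaling coefficient and $D_t = \prod_{\tau=t}^{T} d_\tau$ (so that $\tilde{\beta}_t(m) = D_t\,\beta_t(m)$), these relationships follow directly:

$\overline{\beta}_t(m) = \sum_{j=1}^{N} a_{m,j}\,b_j(o_{t+1})\,\tilde{\beta}_{t+1}(j) = D_{t+1}\,\beta_t(m), \qquad \tilde{\beta}_t(m) = d_t\,\overline{\beta}_t(m) = d_t\,D_{t+1}\,\beta_t(m) = D_t\,\beta_t(m).$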
Theorem 3 $\tilde{T}_{i,j}(t, m) = d_t\,\overline{T}_{i,j}(t, m)$
Theorem 4 $\tilde{E}_i(\gamma, t, m) = d_t\,\overline{E}_i(\gamma, t, m)$
Declarations
Acknowledgements
The authors are grateful for the numerous constructive suggestions made by the anonymous reviewers.
Authors’ Affiliations
References
1. Bilmes J: What HMMs can do. Tech. rep., University of Washington, Seattle; 2002.
2. Rabiner L, Juang BH: Fundamentals of speech recognition. Prentice Hall; 1993.
3. Durbin R, Eddy S, Krogh A, Mitchison G: Biological sequence analysis. Cambridge University Press; 1998.
4. Vercoutere W, Winters-Hilt S, Olsen H, Deamer D, Haussler D, Akeson M: Rapid discrimination among individual DNA hairpin molecules at single-nucleotide resolution using an ion channel. Nature Biotechnology 2001, 19:248–252. 10.1038/85696
5. Vercoutere W, Winters-Hilt S, DeGuzman V, Deamer D, Ridino S, Rodgers J, Olsen H, Marziali A, Akeson M: Discrimination among individual Watson-Crick base pairs at the termini of single DNA hairpin molecules. Nucleic Acids Research 2003, 31(4):1311–1318. 10.1093/nar/gkg218
6. Churbanov A, Baribault C, Winters-Hilt S: Duration learning for analysis of nanopore ionic current blockades. BMC Bioinformatics 2007. [MCBIOS IV supplemental proceedings]
7. Winters-Hilt S, Landry M, Akeson M, Tanase M, Amin I, Coombs A, Morales E, Millet J, Baribault C, Sendamangalam S: Cheminformatics methods for novel nanopore analysis of HIV DNA termini. BMC Bioinformatics 2006, 7(Suppl 2):S22. 10.1186/1471-2105-7-S2-S22
8. Baum L, Petrie T, Soules G, Weiss N: A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. Ann Math Statist 1970, 41:164–171. 10.1214/aoms/1177697196
9. Hirschberg D: A linear-space algorithm for computing maximal common subsequences. Communications of the ACM 1975, 18:341–343. 10.1145/360825.360861
10. Grice J, Hughey R, Speck D: Reduced space sequence alignment. CABIOS 1997, 13:45–53.
11. Tarnas C, Hughey R: Reduced space hidden Markov model training. Bioinformatics 1998, 14(5):401–406. 10.1093/bioinformatics/14.5.401
12. Wheeler R, Hughey R: Optimizing reduced-space sequence analysis. Bioinformatics 2000, 16(12):1082–1090. 10.1093/bioinformatics/16.12.1082
13. Miklós I, Meyer I: A linear memory algorithm for Baum-Welch training. BMC Bioinformatics 2005, 6:231. 10.1186/1471-2105-6-231
14. Rabiner L: A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE 1989, 77:257–286. 10.1109/5.18626
15. Bilmes J: A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models. Tech. Rep. TR-97-021, International Computer Science Institute; 1998.
16. Wierstra D, Wiering M: Master's Thesis, chap. 5. IDSIA; 2004:36–40. [A New Implementation of Hierarchical Hidden Markov Models]
17. Kingsbury N, Rayner P: Digital filtering using logarithmic arithmetic. Electronics Letters 1971, 7(2):56–58. 10.1049/el:19710039
18. Viterbi A: Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Transactions on Information Theory 1967, 13(2):260–269. 10.1109/TIT.1967.1054010
19. Šrámek R, Brejová B, Vinař T: On-line Viterbi algorithm and its relationship to random walks. Tech. rep., Comenius and Cornell Universities; 2007. [http://arxiv.org/abs/0704.0062v1]
20. Ferguson J: Variable duration models for speech. In Proc. Symposium on the Application of Hidden Markov Models to Text and Speech. Edited by Ferguson J. 1980:143–179.
21. Mitchell C, Helzerman R, Jamieson L, Harper M: A parallel implementation of a hidden Markov model with duration modeling for speech recognition. Digital Signal Processing, A Review Journal 1995, 5:298–306. [http://citeseer.ist.psu.edu/mitchell95parallel.html]
22. Burge C, Karlin S: Prediction of complete gene structures in human genomic DNA. Journal of Molecular Biology 1997, 268:78–94. 10.1006/jmbi.1997.0951
23. François JM: Jahmm – Java Hidden Markov Model (HMM). [http://www.run.montefiore.ulg.ac.be/~francois/software/jahmm/]
24. Rahimi A: An erratum for "A tutorial on Hidden Markov Models and selected applications in speech recognition". 2000. [http://alumni.media.mit.edu/~rahimi/rabiner/rabinererrata/rabinererrata.html]
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.