 Research article
 Open Access
A linear memory algorithm for Baum-Welch training
 István Miklós†^{1} and
 Irmtraud M Meyer†^{2}
https://doi.org/10.1186/1471-2105-6-231
© Miklós and Meyer; licensee BioMed Central Ltd. 2005
 Received: 24 June 2005
 Accepted: 19 September 2005
 Published: 19 September 2005
Abstract
Background:
Baum-Welch training is an expectation-maximisation algorithm for training the emission and transition probabilities of hidden Markov models in a fully automated way. It can be employed as long as a training set of annotated sequences is known, and provides a rigorous way to derive parameter values which are guaranteed to be at least locally optimal. For complex hidden Markov models such as pair hidden Markov models and very long training sequences, even the most efficient algorithms for Baum-Welch training are currently too memory-consuming. This has so far effectively prevented the automatic parameter training of hidden Markov models that are currently used for biological sequence analyses.
Results:
We introduce the first linear space algorithm for Baum-Welch training. For a hidden Markov model with M states, T free transition and E free emission parameters, and an input sequence of length L, our new algorithm requires O(M) memory and O(LMT_{max}(T + E)) time for one Baum-Welch iteration, where T_{max} is the maximum number of states that any state is connected to. The most memory-efficient algorithm until now was the checkpointing algorithm with O(log(L)M) memory and O(log(L)LMT_{max}) time requirement. Our novel algorithm thus renders the memory requirement completely independent of the length of the training sequences. More generally, for an n-hidden Markov model and n input sequences of length L, the memory requirement of O(log(L)L^{n-1}M) is reduced to O(L^{n-1}M) while the running time is changed from O(log(L)L^{n}MT_{max} + L^{n}(T + E)) to O(L^{n}MT_{max}(T + E)).
An added advantage of our new algorithm is that a reduced time requirement can be traded for an increased memory requirement and vice versa, such that for any c ∈ {1, ..., (T + E)}, a time requirement of L^{n}MT_{max}c incurs a memory requirement of L^{n-1}M(T + E - c).
Conclusion:
For the large class of hidden Markov models used for example in gene prediction, whose number of states does not scale with the length of the input sequence, our novel algorithm can thus be both faster and more memory-efficient than any of the existing algorithms.
Keywords
 Hidden Markov Model
 Markov Chain Monte Carlo
 Memory Requirement
 Input Sequence
 Training Sequence
Background
Hidden Markov Models (HMMs) are widely used in Bioinformatics [1], for example, in protein sequence alignment, protein family annotation [2, 3] and gene finding [4, 5].
When an HMM consisting of M states is used to annotate an input sequence, its predictions crucially depend on its set of emission probabilities ε and transition probabilities 𝒯. This is for example the case for the state path with the highest overall probability, the so-called optimal state path or Viterbi path [6], which is often reported as the predicted annotation of the input sequence.
When a new HMM is designed, it is usually quite easy to define its states and the transitions between them, as these typically closely reflect the underlying problem. However, it can be quite difficult to assign values to its emission probabilities ε and transition probabilities 𝒯. Ideally, they should be set up such that the model's predictions would perfectly reproduce the known annotation of a large and diverse set of input sequences. Two cases can be distinguished:
(1) If we know the optimal state paths that correspond to the known annotation of the training sequences, the transition and emission probabilities can simply be set to the respective count frequencies within these optimal state paths, i.e. to their maximum likelihood estimators. If the training set is small or not diverse enough, pseudo-counts have to be added to avoid overfitting.
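As a concrete illustration of case (1), the following Python sketch derives smoothed maximum-likelihood transition probabilities from a set of known state paths. The toy paths, the function name and the pseudo-count scheme (smoothing only over transitions actually observed, rather than over the full model topology) are our own illustrative assumptions, not taken from the paper.

```python
from collections import defaultdict

def ml_transition_probs(state_paths, pseudocount=1.0):
    """Maximum-likelihood transition probabilities from known state paths,
    smoothed by adding a pseudo-count to every observed transition."""
    counts = defaultdict(lambda: defaultdict(float))
    for path in state_paths:
        # count every i -> j transition along each annotated state path
        for i, j in zip(path, path[1:]):
            counts[i][j] += 1.0
    probs = {}
    for i, row in counts.items():
        total = sum(row.values()) + pseudocount * len(row)
        probs[i] = {j: (row[j] + pseudocount) / total for j in row}
    return probs
```

With pseudocount=0 this yields the plain count frequencies; a positive pseudo-count pulls the estimates away from zero, which is the overfitting protection mentioned above.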
(2) If we do not know the optimal state paths of the training sequences, either because their annotation is unknown or because their annotation does not unambiguously define a state path in the HMM, we can employ an expectation-maximisation (EM) algorithm [7] such as the Baum-Welch algorithm [8] to derive the emission and transition probabilities in an iterative procedure, which increases the overall log likelihood of the model in each iteration and which is guaranteed to converge at least to a local maximum. As in case (1), pseudo-counts or Dirichlet priors can be added to avoid overfitting when the training set is small or not diverse enough.
Methods and results
Baum-Welch training
The Baum-Welch algorithm defines an iterative procedure in which the emission and transition probabilities in iteration n + 1 are derived from the number of times each transition and emission is expected to be used when analysing the training sequences with the set of emission and transition probabilities of the previous iteration n.
Let $T_{i,j}^{n}$ denote the transition probability for going from state i to state j in iteration n, $E_{i}^{n}(y)$ the emission probability for emitting letter y in state i in iteration n, P(X) the probability of sequence X, and x_k the k-th letter in input sequence X, which has length L. We also define X_k as the sequence of letters from the beginning of sequence X up to and including sequence position k, (x_1, ..., x_k), and X^k as the sequence of letters from sequence position k + 1 to the end of the sequence, (x_{k+1}, ..., x_L).
For a given set of training sequences, S, the expectation-maximisation update replacing transition probability $T_{i,j}^{n}$ by $T_{i,j}^{n+1}$ can then be written as
$T_{i,j}^{n+1}=\frac{\sum_{X\in S} t_{i,j}^{n}(X)/P(X)}{\sum_{j'}\sum_{X\in S} t_{i,j'}^{n}(X)/P(X)}, \quad\text{where}\quad t_{i,j}^{n}(X):=\sum_{k=1}^{L-1} f^{n}(X_{k},i)\,T_{i,j}^{n}\,E_{j}^{n}(x_{k+1})\,b^{n}(X^{k+1},j) \qquad (1)$
The superfix n on the quantities on the right hand side indicates that they are based on the transition probabilities $T_{i,j}^{n}$ and emission probabilities $E_{j}^{n}(x_{k+1})$ of iteration n. f(X_k, i) := P(x_1, ..., x_k, s(x_k) = i) is the so-called forward probability of the sequence up to and including sequence position k, requiring that sequence letter x_k is read by state i. It is equal to the sum of probabilities of all state paths that finish in state i at sequence position k. The probability of sequence X, P(X), is therefore equal to f(X_L, End). b(X^k, i) := P(x_{k+1}, ..., x_L | s(x_k) = i) is the so-called backward probability of the sequence from sequence position k + 1 to the end, given that the letter at sequence position k, x_k, is read by state i. It is equal to the sum of probabilities of all state paths that start in state i at sequence position k.
For a given set of training sequences, S, the expectation-maximisation update replacing emission probability $E_{i}^{n}(y)$ by $E_{i}^{n+1}(y)$ is
$E_{i}^{n+1}(y)=\frac{\sum_{X\in S} e_{i}^{n}(y,X)/P(X)}{\sum_{y'}\sum_{X\in S} e_{i}^{n}(y',X)/P(X)}, \quad\text{where}\quad e_{i}^{n}(y,X):=\sum_{k=1}^{L}\delta_{x_{k},y}\, f^{n}(X_{k},i)\, b^{n}(X^{k},i) \qquad (2)$
δ is the usual delta function, with $\delta_{x_{k},y} = 1$ if x_k = y and $\delta_{x_{k},y} = 0$ if x_k ≠ y. As before, the superfix n on the quantities on the right hand side indicates that they are calculated using the transition probabilities $T_{i,j}^{n}$ and emission probabilities $E_{j}^{n}(x_{k+1})$ of iteration n.
The forward and backward probabilities f^{ n }(X_{ k }, i) and b^{ n }(X^{ k }, i) can be calculated using the forward and backward algorithms [1] which are introduced in the following section.
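Assuming full forward and backward tables are available, one complete Baum-Welch update per Equations 1 and 2 can be sketched as follows on a hypothetical two-state HMM with silent Start and End states. All state names, probabilities and the sequence are illustrative assumptions of ours, not taken from the paper.

```python
# A hypothetical two-state HMM (states A, B; silent Start/End); all numbers
# here are illustrative assumptions, not parameters from the paper.
STATES = ["A", "B"]
T_START = {"A": 0.5, "B": 0.5}
TRANS = {"A": {"A": 0.3, "B": 0.4}, "B": {"A": 0.4, "B": 0.3}}
T_END = {"A": 0.3, "B": 0.3}
EMIT = {"A": {"a": 0.7, "b": 0.3}, "B": {"a": 0.2, "b": 0.8}}

def tables(x, t_start, trans, t_end, emit):
    """Full forward and backward tables plus P(X); O(LM) memory."""
    L = len(x)
    f = [{} for _ in range(L)]
    b = [{} for _ in range(L)]
    for i in STATES:
        f[0][i] = t_start[i] * emit[i][x[0]]
        b[L - 1][i] = t_end[i]              # only the transition into End remains
    for k in range(1, L):
        for i in STATES:
            f[k][i] = sum(f[k - 1][j] * trans[j][i] for j in STATES) * emit[i][x[k]]
    for k in range(L - 2, -1, -1):
        for i in STATES:
            b[k][i] = sum(trans[i][j] * emit[j][x[k + 1]] * b[k + 1][j] for j in STATES)
    return f, b, sum(f[L - 1][i] * t_end[i] for i in STATES)

def baum_welch_step(x, t_start, trans, t_end, emit):
    """One EM update (Equations 1 and 2) for a single training sequence."""
    f, b, p = tables(x, t_start, trans, t_end, emit)
    L = len(x)
    # Expected transition counts, including those from Start and into End.
    t_cnt = {i: {j: sum(f[k][i] * trans[i][j] * emit[j][x[k + 1]] * b[k + 1][j]
                        for k in range(L - 1)) for j in STATES} for i in STATES}
    start_cnt = {j: t_start[j] * emit[j][x[0]] * b[0][j] for j in STATES}
    end_cnt = {i: f[L - 1][i] * t_end[i] for i in STATES}
    # Expected emission counts.
    e_cnt = {i: {y: sum(f[k][i] * b[k][i] for k in range(L) if x[k] == y)
                 for y in "ab"} for i in STATES}
    # Normalise the expected counts to obtain the iteration-(n+1) parameters.
    new_start = {j: start_cnt[j] / sum(start_cnt.values()) for j in STATES}
    new_trans, new_end = {}, {}
    for i in STATES:
        z = sum(t_cnt[i].values()) + end_cnt[i]
        new_trans[i] = {j: t_cnt[i][j] / z for j in STATES}
        new_end[i] = end_cnt[i] / z
    new_emit = {i: {y: e_cnt[i][y] / sum(e_cnt[i].values()) for y in "ab"}
                for i in STATES}
    return new_start, new_trans, new_end, new_emit, p
```

For a single training sequence, the division by P(X) in Equations 1 and 2 cancels in the normalisation, which is why it does not appear explicitly above; the EM guarantee is that P(X) does not decrease from one iteration to the next.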
Baum-Welch training using the forward and backward algorithm
The forward algorithm proposes a procedure for calculating the forward probabilities f(X_{ k }, i) in an iterative way. f(X_{ k }, i) is the sum of probabilities of all state paths that finish in state i at sequence position k.
The recursion starts with the initialisation
$f(X_{0},i)=\begin{cases}1 & i = Start\\ 0 & i \neq Start\end{cases}$
where Start is the number of the start state in the HMM. The recursion proceeds towards higher sequence positions
$f({X}_{k+1},i)=\sum _{j=1}^{M}f({X}_{k},j){T}_{j,i}{E}_{i}({x}_{k+1})$
and terminates with
$P(X)=P({X}_{L})=f({X}_{L},End)=\sum _{j=1}^{M}f({X}_{L},j){T}_{j,End}$
where End is the number of the end state in the HMM. The recursion can be implemented as a dynamic programming procedure which works its way through a twodimensional matrix, starting at the start of the sequence in the Start state and finishing at the end of the sequence in the End state of the HMM.
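As a sketch, the forward recursion above can be written down directly for a hypothetical two-state HMM (all state names and probabilities here are illustrative assumptions, not from the paper); a brute-force sum over all M^L state paths serves as an independent check of P(X):

```python
from itertools import product

# A hypothetical two-state HMM (states A, B; silent Start/End);
# the numbers are illustrative only.
STATES = ["A", "B"]
T_START = {"A": 0.5, "B": 0.5}
TRANS = {"A": {"A": 0.3, "B": 0.4}, "B": {"A": 0.4, "B": 0.3}}
T_END = {"A": 0.3, "B": 0.3}
EMIT = {"A": {"a": 0.7, "b": 0.3}, "B": {"a": 0.2, "b": 0.8}}

def forward_prob(x):
    """Forward algorithm: initialisation, recursion and termination as above."""
    L = len(x)
    f = [{} for _ in range(L)]
    for i in STATES:
        f[0][i] = T_START[i] * EMIT[i][x[0]]                  # leave Start
    for k in range(1, L):
        for i in STATES:
            f[k][i] = sum(f[k - 1][j] * TRANS[j][i] for j in STATES) * EMIT[i][x[k]]
    return f, sum(f[L - 1][j] * T_END[j] for j in STATES)     # enter End

def brute_force_prob(x):
    """P(X) by explicit enumeration of all M^L state paths (exponential time)."""
    total = 0.0
    for path in product(STATES, repeat=len(x)):
        p = T_START[path[0]] * EMIT[path[0]][x[0]]
        for k in range(1, len(x)):
            p *= TRANS[path[k - 1]][path[k]] * EMIT[path[k]][x[k]]
        total += p * T_END[path[-1]]
    return total
```

The dynamic programming recursion and the exponential enumeration agree on P(X), which is exactly the "sum over all state paths" interpretation given above.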
The backward algorithm calculates the backward probabilities b(X^k, i) in a similar iterative way. b(X^k, i) is the sum of probabilities of all state paths that start in state i at sequence position k. As opposed to the forward algorithm, the backward algorithm starts at the end of the sequence in the End state and finishes at the start of the sequence in the Start state of the HMM.
The backward algorithm starts with the initialisation
$b(X^{L},i)=T_{i,End}$
and continues towards lower sequence positions with the recursion
$b(X^{k},i)=\sum_{j=1}^{M} T_{i,j}\,E_{j}(x_{k+1})\,b(X^{k+1},j)$
and terminates with
$P(X)=b(X^{0},Start)=\sum_{j=1}^{M} T_{Start,j}\,E_{j}(x_{1})\,b(X^{1},j)$
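A corresponding Python sketch of the backward pass, on the same hypothetical two-state model used before, follows. We use the convention in which b(X^k, i) already includes the eventual transition into the End state, which is the convention required for the products f·b in Equations 1 and 2 to reconstruct P(X); all model values are illustrative.

```python
# Same hypothetical two-state model as in the forward sketch (illustrative).
STATES = ["A", "B"]
T_START = {"A": 0.5, "B": 0.5}
TRANS = {"A": {"A": 0.3, "B": 0.4}, "B": {"A": 0.4, "B": 0.3}}
T_END = {"A": 0.3, "B": 0.3}
EMIT = {"A": {"a": 0.7, "b": 0.3}, "B": {"a": 0.2, "b": 0.8}}

def backward_prob(x):
    """Backward pass; b[i] holds b(X^k, i) for the current position k."""
    L = len(x)
    b = {i: T_END[i] for i in STATES}        # b(X^L, i): only End remains
    for k in range(L - 1, 0, -1):            # positions k = L-1, ..., 1
        # the comprehension reads the old slice b and builds b(X^k, .)
        b = {i: sum(TRANS[i][j] * EMIT[j][x[k]] * b[j] for j in STATES)
             for i in STATES}
    return sum(T_START[j] * EMIT[j][x[0]] * b[j] for j in STATES)
```

For this toy model one can verify by hand that P("a") = 0.5·0.7·0.3 + 0.5·0.2·0.3 = 0.135, and that the backward pass agrees with the forward pass on longer sequences.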
As can be seen in the recursion steps of the forward and backward algorithms described above, the calculation of f(X_{k+1}, i) requires at most T_max previously calculated elements f(X_k, j) for j ∈ {1, ..., M}. T_max is the maximum number of states that any state of the model is connected to. Likewise, the calculation of b(X^k, i) refers to at most T_max elements b(X^{k+1}, j) for j ∈ {1, ..., M}.
In order to continue the calculation of the forward and backward values f(X_k, i) and b(X^k, i) for all states i ∈ {1, ..., M} along the entire sequence, we thus only have to memorise M elements at a time.
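Keeping only the current slice of M forward values, P(X) can thus be computed in O(M) memory, as the following sketch (same hypothetical toy model as before) shows:

```python
# Same hypothetical two-state model as before (illustrative values).
STATES = ["A", "B"]
T_START = {"A": 0.5, "B": 0.5}
TRANS = {"A": {"A": 0.3, "B": 0.4}, "B": {"A": 0.4, "B": 0.3}}
T_END = {"A": 0.3, "B": 0.3}
EMIT = {"A": {"a": 0.7, "b": 0.3}, "B": {"a": 0.2, "b": 0.8}}

def forward_linear_memory(x):
    """P(X) keeping only one slice of M forward values at any time."""
    f = {i: T_START[i] * EMIT[i][x[0]] for i in STATES}
    for ch in x[1:]:
        # the comprehension reads the old slice f and builds the next slice
        f = {i: sum(f[j] * TRANS[j][i] for j in STATES) * EMIT[i][ch]
             for i in STATES}
    return sum(f[j] * T_END[j] for j in STATES)
```

This is sufficient for P(X), but, as discussed below, not for the Baum-Welch update, which needs forward and backward values at the same position simultaneously.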
Baum-Welch training using the checkpointing algorithm
Until now, the checkpointing algorithm [11–13] was the most efficient way to perform Baum-Welch training. The basic idea of the checkpointing algorithm is to perform the forward and backward algorithms while memorising the forward and backward values in only $O(\sqrt{L})$ columns along the sequence dimension of the dynamic programming table. The checkpointing algorithm starts with the forward algorithm, retaining only the forward values in $O(\sqrt{L})$ columns. These columns partition the dynamic programming table into $O(\sqrt{L})$ separate fields. The checkpointing algorithm then invokes the backward algorithm, which memorises the backward values in a strip of length $O(\sqrt{L})$ as it moves along the sequence. When the backward calculation reaches the boundary of one field, the pre-calculated forward values of the neighbouring checkpointing column are used to calculate the corresponding forward values for that field. The forward and backward values of that field are then available at the same time and are used to calculate the corresponding values for the EM update.
The checkpointing algorithm can be further refined by using embedded checkpoints. With an embedding level of k, the forward values in $O(L^{1/k})$ columns of the initial calculation are memorised, thus defining $O(L/L^{1/k}) = O(L^{(k-1)/k})$-long fields. When the memory-sparse calculation of the backward values reaches the field in question, the forward algorithm is invoked again to calculate the forward values for $O(L^{1/k})$ additional columns within that field. This procedure is iterated k times within the thus emerging fields. In the end, for each of the $O(L^{1/k})$-long k-subfields, the forward and backward values are simultaneously available and are used to calculate the corresponding values for the EM update. The time complexity of this algorithm for one Baum-Welch iteration and a given training sequence of length L is O(kLMT_max + L(T + E)), since k forward passes and one backward pass have to be performed, and the memory complexity is $O(kL^{1/k}M)$. For k = log(L), this amounts to a time requirement of O(log(L)LMT_max + L(T + E)) and a memory requirement of O(log(L)M), since $L^{1/\log(L)} = e$.
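The core checkpointing idea, storing every s-th forward column with s ≈ √L and recomputing any other column from its nearest checkpoint, can be sketched as follows (same hypothetical toy model as before; function names are ours):

```python
import math

# Same hypothetical two-state model as before (illustrative values).
STATES = ["A", "B"]
T_START = {"A": 0.5, "B": 0.5}
TRANS = {"A": {"A": 0.3, "B": 0.4}, "B": {"A": 0.4, "B": 0.3}}
EMIT = {"A": {"a": 0.7, "b": 0.3}, "B": {"a": 0.2, "b": 0.8}}

def next_column(f, ch):
    """One forward recursion step from slice f over the next letter ch."""
    return {i: sum(f[j] * TRANS[j][i] for j in STATES) * EMIT[i][ch]
            for i in STATES}

def checkpointed_forward(x):
    """Forward pass storing only every s-th column, s ~ sqrt(L)."""
    s = max(1, math.isqrt(len(x)))
    f = {i: T_START[i] * EMIT[i][x[0]] for i in STATES}
    checkpoints = {0: f}
    for k in range(1, len(x)):
        f = next_column(f, x[k])
        if k % s == 0:
            checkpoints[k] = f
    return checkpoints, s

def recover_column(x, k, checkpoints, s):
    """Rebuild forward column k from the nearest checkpoint at or before it."""
    c = (k // s) * s
    f = checkpoints[c]
    for m in range(c + 1, k + 1):
        f = next_column(f, x[m])
    return f
```

During the backward sweep, `recover_column` supplies the forward values of the current field, so forward and backward values meet without ever storing the full L-column table.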
Baum-Welch training using the new algorithm
It is not trivial to see that the quantities $T_{i,j}^{n+1}$ and $E_{i}^{n+1}(y)$ of Equations 1 and 2 can be calculated in an even more memory-sparse way, as both the forward and the corresponding backward probabilities are needed at the same time in order to calculate the terms $f^{n}(X_{k},i)\,T_{i,j}^{n}\,E_{j}^{n}(x_{k+1})\,b^{n}(X^{k+1},j)$ in $t_{i,j}^{n}(X)$ and $\delta_{x_{k},y}\,f^{n}(X_{k},i)\,b^{n}(X^{k},i)$ in $e_{i}^{n}(y,X)$ of Equations 1 and 2. A calculation of these quantities for each sequence position using a memory-sparse implementation (that would memorise only M values at a time) for both the forward and backward algorithm would require L times more time, i.e. significantly more time. Also, an algorithm along the lines of the Hirschberg algorithm [9, 10] cannot be applied, as we cannot halve the dynamic programming table after the first recursion.
We here propose a new algorithm to calculate the quantities $T_{i,j}^{n+1}$ and $E_{i}^{n+1}(y)$ which are required for Baum-Welch training. Our algorithm requires O(M) memory and O(LMT_max(T + E)) time rather than O(log(L)M) memory and O(log(L)LMT_max + L(T + E)) time.
The trick for coming up with a memory-efficient algorithm is to realise that (i) $t_{i,j}^{n}(X)$ and $e_{i}^{n}(y,X)$ in Equations 1 and 2 can be interpreted as weighted sums of probabilities of state paths that satisfy certain constraints, and that (ii) the weight of each state path is equal to the number of times that the constraint is fulfilled.
For example, ${t}_{i,j}^{n}(X)$ in the numerator in Equation 1 is the weighted sum of probabilities of state paths for sequence X that contain at least one i → j transition, and the weight of each such state path in the sum is the number of times this transition occurs in the state path.
We now show how $t_{i,j}^{n}(X)$ in Equation 1 can be calculated in O(M) memory and O(LMT_max) time. As the superfix n is only there to remind us that the calculation of $t_{i,j}^{n}(X)$ is based on the transition and emission probabilities of iteration n, and as this index does not change within the calculation, we discard it for simplicity's sake in the following.
Let t_{i, j}(X_{ k }, l) denote the weighted sum of probabilities of state paths that finish in state l at sequence position k of sequence X and that contain at least one i → j transition, where the weight for each state path is equal to its number of i → j transitions.
Theorem 1: The following algorithm calculates t_{i, j}(X) in O(M) memory and O(LMT_{ max }) time. t_{i, j}(X) is the weighted sum of probabilities of all state paths for sequence X that have at least one i → j transition, where the weight for each state path is equal to its number of i → j transitions.
The algorithm starts with the initialisation
$f(X_{0},m)=\begin{cases}1 & m = Start\\ 0 & m \neq Start\end{cases},\qquad t_{i,j}(X_{0},m)=0$
and proceeds via the following recursion
$f(X_{k+1},m)=\sum_{n=1}^{M} f(X_{k},n)\,T_{n,m}\,E_{m}(x_{k+1}),\qquad t_{i,j}(X_{k+1},m)=\begin{cases}\sum_{n=1}^{M} t_{i,j}(X_{k},n)\,T_{n,m}\,E_{m}(x_{k+1}) & m \neq j\\ f(X_{k},i)\,T_{i,m}\,E_{m}(x_{k+1})+\sum_{n=1}^{M} t_{i,j}(X_{k},n)\,T_{n,m}\,E_{m}(x_{k+1}) & m = j\end{cases} \qquad (3)$
and finishes with
$P(X)=f(X_{L},End)=\sum_{n=1}^{M} f(X_{L},n)\,T_{n,End},\qquad t_{i,j}(X)=t_{i,j}(X_{L},End)=\begin{cases}\sum_{n=1}^{M} t_{i,j}(X_{L},n)\,T_{n,End} & End \neq j\\ f(X_{L},i)\,T_{i,End}+\sum_{n=1}^{M} t_{i,j}(X_{L},n)\,T_{n,End} & End = j\end{cases} \qquad (4)$
(1) It is obvious that the recursion requires only O(M) memory, as the calculation of all f(X_{k+1}, m) values with m ∈ {1, ..., M} requires only access to the M previous f(X_k, n) values with n ∈ {1, ..., M}. Likewise, the calculations of all t_{i,j}(X_{k+1}, m) values with m ∈ {1, ..., M} refer only to the M elements t_{i,j}(X_k, n) with n ∈ {1, ..., M}. We therefore have to remember only a thin "slice" of t_{i,j} and f values at sequence position k in order to be able to calculate the t_{i,j} and f values for the next sequence position k + 1. The time requirement to calculate t_{i,j} is O(LMT_max): for every sequence position and for every state in the HMM, we have to sum at most T_max terms in order to calculate the f and the t_{i,j} values.
(2) The f(X_k, m) values are identical to the previously defined forward probabilities and are calculated in the same way as in the forward algorithm.
(3) We now prove by induction that t_{i,j}(X_k, l) is equal to the weighted sum of probabilities of state paths that have at least one i → j transition and that finish at sequence position k in state l, the weight of each state path being equal to its number of i → j transitions.
(i) case m = j:
$t_{i,j}(X_{k+1},m)=f(X_{k},i)\,T_{i,j}\,E_{j}(x_{k+1})+\sum_{n=1}^{M} t_{i,j}(X_{k},n)\,T_{n,j}\,E_{j}(x_{k+1}) \qquad (5)$
(ii) case m ≠ j:
$t_{i,j}(X_{k+1},m)=\sum_{n=1}^{M} t_{i,j}(X_{k},n)\,T_{n,m}\,E_{m}(x_{k+1}) \qquad (6)$
In each case, the expression on the right hand side is the weighted sum of probabilities of state paths that finish at sequence position k + 1 in state m and contain at least one i → j transition.
We have therefore shown that if Equation 3 is true for sequence position k, it is also true for sequence position k + 1. This concludes the proof of theorem 1. □
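Theorem 1 translates directly into a single sweep over the sequence. The sketch below (same hypothetical two-state model as in the earlier sketches; names and numbers are illustrative) carries the f-slice and the t-slice side by side, and checks the result against the weighted-path-sum interpretation by explicit enumeration:

```python
from itertools import product

# Same hypothetical two-state model as before (illustrative values).
STATES = ["A", "B"]
T_START = {"A": 0.5, "B": 0.5}
TRANS = {"A": {"A": 0.3, "B": 0.4}, "B": {"A": 0.4, "B": 0.3}}
T_END = {"A": 0.3, "B": 0.3}
EMIT = {"A": {"a": 0.7, "b": 0.3}, "B": {"a": 0.2, "b": 0.8}}

def t_linear_memory(x, i, j):
    """Theorem 1: compute t_{i,j}(X) and P(X) in one sweep with O(M) memory."""
    f = {m: T_START[m] * EMIT[m][x[0]] for m in STATES}
    t = {m: 0.0 for m in STATES}                        # t_{i,j}(X_1, m) = 0
    for k in range(1, len(x)):
        new_f = {m: sum(f[n] * TRANS[n][m] for n in STATES) * EMIT[m][x[k]]
                 for m in STATES}
        new_t = {m: sum(t[n] * TRANS[n][m] for n in STATES) * EMIT[m][x[k]]
                 for m in STATES}
        new_t[j] += f[i] * TRANS[i][j] * EMIT[j][x[k]]  # extra term for m = j
        f, t = new_f, new_t
    return (sum(t[n] * T_END[n] for n in STATES),       # t_{i,j}(X), Equation 4
            sum(f[n] * T_END[n] for n in STATES))       # P(X) as a by-product

def t_brute_force(x, i, j):
    """Check: each path contributes P(path) times its number of i->j transitions."""
    total = 0.0
    for path in product(STATES, repeat=len(x)):
        p, count = T_START[path[0]] * EMIT[path[0]][x[0]], 0
        for k in range(1, len(x)):
            p *= TRANS[path[k - 1]][path[k]] * EMIT[path[k]][x[k]]
            count += (path[k - 1] == i and path[k] == j)
        total += p * T_END[path[-1]] * count
    return total
```

Note that the sweep never stores more than two slices of M values, and that P(X) falls out of the same pass at no extra cost.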
It is easy to show that e_i(y, X) in Equation 2 can also be calculated in O(M) memory and O(LMT_max) time, in a similar way as t_{i,j}(X). Let e_i(y, X_k, l) denote the weighted sum of probabilities of state paths that finish at sequence position k in state l and in which state i reads letter y at least once, the weight of each state path being equal to the number of times state i reads letter y. As in the calculation of t_{i,j}(X), we again omit the superfix n, as the calculation of e_i(y, X) is again entirely based on the transition and emission probabilities of iteration n.
Theorem 2: e_{ i }(y, X) can be calculated in O(M) memory and O(LMT_{ max }) time using the following algorithm. e_{ i }(y, X) is the weighted sum of probabilities of state paths for sequence X that read letter y in state i at least once, the weight of each state path being equal to the number of times letter y is read by state i.
Initialisation step:
$f(X_{0},m)=\begin{cases}1 & m = Start\\ 0 & m \neq Start\end{cases},\qquad e_{i}(y,X_{0},m)=0$
Recursion:
$f(X_{k+1},m)=\sum_{n=1}^{M} f(X_{k},n)\,T_{n,m}\,E_{m}(x_{k+1}),\qquad e_{i}(y,X_{k+1},m)=\begin{cases}\sum_{n=1}^{M} e_{i}(y,X_{k},n)\,T_{n,m}\,E_{m}(x_{k+1}) & m \neq i \text{ or } x_{k+1} \neq y\\ \sum_{n=1}^{M}\left(f(X_{k},n)+e_{i}(y,X_{k},n)\right)T_{n,m}\,E_{m}(x_{k+1}) & m = i \text{ and } x_{k+1} = y\end{cases}$
Termination step:
$P(X)=f(X_{L},End)=\sum_{n=1}^{M} f(X_{L},n)\,T_{n,End},\qquad e_{i}(y,X)=e_{i}(y,X_{L},End)=\sum_{n=1}^{M} e_{i}(y,X_{L},n)\,T_{n,End} \qquad (7)$
Proof: The proof is strictly analogous to the proof of theorem 1.
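The Theorem 2 recursion admits the same one-sweep sketch as Theorem 1 (same hypothetical toy model; names and numbers are illustrative). In the m = i, x_{k+1} = y branch, the added term is the full forward value f(X_{k+1}, i), because every path reaching state i at that position reads y exactly once more:

```python
from itertools import product

# Same hypothetical two-state model as before (illustrative values).
STATES = ["A", "B"]
T_START = {"A": 0.5, "B": 0.5}
TRANS = {"A": {"A": 0.3, "B": 0.4}, "B": {"A": 0.4, "B": 0.3}}
T_END = {"A": 0.3, "B": 0.3}
EMIT = {"A": {"a": 0.7, "b": 0.3}, "B": {"a": 0.2, "b": 0.8}}

def e_linear_memory(x, i, y):
    """Theorem 2: compute e_i(y, X) in one sweep with O(M) memory."""
    f = {m: T_START[m] * EMIT[m][x[0]] for m in STATES}
    e = {m: 0.0 for m in STATES}
    if x[0] == y:
        e[i] = f[i]                       # state i reads y at position 1
    for k in range(1, len(x)):
        new_f = {m: sum(f[n] * TRANS[n][m] for n in STATES) * EMIT[m][x[k]]
                 for m in STATES}
        new_e = {m: sum(e[n] * TRANS[n][m] for n in STATES) * EMIT[m][x[k]]
                 for m in STATES}
        if x[k] == y:
            new_e[i] += new_f[i]          # every path now in i reads y once more
        f, e = new_f, new_e
    return sum(e[n] * T_END[n] for n in STATES)

def e_brute_force(x, i, y):
    """Check: each path contributes P(path) times the number of y's read by i."""
    total = 0.0
    for path in product(STATES, repeat=len(x)):
        p = T_START[path[0]] * EMIT[path[0]][x[0]]
        for k in range(1, len(x)):
            p *= TRANS[path[k - 1]][path[k]] * EMIT[path[k]][x[k]]
        count = sum(1 for k in range(len(x)) if path[k] == i and x[k] == y)
        total += p * T_END[path[-1]] * count
    return total
```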
The above theorems have shown that t_{i,j}(X) and e_i(y, X) can each be calculated in O(M) memory and O(LMT_max) time. As there are T transition parameters and E emission parameters to be calculated in each Baum-Welch iteration, and as these T + E values can be calculated independently, the time and memory requirements for one iteration and a set of training sequences whose sum of sequence lengths is L are, using our new algorithm:
- O(M) memory and O(LMT_max(T + E)) time, if all parameter estimates are calculated consecutively,
- O(M(T + E)) memory and O(LMT_max) time, if all parameter estimates are calculated in parallel,
- more generally, O(Mc) memory and O(LMT_max(T + E - c)) time for any c ∈ {1, ..., (T + E)}, if c of the T + E parameters are calculated in parallel.
Note that the calculation of P(X) is a by-product of each t_{i,j}(X) and each e_i(y, X) calculation, see Equations 4 and 7, and that T is equal to the number of free transition parameters in the HMM, which is usually smaller than the number of transition probabilities. Likewise, E is the number of free emission parameters in the HMM, which may differ from the number of emission probabilities when the probabilities are parametrised.
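The parallel end of the trade-off can be sketched by keeping one t-slice per counter while sharing a single forward slice across all of them (same hypothetical toy model; the function name is ours). A convenient internal check is that the t_{i,j}(X) values summed over all pairs must equal (L - 1)·P(X), since every state path of length L contains exactly L - 1 transitions between emitting states:

```python
# Same hypothetical two-state model as before (illustrative values).
STATES = ["A", "B"]
T_START = {"A": 0.5, "B": 0.5}
TRANS = {"A": {"A": 0.3, "B": 0.4}, "B": {"A": 0.4, "B": 0.3}}
T_END = {"A": 0.3, "B": 0.3}
EMIT = {"A": {"a": 0.7, "b": 0.3}, "B": {"a": 0.2, "b": 0.8}}

def all_transition_counts(x):
    """One sweep maintaining a t-slice for every (i, j) pair in parallel:
    more memory (one slice per counter), but only a single pass over X."""
    f = {m: T_START[m] * EMIT[m][x[0]] for m in STATES}
    t = {(i, j): {m: 0.0 for m in STATES} for i in STATES for j in STATES}
    for k in range(1, len(x)):
        new_f = {m: sum(f[n] * TRANS[n][m] for n in STATES) * EMIT[m][x[k]]
                 for m in STATES}
        for (i, j) in list(t):
            sl = t[(i, j)]
            new_sl = {m: sum(sl[n] * TRANS[n][m] for n in STATES) * EMIT[m][x[k]]
                      for m in STATES}
            new_sl[j] += f[i] * TRANS[i][j] * EMIT[j][x[k]]
            t[(i, j)] = new_sl
        f = new_f                                   # one shared forward slice
    p = sum(f[n] * T_END[n] for n in STATES)
    counts = {pair: sum(sl[n] * T_END[n] for n in STATES) for pair, sl in t.items()}
    return counts, p
```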
Discussion and Conclusion
We propose the first linear-memory algorithm for Baum-Welch training. For a hidden Markov model with M states, T free transition and E free emission parameters, and an input sequence of length L, our new algorithm requires O(M) memory and O(LMT_max(T + E)) time for one Baum-Welch iteration, as opposed to O(log(L)M) memory and O(log(L)LMT_max + L(T + E)) time using the checkpointing algorithm [11–13], where T_max is the maximum number of states that any state is connected to. Our algorithm can be generalised in a straightforward way to pair-HMMs and, more generally, to n-HMMs that analyse n input sequences at a time. In the n-HMM case, our algorithm reduces the requirements from O(log(L)L^{n-1}M) memory and O(log(L)L^n MT_max + L^n(T + E)) time to O(L^{n-1}M) memory and O(L^n MT_max(T + E)) time. An added advantage of our new algorithm is that a reduced time requirement can be traded for an increased memory requirement and vice versa, such that for any c ∈ {1, ..., (T + E)}, a time requirement of L^n MT_max c incurs a memory requirement of L^{n-1}M(T + E - c). For HMMs, our novel algorithm renders the memory requirement completely independent of the sequence length. Generally, for n-HMMs and all T + E parameters being estimated consecutively, our novel algorithm reduces the memory requirement by a factor of log(L) and the time requirement by a factor of log(L)/(T + E) + 1/(MT_max). For all hidden Markov models whose number of states does not depend on the length of the input sequence, this amounts to a significantly reduced memory requirement and – in cases where the number of free parameters of the model, T + E, is smaller than the logarithm of the sequence length – even to a reduced time requirement.
For example, for an HMM that is used to predict human genes, the training sequences have a mean length of at least 2.7·10^4 bp, which is the average length of a human gene [14]. Using our new algorithm, the memory requirement for Baum-Welch training is reduced by a factor of about 28 ≈ e · ln(2.7·10^4) with respect to the most memory-sparse version of the checkpointing algorithm.
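The quoted factor of about 28 can be checked directly; with embedding level k = log(L), the checkpointing algorithm keeps roughly k·L^{1/k} = e·ln(L) columns, versus a single slice for the new algorithm (a quick numeric sanity check, not part of the paper):

```python
import math

# Checkpointing with k = log(L) stores about e * ln(L) columns; the new
# algorithm stores one slice, so the memory ratio is roughly e * ln(L).
L = 2.7e4                       # average human gene length in bp [14]
factor = math.e * math.log(L)   # math.log is the natural logarithm
```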
Baum-Welch training is only guaranteed to converge to a local optimum. Other optimisation techniques have been developed in order to find better optima. One of the most successful methods is simulated annealing (SA) [1, 15]. SA is essentially a Markov chain Monte Carlo (MCMC) method in which the target distribution is sequentially changed such that the distribution eventually gets trapped in a local optimum. One can give proposal steps a higher probability as they approach locally better points. This can increase the performance of the MCMC method, especially in higher dimensional spaces [16]. One could base the candidate distribution on the expectations, such that proposals are more likely to be made near the EM updates (calculated with our algorithm). There is no need to update all the parameters in one MCMC step; modifying a random subset of parameters also yields an irreducible chain. This last feature makes SA significantly faster than Baum-Welch updates, as we need to calculate expectations only for a few parameters at a time. In that way, our algorithm could be used for highly efficient parameter training: using our algorithm to calculate the EM updates in only linear space, and using SA instead of the Baum-Welch algorithm for fast parameter space exploration.
Typical biological sequence analyses these days often involve complicated hidden Markov models such as pair-HMMs, or long input sequences, and we hope that our novel algorithm will make Baum-Welch parameter training an appealing and practicable option.
Other commonly employed methods in computer science and Bioinformatics are stochastic context-free grammars (SCFGs), which need O(L^2 M) memory to analyse an input sequence of length L with a grammar having M non-terminal symbols [1]. For a special type of SCFGs, known as covariance models in Bioinformatics, M is comparable to L, hence the memory requirement is O(L^3). This has recently been reduced to O(L^2 log(L)) using a divide-and-conquer technique [17], which is the SCFG analogue of the Hirschberg algorithm for HMMs [9]. However, as the states of SCFGs can generally impose long-range correlations between any pair of sequence positions, it seems that our algorithm cannot be applied to SCFGs in the general case.
Acknowledgements
The authors would like to thank one referee for the excellent comments. I.M. is supported by a Békésy György postdoctoral fellowship. Both authors wish to thank Nick Goldman for inviting I.M. to Cambridge.
References
 1. Durbin R, Eddy S, Krogh A, Mitchison G: Biological sequence analysis. Cambridge University Press; 1998.
 2. Krogh A, Brown M, Mian IS, Sjölander K, Haussler D: Hidden Markov models in biology: Applications to protein modelling. J Mol Biol 1994, 235:1501–1531. doi:10.1006/jmbi.1994.1104
 3. Eddy S: HMMER: Profile hidden Markov models for biological sequence analysis. 2001. [http://hmmer.wustl.edu/]
 4. Meyer IM, Durbin R: Comparative ab initio prediction of gene structures using pair HMMs. Bioinformatics 2002, 18(10):1309–1318. doi:10.1093/bioinformatics/18.10.1309
 5. Meyer IM, Durbin R: Gene structure conservation aids similarity based gene prediction. Nucleic Acids Research 2004, 32(2):776–783. doi:10.1093/nar/gkh211
 6. Viterbi A: Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Trans Infor Theor 1967, 260–269. doi:10.1109/TIT.1967.1054010
 7. Dempster AP, Laird NM, Rubin DB: Maximum likelihood from incomplete data via the EM algorithm. J Roy Stat Soc B 1977, 39:1–38.
 8. Baum LE: An equality and associated maximization technique in statistical estimation for probabilistic functions of Markov processes. Inequalities 1972, 3:1–8.
 9. Hirschberg DS: A linear space algorithm for computing maximal common subsequences. Commun ACM 1975, 18:341–343. doi:10.1145/360825.360861
 10. Myers EW, Miller W: Optimal alignments in linear space. CABIOS 1988, 4:11–17.
 11. Grice JA, Hughey R, Speck D: Reduced space sequence alignment. CABIOS 1997, 13:45–53.
 12. Tarnas C, Hughey R: Reduced space hidden Markov model training. Bioinformatics 1998, 14(5):401–406. doi:10.1093/bioinformatics/14.5.401
 13. Wheeler R, Hughey R: Optimizing reduced-space sequence analysis. Bioinformatics 2000, 16(12):1082–1090. doi:10.1093/bioinformatics/16.12.1082
 14. International Human Genome Sequencing Consortium: Initial sequencing and analysis of the human genome. Nature 2001, 409:860–921. doi:10.1038/35057062
 15. Kirkpatrick S, Gelatt CD Jr, Vecchi MP: Optimization by Simulated Annealing. Science 1983, 220:671–680.
 16. Roberts GO, Rosenthal JS: Optimal scaling of discrete approximations to Langevin diffusions. J R Statist Soc B 1998, 60:255–268. doi:10.1111/1467-9868.00123
 17. Eddy S: A memory-efficient dynamic programming algorithm for optimal alignment of a sequence to an RNA secondary structure. BMC Bioinformatics 2002, 3:18. doi:10.1186/1471-2105-3-18
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.