Kullback-Leibler divergence is the log likelihood for the word counts
Kullback-Leibler (KL) divergence has a natural probability-theory interpretation as a limiting case of the multinomial distribution. In particular, it was used in this way in the context of alignment-free sequence comparison in [12].
Under this model one assumes that the counted n-mers are drawn from a pool q, with the frequency of each n-mer being q_{i}, the index i numbering all possible n-mers and running from 1 to 4^{n}. In other words, the model assumes that the words in a sequence are independent, and that the probability of appearance of a particular word at a given position is position independent. The probability of appearance of word i at a given position is q_{i}, i = 1,…,4^{n}. Under these assumptions the probability of obtaining the n-mer count vector c is given by the multinomial distribution^{2}:
P\left(\mathbf{c}\mid \mathbf{q}\right)=\frac{L!}{{c}_{1}!\cdots {c}_{{4}^{n}}!}\prod _{i=1}^{{4}^{n}}{q}_{i}^{{c}_{i}}.
(1)
We denote {\sum}_{i}{c}_{i}=L, the total number of words in a sequence. For sufficiently large counts one can use Stirling's approximation,
c!\approx \sqrt{2\pi c}{\left(\frac{c}{e}\right)}^{c},\quad c\gg 1;
which yields
P\left(\mathbf{c}\mid \mathbf{q}\right)\approx \sqrt{2\pi L}\prod _{i=1}^{{4}^{n}}\frac{1}{\sqrt{2\pi {c}_{i}}}{\left(\frac{L{q}_{i}}{{c}_{i}}\right)}^{{c}_{i}}.
(2)
Denote the normalized counts as
{p}_{i}=\frac{{c}_{i}}{L};
consequently, the log of the probability is
logP\left(\mathbf{p}\mid \mathbf{q}\right)\approx L\sum _{i=1}^{{4}^{n}}{p}_{i}log\frac{{q}_{i}}{{p}_{i}}=-L{D}_{\text{KL}}\left(\mathbf{p}\,\|\,\mathbf{q}\right).
(3)
The KL divergence between the frequency distributions p and q is:
{D}_{\text{KL}}\left(\mathbf{p}\,\|\,\mathbf{q}\right)=\sum _{i=1}^{{4}^{n}}{p}_{i}log\frac{{p}_{i}}{{q}_{i}}.
(4)
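For concreteness, Equations (3) and (4) can be transcribed into a few lines of Python (a minimal sketch using numpy; the function names are ours, not from any published implementation):

```python
import numpy as np

def kl_divergence(p, q):
    """Eq. (4): D_KL(p || q) = sum_i p_i log(p_i / q_i); terms with
    p_i = 0 contribute zero."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def log_multinomial_approx(counts, q):
    """Eq. (3): log P(c | q) ~ -L * D_KL(p || q), with p = c / L."""
    c = np.asarray(counts, dtype=float)
    L = c.sum()
    return -L * kl_divergence(c / L, q)
```

The divergence is zero exactly when the observed frequencies match the pool, so the log likelihood of Equation (3) is maximized (at zero) by a perfectly matching centroid.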
When the difference between p_{i} and q_{i} is small, this probability distribution reduces to the multivariate normal distribution^{3},
\begin{array}{ll}logP\left(\mathbf{c}\mid \mathbf{q}\right)& =L\sum _{i=1}^{{4}^{n}}{p}_{i}log\left(1+\frac{{q}_{i}-{p}_{i}}{{p}_{i}}\right)\\ & \approx L\sum _{i=1}^{{4}^{n}}{p}_{i}\left(\frac{{q}_{i}-{p}_{i}}{{p}_{i}}-\frac{1}{2}\frac{{\left({q}_{i}-{p}_{i}\right)}^{2}}{{p}_{i}^{2}}\right)\\ & =-\sum _{i=1}^{{4}^{n}}\frac{{\left({c}_{i}-L{q}_{i}\right)}^{2}}{2L{q}_{i}}.\end{array}
(5)
We have used the Taylor expansion for the natural logarithm:
log(1+x)=x-\frac{1}{2}{x}^{2}+O\left({x}^{3}\right),
(6)
dropping the cubic and higher order terms. The linear term vanishes because both p and q are normalized to unit sum.
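The reduction above can be checked numerically. The following script (our own illustration, assuming a uniform pool q over four words) compares −L·D_KL from Equation (3) with the quadratic form of Equation (5):

```python
import numpy as np

# Count vector close to the pool frequencies: L = 1000, q uniform over 4 words.
c = np.array([250.0, 260.0, 240.0, 250.0])
q = np.array([0.25, 0.25, 0.25, 0.25])
L = c.sum()
p = c / L

# Left-hand side of Eq. (5): log P ~ -L * D_KL(p || q), from Eq. (3).
log_p = -L * np.sum(p * np.log(p / q))

# Right-hand side of Eq. (5): the Gaussian quadratic form.
quadratic = -np.sum((c - L * q) ** 2 / (2 * L * q))

# The two agree to within the dropped cubic terms (~1e-4 for this example).
print(abs(log_p - quadratic))
```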
The interpretation of formula (3) in the context of clustering is as follows. When there are several candidate pools q^{α} (“centroids”), the KL divergence D_{KL}(p ‖ q^{α}) gives, up to the factor −L, the log likelihood for a sequence with count vector c to be attributed to centroid α. Therefore the maximum likelihood estimate of the centroid assignment is obtained by minimizing the KL divergence. We employ this relation within the framework of expectation maximization (EM) clustering.
Expectation maximization clustering
Clustering is the problem of partitioning a dataset of N points into K subsets (clusters) such that the similarity between the points within each subset is maximized, and the similarity between different subsets is minimized. The measure of similarity can vary depending on the data. Generally the clustering problem is computationally hard; however, there exist heuristic clustering algorithms that run in polynomial time. The most common clustering approaches are hierarchical clustering, k-means-type (centroid-based) clustering [13] and density-based clustering [14]. Each of these approaches possesses its own advantages and disadvantages.
Hierarchical clustering does not require one to specify the number of clusters a priori. Instead it produces a linkage tree, and the structure of the tree (in particular, branch lengths) determines the optimal number of clusters. However, the complexity of hierarchical clustering is at least \mathcal{O}\left({N}^{2}\right). Density-based algorithms (e.g., DBSCAN [14]) can find clusters of arbitrary shape as well as identify outliers, and they do not require prior specification of the number of clusters; the run time is \mathcal{O}(NlogN). Centroid-based approaches (k-means, expectation maximization) have a linear run time; prior specification of the number of clusters is required, and the results depend on the initialization of the algorithm.
In the present work we focus on a centroid-based technique, for the following reasons. First, there exists a natural likelihood function for the word counts, which allows one to perform EM clustering. Moreover, the space of word counts possesses a natural notion of a centroid: for a set of sequences belonging to the same cluster, one adds up all the word counts within them, and the resulting frequencies yield the cluster centroid. Second, linear run time is critical for large datasets (in particular, HTS data).
EM is a generalization of the k-means algorithm. The number of clusters K needs to be specified in advance. For the execution of the algorithm on N sequences one needs the following variables: centroids q^{α}, α = 1,…,K; and assignments (“latent data”) z^{a}, a = 1,…,N. The algorithm consists of two steps repeated iteratively until convergence.

1. Expectation step: given the current centroids q^{α}, compute the new values of z^{a} so that the log likelihood \mathcal{L} is maximized.

2. Maximization step: given the current assignments z^{a}, compute the new values of q^{α} so that the log likelihood \mathcal{L} is maximized.
This procedure guarantees that the log likelihood is non-decreasing at each step. Note that Equation (3) implies that the log likelihood is bounded from above by zero. These two facts imply that the algorithm converges^{4}. In terms of the variables q^{α} and z^{a} the log likelihood is
\mathcal{L}=-\sum _{a=1}^{N}{L}_{a}{D}_{\text{KL}}\left({\mathbf{p}}^{a}\,\|\,{\mathbf{q}}^{{z}^{a}}\right)\equiv -\sum _{a=1}^{N}{L}_{a}\sum _{i=1}^{{4}^{n}}{p}_{i}^{a}log\frac{{p}_{i}^{a}}{{q}_{i}^{{z}^{a}}}.
(7)
We denote the total number of words in the a’th sequence as L_{a}. Consequently, the expectation step reassigns each point to its closest centroid:
{z}^{a}=\underset{\alpha}{\text{arg min}}\,{D}_{\text{KL}}\left({\mathbf{p}}^{a}\,\|\,{\mathbf{q}}^{\alpha}\right).
(8)
Centroids are updated during the maximization step as follows:
{q}_{i}^{\alpha}=\frac{{\sum}_{a=1}^{N}{\delta}_{{z}^{a},\alpha}{c}_{i}^{a}}{{\sum}_{a=1}^{N}{\sum}_{j=1}^{{4}^{n}}{\delta}_{{z}^{a},\alpha}{c}_{j}^{a}}.
(9)
Here we have introduced the Kronecker delta symbol:
{\delta}_{\alpha \beta}=\left\{\begin{array}{ll}1,& \alpha =\beta \\ 0,& \alpha \ne \beta \end{array}\right.
(10)
This prescription exactly corresponds to the natural notion of a centroid: one adds up all the word counts within a cluster to obtain the total count vector and normalizes this vector. A detailed derivation of Equation (9) is presented in Appendix 1.
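The two steps can be sketched as follows (a minimal illustration with numpy; the function name, the explicit init argument and the pseudocount eps are our own choices rather than part of the published method):

```python
import numpy as np

def hard_em_cluster(counts, K, init=None, n_iter=100, seed=0, eps=1e-9):
    """Hard EM clustering of word count vectors, Eqs. (8)-(9).

    counts: (N, M) array of raw word counts, one row per sequence.
    Returns (assignments z, centroid frequency matrix q)."""
    counts = np.asarray(counts, dtype=float)
    N, M = counts.shape
    if init is None:
        init = np.random.default_rng(seed).choice(N, size=K, replace=False)
    q = counts[np.asarray(init)] + eps          # pseudocount avoids log(0)
    q /= q.sum(axis=1, keepdims=True)
    p = counts / counts.sum(axis=1, keepdims=True)
    z = -np.ones(N, dtype=int)
    for _ in range(n_iter):
        # Expectation step, Eq. (8): D_KL(p^a || q^alpha) for all pairs.
        with np.errstate(divide="ignore", invalid="ignore"):
            terms = np.where(p[:, None, :] > 0,
                             p[:, None, :] * np.log(p[:, None, :] / q[None, :, :]),
                             0.0)
        z_new = terms.sum(axis=2).argmin(axis=1)
        if np.array_equal(z_new, z):            # assignments stable: converged
            break
        z = z_new
        # Maximization step, Eq. (9): pool the word counts of each cluster.
        for k in range(K):
            total = counts[z == k].sum(axis=0) + eps
            q[k] = total / total.sum()
    return z, q
```

With two well-separated word-composition groups, the sketch separates them in one or two iterations.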
The EM algorithm depends on initialization: depending on the initial centroid assignment, the algorithm may converge to a partitioning that is only locally optimal. One way to minimize the impact of the random initialization is to perform clustering several times using different initializations. This results in several partitionings, of which the one that maximizes the likelihood function is chosen. In the framework of k-means clustering, selecting the partitioning with the minimal distortion leads to such maximization. Distortion is the sum of the intra-cluster variances over all clusters. Using the KL divergence as a likelihood function, one arrives at a modified definition of distortion:
\mathcal{D}=\sum _{a}{L}_{a}{D}_{\text{KL}}\left({\mathbf{p}}^{a}\,\|\,{\mathbf{q}}^{{z}^{a}}\right).
(11)
Note that in the limit when the likelihood function reduces to the Gaussian one, our EM algorithm reduces to Gaussian mixture EM. In this case, in light of formula (5), our definition of distortion reduces to the regular one.
Alternative distance (pseudolikelihood) functions
We also explore some other distance functions, such as d_{2} and {d}_{2}^{\ast} [6, 15, 16]. We are not aware of a direct probabilistic interpretation of these as likelihood functions. Nevertheless, they are distances; i.e., they can serve as a measure of the degree of dissimilarity between two sequences. One can then operate in terms of a distortion function as a measure of the quality of the partitioning. In the case of EM or k-means clustering, the distortion equals the negative log likelihood. If one can prove that the analogs of both the expectation and maximization steps lead to a decrease of distortion, this provides the basis for the convergence of the clustering algorithm.
d_{2} distance
The d_{2} distance between two vectors is defined as 1 − cos θ, where θ is the angle between these vectors:
{d}_{2}(\mathbf{c},\mathbf{q})=1-\frac{\mathbf{c}\cdot \mathbf{q}}{\parallel \mathbf{c}\parallel \,\parallel \mathbf{q}\parallel}.
(12)
Here ∥v∥ denotes the norm of the vector v:
\parallel \mathbf{v}\parallel =\sqrt{\sum _{i=1}^{{4}^{n}}{v}_{i}^{2}},
(13)
and the dot denotes the dot product:
\mathbf{c}\cdot \mathbf{q}=\sum _{i=1}^{{4}^{n}}{c}_{i}{q}_{i}.
(14)
One can define the distortion function as
\mathcal{D}=\sum _{a=1}^{N}{d}_{2}({\mathbf{c}}^{a},{\mathbf{q}}^{{z}^{a}})=N-\sum _{a=1}^{N}\frac{{\mathbf{c}}^{a}\cdot {\mathbf{q}}^{{z}^{a}}}{\parallel {\mathbf{c}}^{a}\parallel \,\parallel {\mathbf{q}}^{{z}^{a}}\parallel}.
(15)
In the context of d_{2} distance it is natural to normalize the word counts for centroids and individual sequences so that they have a unit norm: ∥p∥ = ∥q∥ = 1.
The EM algorithm can be generalized to use the d_{2} distance as follows. During the expectation step one assigns each sequence to the closest (in terms of the d_{2} distance) centroid. During the maximization step one updates the centroids as follows:
{\mathbf{q}}^{\alpha}=\frac{{\sum}_{a=1}^{N}{\delta}_{{z}^{a}\alpha}{\mathbf{p}}^{a}}{\parallel {\sum}_{a=1}^{N}{\delta}_{{z}^{a}\alpha}{\mathbf{p}}^{a}\parallel}.
(16)
We assume that the word counts for individual sequences are normalized so that ∥p^{a}∥ = 1. Equation (16) is derived in Appendix 1. This procedure ensures that at each step the distortion is non-increasing. The distortion is bounded from below by zero. These two facts ensure the convergence of the algorithm. Equations (12) and (16) imply that the value of the d_{2} distance and the updated positions of the centroids depend only on the normalized word counts. Consequently, the algorithm makes no distinction between short and long sequences.
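A minimal sketch of the d_{2} distance of Equation (12) and the centroid update of Equation (16) (function names are ours; the handling of an empty cluster is our own choice):

```python
import numpy as np

def d2_distance(u, v):
    """Eq. (12): d_2(u, v) = 1 - cosine of the angle between u and v."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    return 1.0 - float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def d2_centroid_update(p, z, K):
    """Maximization step of Eq. (16): sum the unit-norm count vectors of
    the members of each cluster and renormalize the sum to unit length.

    p: (N, M) matrix of unit-norm word count vectors; z: assignments."""
    p = np.asarray(p, dtype=float)
    q = np.zeros((K, p.shape[1]))
    for k in range(K):
        s = p[np.asarray(z) == k].sum(axis=0)
        norm = np.linalg.norm(s)
        q[k] = s / norm if norm > 0 else s      # empty cluster stays at zero
    return q
```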
{d}_{2}^{\ast} distance
The {D}_{2}^{\ast} distance was introduced in [15, 16]. Its modification with a suitable normalization for comparing short sequences was introduced in [6] and called {d}_{2}^{\ast}. This distance involves the computation of expected word frequencies using a zero-order Markov model and standardization of the observed word counts. In the context of centroid-based clustering it can be formulated as follows.

1. For a given cluster, count the frequencies of single nucleotides (1-mers) within the union of all sequences in the cluster.

2. Compute the vector Q of expected n-mer frequencies using the zero-order Markov model. Under this prescription the expected frequency of an n-mer is the product of the frequencies of its individual characters.

3. For a vector of raw counts x define the corresponding standardized vector \stackrel{~}{\mathbf{x}} as
\stackrel{~}{{x}_{i}}=\frac{{x}_{i}-{Q}_{i}{\sum}_{j=1}^{{4}^{n}}{x}_{j}}{\sqrt{{Q}_{i}{\sum}_{j=1}^{{4}^{n}}{x}_{j}}}.
(17)

4. Denote the word count vector of all sequences within a cluster as x; then the distance between the centroid of this cluster and a sequence with word count vector c is
{d}_{2}^{\ast}(\mathbf{c},\mathbf{x})=\frac{1}{2}\left(1-\frac{\stackrel{~}{\mathbf{c}}\cdot \stackrel{~}{\mathbf{x}}}{\parallel \stackrel{~}{\mathbf{c}}\parallel \,\parallel \stackrel{~}{\mathbf{x}}\parallel}\right).
(18)
Update of the sequences’ assignment to clusters is the analog of the expectation step. Update of the expected frequencies is the analog of the maximization step. A priori it is not obvious how to define the distortion so that both the expectation and maximization steps lead to a guaranteed decrease in distortion. We leave this question, as well as the proof of convergence, outside the scope of the current work.
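The standardization of Equation (17) and the distance of Equation (18) can be sketched as follows (a simplified illustration in which the expected frequency vector Q is passed in precomputed; function names are ours):

```python
import numpy as np

def standardize(x, Q):
    """Eq. (17): center the counts by the expected counts Q_i * L and
    divide by the square root of the expected counts."""
    x = np.asarray(x, dtype=float)
    expected = np.asarray(Q, dtype=float) * x.sum()
    return (x - expected) / np.sqrt(expected)

def d2_star(c, x, Q):
    """Eq. (18): half of (1 - cosine similarity) between the
    standardized count vectors."""
    ct, xt = standardize(c, Q), standardize(x, Q)
    cos = float(ct @ xt) / (np.linalg.norm(ct) * np.linalg.norm(xt))
    return 0.5 * (1.0 - cos)
```

The factor 1/2 maps the distance into [0, 1], since the cosine of standardized vectors can be negative.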
χ^{2} distance
The standardization procedure defined in Equation (17) is inspired by the Poisson distribution, for which the mean equals the variance. Following a similar logic, we introduce the χ^{2} distance:
{\chi}^{2}(\mathbf{c},\mathbf{Q})=\sum _{i=1}^{{4}^{n}}\frac{{\left({c}_{i}-{Q}_{i}L\right)}^{2}}{{Q}_{i}L},\quad L=\sum _{i=1}^{{4}^{n}}{c}_{i}.
(19)
Despite the apparent similarity of this definition to Equation (5), the frequency vector Q here is the expected vector computed from the zero-order Markov model (the same way as in the calculation of the {d}_{2}^{\ast} distance).
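Equation (19) transcribes directly (a sketch; the function name is ours):

```python
import numpy as np

def chi2_distance(c, Q):
    """Eq. (19): chi-squared distance between a raw count vector c and
    the expected frequency vector Q of the zero-order Markov model."""
    c = np.asarray(c, dtype=float)
    Q = np.asarray(Q, dtype=float)
    L = c.sum()
    return float(np.sum((c - Q * L) ** 2 / (Q * L)))
```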
Symmetrized Kullback-Leibler divergence
This distance is the symmetrized Kullback-Leibler divergence:
{D}_{\text{KL}}^{S}\left(\mathbf{p}\,\|\,\mathbf{q}\right)=\sum _{i=1}^{{4}^{n}}({p}_{i}-{q}_{i})log\frac{{p}_{i}}{{q}_{i}}.
(20)
It assumes that p and q are normalized frequency vectors:
\sum _{i=1}^{{4}^{n}}{p}_{i}=\sum _{i=1}^{{4}^{n}}{q}_{i}=1.
(21)
Consensus clustering
Centroid-based and hierarchical clustering techniques can be combined in consensus clustering. In this approach centroid-based clustering is performed a number of times, each time randomly selecting a fraction of the samples into a bootstrap dataset. After that the distance matrix is formed as
{D}_{ij}=1-\frac{\#\left(\text{times }i\text{ and }j\text{ were clustered together}\right)}{\#\left(\text{times }i\text{ and }j\text{ were in the same dataset}\right)}.
(22)
Hierarchical clustering is performed with the distance matrix D_{ij}. This approach is computationally expensive, as the complexity of the distance matrix construction is \mathcal{O}\left({N}^{2}\right), and the complexity of hierarchical clustering using average linkage is \mathcal{O}({N}^{2}logN) for an input of N sequences.
Recall rate
Consider a set of HTS reads originating from several genes (contigs). Grouping together reads originating from the same gene provides a natural partitioning of the read set. Recall rate is a measure of how well the clustering agrees with this natural partitioning; in other words, it measures how well the reads from the same contig cluster together. It is defined as follows. Consider reads originating from some gene G. For example, if the number of clusters is K = 4, and 40% of the reads from G are assigned to cluster 1, 20% to cluster 2, 10% to cluster 3, and 30% to cluster 4, then the recall rate is R_{G} = 40%.
Generally, assume that there are K clusters, and consider reads originating from some gene G. Denote by f_{k} the fraction of all reads originating from G which are assigned to cluster k. The recall rate for gene G is
{R}_{G}=max({f}_{1},\dots ,{f}_{K}).
(23)
Recall rate provides a measure of how clustering interferes with assembly. In particular, when the recall rate is R_{G} = 1, all reads from gene G get assigned to the same cluster, and the contig for G can be assembled from just one cluster with no losses.
We performed a numerical experiment to estimate the dependence of the recall rate on the read length and the clustering method. We generated 50 sets of human RNA sequences, each set containing 1000 sequences randomly chosen from the set of the reference sequences. We required the length of each sequence to be at least 500 bp and at most 10000 bp. After that we simulated reads of length 30, 50, 75, 100, 150, 200, 250, 300 and 400 bp from each of these 50 sets using Mason [17]. Each read set contained 100000 reads. The Mason options used were illumina -N 100000 -n READLENGTH -pi 0 -pd 0 -pmm 0 -pmmb 0 -pmme 0. This way we obtained a total of 450 simulated read sets: one for each of the 50 gene sets and 9 values of the read length. To study the dependence of the recall rate on the sequencing error rate, for each of the 50 gene sets we generated 100000 reads of length 200 with error rates 0.001, 0.005, 0.01, 0.02, 0.03, 0.04, 0.05. The Mason options used were illumina -N 100000 -n 200 -pi 0 -pd 0 -pmm ERRORRATE -pmmb ERRORRATE -pmme ERRORRATE. This way we obtained 350 simulated read sets: one for each of the 50 gene sets and 7 values of the error rate. To study the dependence of the recall rate on the depth of coverage (total number of reads) we simulated read sets with 200000, 150000, 100000, 75000, 50000, 30000, 20000, 10000 and 5000 reads. The Mason options used were mason illumina -N NUMREADS -n 200 -pi 0 -pd 0 -pmm 0 -pmmb 0 -pmme 0. This way we obtained 450 simulated read sets: one for each of the 50 gene sets and 9 values of the number of reads.
We performed hard EM clustering, k-means clustering, L_{2} clustering and d_{2} clustering, and computed the recall rate for each gene in each read set. The results show that the EM algorithm exhibits a higher recall rate than the k-means algorithm. For k-means clustering we used the implementation available in the scipy [18] package.
Soft EM clustering
For the execution of the algorithm one needs the following variables: centroids q^{α} and probabilities {Z}_{a}^{\alpha} for observation point a to be associated with cluster α. The EM algorithm iteratively updates the probabilities {Z}_{a}^{\alpha} starting from the centroid locations, and then updates the centroid locations q^{α} using the updated probabilities {Z}_{a}^{\alpha}. These steps are performed as follows.
Given a set of centroids q^{α} and observations (count vectors) c^{a}, the probability for observation a to be associated with centroid α is
{Z}_{a}^{\alpha}=\frac{P\left({\mathbf{p}}^{a}\mid {\mathbf{q}}^{\alpha}\right)}{{\sum}_{\beta}P\left({\mathbf{p}}^{a}\mid {\mathbf{q}}^{\beta}\right)},
(24)
as follows from Bayes’ theorem. In the “soft” EM algorithm {Z}_{a}^{\alpha} can take fractional values, calculated according to Equation (24)^{5}.
Given the probabilities {Z}_{a}^{\alpha}, one updates centroid locations by maximizing the log likelihood expectation
\mathcal{L}\left(\mathbf{q}\right)=\mathrm{E}\left[logP\left(\mathbf{p}\mid \mathbf{q}\right)\right].
(25)
When written explicitly it becomes
\mathcal{L}=-\sum _{\alpha =1}^{K}\sum _{a=1}^{N}{Z}_{a}^{\alpha}{L}_{a}{D}_{\text{KL}}\left({\mathbf{p}}^{a}\,\|\,{\mathbf{q}}^{\alpha}\right).
(26)
Here we denote the number of clusters by K and the number of sequences by N. In our conventions Greek index α runs over the different clusters, Latin index a runs over different sequences, and Latin index i runs over different nmers. As derived in Appendix 1, centroids are computed as follows:
{q}_{i}^{\alpha}=\frac{1}{{\mathrm{\Lambda}}_{\alpha}}\sum _{a=1}^{N}{Z}_{a}^{\alpha}{c}_{i}^{a},
(27)
where
{\mathrm{\Lambda}}_{\alpha}=\sum _{i=1}^{{4}^{n}}\sum _{a=1}^{N}{Z}_{a}^{\alpha}{c}_{i}^{a}.
(28)
Note that Equation (27) conforms to the intuitive notion of a centroid in terms of the word counts. Namely, the word counts from all the sequences in the cluster are added up (with weights, in the case of soft EM), and the resulting frequencies are computed.
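One soft EM iteration, combining Equations (24), (27) and (28), can be sketched as follows (our own illustration with numpy; the log-domain stabilization is an implementation choice, not part of the derivation):

```python
import numpy as np

def soft_em_step(counts, q):
    """One iteration of soft EM, Eqs. (24), (27), (28).

    counts: (N, M) raw word counts; q: (K, M) centroid frequencies.
    Returns the responsibilities Z (N, K) and updated centroids (K, M)."""
    counts = np.asarray(counts, dtype=float)
    q = np.asarray(q, dtype=float)
    L = counts.sum(axis=1, keepdims=True)
    p = counts / L
    # log P(p^a | q^alpha) = -L_a D_KL(p^a || q^alpha), up to terms that
    # do not depend on alpha and hence cancel in the ratio of Eq. (24).
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p[:, None, :] > 0,
                         p[:, None, :] * np.log(p[:, None, :] / q[None, :, :]),
                         0.0)
    logp = -L * terms.sum(axis=2)                 # (N, K)
    logp -= logp.max(axis=1, keepdims=True)       # stabilize the exponentials
    Z = np.exp(logp)
    Z /= Z.sum(axis=1, keepdims=True)             # Eq. (24)
    new_q = Z.T @ counts                          # weighted pooled counts
    new_q /= new_q.sum(axis=1, keepdims=True)     # Eqs. (27)-(28)
    return Z, new_q
```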
As explained, the soft EM algorithm assigns read a to cluster α with some probability {Z}_{a}^{\alpha}. The choice of a confidence threshold ε produces a set of clusters: read a is a member of cluster α if {Z}_{a}^{\alpha}\ge \epsilon. Note that the clusters can overlap; i.e., one read can be assigned to multiple clusters simultaneously.
ROC curve for soft EM clustering
Consider a group of reads coming from the same origin (e.g., the same gene or the same organism). A perfect alignment-free classification would assign them to a single cluster (possibly containing some other reads). Let us assume that we know the origin of each read. A choice of some fixed value of the cutoff ε will assign each read to zero or more clusters. We consider the cluster which embraces the largest part of the reads from gene G to be the “correct” assignment for the reads originating from this gene. For example, assume that we have K = 4 (overlapping) clusters, containing 40%, 35%, 35% and 10% of the reads, respectively. Then the first cluster is the “correct” assignment that would be attributed to all the reads from gene G if the clustering algorithm were perfect.
The true positive rate (recall rate) is
\text{TPR}=\frac{\#\left(\text{reads}\phantom{\rule{.3em}{0ex}}\text{correctly}\phantom{\rule{.3em}{0ex}}\text{assigned}\right)}{\#\left(\text{reads}\right)}.
(29)
We define the false positive rate as
\text{FPR}=\frac{\#\left(\text{reads}\phantom{\rule{.3em}{0ex}}\text{incorrectly}\phantom{\rule{.3em}{0ex}}\text{assigned}\right)}{\#\left(\text{reads}\right)}.
(30)
A read is considered “incorrectly” assigned if it is assigned to at least one cluster different from the correct one. Note that for some values of the threshold ε the same read can be simultaneously assigned to a correct and an incorrect cluster, thus producing both a true and a false positive. In the limit ε → 0 each read is assigned to every cluster (FPR = TPR = 1). In the limit ε → 1 no read gets assigned to any cluster (FPR = TPR = 0).
The dependence of TPR on FPR as ε changes from 0 to 1 gives an analog of the ROC curve^{6}. The performance of the algorithm is characterized by the area under the curve (AUC).
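The computation of TPR and FPR at a fixed cutoff ε (Equations (29) and (30)) can be sketched as follows (our own illustration; the function name and the input layout are assumptions):

```python
import numpy as np

def tpr_fpr(Z, correct_cluster, eps):
    """Eqs. (29)-(30): true and false positive rates for the reads of
    one gene at confidence cutoff eps.

    Z: (N, K) soft assignment probabilities for the N reads of the gene;
    correct_cluster: the cluster holding the largest share of the reads."""
    assigned = np.asarray(Z, dtype=float) >= eps   # read -> set of clusters
    n = assigned.shape[0]
    tp = assigned[:, correct_cluster].sum()        # in the correct cluster
    others = np.delete(assigned, correct_cluster, axis=1)
    fp = others.any(axis=1).sum()                  # in any incorrect cluster
    return tp / n, fp / n
```

Sweeping eps over (0, 1) and plotting the resulting (FPR, TPR) pairs traces the ROC-like curve described above.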
Assembly of real data
Reads from an Illumina run on cDNA of a nasal swab were taken. After filtering out the low-quality and the low-complexity reads, 21,568,249 100 bp single-end reads were left. Velvet [19] assembly was performed with the default settings. The velveth command line was velveth Assem HASHLENGTH -short -fasta INPUTFILE. The velvetg command line was velvetg Assem. The values of the hash length were 21, 31, 41. Assembly was performed on the complete read set as well as on subsets obtained as a result of alignment-free clustering of the reads. Hard clustering was performed 5 times, and the partitioning with the minimal distortion was chosen. Soft clustering was performed once. The confidence cutoff for the soft clustering was ε = 0.05. For every splitting of the read set all the contigs generated from the individual parts were merged together. After that the original reads were mapped back onto the contigs using bwa [20], allowing up to 2 mismatches. The number of reads which map back onto the contigs is a measure of the quality of the assembly; it takes care of the possible duplicate contigs which may be formed when assembling separate parts of the sample.
Sequence data
Reference sequences for human mRNA genes were obtained from the NCBI RefSeq ftp site, http://ftp.ncbi.nih.gov/refseq/H_sapiens/mRNA_Prot/. Data were downloaded from NCBI on Apr 09, 2013. Sequences for the bacterial recA, dnaA, rpsA and 16S rRNA genes used in the simulation were extracted from the Streptococcus pneumoniae genome [GenBank:NC_003028]. Viral sequences used in the simulation are [GenBank:NC_001477, NC_001943, NC_000883, NC_015783, NC_001806, NC_003977, NC_001802]. We concatenated all the segments of the segmented viruses (rotavirus [GenBank:NC_011500, NC_011506, NC_011507, NC_011508, NC_011510, NC_011501, NC_011502, NC_011503, NC_011504, NC_011509, NC_011505], Lujo virus [GenBank:FJ952385, FJ952384] and influenza virus). For the influenza virus we used the sequence of the vaccine strain, A/California/7/2009.