 Methodology article
 Open Access
Mixture models for analysis of melting temperature data
BMC Bioinformatics volume 9, Article number: 370 (2008)
Abstract
Background
In addition to their use in detecting undesired real-time PCR products, melting temperatures are useful for detecting variations in the desired target sequences. Methodological improvements in recent years allow the generation of high-resolution melting-temperature (T_{m}) data. However, there is currently no convention on how to statistically analyze such high-resolution T_{m} data.
Results
Mixture model analysis was applied to T_{m} data. Models were selected based on Akaike's information criterion. Mixture model analysis correctly identified categories in T_{m} data obtained for known plasmid targets. Using simulated data, we investigated the number of observations required for model construction. The precision of the reported mixing proportions from data fitted to a pre-constructed model was also evaluated.
Conclusion
Mixture model analysis of T_{m} data allows the minimum number of different sequences in a set of amplicons and their relative frequencies to be determined. This approach allows T_{m} data to be analyzed, classified, and compared in an unbiased manner.
Background
Real-time PCR, or semi-quantitative PCR, is widely used to detect and quantify specific target sequences. The exponential amplification of a sequence is monitored in real time by fluorescence. Commonly, a nonspecific fluorescent dye is used, such as SYBR Green I or LCGreen, which only reports the presence of double-stranded DNA. These dyes do not distinguish sequences and can thus report the amplification of undesired targets. Undesired sequences are normally detected during a dissociation step after thermocycling is complete. During dissociation, the double-stranded PCR products melt into single strands, so fluorescence is diminished. A curve can be produced by plotting the loss of fluorescence against a gradual increase in temperature. The temperature at which the rate of signal loss is greatest can be defined as the melting temperature (T_{m}) of the PCR product. Although the T_{m} is sequence dependent, different sequences do not necessarily have different T_{m}. However, the converse is true: the detection of different T_{m} does imply the presence of different sequences. Therefore, by monitoring T_{m}, we can distinguish different targets for one set of primers. This technique has been used for the detection of single-nucleotide polymorphisms [1], allelic discrimination [2], and strain typing of microorganisms [3–5]. We previously reported the use of T_{m} analysis to detect the expression patterns of transcripts containing different members of the W family of human endogenous retrovirus (HERV) elements in vitro and in vivo [6, 7].
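The definition above — T_{m} as the temperature at which the rate of signal loss is greatest — can be illustrated with a short Python sketch. The function name and the synthetic sigmoid curve are our own illustrative assumptions, not the GcTm algorithm used in the paper:

```python
import math

def tm_from_curve(temps, fluorescence):
    """Estimate T_m as the temperature at which the rate of fluorescence
    loss, -dF/dT, is greatest (central finite differences)."""
    best_t, best_rate = None, float("-inf")
    for j in range(1, len(temps) - 1):
        rate = -(fluorescence[j + 1] - fluorescence[j - 1]) / (temps[j + 1] - temps[j - 1])
        if rate > best_rate:
            best_t, best_rate = temps[j], rate
    return best_t

# Hypothetical dissociation curve: a sigmoid melting around 80.0 degrees C
temps = [78.0 + 0.1 * i for i in range(41)]
signal = [1.0 / (1.0 + math.exp((t - 80.0) / 0.3)) for t in temps]
print(round(tm_from_curve(temps, signal), 1))  # 80.0
```

The GcTm approach described later interpolates between data points for higher precision; this finite-difference version only picks the grid point with the steepest loss.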
The precision of the T_{m} measurements determines the sensitivity with which different sequences can be distinguished. The instrument used to obtain the T_{m} recordings is the principal factor limiting the amount of information that can be extracted from the data. We recently reported a method that allows improved resolution, reduced spatial bias, and automated data collection for T_{m} detection in an ABI Prism 7000 Sequence Detection System (Applied Biosystems, Palo Alto, CA) [8]. Using a temperature indicator probe (T_{m}-probe) and an algorithm (GcTm) to interpolate more precise T_{m} measurements from multiple data points, the standard deviation of the measurement error (σ) of the T_{m} recordings was improved from 0.19°C to 0.06°C [8].
However, there is no convention on how to analyze T_{m} data to objectively distinguish sequences by T_{m}. The need for such a tool becomes apparent when the T_{m} data i) are not easily stratified because of overlapping clusters of T_{m} observations, and/or ii) when the number of different sequences and possible T_{m} categories is unknown. In this report, we use mixture model analysis to construct a model for a particular set of primer targets, to classify T_{m} data, and to calculate the mixing proportions of the amplicons within these categories. The mixture model technique allows T_{m} analysis to be applied to any set of primers to determine the minimum number of T_{m} categories (i.e., the number of different sequences detected) and the mixing proportions (frequency distributions) of the detected categories. Thus, mixture model analysis of T_{m} data is an objective method with which more refined T_{m} assays can be established.
Results
In a T_{m} analysis using the T_{m}-probe and GcTm program, described previously [8], we demonstrated, using plasmids containing known sequences, that it was possible to distinguish some but not all sequences based on their T_{m}. In the present report, we applied the mixture models and the σ established in the previous publication [8] to determine the T_{m} categories and mixing proportions of these data (Figure 1). Akaike's information criterion (AIC), a measure of how well a model explains the data, with a penalty for the number of parameters estimated, determined that the T_{m} of the four sequences were best represented by a three-category mixture model. This model precisely estimated the mixing proportions of the T_{m} into the categories, attributing the correct number of T_{m} recordings to each of the four sequences (where two of them shared a category). For an overview of the procedure for using mixture models to analyze T_{m} data, see the Methods section.
We next assessed the performance of the mixture model analysis in constructing models for categories of T_{m} with varying separations. Therefore, we generated simulated data points mimicking the T_{m} of four sequences separated by multiples of σ. These data were used to identify the model that best explained the data according to AIC (see an example of the AIC plot in Figure 2) for a range of T_{m} separations and numbers of data points (Figure 3). A large separation of T_{m}, 10 × σ (0.6°C), allowed the mixture model analysis to converge on four separate categories with only 10 data points. Smaller separations of T_{m} required larger numbers of data points to determine the correct number of T_{m} categories. The distinction of categories with a separation of 1 × σ required approximately 2000 data points to model the correct number of T_{m} categories.
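Simulated data of the kind described above can be generated along the following lines. This is our own minimal sketch (function name and defaults are assumptions), using σ = 0.06°C from the measurement error reported earlier and an arbitrary base T_{m} of 80°C:

```python
import random

def simulate_tm(n_per_seq, sigma=0.06, separation=1.0, base_tm=80.0, n_seqs=4, seed=1):
    """Draw n_per_seq simulated T_m recordings for each of n_seqs sequences
    whose true T_m values are spaced (separation * sigma) degrees apart."""
    rng = random.Random(seed)
    means = [base_tm + i * separation * sigma for i in range(n_seqs)]
    data = [rng.gauss(mu, sigma) for mu in means for _ in range(n_per_seq)]
    rng.shuffle(data)  # the analysis must not know which sequence produced which T_m
    return data, means

# Four sequences separated by 10 x sigma (0.6 degrees C), 500 recordings each
tm_data, true_means = simulate_tm(n_per_seq=500, separation=10)
```

Varying `separation` and `n_per_seq` reproduces the kind of grid search over T_{m} separations and sample sizes summarized in Figure 3.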
Next, we evaluated the fit of the data points to pre-established models. For this purpose, we generated data points corresponding to a sample containing three of four possible T_{m} represented in a model. We compared the mixing proportions reported by the mixture model analysis with a null hypothesis in which all four T_{m} were present at equal frequencies. In Figure 4, the P values obtained from χ^{2} analyses for various separations of the T_{m} are plotted against the numbers of data points used. The P values for the χ^{2} test drop rapidly with increasing sample numbers for any T_{m} separation of more than 1 × σ. With smaller separations of the T_{m} categories, the mixture model analysis is unable to reliably establish the differences in the mixing proportions.
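The χ^{2} comparison of mixing proportions can be sketched in Python. The counts below are hypothetical (not the paper's data), and 7.815 is the standard 5% critical value for 3 degrees of freedom:

```python
def chi_squared_stat(observed, expected):
    """Pearson chi-squared statistic: sum over categories of (O - E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical example: 120 T_m recordings fitted to a four-category model,
# with one category absent from the sample.
observed = [40, 40, 40, 0]
expected = [30, 30, 30, 30]   # null hypothesis: equal mixing proportions

stat = chi_squared_stat(observed, expected)
print(stat, stat > 7.815)  # 40.0 True -> reject equal proportions (df = 3, 5% level)
```

In practice the observed counts would come from multiplying the fitted mixing proportions by the sample size, which is why small T_{m} separations (unreliable proportions) translate into unreliable χ^{2} results.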
Discussion
We report the application of mixture models to the analysis of high-resolution T_{m} data. Whereas the plasmid T_{m} data reported are sufficiently separated to be stratified manually, we use these data to demonstrate the principle that can be applied to analyze more complex T_{m} data.
Mixture model analysis of T_{m} data entails the construction of a model based on the T_{m} data for a set of primers. With such a model established, it is possible to fit smaller subsets of data to calculate the mixing proportions of the T_{m} categories of the model. This gives a proxy marker for the frequency distributions of different amplicon sequences in the analyzed data. This approach requires no prior knowledge of how many different amplicons are present and there is no limit to the number of different T_{m} that can be distinguished. However, the T_{m} analysis method with mixture models only reports the minimum number of different sequences required to explain the T_{m} data because different sequences can have the same T_{m}.
Mixture model analysis is a modern type of cluster analysis. The purpose of cluster analysis is to group data that have properties in common. When constructing the mixture model for a set of primers, the number of categories in the model that most appropriately explains the T_{m} data is determined by AIC. Other information criteria exist, such as the Bayesian information criterion, but this penalizes free parameters more harshly than does the AIC.
By empirical testing with simulated data, we found that smaller separations of T_{m} require exponentially larger numbers of data points to distinguish the correct number of categories in a mixture model. Insufficient numbers of observations yield an underestimation of the numbers of unique T_{m} represented by the data, erring on the side of safety. In other words, with insufficient data, the number of unique sequences in the data is underestimated by the optimal model.
In an established model, based on a large number of T_{m} observations, a smaller number of observations can be fitted to calculate the mixing proportions in the T_{m} categories. These proportions can then be compared between sets of T_{m} data as frequency distributions of sequences and analyzed with χ^{2} tests. We observed that, whereas a large number of T_{m} observations are required to establish a model with a small separation between categories (e.g., 1000 data points are required with 2 × σ separation), far fewer are sufficient for comparisons once the model is established (e.g., 100 data points for P < 0.001). A separation of the T_{m} categories in the model of less than 1 × σ results in unreliable mixing proportions. However, this should rarely be a problem in practice, because constructing the models puts a larger constraint on T_{m} separation by AIC. In other words, models constructed with mixture model analysis will consist of T_{m} categories separated by more than 1 × σ.
Not all dissociation curves are easily defined by a single T_{m}, as in the case of multiple domain transitions in longer sequences [9] (generally longer than those generated in real-time PCR assays) and for heterodimers. Using the GcTm approach to curve fitting and SYBR Green I chemistry, such melting profiles will be assigned a single T_{m} value. Although some additional information is therefore lost, mixture model analysis still validly identifies clusters of T_{m} and sequences. There is an established high-resolution amplicon melting analysis (usually denoted HRM) using LCGreen, primarily based on differences in the profiles of melting curves rather than on absolute T_{m} [10]. Although this method is superior to mixture model analysis in identifying heterodimers, absolute T_{m} values are required to identify homodimers. Recently, a method with sufficient resolution to distinguish base-pair neutral homozygotes was reported [11]. Mixture model analysis of T_{m} can be used in all cases where the T_{m} can be denoted as a single value, but primarily for homodimer discrimination.
Conclusion
In conclusion, the mixture model analysis of T_{m} presented here allows the unbiased analysis of high-resolution T_{m} data. This analysis is applicable to the identification of sequences in T_{m} data regardless of the method by which the T_{m} are acquired, provided the measurement error is known. Mixture models allow T_{m} analyses to be performed on more complex and varied sequence targets than hitherto possible. Possible applications include typing microbial strains and their relative abundances in a population and the analysis of transcripts containing repetitive elements [3, 4, 6, 12].
Methods
Finite mixture models
Mixture models are useful for describing complex populations with observed or unobserved heterogeneity. The term mixture model encompasses many types of statistical structures. Here, we use it to denote mixture distributions. A mixture distribution arises when a mixed population is sampled whose component subpopulations each have their own probability density function.
Let X be a random variable or vector taking values in sample space χ with the probability density function

g(x) = π_{1} f_{1}(x) + ... + π_{k} f_{k}(x), x ∈ χ,

where 0 ≤ π_{i} ≤ 1, i = 1, ..., k, and π_{1} + ... + π_{k} = 1.
Such a model can arise if one is sampling from a heterogeneous population that can be decomposed into k distinct homogeneous subpopulations, called component populations. If these components have been "mixed" together, and we measure only the variable X without determining the particular components, then this model holds. We say that X has a finite mixture distribution and that g(·) is a finite mixture density function. The parameters π_{1},..., π_{k} are called mixing weights or mixing proportions, and each π_{i} represents the proportion of the total population in the i-th component.
There is no requirement that the component densities should all belong to the same parametric family, but in this paper, we keep to the simplest case where f_{1}(x),..., f_{k}(x) have a common functional form but different parameters.
We apply the theory of finite mixture models to T_{m} data consisting of normally distributed components in a mixture model, where each component has a standard deviation of σ°C. The finite mixture density function is then as follows:

g(x; ψ) = Σ_{i=1}^{k} π_{i} φ(x; μ_{i}, σ),

where φ(·; μ_{i}, σ) denotes the normal density with mean μ_{i} and standard deviation σ, and ψ = (π_{1},..., π_{k}, μ_{1},..., μ_{k}, σ)^{T}.
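The mixture density can be evaluated directly. A minimal Python sketch with hypothetical parameter values (two T_{m} categories 0.3°C apart, σ = 0.06°C; all names are our own):

```python
import math

def normal_pdf(x, mu, sigma):
    """Normal density phi(x; mu, sigma)."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def mixture_density(x, weights, means, sigma):
    """g(x; psi) = sum_i pi_i * phi(x; mu_i, sigma), with a shared sigma."""
    assert abs(sum(weights) - 1.0) < 1e-9  # mixing proportions must sum to 1
    return sum(p * normal_pdf(x, mu, sigma) for p, mu in zip(weights, means))

# Hypothetical two-category example: T_m categories at 80.0 and 80.3 degrees C
g = mixture_density(80.0, weights=[0.5, 0.5], means=[80.0, 80.3], sigma=0.06)
```

With these values the second component contributes almost nothing at x = 80.0, because 80.3 is five σ away.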
The likelihood function corresponding to the data (x_{1},..., x_{n}) is as follows:

L(ψ; x) = Π_{j=1}^{n} g(x_{j}; ψ).
The logarithm of the likelihood function is

log L(ψ; x) = Σ_{j=1}^{n} log g(x_{j}; ψ).
We attempt to find the particular ψ that maximizes the likelihood function. This maximization can be undertaken in the traditional way by differentiating log L(ψ; x) with respect to the components of ψ and equating the derivatives to zero to give the likelihood equation:

∂ log L(ψ; x)/∂ψ = 0.
Quite often, the log likelihood function cannot be maximized analytically, i.e., the likelihood equation has no explicit solutions. In such cases, the maximum likelihood estimate of ψ can be computed iteratively. To calculate maximum likelihood estimates, we use the expectation maximization (EM) method in combination with the Newton-Raphson algorithm. Iterations of the EM algorithm consist of two steps: the expectation step, or E-step, and the maximization step, or M-step [13, 14]. The Newton-Raphson algorithm for solving the likelihood equation approximates the gradient vector of the log likelihood function by a linear Taylor series expansion [15]. We use the Newton-Raphson algorithm in the M-step of the EM method.
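The E- and M-steps can be illustrated with the following Python sketch for a k-component normal mixture with a shared σ. It is a simplified stand-in for the authors' R/MIX implementation, not their code: with a common σ the M-step updates happen to be available in closed form, so no Newton-Raphson step is needed in this special case.

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def em_fit(data, k, sigma0, n_iter=200):
    """EM for a k-component 1-D normal mixture with a shared sigma.
    Returns (weights, means, sigma, log-likelihood)."""
    n = len(data)
    lo, hi = min(data), max(data)
    means = [lo + (i + 0.5) * (hi - lo) / k for i in range(k)]  # spread initial means
    weights = [1.0 / k] * k
    sigma = sigma0
    for _ in range(n_iter):
        # E-step: responsibilities r[j][i] = P(component i | x_j, current psi)
        resp = []
        for x in data:
            num = [w * normal_pdf(x, m, sigma) for w, m in zip(weights, means)]
            tot = sum(num)
            resp.append([v / tot for v in num])
        # M-step: closed-form updates for the weights, means and shared sigma
        for i in range(k):
            ni = max(sum(r[i] for r in resp), 1e-12)  # guard against empty components
            weights[i] = ni / n
            means[i] = sum(r[i] * x for r, x in zip(resp, data)) / ni
        sigma = math.sqrt(sum(r[i] * (x - means[i]) ** 2
                              for r, x in zip(resp, data) for i in range(k)) / n)
    loglik = sum(math.log(sum(w * normal_pdf(x, m, sigma)
                              for w, m in zip(weights, means))) for x in data)
    return weights, means, sigma, loglik
```

For component-specific variances or constrained parameters, as supported by MIX, the M-step generally has no closed form and an inner Newton-Raphson iteration of the kind described above is used instead.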
We developed an algorithm that allows the automated estimation, in parallel, of a finite number of normally distributed components. The number of components can be assessed by several different methods, although none of them is optimal. We chose the AIC [16, 17]. AIC is a relative score for comparing different models; the selection of the optimal model thus depends on the number of data points, the number of categories, and the separation of the T_{m} categories. AIC is defined as −2L_{m} + 2m, where L_{m} is the maximized log likelihood and m is the number of estimated parameters.
Acquisition of HERV-W gag T_{m}
T_{m} data were generated with GcTm, as previously described [8], on dissociation data obtained from the amplification of plasmids containing known HERV-W gag sequences.
Simulated T_{m} data recordings and GcTm analysis were performed in MATLAB™ (The MathWorks) version 7.0.1.24704 with the Optimization Toolbox. Mixture model analysis was performed in R 2.6.0 [18] with the MIX software [19, 20].
Overview of mixture model analysis of T_{m}
A mixture model is constructed for a set of primers. The model should be constructed on a sample of T_{m} data large enough that all possible sequences can be expected to be represented. The T_{m} data are then stratified into small-interval groups, and the frequency distributions of these arbitrary categories are used to construct and compare the mixture models. AIC is used to evaluate which model best explains the data with the minimum number of different categories; lower values of AIC indicate the preferred model, i.e., the best balance between goodness of fit and the number of parameters. Once a model is selected, T_{m} data from different samples can be fitted to the model and the mixing proportions compared between samples. Differences between samples can be evaluated with χ^{2} tests, taking a conservative stance that depends on the separation between the T_{m} categories and the numbers of data points.
Abbreviations
T_{m}: Melting temperature
AIC: Akaike's information criterion
HERV: human endogenous retrovirus
EM: expectation maximization
References
 1.
Germer S, Higuchi R: Single-tube genotyping without oligonucleotide probes. Genome Res 1999, 9(1):72–78.
 2.
Graziano C, Giorgi M, Malentacchi C, Mattiuz PL, Porfirio B: Sequence diversity within the HA1 gene as detected by melting temperature assay without oligonucleotide probes. BMC Med Genet 2005, 6: 36. 10.1186/1471-2350-6-36
 3.
Pham HM, Konnai S, Usui T, Chang KS, Murata S, Mase M, Ohashi K, Onuma M: Rapid detection and differentiation of Newcastle disease virus by real-time PCR with melting-curve analysis. Arch Virol 2005, 150(12):2429–2438. 10.1007/s00705-005-0603-0
 4.
Waku-Kouomou D, Alla A, Blanquier B, Jeantet D, Caidi H, Rguig A, Freymuth F, Wild FT: Genotyping measles virus by real-time amplification refractory mutation system PCR represents a rapid approach for measles outbreak investigations. J Clin Microbiol 2006, 44(2):487–494. 10.1128/JCM.44.2.487-494.2006
 5.
Harasawa R, Mizusawa H, Fujii M, Yamamoto J, Mukai H, Uemori T, Asada K, Kato I: Rapid detection and differentiation of the major Mycoplasma contaminants in cell cultures using real-time PCR with SYBR Green I and melting curve analysis. Microbiol Immunol 2005, 49(9):859–863.
 6.
Nellåker C, Yao Y, Jones-Brando L, Mallet F, Yolken RH, Karlsson H: Transactivation of elements in the human endogenous retrovirus W family by viral infection. Retrovirology 2006, 3(1):44. 10.1186/1742-4690-3-44
 7.
Yao Y, Schröder J, Nellåker C, Bottmer C, Bachmann S, Yolken RH, Karlsson H: Elevated levels of human endogenous retrovirus-W transcripts in blood cells from patients with first-episode schizophrenia. Genes Brain Behav 2007, 7: 103–112.
 8.
Nellåker C, Wallgren U, Karlsson H: Molecular beacon-based temperature control and automated analyses for improved resolution of melting temperature analysis using SYBR I Green chemistry. Clin Chem 2007, 53(1):98–103. 10.1373/clinchem.2006.075184
 9.
Volker J, Blake RD, Delcourt SG, Breslauer KJ: High-resolution calorimetric and optical melting profiles of DNA plasmids: resolving contributions from intrinsic melting domains and specifically designed inserts. Biopolymers 1999, 50(3):303–318. 10.1002/(SICI)1097-0282(199909)50:3<303::AID-BIP6>3.0.CO;2-U
 10.
Wittwer CT, Reed GH, Gundry CN, Vandersteen JG, Pryor RJ: High-resolution genotyping by amplicon melting analysis using LCGreen. Clin Chem 2003, 49(6 Pt 1):853–860. 10.1373/49.6.853
 11.
Gundry CN, Dobrowolski SF, Martin YR, Robbins TC, Nay LM, Boyd N, Coyne T, Wall MD, Wittwer CT, Teng DH: Base-pair neutral homozygotes can be discriminated by calibrated high-resolution melting of small amplicons. Nucleic Acids Res 2008, 36(10):3401–3408. 10.1093/nar/gkn204
 12.
Slinger R, Bellfoy D, Desjardins M, Chan F: High-resolution melting assay for the detection of gyrA mutations causing quinolone resistance in Salmonella enterica serovars Typhi and Paratyphi. Diagn Microbiol Infect Dis 2007, 57(4):455–458. 10.1016/j.diagmicrobio.2006.09.011
 13.
Dempster AP, Laird NM, Rubin DB: Maximum likelihood from incomplete data via the EM algorithm. J Roy Statist Soc B 1977, 39(1):1–38.
 14.
McLachlan GJ, Krishnan T: The EM Algorithm and Extensions. New York: Wiley; 1997.
 15.
Dennis JJE, Schnabel RB: Numerical Methods for Unconstrained Optimization and Nonlinear Equations. New Jersey: Prentice Hall; 1983.
 16.
Akaike H: A new look at the statistical model identification. IEEE Trans Automat Control 1974, 19(6):716–723. 10.1109/TAC.1974.1100705
 17.
Akaike H, (ed.): Information Theory and an Extension of the Maximum Likelihood Principle. Budapest: Akademiai Kiado; 1973.
 18.
R Development Core Team: R: A Language and Environment for Statistical Computing. Version 2.6.0. Vienna, Austria: R Foundation for Statistical Computing; 2008.
 19.
Macdonald P: MIX Software for Mixture Distributions. Version 2.3. Ontario, Canada: Ichthus Data Systems; 1988.
 20.
Du J: Combined algorithms for fitting finite mixture distributions. In Masters thesis. Hamilton, Ontario: McMaster University; 2002.
Acknowledgements
This study was generously supported by the Stanley Medical Research Institute, Bethesda, MD, and the Swedish Research Council (21X20047).
Authors' contributions
CN conceived the study, tested and prepared the manuscript; FU developed the method and critically revised the manuscript; JT developed the method and prepared the manuscript; HK conceived the study and prepared the manuscript.
Rights and permissions
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Nellåker, C., Uhrzander, F., Tyrcha, J. et al. Mixture models for analysis of melting temperature data. BMC Bioinformatics 9, 370 (2008). https://doi.org/10.1186/1471-2105-9-370
Keywords
 Mixture Model
 Finite Mixture
 Small Separation
 Finite Mixture Model
 Likelihood Equation