Statistical characterization of multiple-reaction monitoring mass spectrometry (MRM-MS) assays for quantitative proteomics
© Mani et al.; licensee BioMed Central Ltd. 2012
Published: 5 November 2012
Multiple reaction monitoring mass spectrometry (MRM-MS) with stable isotope dilution (SID) is becoming a widely accepted assay for the quantification of proteins and peptides. These assays have shown great promise for relatively high-throughput verification of candidate biomarkers. While the use of MRM-MS assays is well established in the small-molecule realm, their introduction and use in proteomics is relatively recent. As such, statistical and computational methods for the analysis of MRM-MS data from proteins and peptides are still being developed. Based on our extensive experience with analyzing a wide range of SID-MRM-MS data, we set forth a methodology for analysis that spans data quality assessment, assay characterization (including calibration curves and limits of detection, LOD, and quantification, LOQ), and measurement of intra- and interlaboratory precision. We draw upon publicly available seminal datasets to illustrate our methods and algorithms.
In the past decade, the scientific community has seen an uptick in the use of mass spectrometry (MS) for the quantification of proteins and peptides in complex biological matrices. However, the technique most frequently used in quantitative assays, selected reaction monitoring (SRM; in its multiplexed form, multiple reaction monitoring, MRM) MS, was first reported in 1979 with the introduction of the triple quadrupole (QqQ) mass spectrometer. Initially used for the detection, identification and quantification of small molecules [2–8], the QqQ has since become ubiquitous in proteomics laboratories and a necessary tool for the quantification of peptides and proteins, especially for biomarker verification. Biomarker verification is a step in the proteomics pipeline in which candidate biomarkers identified in unbiased discovery experiments are targeted by quantitative assays utilizing stable isotope dilution and MRM-MS. This manuscript will focus on the statistical characterization and evaluation of MRM-MS assays arising from quantitative biomarker verification studies.
The power of the QqQ mass spectrometer comes from the inherent selectivity of its staged mass selection and detection. In the majority of quantitative MS experiments, the QqQ operates in SRM mode. In this mode, as samples are ionized by electrospray ionization and enter the instrument, the first quadrupole (Q1) is set to allow only the predefined m/z value of the precursor ion to pass into the second quadrupole, the collision cell. In the collision cell, the selected ions enter a higher-pressure region containing argon or nitrogen gas, resulting in low-energy collisions and fragmentation of the selected precursor ion into many product ions. Finally, only the preselected product ions with specific m/z values are allowed to pass through the third quadrupole (Q3) and on to the detector. The result is a very selective means of separating the target ions from everything else introduced into the mass spectrometer (i.e., through liquid chromatography or other sample introduction), detecting fragment ions of the target while reducing chemical noise from the sample. One of the benefits of MRM-MS on a QqQ platform is the speed at which it can detect multiple transitions (Q1/Q3 pairs), on the order of 10 ms per transition or less, allowing high multiplexing capability. This capability can be harnessed both for the analysis of many peptides (tens to hundreds) per assay and for the monitoring of many transitions per peptide. This matters because the identity of the peptide relies on the few transitions that are detected and that discriminate it from other peptides or molecules in the sample. Therefore, a highly selective assay for a particular peptide would target several transitions, minimally three product ions. This results in three or more independent measures for a particular peptide target, which can make statistical analysis more complicated.
Due to the inherent instability of electrospray ionization, accurate and precise quantification is best achieved through the addition of a stable isotope-labeled standard (SIS) into the sample, an approach called isotope dilution [11–19]. The most common internal standards have been 13C- and/or 15N-labeled peptide analogs, which introduce little chromatographic shift in reversed-phase chromatography, so that they coelute with the target peptide and are chemically identical to it except for the mass difference. The isotopically labeled standard is spiked into the sample as far upstream in the sample-handling process as possible. If isotopically labeled proteins are unavailable (such as uniformly 15N-incorporated proteins, or proteins with 13C- and/or 15N-modified amino acids such as arginine or lysine), then peptide analogs can be synthesized with isotopically labeled amino acids and spiked in before or after enzymatic digestion of the sample. These peptide standards behave like the target peptide with regard to chromatographic separation, ionization and fragmentation. The intensity of the signals detected for the SIS peptide is then compared to the signals for the analyte peptide, and their peak areas (determined from the area under the curve of the extracted ion chromatogram, XIC, for each transition) are compared to generate a peak area ratio (PAR). When the SIS peptide is spiked into the sample in a known quantity, the PAR is multiplied by the SIS peptide amount to determine the analyte peptide concentration. While using only 3 transitions for the detection and identification of a target analyte may seem sparse, the chromatographic retention times of the analyte peptide and SIS are also paired to ensure that the proper peptide is detected. Finally, another important criterion to ensure peptide identity is the fragment ion ratio for a given peptide.
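The PAR arithmetic described above can be sketched in a few lines. The peak areas and spike amount below are hypothetical illustration values, not data from any cited study:

```python
# SID quantification sketch: analyte amount from a peak area ratio (PAR).

def peak_area_ratio(analyte_area: float, sis_area: float) -> float:
    """Ratio of analyte XIC peak area to SIS XIC peak area for one transition."""
    return analyte_area / sis_area

def analyte_amount(par: float, sis_amount: float) -> float:
    """Analyte amount inferred from the PAR and the known SIS spike amount."""
    return par * sis_amount

# Hypothetical values: analyte XIC area 1.2e6, SIS area 2.4e6,
# SIS spiked at 50 fmol/uL.
par = peak_area_ratio(1.2e6, 2.4e6)   # 0.5
print(analyte_amount(par, 50.0))      # 25.0 (fmol/uL)
```

In practice this calculation is repeated per transition, and the retention-time pairing and fragment ion ratio checks described above guard against computing a PAR from an interfered signal.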
This concept was first described in the context of small molecules as the "branching ratio": each time a small molecule was fragmented and multiple product ions were detected, the ratio of these ions to one another was consistent (the largest fragment was always the largest, the smallest always the smallest, and so on), as long as no interferences were present and the concentration was within the linear range of detection. In the context of peptides, this effect is also seen in the fragmentation along the peptide backbone, and ensuring that this ratio is consistent between the peptide target and the SIS provides another level of selectivity and can indicate the presence of interfering signal. This topic is further discussed below in Section 5.
While quantitative MRM-MS assays have been in practice for decades [3–7, 11, 15–19, 21, 23], this manuscript will focus on some more recent publications that use SID-MRM-MS for the quantification of peptides in plasma or similar complex matrices [15, 22, 23]. The first few examples describe the use of SID-MRM-MS for the quantification of peptides from samples with complex biological matrices [16–18]. In all cases, the work describes the use of SIS peptides as internal standards added to the sample matrix, sample analysis by MRM-MS and the calculation of the peptide amount present in the sample. These papers marked a turning point in the use of SID-MRM-MS in proteomics labs because they demonstrated the feasibility of simple assay development, throughput and precision in the quantification of target peptides present in complex samples.
The earlier publications on peptide quantification using SID-MS did not, in fact, have detailed sections on the statistical analysis of the quantitative data. Barr et al. reported variances from MS run to MS run and between digestion replicates, but did not discuss assay characteristics such as linear range or limits of detection and quantification. Gerber et al. briefly described the linearity of the assay between the concentration points assessed, but did not discuss reproducibility, the slope of the response curve or other metrics. Barnidge et al. showed the effect of equal weighting versus 1/x weighting when plotting the linear regression of standard-curve area versus concentration. Barnidge and Barr also discussed percent recovery of the peptide target from the proteolytic digest and sample handling, a topic further explored by Agger et al. for the quantification of apolipoproteins A-1 and B. More recent publications have more detailed sections on these calculations [15, 19, 20, 23, 24], but may still not be exhaustive enough to describe, for the newcomer, all aspects of the calculations required to define an analytical assay. Therefore, this manuscript consolidates many of the statistical and analytical approaches used to describe the quantitative aspects of SID-MRM-MS assays.
One example from a recent study--published in Nature Biotechnology, and hence referred to as the NBT study--evaluated the repeatability and reproducibility of SID-MRM-MS across multiple labs for the quantification of 10 peptides from 7 proteins spiked into human plasma. The overall NBT study comprised three sub-studies. Study I is the peptide-level spike, in which synthetic peptides were spiked into a background sample matrix of digested plasma to generate a calibration curve between 1 fmol and 500 fmol per μL in 1 μg of digested plasma. Study II is the protein-level spike, in which an equimolar mixture of the 7 target proteins was digested together and then spiked into the background of digested plasma. This phase was designed to determine the effect of protein digestion on peptide recovery and its contribution to assay variability. The third phase, Study III, was also a protein-level spike, but mimicked a "real world" biomarker assay in which an equimolar protein solution was spiked into neat plasma and all subsequent sample-handling steps (denaturation, reduction, alkylation, digestion, desalting and addition of stable isotope-labeled peptide standards) were conducted at the individual laboratories. In all three cases, target peptides (or proteins) were spiked in at 9 different concentrations (1 fmol-500 fmol/μL in 1 μg/μL plasma) to generate a response curve, and the SIS peptides were spiked in at a fixed concentration of 50 fmol/μL in all samples, including the blank, which consisted of digested plasma only. Eight laboratories participated in this study, seven of which used the same MRM-MS platform (4000 QTRAP, ABSciex); the remaining lab used a TSQ Quantum Ultra QqQ (Thermo Fisher Scientific). All labs used nanoflow chromatography and adhered to an SOP that was distributed to dictate sample handling and data acquisition.
All data acquired from this study are available online (http://www.proteomecommons.org/tranche/, Tranche hash: bCKpfN0bl2ULLwCaIovXn/spuw4rYfJF6H/L+/6sHAKGzCsj4fzTD0RauJjAwf9baB8tI36HQ0izji2tupYAPM29P2cAAAAAAAT0iw==), and will be used in example calculations.
Additional studies have been reported that aim to target clinically relevant analyte concentrations of proteins in plasma [15, 22, 24]. Of importance in these assays is measurement precision, inter- and intra-assay reproducibility or coefficient of variation (CV), as well as accuracy, and limits of detection and quantification. The following sections will discuss calculations of these parameters and metrics and discuss the necessary experimental design, as well as several methods for calculation and statistical analysis of the data. Many of these algorithms and calculations will be illustrated using the NBT study.
MRM-MS assays are characterized and evaluated based on several performance metrics and characteristics. Definitions of these metrics and associated terminology are laid out in this section, and will be used in the rest of the manuscript.
Peak area: Peak areas for each of the monitored transitions (usually 3 or more per peptide form) are determined from the extracted ion chromatograms (measured ion intensity or counts per unit of chromatographic time).
Peak area ratio (PAR): In the context of SID-MRM-MS, the peak area of each peptide analyte transition is divided by the peak area of the corresponding transition from the stable isotope-labeled peptide form to obtain the peak area ratio.
Precision: The precision of the data is determined by measuring replicates (3 or more) of one sample in the same manner. Precision is usually represented by the standard deviation and coefficient of variation (CV).
Accuracy: Accuracy of the data is calculated (when the true concentration is known) as percent error.
Variability: Synonymous with precision.
Limit of detection (LOD): The lowest analyte concentration at which the signal is discernible from the noise (chemical noise, white noise, etc.), or detected with confidence. This can be calculated in a variety of ways, several of which are described below.
Limits of quantification (LOQ): The lower limit of quantification is the lowest analyte concentration at which quantitative measurements can be made. The upper limit of quantification is the highest analyte concentration above which the signal departs from linearity. These two limits of quantification define the linear range of the assay.
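As a worked illustration of the precision and accuracy definitions above, a short sketch using hypothetical replicate measurements (all values invented for illustration):

```python
import statistics

def cv(values):
    """Coefficient of variation: sample standard deviation divided by mean."""
    return statistics.stdev(values) / statistics.mean(values)

def percent_error(measured_mean, true_value):
    """Accuracy expressed as percent error against a known true concentration."""
    return 100.0 * abs(measured_mean - true_value) / true_value

# Hypothetical triplicate measurements (fmol/uL) of a sample spiked at 50 fmol/uL.
replicates = [48.0, 50.0, 52.0]
print(round(cv(replicates), 3))                           # 0.04 (4% CV)
print(percent_error(statistics.mean(replicates), 50.0))   # 0.0 (% error)
```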
MRM-MS assays are used when the detection and quantification of specific analyte targets are required from a complex mixture. Stable isotope-labeled standard (SIS) peptides are used for a variety of reasons, but primarily act as an internal standard for the measurement of the peptide analyte and minimize the contributions of measurement variations due to chromatography, ionization, fragmentation and detection by MS. Assays can be designed to determine the Figures of Merit (limits of detection and quantification, precision and accuracy) by incorporating a calibration curve. The Figures of Merit can change due to differences in sample matrix (both nature of matrix and concentration) and factors affecting instrument sensitivity (chromatographic resolution, ionization, MS detection, etc). It is recommended to determine Figures of Merit if any of these factors are changed, and periodically on the same instrument, especially when analyzing samples that will be detected near the lower LOQ of the assay or when high precision is required.
In typical quantitative SID-MRM-MS assays, the determined Figures of Merit are strongly influenced by system performance, both in terms of sensitivity and reproducibility from sample to sample. The noise contributed by the sample matrix also plays a major role in the magnitude of the calculated LOD and LOQ, and this is usually determined by several (at least three, preferably more) repeat measurements of matrix blanks (sample including everything except the target analyte) run throughout the course of the assay. With current technologies, and on normally functioning nanoflow LC-MRM-MS systems, typical peptide LODs in the hundreds of attomoles per 1 μg equivalent protein digest load are attainable [19, 23].
An ideal calibration curve has a slope of 1 and an intercept of 0, indicating that the measured concentrations are in excellent agreement with the theoretical concentrations. An example of a well-behaved peptide is shown in Figure 1a. Deviation of the slope from 1 indicates a less than ideal response, and a significant non-zero intercept is indicative of the presence of endogenous analyte (Sections 4.1 and 4.2).
A standard way to fit such calibration curves is ordinary least squares (OLS) regression . While non-linear calibration curves could also be fitted, such curves may tend to overfit the data, given the relatively small number of points used for the fit. Furthermore, the slope and y-intercept of a linear regression fit have additional relevance from a quantification perspective.
MRM-MS assays usually have a linear operating region in which the intensity response varies linearly as the spike-in concentration of the target analyte is varied. When a concentration curve is run, the limits of the linear region are not known; indeed, determining this region is one of the goals of running the response curve. As such, we expect some analyte concentration values at the high and/or low end of the spectrum to lie outside the linear operating region. Therefore, when a linear OLS regression curve is fit, these points in non-linear regions of the MRM-MS response can unduly affect the regression fitting, resulting in skewed slope and y-intercept values. Robust regression [2, 14] is one approach used to address this problem. Robust regression fitting algorithms are resistant to outliers, and down-weight points that deviate from the main bulk of data points, resulting in more reliable estimates. Some common methods for robust regression include least median of squares (LMS) regression, least trimmed squares (LTS) regression and the MM-estimator.
Furthermore, the variance of concentration measurements tends to increase at higher concentrations. To account for this trend, data points are weighted according to the inverse square of the measurement, or the inverse of the variance, at that measurement level. This weighting can be used either with least squares regression (resulting in weighted least squares, or WLS, regression) or with robust regression.
A comparison of OLS, WLS and robust regression, with and without weighting, for representative peptides in the NBT Study data is shown in Figure 1c. As is evident from the Figure, OLS is significantly influenced by the few points at the highest concentration levels. Robust regression is more resistant to such outliers and tends to fit the regression line to the trend captured by the majority of points. The weighted regression lines for WLS and robust regression are much closer to each other and are significantly less influenced by outliers and the high variance at the upper end of the calibration curve.
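The effect of 1/x^2 weighting can be illustrated with a minimal, self-contained sketch of closed-form straight-line fitting (no statistics library assumed). The calibration points below are hypothetical, with the top point deliberately departing from linearity:

```python
# OLS vs. weighted least squares (WLS) for a calibration line y = m*x + b.

def wls_fit(x, y, w):
    """Weighted least squares fit of a line; pass weights of 1.0 for OLS."""
    sw   = sum(w)
    swx  = sum(wi * xi for wi, xi in zip(w, x))
    swy  = sum(wi * yi for wi, yi in zip(w, y))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    m = (sw * swxy - swx * swy) / (sw * swxx - swx * swx)
    b = (swy - m * swx) / sw
    return m, b

# Hypothetical spike levels (fmol/uL) and measured response;
# the 100 fmol/uL point deliberately departs from linearity.
x = [1.0, 2.0, 5.0, 10.0, 50.0, 100.0]
y = [1.1, 1.9, 5.2, 9.8, 51.0, 120.0]

m_ols, b_ols = wls_fit(x, y, [1.0] * len(x))
m_wls, b_wls = wls_fit(x, y, [1.0 / xi**2 for xi in x])   # 1/x^2 weighting
# The weighted slope sits closer to the ideal slope of 1 than the OLS slope,
# because the aberrant high-concentration point is down-weighted.
print(round(m_ols, 3), round(m_wls, 3))
```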
Figure: Comparison of the regression analysis in linear and log space, plotting the regression slope after log-transformation against the regression slope in linear space (ideal slope = 1).
Traditionally in analytical chemistry, the slope of a linear regression is related to the sensitivity of an assay, which describes the ability of the assay to differentiate between small changes in analyte concentration. Calibration sensitivity is equal to the slope of the calibration curve and is independent of concentration. This is the quantitative definition of sensitivity recognized by the International Union of Pure and Applied Chemists (IUPAC). Calibration sensitivity, however, does not take into account measurement precision. Analytical sensitivity (γ), described by Mandel and Stiehler, takes into account the precision of the measurements as well as the slope of the calibration curve: γ = m/s, where m is the slope of the calibration curve and s is the standard deviation of the measurement. In the context of peptide quantification, the slope of the calibration curve or the analytical sensitivity would readily aid in the selection of the best peptide targets, if there were several to choose from, and is also a good measure of whether or not similar instruments are measuring the target peptides with equal sensitivity. In addition to sensitivity, however, other figures of merit can be calculated from these values, including the limit of detection, the limit of quantification, and the amount of endogenous signal present in the blank.
Given the importance of the slope and intercept of the regression line for the calibration curve, an additional approach to evaluating the robustness and quality of the regression fit is to inspect the 95% confidence intervals for the slope and intercept. While many regression fitting algorithms provide an estimate of the standard error, the 95% confidence intervals can be easily calculated. Bootstrap resampling is an alternative method for determining these limits (also see Section 4.2 below).
Currently, less attention is given to the slope and y-intercept, which are often not reported in publications in favor of R2. R2 is a measure of "explained variance" and does not provide an indication of the robustness of the regression fit. In addition to R2, other aspects of the regression fit, including the confidence intervals of the slope and intercept, the residuals and a graph of the data, should be examined before judging the quality of the regression line.
Limits of detection (LOD) and quantification (LOQ) are important characteristics of any quantitative method, and in the MRM-MS assay can be determined using the calibration curve. The intuition and definitions related to LOD and LOQ determination are described in Currie, 1968. There are a variety of methods to calculate LOD and LOQ, based on different aspects of the assay and its intended application. A brief summary of the different classes of methods to determine detection and quantification limits is given below.
In this approach, replicates of a blank sample--i.e., a sample with the target analyte absent--are used to determine the LOD and LOQ of the analyte. Assuming that random measurement errors are normally distributed, and accepting a 5% risk of incorrectly claiming detection in the absence of analyte (α) and a 5% risk of missing the detection of analyte (β), LOD = 3.29 σB and LOQ = 3 × LOD ≈ 10 σB, where σB is the standard deviation of the blank sample.
The above method uses only the blank sample. In practice, the standard deviation of the blank sample could be significantly different from the standard deviation with the analyte present at a low level. To account for this possibility, the LOD and LOQ calculation explicitly takes both the blank and the low-concentration samples into account. A variation of the partly nonparametric method is to use a parametric approximation to account for a small number of replicates, calculating the LOD as: LOD = μB + t(1-β) (σB + σS)/√n, where μB is the estimated mean of the blank samples, σB is the standard deviation of the blank samples and σS is the standard deviation of the low-concentration samples. The equation assumes that the analyte concentration is estimated using the mean of n replicates. Given the LOD, the LOQ is estimated as 3 × LOD.
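The two blank-based formulas above can be sketched as follows. The blank and low-concentration replicate values and the t critical value are hypothetical placeholders; in practice the t value would be taken from tables for the chosen β and degrees of freedom:

```python
import statistics
from math import sqrt

def lod_blank_only(blanks):
    """Blank-only method: LOD = 3.29 * sigma_B, LOQ = 3 * LOD."""
    sigma_b = statistics.stdev(blanks)
    lod = 3.29 * sigma_b
    return lod, 3 * lod

def lod_blank_plus_low(blanks, low_conc, t_value, n):
    """Blank plus low-concentration method:
    LOD = mu_B + t * (sigma_B + sigma_S) / sqrt(n),
    where sigma_S comes from replicates of a low-concentration sample and
    n is the number of replicates averaged per concentration estimate."""
    mu_b = statistics.mean(blanks)
    sigma_b = statistics.stdev(blanks)
    sigma_s = statistics.stdev(low_conc)
    lod = mu_b + t_value * (sigma_b + sigma_s) / sqrt(n)
    return lod, 3 * lod

# Hypothetical replicate measurements (arbitrary concentration units).
blanks = [0.10, 0.20, 0.15, 0.12, 0.18]
low = [0.95, 1.10, 1.05]
lod1, loq1 = lod_blank_only(blanks)
lod2, loq2 = lod_blank_plus_low(blanks, low, t_value=2.132, n=3)
```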
Instead of using just the blank or a low-concentration point, this method uses the entire calibration curve to determine the LOD. Also termed the calibration plot method, the standard error s_y|x of the measured concentration (the y-estimate in the regression equation) is used in place of the standard deviation of the blank sample. The LOD is then calculated as LOD = 3 s_y|x/slope, and LOQ = 3 × LOD.
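A minimal sketch of the calibration plot method, computing s_y|x as the residual standard error of an OLS fit (with n-2 degrees of freedom); the calibration data would be supplied by the user:

```python
def calibration_plot_lod(x, y):
    """LOD from the full calibration curve: LOD = 3 * s_{y|x} / slope,
    where s_{y|x} is the standard error of the regression estimate
    (residual standard deviation with n - 2 degrees of freedom)."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    s_yx = (ss_res / (n - 2)) ** 0.5
    lod = 3 * s_yx / slope
    return lod, 3 * lod   # (LOD, LOQ)

# Hypothetical calibration points: spike level vs. measured concentration.
x = [1.0, 2.0, 5.0, 10.0, 25.0]
y = [1.2, 2.1, 4.8, 10.3, 24.9]
lod, loq = calibration_plot_lod(x, y)
```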
This approach determines the LOQ based on an accepted target value for the relative standard deviation (RSD). The RSD is the absolute value of the coefficient of variation (CV, the ratio of standard deviation to mean), and is expected to be small at the LOQ (typically less than 10% or 20%). The calibration curve is used to determine the RSD at each spike-in level, and the RSD variation is modeled as a function of the analyte concentration using RSD = p1 × level × (1 - p2 log(level)). The parameters p1 and p2 are determined using a fitting process, and the LOQ is the analyte concentration at which the target RSD is achieved. The LOD is then reported as LOD = LOQ/3.
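Assuming the parametric RSD-versus-concentration model has already been fitted, the final step of locating the LOQ can be sketched as a bisection search for the target RSD. The toy RSD curve below is a stand-in for a fitted model, not the parametric form used in any cited study:

```python
def loq_from_rsd(rsd_model, target_rsd, lo, hi, tol=1e-6):
    """Find the concentration at which the fitted RSD curve crosses the
    target RSD, by bisection. Assumes rsd_model is monotonically decreasing
    on [lo, hi] and brackets the target:
    rsd_model(lo) > target_rsd > rsd_model(hi)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rsd_model(mid) > target_rsd:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy stand-in for a fitted RSD model (RSD falls as concentration rises).
def toy_rsd(level):
    return 2.0 / level

loq = loq_from_rsd(toy_rsd, 0.20, 1.0, 100.0)   # target RSD of 20%
print(round(loq, 3))   # 10.0, since 2.0 / 10 = 0.20
lod = loq / 3          # LOD reported as LOQ / 3
```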
Endogenous presence of analyte signal in the sample matrix is a difficult problem to deal with because it can complicate the calculation of LOD and LOQ. In addition, any signal derived from a spiked-in analyte (as in a calibration curve experiment) is added to the endogenous signal. One experimental approach to circumvent this issue is to use a surrogate matrix, one that is very similar to the sample matrix but does not contain the endogenous analyte. This can be difficult to find, especially in a sample matrix as complex as plasma, with thousands of proteins spanning ten orders of magnitude in concentration. Using plasma from a different species may even introduce new problems, such as interfering signal. An experimental alternative is to use the internal standard as a surrogate analyte and vary its concentration in the sample matrix to generate a standard curve. While a reasonable alternative, this raises questions about the difference in the chemical noise that may be present at the m/z values of the surrogate analyte versus the real analyte; a stable isotope-labeled version of a peptide with a mass shift of 6 amu may have an entirely different level of chemical noise contributed by the sample matrix and electronic noise. Therefore, it is beneficial to consider a statistical means of estimating the endogenous level of analyte present in a sample matrix.
To estimate the endogenous level statistically, a regression is fit to the calibration curve, with the y-axis representing measured concentration and the x-axis representing theoretical concentration. The 99% confidence interval of the regression line y-intercept is calculated using bootstrap estimation with repeated (1000 or more) resampling iterations. Bootstrap estimation involves resampling the data with replacement in order to assess the expected variation. For each resampled data set, the regression above is re-fit to recalculate the y-intercept. The basic non-parametric confidence interval for the y-intercept is estimated as the range (b_α, b_1-α), where b_p is the p-th percentile of the y-intercept over the resampled estimates, and (1-2α) is the confidence level. Usually, α = 0.025 or 0.005, for a confidence level of 95% or 99% respectively.
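The bootstrap confidence interval for the y-intercept can be sketched as follows. For simplicity, this sketch refits each resample by ordinary least squares rather than a robust method, and the calibration data are hypothetical:

```python
import random

def bootstrap_intercept_ci(x, y, alpha=0.005, n_boot=2000, seed=0):
    """Nonparametric bootstrap CI for the regression y-intercept: resample
    (x, y) pairs with replacement, refit the line, take percentiles of the
    resulting intercepts. alpha=0.005 gives a 99% confidence interval."""
    rng = random.Random(seed)
    n = len(x)
    intercepts = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        xs = [x[i] for i in idx]
        ys = [y[i] for i in idx]
        xbar = sum(xs) / n
        ybar = sum(ys) / n
        sxx = sum((xi - xbar) ** 2 for xi in xs)
        if sxx == 0:           # degenerate resample (all x equal); skip it
            continue
        sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(xs, ys))
        slope = sxy / sxx
        intercepts.append(ybar - slope * xbar)
    intercepts.sort()
    lo = intercepts[int(alpha * len(intercepts))]
    hi = intercepts[int((1 - alpha) * len(intercepts)) - 1]
    return lo, hi

# Hypothetical curve lying exactly on y = 5 + 2x, i.e. intercept 5
# (mimicking an endogenous level of 5 concentration units).
x = list(range(1, 11))
y = [5.0 + 2.0 * xi for xi in x]
lo, hi = bootstrap_intercept_ci(x, y, n_boot=500)
# lo and hi are both ~5 here; a strictly positive lower limit would indicate
# endogenous analyte at the intercept level.
```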
If the lower limit of the confidence interval is positive, then the analyte is deemed to have an endogenous level equal to the regression y-intercept. If the lower 99% confidence interval is zero or negative, there is no expected endogenous level for that analyte. Once endogenous levels (if present) are calculated, the estimated LOD (and hence LOQ) in the absence of endogenous analyte is the difference of the calculated LOD (in the matrix) and the estimated endogenous level.
Table: Summary of endogenous calculations for 28 peptides from 8 proteins.
We have observed no instances of false negatives, in which an endogenous level was expected but the method returned an endogenous level of 0. If such instances are encountered, the confidence level can be relaxed from the currently used 99% to 95% to reduce false negatives (at the expense of more false positives).
Effective application of the method is dependent on having enough points on the concentration curve that are in the linear operating range. If there are too few points in the concentration curve, or if the endogenous level is so high that most of the concentration curve is non-linear and affected by endogenous analyte, the method will fail. Theoretically, the method is likely to succeed if at least 50% of points on the concentration curve fall in the linear operating range (since least median squares regression has a breakdown point of 0.5).
Table: Summary of potential problems encountered during analysis of SID-MRM-MS data that often require manual identification or re-integration, and their impact on the precision and accuracy of quantification:
- Poor chromatographic peak shape: imprecise and inaccurate area assessment
- Chromatographic peak too narrow (<6 points across): imprecise and inaccurate area assessment
- Inconsistent integration between analyte and SIS peptides: imprecise and inaccurate peak area assessment
- Interference in analyte or SIS signals: inaccurate peak area assessment
1. Use all transitions of a peptide (peak areas from XICs) to calculate relative ratios by either the minimal-pairs or the all-pairs method. The minimal-pairs method calculates the relative ratio of a given transition by dividing its peak area by the peak area of one other transition from the same precursor. The all-pairs method calculates ratios for all possible transition pairs generated from one precursor. This process is performed for each peptide analyte and its corresponding SIS so that the relative ratios of the analyte can be compared with the relative ratios of the SIS.
2. Apply the t-test to determine a p-value for the hypothesis that the relative ratios for the analyte are different from the relative ratios of the SIS.
3. Use the Benjamini-Hochberg false-discovery rate method to correct the nominal t-test p-values to account for multiple hypothesis testing.
4. Disaggregate the corrected p-values for the relative ratios into combined p-values for each transition. Each transition is used to calculate either 2 ratios (minimal-pairs method) or n-1 ratios (all-pairs method, where n is the total number of observed transitions for each peptide). Calculating the p-value for determining whether a transition is problematic requires combining the p-values for its relative ratios. Because the same peak areas from a given transition were used in calculating all of its ratios, the resulting p-values are not independent; these dependent p-values are combined by means of a previously outlined methodology [19, 39].
5. Calculate the CV of the PAR (analyte/SIS) from the results for all replicates of a transition for a given sample.
6. Mark a transition as "bad" if either its corrected combined p-value is less than the p-value threshold of 10^-5 or its CV is greater than the CV threshold of 0.2 (20%). Transitions not satisfying either of these conditions are classified as "good." Although the chosen thresholds work well for many data sets, they can be changed to fine-tune the algorithm as needed.
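The multiple-testing correction and the final flagging decision in the steps above can be sketched as follows. The per-transition t-test p-values and PAR CVs are assumed to have been computed already, and the values shown are hypothetical:

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR adjustment: sort p-values, scale each by
    n/rank, then enforce monotonicity from the largest p-value down."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adjusted = [0.0] * n
    prev = 1.0
    for rank_from_end, i in enumerate(reversed(order)):
        rank = n - rank_from_end
        prev = min(prev, pvals[i] * n / rank)
        adjusted[i] = prev
    return adjusted

def flag_transition(adjusted_p, par_cv, p_threshold=1e-5, cv_threshold=0.2):
    """AuDIT-style decision: 'bad' if the interference test is significant
    or the PAR CV across replicates exceeds the threshold, else 'good'."""
    return "bad" if (adjusted_p < p_threshold or par_cv > cv_threshold) else "good"

# Hypothetical per-transition inputs: t-test p-values and PAR CVs.
pvals = [0.4, 1e-8, 0.03]
cvs = [0.05, 0.10, 0.35]
adj = benjamini_hochberg(pvals)
print([flag_transition(p, c) for p, c in zip(adj, cvs)])
# ['good', 'bad', 'bad']: transition 2 fails the interference test,
# transition 3 fails the CV threshold.
```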
Table: Validation of AuDIT, reporting overall accuracy (%) on four data sets: a 10-peptide standard curve with 3 transitions (MultiQuant); a 10-peptide standard curve with 3 transitions (Skyline); a 10-peptide standard curve with 5 transitions (MultiQuant); and clinical samples with 3 transitions (MultiQuant).
AuDIT can be applied to data exported from most MRM-MS analysis software, and could be embedded into such applications to greatly reduce manual inspection and alert the researcher to potentially errant data early in the analysis, allowing problematic samples that exhibit large CV values (caused, for example, by column degradation and poor peak shape) to be re-acquired. In addition, incorporation of AuDIT into the MRM-MS workflow would streamline processing, likely resulting in more efficient generation of accurate and precise quantitative data from SID-MRM-MS analyses.
The AuDIT software is available at http://www.broadinstitute.org/cancer/software/genepattern/modules/AuDIT.html.
AuDIT provides a mechanism to evaluate SID-MRM-MS data quality from the perspective of minimizing interferences to enable robust quantification. A complementary approach involves assigning quality scores to the MRM-MS spectra in order to statistically define error rates for peptide identities, as implemented in mProphet. mProphet uses characteristics of the transition peaks and the concept of "decoy peaks" (measured where no real peaks are present) to derive a composite discriminant score that statistically captures the quality and reliability of the MRM-MS data for each peptide.
In addition to AuDIT and mProphet, other data analysis software packages possess features that help to evaluate the composite signal of all transitions measured for a peptide and its SIS and to monitor for differences. Such features are available in Skyline, a vendor-neutral data analysis program, which monitors the signal contribution from each transition and enables the user to compare it to that of the SIS peptide in visual plots. PinPoint software (Thermo Fisher Scientific) likewise compares the fragment ion ratios of the light and heavy peptides to check for agreement, and reports them with visual plots. These software features work well for detecting interfering signal in a transition from a given peptide, and through the use of visual plots enable rapid screening of large data sets with a variety of peptide targets.
In order for MRM-MS combined with stable isotope dilution to be used as an assay for quantitative measurement of proteins and peptides, the precision and variability of the assay need to be characterized not only within a given laboratory, but also across multiple laboratories. Assessment of the intra- and inter-laboratory variation of MRM-MS assays was the primary goal of the NBT Study described in Section 2.
Summary of Results for Studies I, II, and III (combined results for process replicates a, b, c) for each peptide across sites for inter-site CV, intra-site CV, linear slope and % recovery.
In this analysis, the interlaboratory precision is calculated as the median intralaboratory CV. While this measure summarizes the precision obtained across multiple laboratories, it does not account for the accuracy of the measurement across different laboratories: all the laboratories may have repeated measurements that are very close together (high precision, and hence low CV), yet the actual measurements may differ substantially from laboratory to laboratory (poor accuracy). Hence, in clinical domains, the interlaboratory precision is calculated as the CV of all the measurements of a peptide (at a given concentration) across all the laboratories. An additional study investigated the use of more sophisticated mixed effect models to evaluate the sources of variation in the NBT study.
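The distinction between the two summaries can be sketched as follows. This is a minimal illustration with hypothetical replicate measurements from three invented sites; the site names and values are not from the NBT study.

```python
from statistics import mean, stdev, median

def cv(values):
    """Coefficient of variation (%) of a list of measurements."""
    return 100.0 * stdev(values) / mean(values)

# Hypothetical replicate measurements of one peptide at one
# concentration point, from three laboratories (invented data).
labs = {
    "site_A": [10.1, 9.8, 10.3],
    "site_B": [11.5, 11.9, 11.2],
    "site_C": [9.0, 8.7, 9.4],
}

# Intralaboratory precision: CV within each site.
intra_cvs = {site: cv(v) for site, v in labs.items()}

# Summary used in this analysis: median of the intra-lab CVs.
median_intra_cv = median(intra_cvs.values())

# Clinical-style interlaboratory precision: CV pooled over all
# measurements from all sites, which also penalizes any systematic
# site-to-site bias that the median intra-lab CV would miss.
pooled = [x for v in labs.values() for x in v]
inter_cv = cv(pooled)
```

With these data, each site is internally precise (intra-lab CVs of a few percent), yet the pooled inter-lab CV is much larger because the site means differ: this is exactly the precision-versus-accuracy gap described above.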
For researchers new to SID-MRM-MS assays, this section outlines important aspects of the experimental design and data analysis, along with practical tips.
When constructing a calibration curve, use a concentration range that extends beyond the estimated LOD and upper LOQ so that these figures of merit can be calculated from the data. Prepare the calibration curve in a matrix identical to that of the actual sample in order to accurately reproduce the chemical noise contributed by the matrix; if this is not possible, use a matrix that is very similar in composition. Analyze matrix blank samples periodically throughout the assay. This provides the best determination of the signal-to-noise of the sample matrix and internal standards, and detects any analyte carryover that would be encountered in a quantitative assay of unknown samples.
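A linear calibration fit of this kind can be sketched with ordinary least squares. The concentrations and light/heavy peak-area ratios below are hypothetical and chosen only to span a wide range, as recommended above; real assays would fit response ratios measured in the appropriate matrix.

```python
# Hypothetical calibration data: known spiked concentrations and the
# measured light/heavy peak-area ratios at each point.
conc = [0.5, 1.0, 5.0, 10.0, 50.0, 100.0]    # e.g. fmol/uL
ratio = [0.06, 0.11, 0.52, 1.05, 5.1, 10.2]  # measured response ratio

# Ordinary least squares for slope and intercept of ratio vs. conc.
n = len(conc)
mx, my = sum(conc) / n, sum(ratio) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(conc, ratio))
         / sum((x - mx) ** 2 for x in conc))
intercept = my - slope * mx

def back_calculate(measured_ratio):
    """Interpolate the concentration of an unknown from its ratio."""
    return (measured_ratio - intercept) / slope
```

Back-calculating unknowns with `back_calculate` is only valid for ratios that fall inside the calibrated range, which is one reason the curve should extend past the expected LOD and upper LOQ.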
To determine the technical variability of an assay, analyze a minimum of three technical replicates (repeat injections of the same sample). Process replicates (preparations of the sample made at different times) can be used to estimate the analytical variability of an assay; technical variability is usually smaller than analytical variability. Prepare a minimum of three replicates for each concentration point in a calibration curve. The precision of the calculations improves with sample size, so if time and resources permit, more replicates are preferable.
Most methods of calculating the LOD or LOQ interpolate the value from the calibration curve data points. To verify that the calculated LOD is reasonable, visually inspect the individual concentration points and confirm that the concentration point just above the calculated LOD is easily discernible. The main factors affecting the calculated LOD of an assay are the noise present in the matrix blank and the reproducibility of that noise: matrices with high noise, or where the measurement of that noise is very variable, will yield higher LODs.
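The dependence of the LOD on blank noise and its reproducibility can be illustrated with one common convention, estimating the LOD as the mean blank signal plus a multiple of its standard deviation. This is a generic sketch, not the specific method used in the studies above, and the blank signals are invented.

```python
from statistics import mean, stdev

def lod_from_blanks(blank_signals, k=3.0):
    """LOD estimate in signal units from replicate matrix-blank runs:
    mean blank signal plus k standard deviations (a common convention)."""
    return mean(blank_signals) + k * stdev(blank_signals)

quiet_matrix = [100, 105, 98, 102]   # low, reproducible blank noise
noisy_matrix = [100, 160, 60, 140]   # similar mean level, highly variable
```

Because the noisy matrix has a much larger blank standard deviation, `lod_from_blanks(noisy_matrix)` comes out far higher than `lod_from_blanks(quiet_matrix)`, matching the observation above that variable blank noise inflates the LOD.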
Often in practice, the largest influence on the sensitivity of an assay is not the instrument itself, but how well the instrument is performing: variability can have a profound impact on sensitivity. It is therefore highly recommended to evaluate the reproducibility of an LC-MRM-MS system before evaluating its sensitivity, by making repeat measurements of the same sample with the same method and confirming that CV values are below 20%.
Last but not least, automated data processing tools and algorithms should be applied with care, continually assessing data quality, consistently accounting for outliers, and monitoring results.
MRM-MS assays are increasingly being deployed to measure and quantify peptides (and hence, proteins) in a variety of matrices and backgrounds. This manuscript provides a complete toolkit for the analysis and interpretation of MRM-MS experiments.
Sound statistical analysis of MRM-MS data starts with high quality data. Using algorithms like AuDIT and mProphet (Section 5), data quality assessment can be automated, resulting in a more reliable high-throughput analysis pipeline that quickly weeds out poor-quality transitions or transitions with interferences.
Calibration and characterization of detection limits and variability are important aspects of any quantitative assay. We present a comparative set of methods and approaches for MRM-MS assay calibration, regression analysis, determination of confidence intervals, dealing with endogenous signal, assessment of detection limits and multi-laboratory characterization of assay performance and precision.
While systematic and principled analysis of data is essential for achieving the full potential of quantitative MRM-MS assays, care has to be exercised in experiment design and data generation to maximize reproducibility and data quality. There are many experimental and other variables beyond the scope of this manuscript that need to be addressed for successful deployment and use. Several new multi-laboratory studies aim to circumscribe and control these aspects. Two such factors worth mentioning are (i) digestion and (ii) system suitability assessment. Reproducible digestion of proteins is a prerequisite for reliable quantification using MRM-MS. Several ongoing studies attempt not only to establish standard operating procedures that ensure proper digestion, but also to use specially chosen marker peptides to detect improper or incomplete digestion. Furthermore, given the complexity of chromatography and MS instrumentation, constant assessment of optimal system performance is necessary to guarantee data quality. Studies for defining, assessing and maintaining system suitability are also under way. Most of these large multi-laboratory studies are being carried out under the auspices of the Clinical Proteomics Technology Assessment for Cancer (CPTAC) program sponsored by the National Cancer Institute (http://proteomics.cancer.gov), with the overarching goal of advancing biomarker discovery and enabling the advancement of promising new technologies like MRM-MS towards clinically deployed assays.
Support for this work was provided in part by the Broad Institute of MIT and Harvard and by grants from the National Cancer Institute (U24CA126476) and National Heart Lung and Blood Institute (HHSN268201000033C) to SAC, and in part by a grant from the National Institutes of Health (Grant NCI R01 CA126219 to D.R.M, as part of NCI's Clinical Proteomic Technologies for Cancer Program).
This article has been published as part of BMC Bioinformatics Volume 13 Supplement 16, 2012: Statistical mass spectrometry-based proteomics. The full contents of the supplement are available online at http://www.biomedcentral.com/1471-2105/13/S16.
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.