NITPICK: peak identification for mass spectrometry data
 Bernhard Y Renard†^{1, 2},
 Marc Kirchner†^{1, 2},
 Hanno Steen^{3},
 Judith AJ Steen^{4} and
 Fred A Hamprecht^{1, 2}
DOI: 10.1186/1471-2105-9-355
© Renard et al; licensee BioMed Central Ltd. 2008
Received: 08 April 2008
Accepted: 28 August 2008
Published: 28 August 2008
Abstract
Background
The reliable extraction of features from mass spectra is a fundamental step in the automated analysis of proteomic mass spectrometry (MS) experiments.
Results
This contribution proposes a sparse template regression approach to peak picking called NITPICK. NITPICK is a Non-greedy, Iterative Template-based peak PICKer that deconvolves complex overlapping isotope distributions in multi-component mass spectra. NITPICK is based on fractional averagine, a novel extension to Senko's well-known averagine model, and on a modified version of sparse, non-negative least angle regression, for which a suitable, statistically motivated early stopping criterion has been derived. The strength of NITPICK is the deconvolution of overlapping mixture mass spectra.
Conclusion
Extensive comparative evaluation has been carried out and results are provided for simulated and real-world data sets. NITPICK outperforms pepex, to date the only alternate, publicly available, non-greedy feature extraction routine. NITPICK is available as a software package for the R programming language and can be downloaded from http://hci.iwr.uni-heidelberg.de/mip/proteomics/.
Background
The reliable extraction of proteomic features from complex biological mixtures is of utmost interest for unraveling the intricate biomolecular interplay at the heart of many systems biology research questions. In this context, mass spectrometry (MS) has become a key technology which provides peptide and protein identification, modification characterization and quantification capabilities. In contrast to gene expression microarray technologies, MS analysis yields a direct view on the whole set of proteins (the proteome) present in the system under investigation and can thus contribute to a richer picture of protein interaction, real-time dynamics and their regulation [1]. MS contributes to clinical research and the diagnosis process [2], it is used to detect, grade and characterize cancer diseases [3], it serves as a general purpose tool for microorganism characterization [4, 5] and provides sensitive and specific means for pharmaceutical quality control.
MS experiments typically contain tens to thousands of spectra, each of which holds intensity information for tens to hundreds of thousands of mass channels. These data stem from a set of different mass analysis technologies, combining chemical separation procedures (chromatography), ionization methods (electrospray ionization, matrix-assisted laser desorption/ionization) and mass analyzers (time-of-flight, quadrupole, ion cyclotron motion). Despite physicochemical preprocessing and the availability of high mass resolution instruments, spectra which stem from complex biochemical mixtures (e.g. cell lysate, blood or serum) frequently exhibit overlapping isotope distributions of independent molecular species. Moreover, in many quantitative MS approaches, these mixtures are present by design and their manual unmixing, quantification and interpretation is tedious or infeasible.
As a consequence, the automated analysis and interpretation of multi-component mass spectra is highly desirable. An (incomplete) set of challenges for MS feature extraction includes the sheer data set sizes, mixtures of isotope patterns, the presence of multiple charge states, chemical and detector noise, species-dependent ionization efficiencies, chemical reproducibility and deviations from detector linearity. Among all requirements that derive from these challenges, it is important to emphasize the crucial nature of the feature extraction step: as all subsequent analysis steps rely on the set of extracted features, meaningful biological conclusions are highly dependent on the adequacy and reliability of the feature extraction method.
Apart from a few special alternate approaches [6, 7], all automated methods for feature extraction from isotope-resolved mass spectra compare the observed (experimental) spectral pattern to a set of precalculated theoretical isotope patterns. The calculation of isotope patterns is based on the estimation of average stoichiometries for a particular molecular mass (averagine [8] and related methods [9]), on relative isotope abundance estimation [10], or on protein database-driven mean isotope distribution calculation [11]. The computation of isotope patterns is based on efficient implementations [12–14] of Yergey's original polynomial method [15, 16].
Comparison of theoretical and experimental isotope distributions is typically accomplished based on subtractive fitting and peak selection algorithms, attempting to sequentially detect the dominant components in a mixture spectrum. These subset selection methods attempt to determine a small set of basis functions capable of approximating the observed signal well. Facing the infeasibility of an exhaustive search over all possible subsets of explanatory basis functions, they apply greedy search strategies. Here, "greediness" refers to the fact that these approaches consistently overestimate individual feature contributions and are incapable of excluding a basis function once it has been included in the active set. Hence, although providing sparseness, they are not globally optimal. In the context of mixture modeling of mass spectra, these approaches amount to sequential isotope distribution template matching procedures [6, 8–11, 17–22]. Fitting is carried out via χ^{2} distances [8, 20], least squares [9–11, 17, 21–23], weighted least squares [19], or cross-correlation [18, 24]. The automatic determination of the charge state associated with an isotope pattern present in an experimental spectrum is based on cross-correlation [19, 25] or on dot products in Fourier space [25, 26], exploiting the shift theorem of the Fourier transform. There are only a few [27] non-greedy feature selection algorithms and mixture model approaches for MS data [28–31]. Among these, Matching [28] and Roussis' method [29] rely on manual preselection of contribution candidates. Sparse non-greedy procedures include pepex [30] and Du's method [31]. The pepex approach is suitable for single charge data and is based on a non-negative sparse regression scheme with an approximate L_{0}-norm constraint. Du and Angeletti [31] perform data reduction prior to feature extraction and apply a sparseness-promoting variable selection scheme [32].
With the exception of Du's [31] and Kaur's [19] methods, none of the mentioned mixture model approaches provide support for the detection of a sparse set of a priori unknown peptide peaks under an arbitrary set of charge states. Du's method [31] and NITPICK improve upon Kaur's greedy iterative weighted least squares fitting approach. In contrast to [31], NITPICK does not rely on a heuristic parameterization and is instead based on statistical model selection, making use of an algorithmically more efficient non-greedy sequential feature selection procedure with a statistically motivated termination criterion. NITPICK was designed to support the calculation of accurate monoisotopic peak lists from raw mass spectra and was specifically tailored to cases where the raw spectra stem from unknown, possibly overlapping experimental isotope patterns of multiple charge states.
The methods section details the mixture modeling approach; fractional averagine for improved stoichiometry estimation and data fitting; and our main contribution, a computationally efficient method for improved non-negative feature selection, together with the corresponding statistical complexity estimation approach and the derivation of a lower bound for early termination. Comparative results on simulated and real-world data sets are given and subsequently discussed in the results section. Finally, we conclude and offer perspectives. Derivations of the formulas used in the main article are available in the appendix.
Methods
The NITPICK algorithm (cf. figure 1) models an observed mixture spectrum as a linear combination of theoretical isotope distribution patterns. Statistically, finding a sensible parameterization of this mixture model amounts to a constrained regression problem in which we seek to minimize the raw signal reconstruction error in a least-squares sense while adhering to a set of additional constraints. Such an approach requires reliable underlying isotope patterns, and we propose an improvement to the well-known averagine model to achieve this goal. We subsequently introduce NITPICK's iterative feature selection procedure, which employs a novel, non-greedy isotope distribution selection method and is based on a statistically motivated termination criterion, attempting to eliminate premature or late iteration termination.
Mixture model
Each of the concentration coefficients c_{ i }, i = 1, ..., K is associated with a column ϕ_{ i } of the N × K model matrix Φ. We regard these columns as basis functions and their elements ϕ_{ ji } correspond to the mass spectrum abundance expected in the j th mass bin m_{ j } of the i th pure component ϕ_{ i }.
For the estimation of the concentration vector c, the model matrix Φ has to be available, and in general this is not the case. One hence resorts to approximating the basis functions by a large set of theoretical isotope distributions (i.e. isotope abundance patterns) densely spread over the prespecified mass/charge binning scheme. Effectively, this recasts the original peak picking task into the framework of a feature (i.e. basis function) selection problem.
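As a minimal illustration of this mixture model, s ≈ Φc with non-negative concentrations c, a constrained least squares fit can be sketched as follows. This is not NITPICK's selection procedure (which is described in the following sections), and the basis functions here are toy patterns rather than averagine-derived isotope distributions:

```python
import numpy as np
from scipy.optimize import nnls

n_bins = 50  # toy mass/charge binning scheme

def toy_pattern(start, ratios):
    """Place a toy isotope-like pattern (normalized to sum 1) at bin `start`."""
    phi = np.zeros(n_bins)
    phi[start:start + len(ratios)] = ratios
    return phi / phi.sum()

# model matrix Phi: each column is one candidate pure component phi_i
Phi = np.column_stack([
    toy_pattern(10, [1.0, 0.8, 0.4]),   # component 1
    toy_pattern(12, [1.0, 0.9, 0.5]),   # overlapping component 2
    toy_pattern(30, [1.0, 0.7]),        # isolated component 3
])

c_true = np.array([5.0, 3.0, 0.0])      # third candidate absent from the mixture
s = Phi @ c_true                        # observed (noiseless) mixture spectrum

c_hat, residual = nnls(Phi, s)          # least squares fit with c >= 0
print(np.round(c_hat, 3))
```

In this noiseless toy setting the non-negative least squares fit recovers the concentrations exactly; the real problem differs in that the set of candidate basis functions is very large and a sparse subset must be selected.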
Model matrix calculation
Given an elemental stoichiometry, the corresponding theoretical isotope distribution is well-defined and can easily be calculated [12–15]. Hence, if a prespecified set of stoichiometries of potential pure components is available, the calculation of the respective set of theoretical isotope distributions (including chemical modifications and multiple charge states) is straightforward. These isotope distributions are subsequently convolved with instrument-specific, possibly mass-dependent peak shape functions, yielding the basis functions ϕ_{ i }.
Fractional averagine
where p_{ l } is the probability of occurrence of the l th isotope (${\sum}_{l=1}^{k}{p}_{l}=1$), x = (x_{1}, ..., x_{ k })^{ T } denotes the number of times a particular isotope is chosen (${\sum}_{l=1}^{k}{x}_{l}={\rho}_{i}$) and t = (t_{1}, ..., t_{ k })^{ T } is the corresponding variable of the MGF. By rearrangement of the MGFs of all elements, it is possible to separate integer and real-valued contributions, yielding the common averagine model $\widehat{\rho}$ = (⌊ρ_{1}⌋, ⌊ρ_{2}⌋, ..., ⌊ρ_{5}⌋)^{ T } for the integers and the fractional averagine correction $\tilde{\rho}$ = (ρ_{1} − ⌊ρ_{1}⌋, ρ_{2} − ⌊ρ_{2}⌋, ..., ρ_{5} − ⌊ρ_{5}⌋)^{ T } for the remaining fractional masses. The theoretical isotope distribution for ${\tilde{\rho}}_{i}$ is given by the linear combination of a peak of intensity one at mass zero and the theoretical isotope distribution of the i th averagine element, weighted by 1 − ${\tilde{\rho}}_{i}$ and ${\tilde{\rho}}_{i}$, respectively. Thus, efficient calculation of the theoretical isotope distribution of the stoichiometry $\widehat{\rho}$ is carried out based on the Mercury7 algorithm [14], and the theoretical isotope distribution for the fractional stoichiometry $\tilde{\rho}$ is subsequently obtained with five additional convolution steps.
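The integer/fractional split can be illustrated with a toy sketch on a nominal-mass grid for a single element. The isotope abundances below are rounded illustrative values and the helper names are hypothetical; the actual implementation uses Mercury7 over all five averagine elements:

```python
import numpy as np

def convolve_n(dist, n):
    """Isotope distribution of n atoms: n-fold self-convolution of one atom's distribution."""
    out = np.array([1.0])
    for _ in range(n):
        out = np.convolve(out, dist)
    return out

def fractional_dist(dist, rho):
    """Distribution for a fractional atom count rho >= 0: integer part by
    self-convolution, fractional remainder as a mixture of 'no atom'
    (a unit peak at mass zero) and 'one atom', weighted (1-frac) and frac."""
    n = int(np.floor(rho))
    frac = rho - n
    integer_part = convolve_n(dist, n)
    correction = (1.0 - frac) * np.array([1.0] + [0.0] * (len(dist) - 1)) \
                 + frac * np.asarray(dist)
    return np.convolve(integer_part, correction)

carbon = [0.989, 0.011]        # ~ 12C/13C abundances (rounded, illustrative)
d = fractional_dist(carbon, 2.5)
print(np.round(d, 4))
```

Since every factor in the convolution is a normalized probability distribution, the result remains normalized, which is what makes the fractional correction a valid distribution rather than an ad hoc rescaling.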
Basis function selection
Given the set of basis functions Φ = [ϕ_{1} ϕ_{2} ... ϕ_{ K }], basis function selection and subsequent determination of the contribution coefficients c_{ i } provides a solution to eq. (1). Thus, as the modeling parameters and, in particular, the monoisotopic masses for all basis functions are known, one can determine which isotope distributions are present and in what abundance (assuming ∑_{ j }ϕ_{ ji } = 1).
For fixed t, this is a quadratic programming problem with linear inequality constraints which can be solved by an active set algorithm, sequentially introducing the inequality constraints and seeking a feasible solution satisfying the Kuhn-Tucker conditions [32, 35, 36]. Equation (4) corresponds to $\widehat{c}(\lambda )=\mathrm{arg}{\mathrm{min}}_{c}\left\{{\Vert s-\Phi c\Vert}^{2}+\lambda {\sum}_{i=1}^{K}\left|{c}_{i}\right|\right\}$ with c_{ i } ≥ 0, where the parameter t is related to the Lagrangian multiplier λ which determines the number of free parameters df(λ) in the linear model [32, 36–38].
Common procedures for the optimal selection of λ or df(λ) are based on the minimization of the prediction error. This involves estimation of training optimism via C_{ p }-statistics, the Akaike Information Criterion (AIC), or the Bayesian Information Criterion (BIC) [37]. Alternatively, direct estimation of the prediction error can be carried out via cross-validation or generalized cross-validation (GCV) [37]. All these methods require the LASSO trace $\widehat{c}$(λ_{ l }), where λ_{ l } ∈ ℒ and $\mathcal{L}=\{{\lambda}_{1},\mathrm{...},{\lambda}_{|\mathcal{L}|}\}$ defines the set of LASSO regularization parameters for which the prediction error is calculated. In general, the calculation of the LASSO trace is computationally intensive and it is not clear how the elements of ℒ should be selected [36]. Least angle regression (LARS) [39] is an algorithmically different approach to variable selection which can be modified such that the LARS algorithm implements the non-negative LASSO from equation (4). The LASSO-modified LARS is a constructive active set procedure which builds the LASSO regularization path in a stepwise manner. Denote by $\mathcal{A}$(λ) the set of indices i ∈ {1, ..., K} of those ϕ_{ i } which are in the active set for a particular choice of λ. Starting from λ = ∞ and letting λ → 0, the algorithm computes non-negative LASSO solutions for all λ for which the active set changes, thus implicitly defining ℒ. The LASSO-modified LARS guarantees $\mathcal{A}$(λ_{ j }) ≠ $\mathcal{A}$(λ_{j+1}), but it allows for the deletion of previously selected basis functions, and hence |$\mathcal{A}$(λ_{ j })| need not increase monotonically for increasing j. Basis functions can be required to enter the active set in their predefined directions [39], which allows the implementation of a non-negativity constraint.
Necessary matrix inversions are constrained to |$\mathcal{A}$(λ)| × |$\mathcal{A}$(λ)|-sized scatter matrices ${\Phi}_{\mathcal{A}(\lambda )}^{T}{\Phi}_{\mathcal{A}(\lambda )}$ and can be implemented as iterative updates; thus the procedure is computationally efficient.
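The path construction described above can be sketched with scikit-learn's LASSO-modified LARS, which supports the positivity constraint. This is an illustration of the regularization-path idea on toy data, not NITPICK's own (R) implementation:

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
Phi = np.abs(rng.normal(size=(100, 20)))   # toy non-negative basis functions
c_true = np.zeros(20)
c_true[[3, 7]] = [2.0, 1.0]                # sparse ground truth
s = Phi @ c_true + 0.01 * rng.normal(size=100)

# alphas are the breakpoints at which the active set changes (the
# implicitly defined set L); coefs[:, j] is the non-negative LASSO
# solution at regularization value alphas[j].
alphas, active, coefs = lars_path(Phi, s, method="lasso", positive=True)
print(active[:2])   # indices of the first basis functions to enter the active set
```

Note that the whole path is obtained in one pass, which is exactly why the breakpoint set ℒ never has to be chosen by hand.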
Complexity estimation
It is desirable to terminate active set updates as soon as the basis functions in the active set are able to explain the observed data sufficiently well, i.e. as soon as the increase in explanatory power no longer justifies the increase in model complexity. We now describe a modification to the non-negative LASSO-modified LARS, which enables us to sequentially build a BIC trace along the LASSO regularization path and to identify minima along this trace. Upon termination, the proposed procedure returns the estimate ${\widehat{c}}_{\mathcal{A}}$ and the set $\mathcal{A}=\{i \mid {\widehat{c}}_{{\mathcal{A}}_{i}}>0\}$ of active basis functions.
BIC measure
The noise variance σ^{2} in eq. (5) is estimated as the mean residual sum of squares of a low-bias non-negative least squares estimate [37].
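As an illustration of how such a criterion selects a stopping point, here is a hedged sketch of a BIC trace along a regularization path. The exact form of eq. (5) is not reproduced here; this uses the common form BIC = RSS/σ² + log(N)·df, with toy residuals and model sizes:

```python
import numpy as np

def bic_trace(rss, df, sigma2, n):
    """BIC values along a path: residual fit penalized by model size."""
    return np.asarray(rss) / sigma2 + np.log(n) * np.asarray(df)

n = 200                                         # number of mass bins
rss = np.array([50.0, 12.0, 4.1, 3.9, 3.85])    # toy RSS at each path step
df = np.array([1, 2, 3, 4, 5])                  # active-set sizes along the path
sigma2 = 0.02                                   # plug-in noise variance estimate

bic = bic_trace(rss, df, sigma2, n)
best = int(np.argmin(bic))                      # minimum of the BIC trace
print(best, np.round(bic, 1))
```

The trace first drops steeply while the fit improves, then rises again once an extra basis function buys almost no reduction in RSS; the minimum marks the early-stopping point.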
Estimation of df(λ)
which is monotonically increasing for decreasing λ (see appendix B for a proof).
Optimal termination
Regression on selected models
The sum constraint in equation (4) is ultimately responsible for the sparseness property of the LASSO. Its regularizing effect is similar to that of the regularization term found in ridge regression, especially with respect to the fact that all LASSO estimates ${\widehat{c}}_{i}$, i = 1, ..., K are subject to shrinkage [32, 37] and represent biased versions of the least squares estimates. Given an active set $\mathcal{A}$, the shrinkage bias on the ${\widehat{c}}_{i}$ can effectively be removed by introducing a subsequent non-negative least squares regression step after the basis functions have been selected by the LASSO procedure [32]. This also holds true for the NN-LASSO-modified LARS procedure, and the corresponding unbiased quantification estimate ${\widehat{c}}_{\mathcal{A}}^{q}$ is given by equation (6) with $\mathcal{A}$(λ) = $\mathcal{A}$.
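The debiasing refit can be sketched as follows, on toy data, with the active set assumed to have already been returned by the selection step:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
Phi = np.abs(rng.normal(size=(60, 10)))   # toy basis functions
c_true = np.zeros(10)
c_true[[2, 5]] = [4.0, 1.5]               # true (sparse) concentrations
s = Phi @ c_true                          # noiseless toy spectrum

active = [2, 5]                           # assumed output of the LARS selection step
# refit only the selected columns by non-negative least squares:
# shrinkage bias of the LASSO estimates is removed by this second fit
c_q, _ = nnls(Phi[:, active], s)
print(np.round(c_q, 3))
```

The selection step decides *which* basis functions are present; the refit decides *how much* of each, free of the shrinkage that the L1 penalty imposes.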
Postprocessing
where ${\nu}_{G}(j)=\{k\in \mathcal{A} \mid |{b}_{k}-{b}_{j}|\le \frac{G-1}{2}\}$ defines an m/z-neighborhood of size G around each peak and b_{ j } is the mass/charge bin index of the monoisotopic mass m_{0} of the j th theoretical isotope distribution ϕ_{ j }. If $\mathcal{A}\ne {\mathcal{A}}^{\prime}$, ${\widehat{c}}_{\mathcal{A}(\lambda )}^{q}$ is re-estimated using eq. (6) with $\mathcal{A}(\lambda )={\mathcal{A}}^{\prime}$.
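A sketch of such a neighborhood merge is given below. Variable names and the tie-handling choice (keeping the bin of the strongest contribution) are illustrative; the exact aggregation rule of eq. (12) may differ:

```python
import numpy as np

def merge_neighbors(bins, abundances, G=3):
    """Merge detected peaks whose bin indices lie within (G-1)/2 of each other,
    keeping the stronger peak's bin and summing the abundances."""
    order = np.argsort(bins)
    bins = np.asarray(bins)[order]
    abundances = np.asarray(abundances, dtype=float)[order]
    merged_bins, merged_ab = [], []
    for b, a in zip(bins, abundances):
        if merged_bins and abs(b - merged_bins[-1]) <= (G - 1) / 2:
            if a > merged_ab[-1]:          # keep the bin of the stronger peak
                merged_bins[-1] = b
            merged_ab[-1] += a             # pool the abundance
        else:
            merged_bins.append(b)
            merged_ab.append(a)
    return merged_bins, merged_ab

b, a = merge_neighbors([100, 101, 150], [1.0, 5.0, 2.0], G=3)
print(b, a)
```

With G = 3, the two peaks one bin apart are pooled into a single peak at the stronger bin, while the distant peak is left untouched.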
Results
Stoichiometry models
The fractional averagine stoichiometry model was compared against the classical averagine model based on the analysis of their respective approximation errors using simulated theoretical peptide isotope distributions.
Data Set
All UniProt (version 51.4) [41] human proteins were subjected to in silico tryptic digestion. For each of the R digestion product peptides ${\mathcal{P}}_{r}, r\in \{1,\mathrm{...},R\}$, exact element stoichiometries ${\rho}_{r}^{x}$ and exact theoretical isotope distributions ${d}_{r}^{x}$ were calculated. Peptides with monoisotopic masses above m/z 5000 were discarded.
Comparison of deviations
Peak picking
For peak picking/feature extraction performance evaluation, we determine representative peak picking statistics: we calculate accuracy, sensitivity, specificity, and positive and negative predictive values on simulation data. Further, and in contrast to previous contributions, we explicitly perform manual validation on a real-world data set.
Data sets
Simulation data sets
For the simulation, all UniProt (version 51.4) [41] human protein sequences were subjected to in silico tryptic digestion. Simulation sets were generated by random drawing of digestion product peptides and intensities. To ensure a fair comparison with the pepex procedure (selected for benchmarking as the only publicly available procedure implementing non-greedy feature extraction), which is limited to singly charged data sets, all simulated peptides were endowed with a single charge. Mercury7 [14] was used for the calculation of the respective theoretical isotopic distributions. After convolution with an m/z-dependent Gaussian aperture function [42], intensity-weighted linear combinations of peptide spectra were calculated and a Poisson noise model (see appendix D) was applied to obtain spectra of different signal-to-noise ratios (SNR). Simulations were performed in the densely populated m/z 500–700 range (see Additional file 1 for the data sets).
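The simulation recipe can be sketched as follows: stick patterns are smeared with an m/z-dependent Gaussian aperture and Poisson counting noise is applied. The width model σ(m/z) = 0.005 · m/z matches the peak shape model used for the real-world data; peak positions and heights below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
mz = np.arange(500.0, 700.0, 0.05)            # m/z 500-700 binning

def gaussian_peak(mz_grid, m0, intensity):
    """A stick at m0 convolved with the mass-dependent Gaussian aperture."""
    sigma = 0.005 * m0
    return intensity * np.exp(-0.5 * ((mz_grid - m0) / sigma) ** 2)

# two toy singly charged patterns (monoisotopic peak plus one isotope peak each)
clean = np.zeros_like(mz)
for m0, height in [(520.3, 100.0), (521.3, 55.0), (600.1, 80.0), (601.1, 42.0)]:
    clean += gaussian_peak(mz, m0, height)

noisy = rng.poisson(clean).astype(float)       # Poisson noise on the clean mixture
```

Scaling the clean intensities up or down before drawing the Poisson counts changes the effective SNR, which is how spectra of different noise levels can be generated from one mixture.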
Realworld data set
Experiments on real-world data were performed using Bovine Serum Albumin (BSA) LC/(ESI)MS calibration data. The data set was acquired on a QSTAR XL mass spectrometer (Applied Biosystems/MDS Sciex) equipped with a microscale capillary HPLC system (Famos Autosampler, LC Packings, Agilent 1100 HPLC pump). A mixture spectrum with many overlapping peaks was obtained by integration of the LC/MS data set over the retention time domain (see Additional files 2 and 3). Peak identification was carried out in the m/z 500–700 range and peak shape functions were modeled according to mass-dependent Gaussian distributions with standard deviations σ(m/z) = 0.005 m/z [42].
Performance estimation
We characterize peak picking performance based on a set of measures from statistical test theory, all of which depend on the availability of the numbers of true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN).
Ground truth is based on knowledge of the complete set of peptide signals present in a mass spectrum. For simulated data sets, this information is available. In real-world experiments, the definition of ground truth is complicated by sample complexity, stochastic sample modification, non-peptidic components and limited dynamic range. As a consequence, TNs, FNs and the overall number of true peaks are not available for real-world data, limiting the available statistical measures to positive predictive values and the ratio of true positives (sensitivity ratios).
Nevertheless, we can determine the number of TPs and FPs in both cases: we check whether a detected peak really exists and if it has been assigned its correct monoisotopic mass m_{0} and charge z. If so, it is counted as true positive (TP) or, otherwise, as false positive (FP).
Simulation data
As the complete set of simulated peaks is known, the remaining set of undetected peaks can be determined and its members are counted as false negatives (FN). With the true number of positives and negatives available the calculation of the number of true negatives (TN) is straightforward, thus enabling the use of related statistical test error measures for performance characterization:

accuracy (ACC) measures the rate of correct peak vs. no peak decisions, i.e. $\text{ACC}=\frac{\text{TP}+\text{TN}}{\text{TN}+\text{FP}+\text{TP}+\text{FN}}$

the negative predictive value (NPV) gives the rate at which there is no peak at positions where the procedure was unable to find a peak, $\text{NPV}=\frac{\text{TN}}{\text{TN}+\text{FN}}$

the positive predictive value (PPV) measures the rate of correct peak detections among all peaks detected by the procedure, $\text{PPV}=\frac{\text{TP}}{\text{TP}+\text{FP}}$

sensitivity (SE) measures the method's ability to detect a peak if it exists, $\text{SE}=\frac{\text{TP}}{\text{TP}+\text{FN}}$

specificity (SP) measures the method's ability to correctly identify the absence of peaks in the spectrum, $\text{SP}=\frac{\text{TN}}{\text{TN}+\text{FP}}$
All measures have been computed with and without the application of postprocessing.
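For reference, the five measures can be computed directly from a single confusion count; the numbers below are illustrative, not taken from the experiments:

```python
def peak_picking_stats(tp, tn, fp, fn):
    """Peak picking performance measures from TP/TN/FP/FN counts."""
    return {
        "ACC": (tp + tn) / (tn + fp + tp + fn),  # correct peak vs. no-peak decisions
        "NPV": tn / (tn + fn),                   # no peak where none was found
        "PPV": tp / (tp + fp),                   # correct detections among detections
        "SE":  tp / (tp + fn),                   # ability to detect existing peaks
        "SP":  tn / (tn + fp),                   # ability to identify peak absence
    }

# e.g. 90 true peaks found, 5 missed, 10 false detections, 895 correctly empty bins
stats = peak_picking_stats(tp=90, tn=895, fp=10, fn=5)
print({k: round(v, 3) for k, v in stats.items()})
```

For real-world data, where TN and FN are unavailable, only PPV (and sensitivity ratios between methods) can be evaluated, as noted above.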
Realworld data
Resorting to LC/MS data and creating a semi-artificial data set by integration over the retention time domain was motivated by the fact that this approach yields a data set accessible to human manual validation. With LC resolution power available to the human expert (and resorting to comparatively simple mixtures), all peaks detected in the integrated mixture can still be manually verified. Exemplary peak picking results are illustrated below.
Comparative results
Pepex
We chose to compare NITPICK to a conceptually similar, modelbased approach called pepex [30]. In contrast to modelfree approaches and in accordance with NITPICK, pepex models observed spectra based on a linear mixture model, which is augmented by a complexity constraint. It uses the averagine model to describe unknown features and is capable of terminating its feature selection routine after a sufficient number of basis functions has been selected. However, as the publicly available implementation of the pepex approach is limited to charge state z = 1 data sets, NITPICK comparison against pepex was limited to the simulated data set.
For the analysis, the pepex algorithm was tailored to the problem at hand: its parameters were heavily optimized to maximize peak picking performance on the simulation data set. As a consequence, the reported results underestimate the pepex generalization error and overestimate its performance (see Additional file 4). For NITPICK, no specific parameter optimization was carried out, postprocessing was kept to a minimum (G = 3), and the reported results are representative (see Additional file 5).
MarkerView
We also compared NITPICK's ability to extract peak information from a retention time integrated mixture spectrum against the proprietary MarkerView application (Applied Biosystems/MDS Sciex, Concord, Canada) version 1.2, which includes an LC/MS peak picking algorithm. In contrast to NITPICK, MarkerView was provided with the original LC/MS data set and thus had retention time information available. Peak picking was carried out in the m/z 400–1400 range and detected peaks were manually validated (see Additional files 6 and 7).
Discussion
Stoichiometry models
In comparison (see figure 3, classical averagine in dashed black, fractional averagine in solid red), Senko's classical averagine [25] features a larger number of very small deviances from the truth than fractional averagine. This is caused by the rounding-to-integers property of the classical approach, which yields exact models more often. At the same time, the deviance distribution of the fractional averagine model has a significantly lighter tail, i.e. the model generates significantly fewer stoichiometries whose theoretical isotope distributions have large deviations. The cumulative distribution based on the fractional averagine model approaches 1 more quickly, and its use yields an overall decrease in theoretical isotope distribution deviations. This finding is supported by the corresponding one-sided nonparametric Mann-Whitney test (p < 2.6 × 10^{−11}). Because the overall impact on the peak picking performance depends on the squared mean error magnitude (7.6 × 10^{−4} for classical averagine, 6.3 × 10^{−4} for fractional averagine, corresponding to a 17% decrease for fractional averagine), fractional averagine clearly is the preferable model.
Peak picking
Simulation data set
Comparative results
In comparison with pepex, NITPICK exhibits better results with respect to all statistical measures in figure 4. It is especially obvious that pepex suffers from a severe increase in false positives (FPs) for very low SNR situations, yielding significant decreases in accuracy (ACC) and specificity (SP). For PPV, although the pepex approach outperforms NITPICK when no postprocessing is applied, it is inferior to the full NITPICK algorithm with simple spurious peak removal corresponding to eq. (12). With respect to sensitivity (SE) and specificity (SP), figure 4 reveals constant high (above 0.99) and superior specificity values for NITPICK at greatly increased sensitivity. Thus one can conclude that the NITPICK algorithm is more sensitive than pepex and, at the same time, provides picked peaks with higher confidence.
Realworld data set
In the m/z 775–782 range (figure 9), the separation of two heavily overlapping isotope distributions clearly illustrates the benefits of NITPICK's intensity model-based approach to the peak picking/feature extraction problem: the second isotope peak of the isotope distribution located at m/z 779.32 (z = 2) and the monoisotopic peak of the distribution located at m/z 780.35 (z = 2) overlap completely and can only be distinguished by taking intensity information into account.
Comparison with MarkerView
On the BSA data set, MarkerView detected 388 peaks, for 96 (24.7%) of which charge state information was available. Peaks without charge state assignment were counted as true peaks if their detected mass/charge ratio was correct. This resulted in 205 true positives for 82 (40.0%) of which charge state information was available. In comparison to NITPICK, this yields a sensitivity ratio of $\text{SER}=\frac{{\text{SE}}_{Mar\mathrm{ker}View}}{{\text{SE}}_{NITPICK}}=\frac{205}{127}=1.61$ and a positive predictive value of PPV = 0.53.
As expected, with retention time information available, MarkerView manages to detect a significantly larger number of peaks. Surprisingly, though, retention time information did not contribute to an increased PPV. The partial lack of charge state information also caused the performance interpretation to favor MarkerView: for peaks with correct mass/charge ratio, we assumed completely error-free charge state assignments, which is unlikely to hold true in reality. In contrast, in the absence of retention time information, NITPICK delivered charge state information for each and every peak, and peaks were counted as true positives if and only if their assigned charge state was correct. MarkerView's PPV and SER are subject to overestimation, whereas NITPICK's PPV is not. Even under this pro-MarkerView bias, if joint maximization of PPV and sensitivity is desired, NITPICK arguably proved competitive with MarkerView: despite the 1.6-fold increase in sensitivity, only slightly more than half of the peaks reported by MarkerView are true positives.
Analysis CPU time on the real-world spectrum was 114 s on a 2 GHz AMD Opteron machine. Measurements are based on native, interpreted R code. Preliminary tests with an in-house C++ implementation (to be published elsewhere) yielded a speed increase by a factor of ≈ 20.
Conclusion and perspectives
Conclusions
We present NITPICK, an iterative, non-greedy, globally optimal mixture modeling approach for feature extraction from multi-component mass spectra. The calculation of the set of explanatory theoretical isotope distributions is based on fractional averagine, a mass-error-free extension to the well-known averagine [8] model. Subsequent feature selection is driven by a modified least angle regression [39] algorithm for which we derived a suitable, statistically motivated early stopping criterion. Experiments show that NITPICK is able to unmix and deconvolve complex mixture mass spectra. The algorithm was thoroughly evaluated on simulated and real-world data sets and was found to perform better than a conceptually similar algorithm. NITPICK was even found to deliver competitive results when compared against a vendor-supplied algorithm which, in contrast to NITPICK, had retention time resolution available.
We would like to note that although the analysis at hand was confined to a proteomics data set, the application of the proposed methodology is in no way limited to this type of data and can easily be adapted to similar problems outside the field of proteomics.
NITPICK is available as a software package for the R programming language and can be downloaded from http://hci.iwr.uni-heidelberg.de/mip/proteomics/.
Perspectives
The constrained least squares regression model in equation (3) implicitly assumes Gaussian noise on the observed spectra. Especially for low-intensity time-of-flight spectra, the Gaussian approximation is crude, yielding suboptimal estimates. The incorporation of a data-type- and intensity-dependent procedure pursuing a suitable Poisson regression approach [36] in appropriate cases could improve on this shortcoming.
The non-negative least squares step in equation (6) assumes error-free basis functions ϕ_{ i }. Although fractional averagine improves over the classical averagine model, this assumption is still violated. Possible remedies include direct intensity estimation techniques [43, 44] and enhanced sparse feature selection methodology which allows for errors in explanatory variables. Alternatively, extended stoichiometry models could provide problem-tailored basis functions if model bias is not an issue.
For charge states z < 3 and mass ranges m/z ≲ 1400, there exist so-called forbidden regions [45] within the mass spectrum, i.e. mass ranges which are inaccessible to peptides (including modifications). Such information has been reported to be suitable as a preprocessing filter [31].
Further computational efficiency could be achieved by a complexitydriven hierarchical estimation approach, resorting to subtractive feature extraction for simple signals and to the full mixture modeling for complex samples only.
Appendix
A Computation of fractional averagine
with ⌈c⌉ = ${\mathrm{min}}_{j\in \mathbb{N}}(j\ge c)$, ⌊c⌋ = ${\mathrm{max}}_{j\in \mathbb{N}}(j\le c)$ and X_{ i } representing the number of times the i th isotope of an element occurs. Under the (reasonable) assumption of independence of the atomic distributions of the elements, the resulting joint distribution for a molecule follows from the multiplication of the distributions of its elements.
B Proof of the monotonicity of the GDF for the non-negative lasso
where e_{i} denotes the i-th canonical unit vector of length N and ${I}_{N}={\displaystyle {\sum}_{i=1}^{N}{e}_{i}{e}_{i}^{T}}$ is the identity matrix of size N.
as ${\Phi}_{\mathcal{A}(\lambda )}^{T}{\Phi}_{\mathcal{A}(\lambda )}$ is the inverse of a covariance matrix and thus positive-semidefinite, and λ is by definition always greater than or equal to 0. Hence, the second part of equation (16) is monotone in λ, and therefore the GDFs are monotone as long as a given active set remains valid.
It remains to be shown that changes of ${\Phi}_{\mathcal{A}(\lambda )}$ do not affect the monotonicity; that is, neither the addition of ϕ_{j} to the set ${\Phi}_{\mathcal{A}(\lambda )}$ nor the removal of ϕ_{k} from ${\Phi}_{\mathcal{A}(\lambda )}$ may lead to a decrease of cov(s, ${\Phi}_{\mathcal{A}(\lambda )}{\widehat{c}}_{\mathcal{A}(\lambda )}^{q}$) as given in (10). A formal proof is given further below; nevertheless, the result can also be argued intuitively.
A variable ϕ_{j} is added to the active set only if it is positively correlated with the current residuals. This leads to an increase of cov(s, ${\Phi}_{\mathcal{A}(\lambda )}{\widehat{c}}_{\mathcal{A}(\lambda )}^{q}$), as less unexplained variation remains. A variable ϕ_{k} is removed from the active set ${\Phi}_{\mathcal{A}(\lambda )}$ only if cov(ϕ_{k}, s − ${\Phi}_{\mathcal{A}(\lambda )}{\widehat{c}}_{\mathcal{A}(\lambda )}^{q}$) &lt; 0; so if the residuals are negatively correlated with the variable, its removal likewise leads to an increase of cov(s, ${\Phi}_{\mathcal{A}(\lambda )}{\widehat{c}}_{\mathcal{A}(\lambda )}^{q}$).
Thus, as long as changes to the set ${\Phi}_{\mathcal{A}(\lambda )}$ occur one at a time (which is ensured by the active set implementation), they do not affect the monotonicity of the estimate of the degrees of freedom.
which is also always positive and can thus be dropped from the resulting inequality in exactly the same fashion as $\widehat{\gamma}$ could be dropped in the case of the addition of a variable. Consequently, changes in ${\Phi}_{\mathcal{A}(\lambda )}$ do not affect the monotonicity of the GDF estimate.
C Lower bound properties of BIC_{min}
BIC_{min} is a lower bound for BIC if ∀k ≥ i: BIC_{min}(i) ≤ BIC(k),
which is always fulfilled because MSE ≤ MSE(λ_{i}) and df(λ_{i}) ≤ df(λ_{k}) for i ≤ k, N ≥ 1 and ${\sigma}_{\epsilon}^{2}$ &gt; 0.
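The practical use of this bound can be sketched as an early-stopping scan along the regularization path: since the degrees of freedom only grow along the path, a lower bound on every later BIC value can be computed at each step, and the scan stops once that bound exceeds the best BIC seen so far. The specific BIC expression and the toy path below are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import numpy as np

def early_stop_index(mse_path, df_path, N, sigma2, mse_floor=0.0):
    """Scan a regularization path in the order it is computed.  bic_min(i)
    replaces the (unknown) future MSE by a precomputed lower bound
    mse_floor (e.g. the MSE of the unregularized fit); since df only grows
    along the path, bic_min(i) bounds every later BIC(k) from below, so
    the scan can stop as soon as bic_min exceeds the best BIC seen."""
    best, best_i = np.inf, 0
    for i, (mse, df) in enumerate(zip(mse_path, df_path)):
        bic = N * mse / sigma2 + np.log(N) * df
        if bic < best:
            best, best_i = bic, i
        bic_min = N * mse_floor / sigma2 + np.log(N) * df
        if bic_min > best:   # no later model can beat the current best
            break
    return best_i

# Toy path: MSE shrinks with diminishing returns while df grows linearly.
mse = [10.0, 4.0, 2.0, 1.8, 1.75, 1.74, 1.73, 1.72]
df = list(range(1, 9))
print(early_stop_index(mse, df, N=100, sigma2=1.0, mse_floor=1.7))  # → 4
```

In this toy example the scan terminates before reaching the end of the path, yet still returns the path index with the globally smallest BIC.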
D SNR definition for simulated spectra
Acknowledgements
The authors would like to thank Yin Yin Lin (Dept. of Pathology, Children's Hospital, Boston, MA, USA) for LC/MS data acquisition, Lyle Burton (AB/MDS Sciex, Concord, Canada) for MarkerView 1.2 evaluation versions, Ullrich Köthe, Linus Görlitz, Björn Menze, Michael Kelm (Interdisciplinary Center for Scientific Computing (IWR), University of Heidelberg, Germany), and Flavio Monigatti (Dept. of Pathology, Children's Hospital, Boston, MA, USA) for comments, suggestions, and fruitful discussions. We gratefully acknowledge financial support by the Hans L. Merkle foundation (M.K.), the Karl Steinbuch Scholarship (B.Y.R.), dm/Filiadata GmbH (B.Y.R.), Robert Bosch GmbH (F.A.H.), the Children's Hospital Trust (J.A.J.S. and H.S.), and the DFG under grant no. HA4364/21 (B.Y.R., F.A.H).
References
1. Jensen ON: Interpreting the protein language using proteomics. Nature Reviews Molecular Cell Biology 2006, 7(6):391–403. 10.1038/nrm1939
2. Beretta L: Proteomics from the Clinical Perspective: Many Hopes and Much Debate. Nature Methods 2007, 4(10):785–786. 10.1038/nmeth1007-785
3. Schwartz SA, Weil RJ, Johnson MD, Toms SA, Caprioli RM: Protein Profiling in Brain Tumors Using Mass Spectrometry: Feasibility of a New Technique for the Analysis of Protein Expression. Clinical Cancer Research 2004, 10: 981–987. 10.1158/1078-0432.CCR-0927-3
4. Claydon MA, Davey SN, Edwards-Jones V, Gordon DB: The Rapid Identification of Intact Microorganisms Using Mass Spectrometry. Nature Biotechnology 1996, 14: 1584–1586. 10.1038/nbt1196-1584
5. Pineda FJ, Antoine MD, Demirev PA, Feldman AB, Jackman J, Longenecker M, Lin JS: Microorganism Identification by Matrix-Assisted Laser Desorption/Ionization Mass Spectrometry and Model-Derived Ribosomal Protein Biomarkers. Analytical Chemistry 2003, 75(15):3817–3822. 10.1021/ac034069b
6. Zhang Z, Marshall AG: A Universal Algorithm for Fast and Automated Charge State Deconvolution of Electrospray Mass-to-Charge Ratio Spectra. Journal of the American Society for Mass Spectrometry 1998, 9(3):225–233. 10.1016/S1044-0305(97)00284-5
7. Yu W, Wu B, Lin N, Stone K, Williams K, Zhao H: Detecting and Aligning Peaks in Mass Spectrometry Data with Applications to MALDI. Computational Biology and Chemistry 2006, 30: 27–38. 10.1016/j.compbiolchem.2005.10.006
8. Senko M, Beu S, McLafferty F: Determination of Monoisotopic Masses and Ion Populations for Large Biomolecules from Resolved Isotopic Distributions. Journal of the American Society for Mass Spectrometry 1995, 6: 229–233. 10.1016/1044-0305(95)00017-8
9. Horn DM, Zubarev RA, McLafferty FW: Automated Reduction and Interpretation of High Resolution Electrospray Mass Spectra of Large Molecules. Journal of the American Society for Mass Spectrometry 2000, 11(4):320–332. 10.1016/S1044-0305(99)00157-9
10. Wehofsky M, Hoffmann R, Hubert M, Spengler B: Isotopic Deconvolution of Matrix-Assisted Laser Desorption/Ionization Mass Spectra for Substance-Class Specific Analysis of Complex Samples. European Journal of Mass Spectrometry 2001, 7: 39–46. 10.1255/ejms.387
11. Gras R, Muller M, Gasteiger E, Gay S, Binz PA, Bienvenut W, Hoogland C, Sanches JC, Bairoch A, Hochstrasser DF, Appel RD: Improving Protein Identification from Peptide Mass Fingerprinting through a Parameterized Multi-Level Scoring Algorithm and an Optimized Peak Detection. Electrophoresis 1999, 20: 3535–3550. 10.1002/(SICI)1522-2683(19991201)20:18&lt;3535::AID-ELPS3535&gt;3.0.CO;2-J
12. Rockwood A, Van Orden S, Smith R: Rapid Calculation of Isotope Distributions. Analytical Chemistry 1995, 67: 2699–2704. 10.1021/ac00111a031
13. Rockwood A, Van Orden SL, Smith RD: Ultrahigh-Speed Calculation of Isotope Distributions. Analytical Chemistry 1996, 68: 2027–2030. 10.1021/ac951158i
14. Rockwood A, Haimi P: Efficient Calculation of Accurate Masses of Isotopic Peaks. Journal of the American Society for Mass Spectrometry 2006, 17: 415–419. 10.1016/j.jasms.2005.12.001
15. Yergey JA: A General Approach to Calculating Isotopic Distributions for Mass Spectrometry. International Journal of Mass Spectrometry and Ion Physics 1983, 52: 337–349. 10.1016/0020-7381(83)85053-0
16. Senko M: Isopro 3.0. 1997. [http://members.aol.com/msmssoft/]
17. Breen EJ, Hopwood FG, Williams KL, Wilkins MR: Automatic Poisson Peak Harvesting for High Throughput Protein Identification. Electrophoresis 2000, 21: 2243–2251. 10.1002/1522-2683(20000601)21:11&lt;2243::AID-ELPS2243&gt;3.0.CO;2-K
18. Chen L, Sze SK, Yang H: Automated Intensity Descent Algorithm for Interpretation of Complex High-Resolution Mass Spectra. Analytical Chemistry 2006, 78: 5006–5018. 10.1021/ac060099d
19. Kaur P, O'Connor PB: Algorithms for automatic interpretation of high resolution mass spectra. Journal of the American Society for Mass Spectrometry 2006, 17(3):459–468. 10.1016/j.jasms.2005.11.024
20. Szymura JA, Lamkiewicz J: Band Composition Analysis: a new Procedure for Deconvolution of the Mass Spectra of Organometallic Compounds. Journal of Mass Spectrometry 2003, 38: 817–822. 10.1002/jms.499
21. Wehofsky M, Hoffmann R: Automated Deconvolution and Deisotoping of Electrospray Mass Spectra. Journal of Mass Spectrometry 2002, 37: 223–229. 10.1002/jms.278
22. Zhang X, Hines W, Adamec J, Asara JM, Naylor S, Regnier FE: An Automated Method for the Analysis of Stable Isotope Labeling Data in Proteomics. Journal of the American Society for Mass Spectrometry 2005, 16: 1181–1191. 10.1016/j.jasms.2005.03.016
23. Mason CJ, Therneau TM, Eckel-Passow JE, Johnson KL, Oberg AL, Olson JE, Nair KS, Muddiman DC, Bergen HRI: A Method for Automatically Interpreting Mass Spectra of ^{18}O Labeled Isotopic Clusters. Molecular & Cellular Proteomics 2006, 6: 305–318. 10.1074/mcp.M600148-MCP200
24. Wang W, Zhou H, Lin H, Roy S, Shaler TA, Hill LR, Norton S, Kumar P, Anderle M, Becker CH: Quantification of Proteins and Metabolites by Mass Spectrometry without Isotopic Labeling or Spiked Standards. Analytical Chemistry 2003, 75: 4818–4826. 10.1021/ac026468x
25. Senko MW, Beu SC, McLafferty FW: Automated Assignment of Charge States from Resolved Isotopic Peaks for Multiply Charged Ions. Journal of the American Society for Mass Spectrometry 1995, 6: 52–56. 10.1016/1044-0305(94)00091-D
26. Tabb DL, Shah MB, Strader MB, Conelly HM, Hettich RL, Hurst GB: Determination of Peptide and Protein Ion Charge States by Fourier Transformation of Isotope-Resolved Mass Spectra. Journal of the American Society for Mass Spectrometry 2006, 17: 903–915. 10.1016/j.jasms.2006.02.003
27. Listgarten J, Emili A: Statistical and Computational Methods for Comparative Proteomic Profiling Using Liquid Chromatography-Tandem Mass Spectrometry. Molecular & Cellular Proteomics 2005, 4(4):419–434. 10.1074/mcp.R500005-MCP200
28. Fernández-de-Cossio J, Gonzalez LJ, Satomi Y, Betancourt L, Ramos Y, Huerta V, Besada V, Padron G, Minamino N, Takao T: Automated Interpretation of Mass Spectra of Complex Mixtures by Matching of Isotope Peak Distributions. Rapid Communications in Mass Spectrometry 2004, 18: 2465–2472. 10.1002/rcm.1647
29. Roussis SG, Proulx R: Reduction of Chemical Formulas from the Isotopic Peak Distributions of High-Resolution Mass Spectra. Analytical Chemistry 2003, 75: 1470–1482. 10.1021/ac020516w
30. Samuelsson J, Dalevi D, Levander F, Rögnvaldsson T: Modular, Scriptable and Automated Analysis Tools for High-Throughput Peptide Mass Fingerprinting. Bioinformatics 2004, 20: 3628–3635. 10.1093/bioinformatics/bth460
31. Du P, Angeletti RH: Automatic Deconvolution of Isotope-Resolved Mass Spectra Using Variable Selection and Quantized Peptide Mass Distribution. Analytical Chemistry 2006, 78: 3385–3392. 10.1021/ac052212q
32. Tibshirani R: Regression Shrinkage and Selection via the LASSO. Journal of the Royal Statistical Society, Series B 1996, 58: 267–288.
33. Kaur P, O'Connor PB: Use of Statistical Methods for Estimation of Total Number of Charges in a Mass Spectrometry Experiment. Analytical Chemistry 2004, 76: 2756–2762. 10.1021/ac035334w
34. Casella G, Berger RL: Statistical Inference. Duxbury Press; 2001.
35. Lawson CL, Hanson RJ: Solving Least Squares Problems. Prentice-Hall, Englewood Cliffs, NJ; 1974.
36. Park MY, Hastie T: An L_{1}-Regularization-path Algorithm for Generalized Linear Models. Journal of the Royal Statistical Society, Series B 2007, 69: 659–677. 10.1111/j.1467-9868.2007.00607.x
37. Hastie T, Tibshirani R, Friedman J: The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer-Verlag, New York; 2001.
38. Ye J: On Measuring and Correcting the Effects of Data Mining and Model Selection. Journal of the American Statistical Association 1998, 93: 120–131. 10.2307/2669609
39. Efron B, Hastie T, Johnstone I, Tibshirani R: Least Angle Regression. Annals of Statistics 2004, 32(2):407–499. 10.1214/009053604000000067
40. Zou H, Hastie T, Tibshirani R: On the "Degrees of Freedom" of the Lasso. Annals of Statistics 2007, 35(5):2173–2192. 10.1214/009053607000000127
41. Bairoch A, Apweiler R: The SWISS-PROT Protein Sequence Database and its Supplement TrEMBL in 2000. Nucleic Acids Research 2000, 28: 45–48. 10.1093/nar/28.1.45
42. Tibshirani R, Hastie T, Narasimhan B, Soltys S, Shi G, Koong A, Le QT: Sample Classification from Protein Mass Spectrometry, by Peak Probability Contrasts. Bioinformatics 2004, 20(17):3034–3044. 10.1093/bioinformatics/bth357
43. Wallace WE, Kearsley AJ, Guttman CM: An Operator-Independent Approach to Mass Spectral Peak Identification and Integration. Analytical Chemistry 2004, 76: 2446–2452. 10.1021/ac0354701
44. Kearsley AJ, Wallace WE, Bernal J, Guttman CM: A Numerical Method for Mass Spectral Data Analysis. Applied Mathematics Letters 2005, 18: 1412–1417. 10.1016/j.aml.2005.02.033
45. Mann M: Useful Tables of Possible and Probable Peptide Masses. 43rd Conference on Mass Spectrometry and Allied Topics 1995.
46. Rockwood AL, Kushnir MM, Nelson GJ: Dissociation of individual isotopic peaks: predicting isotopic distributions of product ions in MS^{ n }. Journal of the American Society for Mass Spectrometry 2003, 14(4):311–322. 10.1016/S1044-0305(03)00062-X
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.