

Isotope pattern deconvolution for peptide mass spectrometry by non-negative least squares/least absolute deviation template matching



The robust identification of isotope patterns originating from peptides analyzed by mass spectrometry (MS) is often significantly hampered by noise artifacts and by the interference of overlapping patterns arising e.g. from post-translational modifications. As the classification of the recorded data points into either ‘noise’ or ‘signal’ lies at the very root of essentially every proteomic application, the quality of the automated processing of mass spectra can significantly influence the way the data are interpreted within a given biological context.


We propose non-negative least squares/non-negative least absolute deviation regression to fit a raw spectrum by templates imitating isotope patterns. In a carefully designed validation scheme, we show that the method exhibits excellent performance in pattern picking. It is demonstrated that the method is able to disentangle complicated overlaps of patterns.


We find that regularization is not necessary to prevent overfitting and that thresholding is an effective and user-friendly way to perform feature selection. The proposed method avoids problems inherent in regularization-based approaches, comes with a set of well-interpretable parameters whose default configuration is shown to generalize well without the need for fine-tuning, and is applicable to spectra of different platforms. The R package IPPD implements the method and is available from the Bioconductor platform.


Mass spectrometry (MS), often in conjunction with high performance liquid chromatography (HPLC), is the de-facto standard analytical tool to derive important biological knowledge about the protein content of whole cells, organelles, or biomedical samples like tumour or blood plasma. Within a typical experimental setup, purified proteins of the sample under study are digested by an enzyme. Before entering the mass spectrometer, peptides are separated chromatographically according to their physico-chemical properties in order to avoid a massive overlapping of peptide signals within a single scan. Nevertheless, due to the sheer number of peptides present in a sample, interfering patterns still occur frequently, not least because of post-translational modifications such as the deamidation of asparagine or glutamine residues. In order to obtain an unambiguous assignment of the signals, and in particular their isotope patterns, which is a prerequisite for proper identification and quantification, every data point in the m/z dimension is classified either as ‘signal’ or as ‘noise’ during the so-called feature detection phase. As this processing lies at the very root of every proteomic application, the quality of feature detection can have a dramatic impact on the finally derived results and conclusions. In view of the large amount of data even a single MS experiment can produce, automated analysis is indispensable. However, due to various artifacts arising from electric and chemical noise and baseline trends, the identification of isotope patterns is error-prone and time-consuming. In addition, severe overlaps of peptide signals within the same mass spectrometric scan can further hamper a straightforward analysis. In recent years, numerous procedures have been developed to process this data (cf., e.g., [18]). Within this paper, we propose a novel method that is demonstrated to perform especially well in challenging situations, characterized e.g. 
by strong local variations in noise and intensity levels or the presence of isotope patterns of different charges exhibiting overlap, which in many cases may be difficult to resolve even for a human expert by visual inspection. Existing software typically depends on a large set of parameters requiring careful fine-tuning and is often rather sensitive to changes in the measurement process, such as a change of platform, which makes a proper parameter choice a laborious task. In contrast, the proposed method has been designed to depend on a comparatively small set of well-interpretable parameters whose default configuration is shown to be robust, yielding mostly excellent, but at least competitive, performance on spectra of different platforms. In a nutshell, our method uses non-negative least squares or non-negative least absolute deviation regression to fit a spectrum by a large dictionary of templates mimicking isotope patterns; since true positions and charges of isotope patterns in the spectrum are unknown in advance, regions where the signal exceeds a local measure of noise are identified and then a vast set of templates is placed in those regions. In the spirit of sparse recovery, a small subset of the templates, which reasonably explains the observed signal, is selected by applying hard thresholding with a locally adaptive choice of the threshold to the regression coefficients obtained previously. Our method is related to a formerly proposed template-based approach (NITPICK, [3]). As opposed to the present work, NITPICK uses ℓ1-regularized non-negative least squares. Without non-negativity constraints, this procedure is known as the lasso [9]. Reference [10] contains the first application of the lasso to the problem studied in the present paper. 
Given the dramatic increase in the occurrence of high-dimensional datasets in recent years and the resulting need for feature selection, the lasso, due to its computationally and theoretically appealing properties, has meanwhile become so popular that it can be regarded as a standard tool of modern data analysis [11]. In this respect, NITPICK follows the usual paradigm suggesting that ℓ1-regularization is the method of choice. In the present paper, we argue for a deviation from that paradigm mainly in view of the following two aspects. First, a major benefit of our fitting+thresholding approach is that parameter choice is more user-friendly, since the threshold can be interpreted in terms of a signal-to-noise ratio. This is unlike the regularization parameter of the lasso, which can in general not be related directly to the signal. In the presence of heterogeneous noise and model misspecifications, the ‘right amount’ of regularization is notoriously difficult to choose. Second, there is a substantial body of work showing that non-negativity constraints alone may suffice to recover a sparse target. Non-negative least squares + thresholding is analyzed in [12], where it is shown that it can significantly outperform the usual ℓ1-approach with respect to sparse recovery. See Section “Sparse recovery with non-negativity constraints: non-negative least squares + thresholding vs. the non-negative lasso” for a detailed discussion.


A spectrum is understood as a sequence of pairs $\{(x_i, y_i)\}_{i=1}^n$, where $x_i = m_i/z_i$ is a mass ($m_i$, measured in Dalton, Da) to charge ($z_i$) ratio, and $y_i$ is the intensity, i.e. the abundance of a particular mass (modulo charge state), observed at $x_i$, $i = 1, \dots, n$; the $x_i$ are assumed to be ordered increasingly.

Template model

The $(y_i)_{i=1}^n = y$ are modeled as a positive combination of templates designed on the basis of prior knowledge about peak shape and the composition of isotope patterns. If our model were perfectly correct, we could write

$$y = \Phi \beta = \sum_{c=1}^{C} \Phi_c \beta_c, \qquad \Phi_c = [\varphi_{c,1} \cdots \varphi_{c,p_c}], \quad c = 1, \dots, C, \qquad (1)$$

where $\Phi$ is a non-negative matrix of templates and $\beta$ is a non-negative coefficient vector. Both $\Phi$ and $\beta$ can be arranged according to charge states $c = 1, \dots, C$. Each sub-matrix $\Phi_c$ can in turn be divided into columns $\varphi_{c,1}, \dots, \varphi_{c,p_c}$, where the entries of each column vector store the evaluations of a template $\varphi_{c,j}$, $j = 1, \dots, p_c$, at the $x_i$, $i = 1, \dots, n$. It is assumed that only a small fraction of the templates in $\Phi$ is needed to represent the signal, i.e. $\beta$ is highly sparse. The templates are of the form

$$\varphi_{c,j} = \sum_{k} a_{c,j,k}\, \psi_{c,j,k}, \qquad \forall\, c, j, \qquad (2)$$

where the $\psi_{c,j,k}$ are functions representing a single peak within an isotope pattern, depending on a location $m_{c,j}$ and a parameter vector $\theta_{c,j}$. In general, peaks can be modeled by Gaussian, Lorentzian, and sech² shapes, cf. [13]. Due to their similarity, we restrict ourselves to the Gaussian, but additionally provide the exponentially modified Gaussian (EMG, cf., e.g., [14]), a model for a possibly skewed peak as occurs frequently in MALDI-TOF recordings, where late ion formation in the gas phase leads to tailed peaks [15]. The EMG is parameterized by $\theta_{c,j} = (\alpha_{c,j}, \sigma_{c,j}, \mu_{c,j}) \in \mathbb{R}_+ \times \mathbb{R}_+ \times \mathbb{R}$ (for $\alpha_{c,j} \downarrow 0$, one obtains a Gaussian):

$$\psi_{c,j,k}(x) = \frac{1}{\alpha_{c,j}} \exp\!\left( \frac{\sigma_{c,j}^2}{2 \alpha_{c,j}^2} + \frac{\mu_{c,j} - (x - m_{c,j,k})}{\alpha_{c,j}} \right) \left( 1 - F\!\left( \frac{\sigma_{c,j}}{\alpha_{c,j}} + \frac{\mu_{c,j} - (x - m_{c,j,k})}{\sigma_{c,j}} \right) \right), \qquad F(t) = \int_{-\infty}^{t} \frac{1}{\sqrt{2\pi}} \exp\!\left( -\frac{u^2}{2} \right) du. \qquad (3)$$

In (2), the non-negative weights $a_{c,j,k}$ equal the heights of the isotopic peaks $k$ within pattern $j$ of charge state $c$. These heights are computed according to the averagine model [16]. The $m_{c,j,k}$ are calculated from $m_{c,j}$ as $m_{c,j,k} = m_{c,j} + \kappa \frac{k}{c}$, where $\kappa$ usually ranges between 1.002 and 1.008 Dalton, see e.g. [17]. Note that in Eq. (2) the location of the most intense peak ($a_{c,j,0} = \max_k a_{c,j,k}$) is taken as the characteristic location of the template instead of the finally reported monoisotopic position: we set $m_{c,j,0} = m_{c,j}$, so that the remaining $m_{c,j,k}$, $k \neq 0$, are computed by shifting $m_{c,j}$ in both directions along the m/z axis. With the normalization $\max_x \varphi_{c,j}(x) = 1$ for all $c, j$, the entries of $\beta$ can be interpreted as the intensities of the most intense peaks of the templates. The construction scheme is illustrated in Figure 1.
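As a sketch of the construction in Eq. (2) and the normalization above, the following Python snippet builds a template as a weighted sum of Gaussian peaks spaced by κk/c. The isotope weights are illustrative placeholders, not averagine-derived values, and a plain Gaussian stands in for the configurable peak shape:

```python
import numpy as np

def gaussian_peak(x, m, sigma):
    """Single peak shape psi centered at m (Gaussian special case)."""
    return np.exp(-0.5 * ((x - m) / sigma) ** 2)

def template(x, m, c, weights, sigma, kappa=1.003):
    """Template phi_{c,j}: weighted peaks at m + kappa*k/c, in the spirit of Eq. (2)."""
    ks = np.arange(len(weights)) - int(np.argmax(weights))  # k = 0 at the most intense peak
    phi = sum(a * gaussian_peak(x, m + kappa * k / c, sigma)
              for a, k in zip(weights, ks))
    return phi / phi.max()  # normalization: max_x phi = 1

x = np.linspace(999.0, 1003.0, 2000)
# illustrative (not averagine-derived) relative isotope abundances
phi = template(x, m=1000.0, c=2, weights=[0.8, 1.0, 0.6, 0.3], sigma=0.02)
```

With charge state c = 2, the peaks end up roughly 0.5 Th apart, and the entry of β multiplying this column would directly equal the height of the most intense peak.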

Figure 1

Template model. Illustration of the template construction (charge state c=2) for an EMG peak shape with a moderately strong right tailing.

Parameter estimation

The parameters $\theta_{c,j} = (\alpha_{c,j}, \sigma_{c,j}, \mu_{c,j})$ of the peaks (3) are unknown in practice. Following a central paradigm of our framework, which is to relieve the user of laborious fine-tuning of parameters, we have developed a systematic procedure that automatically provides estimates of these parameters and is considerably more efficient and flexible than a grid search. For instance, the parameters may additionally depend on the m/z position. Our framework for parameter estimation extends a conceptually similar approach in [18] designed for a Gaussian peak shape.

In a first step, we apply a simple peak detection algorithm to the spectrum to identify disjoint regions $\mathcal{R}_r \subset \{1, \dots, n\}$, $r = 1, \dots, R$, of well-resolved peaks. For each region, we fit the chosen peak shape to the data $\{(x_i, y_i)\}_{i \in \mathcal{R}_r}$ using nonlinear least squares:

$$\min_{\theta} \sum_{i \in \mathcal{R}_r} \left( y_i - \psi_{\theta}(x_i) \right)^2, \qquad (4)$$

yielding an estimate $\hat{\theta}_r(\hat{x}_r)$, where $\hat{x}_r$ denotes an estimate of the mode of the peak in region $\mathcal{R}_r$. This concept is sketched in Figure 2. The nonlinear least squares problem (4) is solved by a general-purpose nonlinear least squares routine available in most scientific computing environments, e.g. nls in R. Once the sequence of estimates $\{\hat{\theta}_r(\hat{x}_r)\}$ has been obtained, they are subject to a suitable aggregation procedure. In the simplest case, one could simply take averages. For spectra where peak shape characteristics, in particular peak width, are known to vary systematically with m/z position, we use the pairs $\{(\hat{x}_r, \hat{\theta}_r(\hat{x}_r))\}$ as input to a linear regression procedure to infer the parameters of pre-specified trend functions. Formally, we model each component $\theta_l$ of $\theta$ as a linear combination of known functions $g_{l,m}$ of $x = m/z$ and an error component $\varepsilon_l$, i.e.

$$\theta_l(x) = \sum_{m=1}^{M_l} \nu_{l,m}\, g_{l,m}(x) + \varepsilon_l(x), \qquad (5)$$

for which a linear trend, i.e. $\theta_l(x) = \nu_{l,1} + \nu_{l,2}\, x$, is one of the most common special cases. In [19], a set of instrument-specific models for the peak width is provided, all of which can be fitted by our approach.
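The per-region fit of Eq. (4) can be sketched as follows, with scipy.optimize.curve_fit standing in for R's nls. The Gaussian peak, the noise level, and the data are synthetic; the 33-point region mirrors the example of Figure 2:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, h, m, sigma):
    # single peak: height h, mode m, width sigma
    return h * np.exp(-0.5 * ((x - m) / sigma) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(941.3, 941.9, 33)                 # one well-resolved region R_r
y = gauss(x, 100.0, 941.6, 0.05) + rng.normal(0.0, 1.0, x.size)

# Eq. (4): min_theta sum_{i in R_r} (y_i - psi_theta(x_i))^2
(h_hat, m_hat, s_hat), _ = curve_fit(gauss, x, y,
                                     p0=[y.max(), x[y.argmax()], 0.05])
```

The same call with an EMG model function would estimate (α, σ, μ) instead; only the peak function changes.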

Figure 2

Parameter estimation. Illustration of peak parameter estimation. The figure displays a well-resolved peak in the region $R = \{i : 941.3\,\mathrm{Th} \leq x_i \leq 941.9\,\mathrm{Th}\}$. In this example, the size of $R$ equals 33, i.e. there are 33 pairs $\{(x_i, y_i)\}$ that enter a nonlinear least squares problem of the form (4). Under the assumption of an EMG model, the resulting fit is indicated by a solid line.

We refrain from using least squares regression to determine the parameters in (5) because of its sensitivity to possible outliers, which arise from poorly resolved, wiggly, or overlapping isotope patterns and may affect the quality of the estimates $\hat{\theta}_r$. Therefore, the linear model is fitted in a robust way by least absolute deviation regression. Given the resulting estimates of the parameters $\{\nu_{l,m}\}$, m/z-specific estimates of the parameters in (3) are obtained by evaluating (5).
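The robust fit of the trend model (5) can be sketched as a least absolute deviation regression cast as a linear program. The peak-width data below are hypothetical, with one gross outlier standing in for a poorly resolved pattern:

```python
import numpy as np
from scipy.optimize import linprog

def lad_fit(X, y):
    """Least absolute deviation regression min_b ||y - Xb||_1 as a linear program."""
    n, p = X.shape
    # variables: [b (free, length p), u (length n, u_i >= |y_i - x_i' b|)]
    c = np.concatenate([np.zeros(p), np.ones(n)])
    A_ub = np.block([[ X, -np.eye(n)],     #  Xb - u <=  y
                     [-X, -np.eye(n)]])    # -Xb - u <= -y
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * p + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]

# hypothetical peak-width estimates with one gross outlier
mz = np.array([400.0, 800.0, 1200.0, 1600.0, 2000.0])
width = 0.01 + 2e-5 * mz
width[2] = 0.5                              # outlier from an overlapping pattern
X = np.column_stack([np.ones_like(mz), mz])
nu = lad_fit(X, width)                      # theta(x) = nu_1 + nu_2 * x
```

Unlike a least squares fit, the LAD line passes through the four well-behaved points and simply absorbs the outlier in its residual.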

Template fitting

The computation of the design matrix $\Phi$ requires a set of m/z positions at which templates are placed. In general, one has to choose positions from the interval $[x_1, x_n]$. We instead restrict ourselves to a suitable subset of the set $\{x_i\}_{i=1}^n$. The deviation from the positions of the true underlying isotope patterns is then in the order of the sampling rate, but this can be improved by means of a postprocessing step described in Section “Postprocessing and thresholding”. Using the whole set $\{x_i\}_{i=1}^n$ may be computationally infeasible if n is large and is in fact not necessary, since isotope patterns occur very sparsely in the spectrum. Therefore, we apply a pre-selection step on the basis of what we term the ‘local noise level’ (LNL). The LNL is defined as the median of the intensities $y_i$ falling into a sliding window of fixed width around a specific position. For $x \in [x_1, x_n]$, we define the local noise level based on the sliding window width $h$ as

$$\mathrm{LNL}(x) = \mathrm{median}\left( \{ y_i : i \in I_x \} \right), \qquad I_x = \{ i : x_i \in [x - h, x + h] \}. \qquad (6)$$
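A direct implementation of the sliding-window median in Eq. (6) might look as follows (synthetic data; the loop is a naive quadratic-time version, not the package's implementation):

```python
import numpy as np

def lnl(x, y, h):
    """Local noise level, Eq. (6): median intensity in a window of half-width h."""
    x, y = np.asarray(x), np.asarray(y)
    out = np.empty_like(y, dtype=float)
    for j, xj in enumerate(x):
        window = y[(x >= xj - h) & (x <= xj + h)]
        out[j] = np.median(window)
    return out

x = np.linspace(1000.0, 1010.0, 1001)
y = np.abs(np.sin(3 * x)) * 2.0   # stand-in for baseline noise
y[500] = 100.0                    # a single sharp peak
noise = lnl(x, y, h=1.0)
```

Because the median is robust, the isolated peak barely moves the LNL around it, which is exactly why isolated signals survive the comparison against the local noise.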

Given the LNL, we place templates at position $x_i$ (one for each charge state) if the corresponding $y_i$ exceeds $\mathrm{LNL}(x_i)$ by a certain factor. Section “Finding a set of default parameters” describes how we determined defaults for the two parameters, the window width $h$ and the placement factor. In fact, the LNL is a central quantity in our framework, because it influences not only the placement but also the selection of templates (see Section “Postprocessing and thresholding” below). Choosing $h$ too small typically has the effect that the LNL is overestimated, so that true peaks might be incorrectly classified as noise. Conversely, choosing $h$ too large leads to an underestimation, thereby increasing the computational burden as well as the number of spurious patterns included in the final list. The advantages of working with the median are obvious: easy computation, robustness, and equivariance with respect to monotone transformations. Similar notions of local noise can be found in the literature, see e.g. [8], where a truncated mean is used. Given the positions of the templates, we generate the matrix $\Phi$ according to Eqs. (1) and (2). In the fitting step, we compute a non-negative least squares ($q=2$) or alternatively a non-negative least absolute deviation ($q=1$) fit by determining a minimizer $\hat{\beta}$ of the criterion

$$\min_{\beta \geq 0} \; \| y - \Phi \beta \|_q^q, \qquad q = 1 \text{ or } q = 2. \qquad (7)$$

The optimization problem (7) is a quadratic ($q=2$) or linear ($q=1$) program and is solved using interior point methods (e.g. [20]). The details are relegated to the Appendix, Section “Fitting with non-negativity constraints”. As far as the choice of $q$ is concerned, we point out that $q=1$ yields a robust fit that can deal better with deviations from model assumptions, i.e. deviations from the averagine model or from the peak model. However, in general we are unable to provide a recommendation on how to choose $q$. Therefore, both choices are evaluated in our validation.
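The fitting step (7) with q = 2 can be sketched with scipy.optimize.nnls on a toy dictionary of Gaussian "templates"; positions, widths, and the noise level are synthetic:

```python
import numpy as np
from scipy.optimize import nnls

# synthetic 1-D "spectrum": two overlapping peaks plus a dictionary of decoys
x = np.linspace(0.0, 10.0, 400)
def peak(m):
    return np.exp(-0.5 * ((x - m) / 0.3) ** 2)

centers = np.arange(0.5, 10.0, 0.5)                # candidate template positions
Phi = np.column_stack([peak(m) for m in centers])  # n x p template matrix
beta_true = np.zeros(centers.size)
beta_true[5] = 3.0                                 # template centered at 3.0
beta_true[7] = 1.5                                 # overlapping neighbour at 4.0
y = Phi @ beta_true + 0.01 * np.random.default_rng(1).standard_normal(x.size)

beta_hat, _ = nnls(Phi, y)                         # fitting step, Eq. (7), q = 2
support = np.where(beta_hat > 0.5)[0]              # hard thresholding
```

Even though the dictionary contains many decoy templates, the non-negativity constraint keeps the fit sparse, and a simple threshold on the coefficients recovers the two true positions.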

Comparison with pepex

In prior work [21], subsequently referred to as ‘pepex’, non-negative least squares fitting is used as well. An important difference from our approach is that the matrix $\Phi$ is not constructed from the convolution of isotope distributions and peak shapes as described in Section “Template model”. Instead, peak detection is applied first to reduce the raw intensity data to peak clusters, a step that is usually referred to as centroiding. At the second stage, called de-isotoping, peak clusters are fitted by a design matrix containing isotope distributions themselves, not convolved versions. While this approach is computationally more attractive and avoids estimation of peak shape parameters (cf. Section “Parameter estimation”), the division into centroiding and de-isotoping may lead to poor performance for low-resolution and noisy data, or in the presence of overlapping patterns. In these cases, peak detection is not very reliable. In our template-based approach, there is no separation of centroiding and de-isotoping. It performs much better in the aforementioned cases, since it operates directly on the data and is hence less affected if single peaks of a pattern are difficult to detect. This reasoning is supported by our evaluation in Section “Results and discussion” as well as that in [3]. At the same time, our approach can in principle be applied to centroided spectra as well. In this case, the columns of the matrix $\Phi$ directly represent isotope distributions instead of isotopic patterns.

Postprocessing and thresholding

While indeed a considerable fraction of the entries of $\hat{\beta}$ are precisely equal to zero, treating all positions for which the corresponding entry differs from zero as locations of isotope patterns would yield a huge number of false positives, not least because of regions in which noise fitting reduces the objective in (7). Therefore, the fitting step of the previous section is accompanied by a thresholding step, with the aim of separating signal from noise. However, fitting followed by thresholding alone does not lead to a proper output. That strategy could only be successful if our template model were free of any kind of misspecification. Even when neglecting possible misfits of the averagine model, we still have to cope with two sources of systematic error: a limited sampling rate and mismatches in the peak model. These are the main reasons for what we term ‘peak splitting’, referring to the phenomenon that several templates are used to fit precisely one pattern. Figure 3 illustrates the effect of sampling in a noiseless setting. In the top panel, the signal is sampled in such a way that the top of the peak is lost. When placing two templates at the two sampling points $x_l$ and $x_u$ of maximum signal, non-negative least squares fitting attributes weights $\hat{\beta}_l$, $\hat{\beta}_u$ of roughly equal size to the templates. The postprocessing procedure outlined below yields a suitable correction. One might object that ‘peak splitting’ is a problem inherent in our entirely fitting-oriented approach (7), which does not incorporate any form of regularization. The bottom panel of Figure 3 shows the solution path of the non-negative lasso [22], given by $\{\hat{\beta}(\lambda), \lambda \geq 0\}$, $\hat{\beta}(\lambda) = \operatorname{argmin}_{\beta \geq 0} \|y - \Phi\beta\|_2^2 + \lambda \mathbf{1}^\top \beta$. One obtains two nearly parallel trajectories, demonstrating that only a heavily biased fit, which would undesirably lead to the exclusion of additional smaller signals, could accomplish the selection of only one template.

Figure 3

Peak splitting. Lower panel: Non-negative least squares fit of the sampled signal with and without postprocessing. Upper panel: Solution path of the non-negative lasso for the same data.

To a large extent, ‘peak splitting’ can be corrected by means of the following merging procedure, which we regard as postprocessing of the fitting step (7) and which we apply prior to thresholding. Given an estimate $\hat{\beta}$, we define $\hat{\mathcal{M}}_c = \{ m_{c,j} : \hat{\beta}_{c,j} > 0 \} \subset \{x_i\}_{i=1}^n$, $c = 1, \dots, C$, as the sets of all template locations whose corresponding coefficient exceeds 0.

  1.

    Separately for each $c$, divide the set $\hat{\mathcal{M}}_c$ into groups $\mathcal{G}_{c,1}, \dots, \mathcal{G}_{c,G_c}$ of ‘adjacent’ positions. Positions are said to be adjacent if their distance on the m/z scale is below a certain tolerance ppm, specified in parts per million, cf. Section “Finding a set of default parameters”. In the context of ‘peak splitting’, the templates at locations sharing the same group are assumed to fit precisely one true underlying peak.

  2.

    With the notation of Eq. (2), for each $c = 1, \dots, C$ and $g = 1, \dots, G_c$, we solve the following optimization problem:

$$(\tilde{m}_{c,g}, \tilde{\beta}_{c,g}) = \operatorname*{argmin}_{m_{c,g},\, \beta_{c,g}} \int \left( \sum_{m_{c,j} \in \mathcal{G}_{c,g}} \hat{\beta}_{c,j}\, \psi_{m_{c,j}}(x) - \beta_{c,g}\, \psi_{m_{c,g}}(x) \right)^{2} dx, \qquad (8)$$

with the aim of finding a location $\tilde{m}_{c,g}$ and a weight $\tilde{\beta}_{c,g}$ of the most intense peak $\psi_{\tilde{m}_{c,g}}$ within an isotope pattern $\varphi_{c,g}$ that best approximates, in a least squares sense, the fit of the most intense peaks $\{ \psi_{m_{c,j}} : m_{c,j} \in \mathcal{G}_{c,g} \}$ within the isotope patterns $\{ \varphi_{c,j} : m_{c,j} \in \mathcal{G}_{c,g} \}$.

  3.

    One ends up with sets $\tilde{\mathcal{M}}_c = \{ \tilde{m}_{c,g} \}_{g=1}^{G_c}$ and coefficients $\{ \tilde{\beta}_{c,g} \}_{g=1}^{G_c}$, $c = 1, \dots, C$.

The additional benefit of solving (8) in step two, as compared to selecting the template with the largest coefficient within each group as proposed in [3], is that we are able to determine the location of the pattern more accurately than predetermined by the limited sampling rate, since in (8) we optimize the location over a continuum. The optimization problem (8) can be solved fast and accurately by sampling the integrand on a fine grid of points and then solving a nonlinear least squares problem with optimization variables $m_{c,g}$ and $\beta_{c,g}$.
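The grid-sampling strategy for (8) can be sketched as follows: the combined fit of the split templates is evaluated on a fine grid, and a single location/weight pair is then fitted by nonlinear least squares. Gaussian peaks and all numbers are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def peak(x, m, sigma=0.02):
    return np.exp(-0.5 * ((x - m) / sigma) ** 2)

# two adjacent templates that together fit one true peak near m = 1000.003
grid = np.linspace(999.9, 1000.1, 4001)            # fine grid for the integral in (8)
split_fit = 0.55 * peak(grid, 1000.0) + 0.45 * peak(grid, 1000.006)

# Eq. (8): find a single (m, beta) whose peak best matches the combined fit
model = lambda x, m, b: b * peak(x, m)
(m_merged, b_merged), _ = curve_fit(model, grid, split_fit, p0=[1000.0, 1.0])
```

The merged location lands between the two split positions, off the sampling grid of the spectrum, which is exactly the sub-sampling-rate refinement mentioned above.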

All candidate positions $(\tilde{m}_{c,g}, \tilde{\beta}_{c,g})$ are assigned a signal-to-noise ratio

$$\mathrm{ratio}_{c,g} = \frac{\mathrm{GOF}^+(\tilde{m}_{c,g}) \cdot \tilde{\beta}_{c,g}}{\mathrm{LNL}^+(\tilde{m}_{c,g})}, \qquad (9)$$

where $\mathrm{LNL}^+(\tilde{m}_{c,g}) = \max\left\{ \mathrm{LNL}(\tilde{m}_{c,g}),\ \tfrac{1}{4}\, \mathrm{median}\left( \{ \mathrm{LNL}(x_i) \}_{i=1}^n \right) \right\}$ is a truncated version of the local noise level, with a lower bound included to avoid the denominator in (9) taking on tiny values in low-intensity regions. The factor $\mathrm{GOF}^+(\tilde{m}_{c,g})$ represents a goodness-of-fit adjustment, a correction which aims at downweighting spurious peaks in low-intensity noise regions. These are not hard to distinguish from signal regions, which, in view of the presence of peak patterns, tend to be considerably more regular. In order to spot noise regions, we fit the spectrum by single peaks (3) placed at each datum $x_i$, $i = 1, \dots, n$, where the peak shape model, the associated peak shape parameters, and the parameter $q$ are chosen according to the choices made for template generation (Section “Template model”) and template fitting (Section “Template fitting”), respectively. Denote the residuals of the resulting fit by $\{r_i\}_{i=1}^n$. A local measure of goodness-of-fit is defined by

$$\mathrm{GOF}^+(x) = \min\left\{ 1 - \frac{\sum_{i \in I_x} |r_i|^q}{\sum_{i \in I_x} |y_i|^q},\ 0.5 \right\}.$$

The idea underlying this procedure is that in noise regions, the fit to the data will be poor, and consequently, the size of the residuals is expected to be large relative to the signal, hence leading to a low goodness-of-fit statistic. The truncation at 0.5 limits the influence of this correction. A final list is generated by checking whether the signal-to-noise ratios (9) exceed a ‘significance threshold’ t specified by the user. We do not give a general guideline for choosing t, because a reasonable choice is very specific to experimental conditions, e.g. the platform used and the composition of the spectrum. It is important to note that while t itself is constant, we take into account that the noise level is heterogeneous, since thresholding is based on the ratios (9), where the local noise level enters.
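Assembling the ratio (9) and the thresholding rule might look as follows; the candidate intensities, goodness-of-fit values, and noise levels are hypothetical:

```python
def snr(beta, gof, lnl_at, lnl_global_median):
    """Signal-to-noise ratio in the spirit of Eq. (9), with the truncated LNL."""
    lnl_plus = max(lnl_at, 0.25 * lnl_global_median)  # lower-bounded denominator
    return gof * beta / lnl_plus

# hypothetical candidates: (intensity, goodness-of-fit, local noise level)
candidates = [(120.0, 0.5, 4.0),    # strong, well-fitting pattern
              (6.0,   0.5, 4.0),    # weak pattern near the noise level
              (40.0,  0.1, 4.0)]    # intense but poorly fitting (noise region)
t = 2.0                             # user-chosen significance threshold
kept = [i for i, (b, g, l) in enumerate(candidates)
        if snr(b, g, l, lnl_global_median=4.0) >= t]
```

Note how the third candidate is rejected despite its intensity: the goodness-of-fit factor suppresses it, which is the intended downweighting of spurious peaks.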

Finding a set of default parameters

Apart from the signal-to-noise threshold t, we have introduced the following parameters: the width h of the sliding window required for the computation of the local noise level (6), the template placement factor, and the parts-per-million tolerance ppm within which peaks are merged by the postprocessing procedure. With the exception of the threshold t, we have fixed all parameters to a default setting which we expect to give reasonable (albeit potentially suboptimal) results on spectra different from the ones analyzed here, without the need for manual tuning. In order to find such a default setting, we performed a grid search using only one selected spectrum of those described in Section “Datasets” below. While our default setting, which can be found in the HTML manual of the R package IPPD, already performs well, we recommend performing such a calibration to optimize the performance of our method.

Sparse recovery with non-negativity constraints: non-negative least squares + thresholding vs. the non-negative lasso

We believe that our preference for the first alternative is a major methodological contribution with the potential to impact related problems where non-negativity constraints come into play. In the present section, we provide, at a high level, a series of arguments rooted in the statistics and signal processing literature that clarify our contribution and support our preference.

Linear models and usual paradigms in statistics

The fact that we favour non-negative least squares + thresholding may seem implausible, since it questions or even partially contradicts established paradigms of high-dimensional statistical inference. Consider the linear model

$$y \approx \Phi \beta, \qquad y \in \mathbb{R}^n, \quad \Phi \in \mathbb{R}^{n \times p}, \qquad (10)$$

which corresponds to model (1), where ‘≈’ is used instead of ‘=’ to account for stochastic noise or model misspecifications. Linear models of the form (10) have been and continue to be objects of central interest in statistical modelling.

  • Classical work in statistics shows that, under mild conditions, if the number of samples $n$ grows at a faster rate than the number of features $p$, the ordinary least squares estimator satisfies $\hat{\beta}^{\mathrm{ols}} \to \beta$ (in probability) as $n \to \infty$.

  • Since many contemporary datasets, like the MS datasets of the present paper, are characterized by a large $p$ of the same order as $n$ or even larger, the first bullet has considerably lost relevance. Translated to MS datasets, it provides a statement about the case where the resolution tends to infinity. Therefore, modern statistical theory studies regimes in which $p$ is allowed to grow at a faster rate than $n$, with a focus on results that hold for finite sample sizes. These results hinge on some sort of sparsity assumption on $\beta$, the simplest being that $\beta$ is zero except on some index set (support) of small cardinality. In this context, a multitude of results has been proved (see e.g. [23] for an overview) indicating that the lasso estimator $\hat{\beta}^{\mathrm{lasso}}$ is statistically optimal in the sense that, if the regularization parameter is chosen in the right way, the squared Euclidean distance $\|\hat{\beta}^{\mathrm{lasso}} - \beta\|_2^2$ is nearly of the same order as that of an estimator one could construct if the non-zeroes of $\beta$ were known.

The second bullet provides quite some justification for NITPICK, which is based on the lasso. However, as detailed below, the requirement that the regularization parameter be chosen in the right way can be critical. On the other hand, there are several results that support our approach.

The power of non-negativity constraints

  • It turns out that the non-negativity constraint $\beta \geq 0$ imposed in non-negative least squares (NNLS) may lead to drastically better performance than that of the ordinary least squares estimator in ‘large p’ situations, provided $\Phi$ satisfies additional conditions. Roughly speaking, it is shown in [12] that if $\Phi$ has non-negative entries, which is fulfilled for the template matching problem of Section “Template model”, the NNLS estimator $\hat{\beta}$ does not overfit and is unique even in the singular case ($p > n$). These results indicate that NNLS may behave surprisingly well in a high-dimensional setup without using ℓ1-regularization, which is often propagated in the literature as basically the only option ([24], Section 16.2.2).

  • There are several recent papers [25–27] in the sparse recovery literature in which it is shown that a sparse, non-negative vector can be recovered from few linear measurements ($n \ll p$). In [12], these results are extended to a noisy setup. More specifically, it is shown that NNLS + thresholding can consistently recover $\beta$ and its support. Very recently, using similar conditions as in [12], Meinshausen [28] has established several guarantees for NNLS in a high-dimensional setup.

One should bear in mind that the non-negativity constraints are essential for our approach. Thresholding the unconstrained ordinary least squares estimator $\hat{\beta}^{\mathrm{ols}}$ in general leads to poor results in the ‘large p’ situation.
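The claim that non-negativity alone prevents overfitting can be checked on a toy example in the spirit of [12]: a random entrywise non-negative design with p > n, a sparse non-negative target, and plain NNLS followed by hard thresholding (all numbers synthetic):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)
n, p, s = 100, 300, 4                       # p > n: the singular ('large p') regime
Phi = rng.uniform(0.0, 1.0, (n, p))         # entrywise non-negative design
beta = np.zeros(p)
beta[:s] = rng.uniform(1.0, 2.0, s)         # sparse non-negative target
y = Phi @ beta + 0.01 * rng.standard_normal(n)

beta_hat, _ = nnls(Phi, y)                  # no regularization term at all
recovered = set(np.where(beta_hat > 0.5)[0])
```

Despite the absence of any penalty, the constrained fit concentrates on the true support, and a simple threshold finishes the job; thresholding an unconstrained least squares solution in the same setting would fail, since with p > n it can interpolate the noise.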

Shortcomings of ℓ1-regularization in theory

In [12], it is not only shown that NNLS + thresholding is a sound strategy for sparse recovery of a non-negative target; examples are also given where the non-negative lasso is outperformed even if its regularization parameter is set to match theoretical results, and regardless of whether subsequent thresholding as advocated in [29, 30] is used. In particular, inferiority of the lasso arises in the presence of small, yet significantly non-zero entries of $\beta$. These are specifically affected by the non-negligible bias of ℓ1-regularization [31]. It is important to note that the comparison in [12] does not contradict prior comparisons of the lasso (aka soft thresholding) and (hard) thresholding for orthonormal designs ($\Phi^\top \Phi = I$) in [32, 33], where both approaches perform similarly well and non-negativity constraints are not particularly important. Orthonormal designs, which lead to greatly simplified estimation problems, are not of interest in the context of this paper, since the template matrix $\Phi$ is far from orthonormal.

Shortcomings of ℓ1-regularization in practice

The study in [12] is of a more theoretical nature, since all constants of the problem, in particular the noise level, are assumed known, so that the regularization parameter can be set in an optimal fashion. This can realistically not be accomplished in practice. Likewise, the information-theoretic criterion employed in [3] as well as the data-splitting approach of [34] rely on knowledge of the noise level, or on a consistent estimate thereof, which is hard to obtain in the ‘large p’ situation [35]. In any case, the regularization parameter remains a quantity that is hard to grasp and hence hard to set for a practitioner, since it cannot be related directly to the signal. In contrast, the threshold t admits a straightforward interpretation.

Moreover, when using ℓ1-regularization, data fitting and model selection are coupled. While this is often regarded as an advantage, since model selection is performed automatically, we think that it is preferable to have a clear separation between data fitting and model selection, which is a feature of our approach. Prior to thresholding, the output of our fitting approach gives rise to a ranking which we obtain without the necessity of specifying any parameter. Selection is based entirely on a single fit, simply by letting the threshold vary. On the contrary, if one wants to reduce the number of features selected by the lasso, one resets the regularization parameter and solves a new optimization problem. Note that it is in general not possible to compute the entire solution path of the lasso [22] for the MS datasets used in the present paper, where the dimension of $\Phi$ is in the tens of thousands, so that the active set algorithm of [22] is prohibitively slow. In this regard, model selection by thresholding is computationally more attractive.

Results and discussion

For the assessment of the pattern picking performance, in total eight spectra generated by two different ionization methods, matrix assisted laser desorption/ionization (MALDI) and electrospray ionization (ESI), respectively, form the basis of the evaluation. While MALDI has been coupled to a time-of-flight (TOF) mass analyzer, ESI MS spectra have been recorded on both a linear ion trap (LTQ) and an Orbitrap mass analyzer. In addition, a series of spectra were prepared with the aim of investigating in detail the method’s performance in the presence of overlapping peptides.


For MALDI mass spectra (Additional file 1), time-of-flight mass analysis was performed; spectra were recorded on an ABI MALDI-TOF/TOF 4800 instrument in positive ion mode using α-cyano-4-hydroxy-cinnamic acid (CHCA) as matrix. Nanospray ESI spectra (Additional file 2) were measured in positive ion mode on a Thermo LTQ Orbitrap Velos MS; both high-resolution measurements using the Orbitrap mass analyzer (referred to as ‘Orbitrap’) and, alternatively, low-resolution linear ion trap (IT) measurements were performed with this setup. This experiment was chosen in order to demonstrate the utility of our method at different concentration levels, its robustness with respect to changes in the data-generating process, and its capability of handling singly charged ions, the main form generated by MALDI MS, as well as the more highly charged ions formed in ESI MS. Tryptic digests (performed in 40 mM ammonium bicarbonate) of model proteins were used as analytes: bovine myoglobin and chicken egg lysozyme (10 and 500 fmol each) for MALDI-TOF experiments, and lysozyme (250 and 1000 fmol) for ESI experiments. Disulfide bonds were reduced with dithiothreitol (DTT) prior to alkylation; free cysteine residues were alkylated with iodoacetamide. No further sample pretreatment was performed prior to MS analysis. When referring to these spectra, we omit that tryptic digests are meant: e.g., the term ‘MALDI-TOF myoglobin spectrum (500 fmol)’ denotes the respective tryptic digest.

To demonstrate explicitly the method’s ability to separate strongly overlapping patterns even in the case of badly resolved signals, 22 additional spectra were generated in positive ion mode on a Bruker Daltonics HCT Ultra Ion Trap MS with an electrospray ion source. Three synthetic peptides (cf. Section “Unmixing of overlaps” for details) with sequences corresponding to tryptic peptides from bovine serum albumin (BSA) were used as analytes. In each measurement, two out of the three peptides were mixed in different ratios to obtain overlapping peptide signals, also with different charge states. Two different concentrations (500 fmol/μl and 1000 fmol/μl) were injected into the mass spectrometer via a Cole-Parmer syringe pump.

Validation strategy

Validation of pattern picking is notoriously difficult, because a gold standard that is satisfactory from both statistical and biological points of view is missing. In this context, a major problem one has to account for is that spectra frequently contain patterns whose shape is not distinguishable from those of peptides, but which are in fact artifacts resulting e.g. from impurities during sample preparation and measurement. These artifacts do not constitute biologically relevant information and are, in this sense, ‘false positives’. An important instance are signals derived from the matrix (or from matrix clusters) frequently observed in MALDI MS. The pattern of these signals is similar to that of peptides; nevertheless, since their molecular composition differs significantly from that of an average peptide, their exact masses can be used to exclude these signals from the data analysis. On the other hand, from a statistical perspective, which judges a method according to how well it is able to detect specific patterns in a given dataset, a qualification as ‘true positive’ is justified. With the aim of unifying these aspects, we have worked out a dual validation scheme. In order to reduce the number of artifacts, all automatically generated lists of candidates for peptide masses as well as the lists of a human expert (see below) are postprocessed by a peptide mass filter [36]: only peptides whose monoisotopic mass deviates by less than 200 ppm from the closest peptide mass center (endnote a) are used for subsequent evaluation.
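The 200 ppm filtering step can be sketched in a few lines (a hypothetical helper for illustration, not the implementation of [36]; it uses the mass cluster center model stated in the endnotes):

```python
def peptide_mass_filter(masses, tol_ppm=200.0):
    """Keep only monoisotopic masses lying within tol_ppm of the
    nearest peptide mass cluster center, modelled as
    1.000485 * m_n + 0.029 with nominal mass m_n."""
    kept = []
    for m in masses:
        m_n = round((m - 0.029) / 1.000485)   # nearest nominal mass
        center = 1.000485 * m_n + 0.029
        if abs(m - center) / m * 1e6 <= tol_ppm:
            kept.append(m)
    return kept

# A mass on the peptide cluster grid passes; one far off the grid is removed.
print(peptide_mass_filter([1000.514, 1000.0]))  # → [1000.514]
```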

Comparison with manual annotation

The first part investigates how well a method is able to support a human expert who annotates the spectra manually. More specifically, the automatically generated lists are matched to the manual annotation such that an entry of the list (a potential peptide mass) is declared ‘true positive’ whenever there is a corresponding mass in the manual annotation deviating by no more than Δ ppm; otherwise, it is declared ‘false positive’. In order to adapt Δ to the resolution of the different mass lines, we used the following strategy: assuming that most of the peptides will have a mass larger than 700 Da, we determined the spacing Δm/z between neighboring data points in m/z direction for each mass spectrum in the lower mass range. If we further assume that a simple manual annotation by visual inspection can result in a mass deviation from the ‘correct’ mass position of at most Δm/z, we can derive the following tolerance values: Δ = 100 ppm for ion trap recordings, Δ = 50 ppm for MALDI-TOF recordings (endnote b), and Δ = 20 ppm for Orbitrap data.

As the performance of our method as well as that of all competing methods depends on a threshold-like parameter governing, crudely speaking, the trade-off between precision and recall, we explore the performance for a range of reasonable parameter values instead of fixing an (arbitrary) value, which we consider hardly meaningful. The results are then visualized as an ROC curve, in which each point in the (Recall, Precision)-plane corresponds to a specific choice of the parameter. Formally, we introduce binary variables $\{B_i(t)\}$ for each mass $i$ contained in the list of cardinality $\hat{L}(t)$ obtained when setting the threshold equal to $t$, where $B_i(t)$ equals 1 if the mass is matched and 0 otherwise, and denote by $L$ the number of masses of the manual annotation. The true positive rate (recall, R) and the positive predictive value (precision, P) associated with threshold $t$ are then defined by
$$R(t) = \frac{\sum_i B_i(t)}{L}, \qquad P(t) = \frac{\sum_i B_i(t)}{\hat{L}(t)}.$$
An ROC curve results from a sequence of pairs $\{R(t), P(t)\}$ for varying $t$.
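The definitions of R(t) and P(t) can be turned into a short sketch (with hypothetical scored candidate lists; an illustration, not our evaluation code):

```python
def roc_points(candidates, annotation, tol_ppm, thresholds):
    """(recall, precision) pairs over a range of thresholds.

    candidates: (mass, score) pairs; annotation: reference masses
    (L entries). A selected mass counts as matched, B_i(t) = 1, if
    some annotated mass lies within tol_ppm of it.
    """
    L = len(annotation)
    points = []
    for t in thresholds:
        selected = [m for m, score in candidates if score >= t]
        matched = sum(
            any(abs(m - a) / a * 1e6 <= tol_ppm for a in annotation)
            for m in selected
        )
        recall = matched / L
        precision = matched / len(selected) if selected else 0.0
        points.append((recall, precision))
    return points

# Lowering the threshold admits a noise mass at 2000 Da: recall stays
# at its maximum while precision drops.
cands = [(1000.0, 5.0), (1000.02, 3.0), (2000.0, 1.0)]
print(roc_points(cands, [1000.0, 1500.0], 50.0, [4.0, 2.0, 0.5]))
```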

Database query

The second part evaluates the lists in terms of a query to the Mascot search engine [37], version 2.2.04. In particular, we account for a major problem of manual annotation, namely that peptides yielding weak MS signals might easily be overlooked, yet might be detected by methods designed to extract such weak signals. Since we are especially interested in demonstrating the method’s ability to separate overlapping patterns, we adapted the standard search parameters of Mascot’s peptide mass fingerprint routine to allow two missed cleavage sites and to incorporate the following (variable) post-translational modifications: ‘Oxidation (M)’, ‘Carbamidomethyl (C)’, ‘Amidated (Protein C-term)’, ‘Deamidated (NQ)’. In particular, the latter two modifications will frequently trigger MS signals interleaving with the pattern of their unmodified counterpart: in the case of a deamidation, the mass of the modified ion is approximately 0.98 Da higher than that of the amidated peptide. The same mass tolerances as for the manual annotation are used. As for the comparison with the manual annotation, we evaluate several lists corresponding to different choices of the threshold. Instead of an ROC curve, which turned out to be visually cluttered, we display the statistics (score, coverage and fraction of hits) of two lists per method, namely of those achieving the best score and the best coverage, respectively. The complete set of results as well as further details of our evaluation, such as the manual annotation, are contained in Additional file 3.

Competing methods

We compare our method in its two variants depending on the choice of the fitting criterion (cf. Eq. (7)), labelled ℓ1 (q = 1) and ℓ2 (q = 2), respectively, with the following competing methods.


Lasso

The ‘lasso’ method in this paper serves as a surrogate for NITPICK. Since the ‘lasso’ is embedded into our framework while implementing a methodology that closely resembles NITPICK, we use the ‘lasso’ for the sake of convenience, to avoid an involved parameter optimization for NITPICK. Our lasso implementation benefits from the improved merging procedure of Section “Postprocessing and thresholding”. To accommodate a heterogeneous noise level, NITPICK divides spectra into bins. This can be avoided by determining a minimizer $\hat{\beta}(\lambda; W)$ of the weighted non-negative lasso problem

$$\min_{\beta \ge 0} \; \|y - \Phi\beta\|_2^2 + \lambda\, \mathbf{1}^\top W \beta, \qquad \lambda > 0,$$

where $W$ is a diagonal matrix with entries $w_{c,j} = \widehat{\mathrm{LNL}}^{+}(m_{c,j})$, $j = 1,\dots,p_c$, $c = 1,\dots,C$, whose purpose is to re-scale the amount of ℓ1-regularization according to the local noise level. The columns of the template matrix Φ are defined as in (8). The parameter λ here plays the role of the threshold t, cf. Section “Validation strategy”.
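To make the role of the linear penalty concrete, here is a minimal projected-gradient sketch of this weighted non-negative lasso problem (our illustrative solver choice, not the implementation evaluated in the paper):

```python
import numpy as np

def nn_weighted_lasso(Phi, y, lam, w, n_iter=5000):
    """Projected gradient descent for
    min_{beta >= 0} ||y - Phi beta||_2^2 + lam * w' beta.
    On the non-negative orthant the penalty is linear, so it merely
    shifts the gradient; feasibility is restored by clipping at zero."""
    step = 1.0 / (2 * np.linalg.norm(Phi, 2) ** 2)  # 1 / Lipschitz constant
    beta = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = 2 * Phi.T @ (Phi @ beta - y) + lam * w
        beta = np.maximum(beta - step * grad, 0.0)
    return beta

# With orthonormal templates the solution soft-thresholds each
# coefficient at lam * w_j / 2, so larger local-noise weights
# suppress small coefficients more aggressively.
print(nn_weighted_lasso(np.eye(3), np.array([3.0, 0.1, 2.0]),
                        1.0, np.ones(3)))  # → [2.5 0.  1.5]
```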


pepex

As discussed in Section “Template fitting”, pepex performs centroiding and de-isotoping separately. De-isotoping is based on non-negative least squares. Since pepex is limited to detecting patterns of charge state one, its performance is only assessed for MALDI-TOF spectra. Accordingly, when comparing the output of pepex with the manual annotation, the few patterns of charge state two are excluded. The parameters nm, pft, mincd, maxcd and nsam were set to optimize performance with respect to the manual annotation. The ROC curves are based on peak lists resulting from ten different choices of the signal-to-noise parameter snr.

Isotope wavelet

As opposed to our method, this approach is not able to handle overlaps. On the other hand, it typically shows strong performance in noisy and low intensity regions or on datasets with extremely low concentrations [39, 40]. While the isotope wavelet is not limited to charge one, it is run in charge-one-only mode for the MALDI-TOF spectra to achieve more competitive performance. For the sake of fair comparison, the results of the isotope wavelet on the MALDI-TOF spectra are evaluated in the same way as those of pepex.


Vendor software

The parameter setting for the ABI MALDI-TOF/TOF MS software was as follows: Local Noise Width (m/z) 250, Min Peak Width at FWHM 2.9. The Cluster Area Optimization S/N threshold was dynamically adapted to about three times the S/N threshold, as suggested by the ABI documentation. Since the vendor software is limited to charge one, its outputs are evaluated in the same way as those of pepex. Given the disproportionately high effort needed to find an optimal parameter setting of the vendor software for ESI spectra, its performance is not assessed.


Manual annotation vs. database query

When inspecting Figures 4 and 5 on the one hand and Table 1 on the other, one notices that the results of the evaluation based on the manual annotation are not in full accordance with the results of the database query. The difference is most striking for the MALDI-TOF spectra at 500 fmol, where our methods (ℓ1 and ℓ2) yield a significant improvement which does not become apparent from the database query. This is because only a fraction of the manual annotation is actually confirmed by the database query. The part which is not matched likely consists of artifacts due to contamination or chemical noise as well as of specific modifications not captured by the database query. In light of this, our dual validation scheme indeed makes sense.

Figure 4

Results for pattern picking, MALDI-TOF. Pattern picking performance for the MALDI-TOF spectra as described in Section “Datasets”. The points in the (Recall,Precision)-plane correspond to different choices of a method-specific threshold(-like) parameter.

Figure 5

Results for pattern picking, ESI. Pattern picking performance for the ESI spectra as described in Section “Datasets”. The points in the (Recall,Precision)-plane correspond to different choices of a method-specific threshold(-like) parameter.

Table 1 Mascot results


Figure 4 and Table 1 reveal an excellent performance of our methods (ℓ1 and ℓ2) throughout all MALDI-TOF spectra under consideration. For the myoglobin spectra, high sequence coverages are attained that clearly stand above those of the competing methods. For the spectra at 10 fmol, only the performance of the lasso is competitive with that of our methods in terms of the Mascot score; all other competitors, including the vendor software which has been tailored to process these spectra, are significantly weaker. In particular, the strikingly high proportion of ‘hits’ (≥94%) indicates that even at moderate concentration levels, our methods still distinguish well between signal and noise. This observation is strongly supported by the ROC curves in Figure 4, where the precision drops comparatively slowly with increasing recall. In this regard, our methodology clearly contrasts with approaches like the isotope wavelet that aim at achieving high protein sequence coverage. The latter often requires the selection of extremely lowly abundant peptide signals hidden in noise, at the expense of reduced specificity.

For MALDI-TOF spectra at high concentration levels, pepex achieves the best scores and is competitive with respect to sequence coverage. However, the performance of pepex degrades dramatically at lower concentration levels, as unambiguously shown by both parts of the evaluation. In particular, its database scores are the worst among all methods compared. This provides some support for our reasoning at the end of Section “Template fitting”.

For the ESI spectra, our methods in total fall slightly short of the lasso (particularly for the ion trap spectra), but perform convincingly as well, thereby demonstrating that they can deal well with multiple charge states. This is an important finding, since the presence of multiple charges makes the sparse recovery problem as formulated in model (1) much more challenging: the number of parameters to be estimated as well as the correlations across templates are increased. In spite of these difficulties, Figure 5 and Table 1 suggest that the performance of our pure fitting approach (7) is not affected. The capability to process ESI data is demonstrated further in the next section using a more difficult set of spectra.

Additional remarks

  • In Figure 4, the area under the curve (AUC) attained by our methods for myoglobin is higher at the lower concentration. At first glance, this may seem contradictory, since an increase in concentration should lead to a simpler problem. However, a direct comparison of the AUCs is problematic, since the numbers of true positives (17 at 10 fmol, 106 at 500 fmol) are rather different. For instance, there are choices of the threshold that yield 18 true positives and not a single false positive for both of our methods at 500 fmol, yet the AUC is lower.

  • The fact that some of the ROC curves start in the lower left corner results from outputs containing only false positives.

Unmixing of overlaps


One of the main advantages of our method over more simplistic pattern picking methods is its ability to disentangle isotope patterns of overlapping peptide signals, whose presence may lead to a significantly more challenging pattern picking problem, as discussed e.g. in [41] in the slightly different context of intact protein mass spectra. A potential application of our approach is therefore the analysis of a certain class of post-translational modifications, the deamidation of amino acid residues containing a carboxamide side chain functionality. The deamidation of asparagine (Asn) or glutamine (Gln) residues, yielding aspartic acid (Asp) or glutamic acid (Glu) residues, respectively, is an important post-translational modification which can have immense effects on the structure of peptides [42] and is of great relevance in a number of pathophysiological events [43]. During deamidation, the side chain carboxamide is hydrolysed, which is accompanied by a mass increase of 0.98 Da. Thus, in a spectrum of a mixture of the amidated and deamidated forms, a direct overlap of both signals is observed. It has to be noted that, in addition to the amidated/deamidated forms, in the case of Asn deamidation a second product containing an iso-peptide bond is formed, which has the same molecular mass; these two forms can be distinguished solely by their differential MS/MS behavior.


The peptides analyzed here in order to assess the performance of our approach were synthesized by means of Fmoc solid-phase peptide synthesis; sequences corresponding to tryptic peptides from bovine serum albumin (BSA), as listed in Table 2, were selected.

Table 2 Peptides mixed together

In each measurement, two out of the three listed peptides were mixed together in different ratios (Additional file 4). Given such a spectrum, we study the question whether our method returns the true underlying composition. We classify the output of our method as a correct interpretation of the spectrum if the templates corresponding to the true underlying peptides achieve signal-to-noise ratios of at least one and these ratios are the two largest among all templates used for fitting. This procedure corresponds to a selection-optimal choice of the threshold based on the knowledge of the true composition of the spectrum. This simplification may be justified in view of the extreme difficulty of the problem, as illustrated in Figure 6, in particular in view of lowly resolved spectra with an average m/z spacing of 0.06 Da. For the remaining parameters, we compare a grid search (performed separately for each spectrum) with the default parameter set. Table 3 and Figure 6 indicate that already the default parameter setting is able to solve a wide range of problem instances successfully. As one would expect, Table 3 and Figure 6 suggest that the higher the concentration and the more balanced the amplitudes of the overlapping peptides, the more likely it is that the overlap can be resolved. On the other hand, the higher the degree of overlap of the peptides, which depends on both their charges and the distance of their positions, the more difficult the problem becomes. This becomes obvious when considering the overlap of the two peptides located at 351.2 and 351.4 Da, respectively.
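The correctness criterion just described can be sketched as follows (a hypothetical helper; snr holds the fitted signal-to-noise ratios of all templates):

```python
def correct_interpretation(snr, true_idx):
    """The spectrum counts as correctly interpreted if both true
    templates reach a signal-to-noise ratio of at least one and
    hold the two largest ratios among all templates."""
    if any(snr[i] < 1.0 for i in true_idx):
        return False
    top2 = sorted(range(len(snr)), key=lambda i: snr[i], reverse=True)[:2]
    return set(top2) == set(true_idx)

# The two true templates (indices 1 and 2) dominate all others.
print(correct_interpretation([0.4, 3.1, 2.2, 0.9], true_idx=[1, 2]))  # → True
```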

Figure 6

Unmixing of overlap. Graphical representation of selected overlap problems as tabulated in Table 3. The experimental setups are given in the titles of the plots. The dots represent the signal, while solid and dashed lines represent the templates used by our method to match the signal, using the default parameter setting and the choice of the threshold as explained in the text. The grey area represents the overall fit when using the selected templates.

Table 3 Unmixing of overlaps


Conclusions

We have proposed a template matching approach for feature extraction in proteomic mass spectra. The main methodological innovation is a framework for sparse recovery in which sparsity is not promoted explicitly by a regularization term, as is usually done and was done in previous work. We fully exploit the strength of non-negativity constraints, which permits us to circumvent the delicate choice of a ‘proper’ amount of regularization, an everlasting problem in statistics, and to work with thresholding instead. The latter is not only computationally attractive, because one does not have to repeatedly solve the same optimization problem for different choices of the regularization parameter, but also increases user-friendliness, since the threshold is directly related to the signal-to-noise ratio, the quantity domain experts are interested in. The replacement of a regularization parameter by a threshold is a cornerstone of our conceptual design, which is guided by the principle of relieving the user from laborious fine-tuning of parameters. We believe that a small set of well-interpretable parameters with suitable defaults additionally improves the robustness and reproducibility of results. In this context, we would like to emphasize again that apart from the threshold, the user does not have to specify any parameters before running our software.

In a comprehensive experimental study involving instruments of varying resolution and spectra of varying concentration levels, in which we comparatively assess the performance of our approach on the basis of an elaborate dual validation scheme, it is demonstrated that the pattern picking performance is excellent for MALDI-TOF spectra and stands out due to its specificity in selecting signal and only little noise. A major strength of the method is its ability to unmix overlapping peptide signals, as shown for a series of ESI spectra. In total, we demonstrate that our approach is broadly applicable to a variety of spectra. While our approach is guided by a concrete application in proteomics, the framework is general enough to be of much use for related deconvolution problems emerging in other fields; only the templates have to be adjusted according to the specific application.

While in this paper we have focused on single spectra, the approach can be extended to process whole LC-MS runs, as already implemented in our R package IPPD. More precisely, the sweep line scheme of [44] is used to agglomerate the results from single spectra. To apply our methods on a routine basis, an improved implementation, notably parallelization, is required, since e.g. processing a single spectrum of the MaxQuant datasets [2] takes 10 s on average on a Unix system equipped with an Intel(R) Core(TM)2 Duo CPU T9400 (2.53 GHz) and 4 GB main memory. There is much room for improvement, since our implementation is based on interpreted R code.

Concerning future directions of research, a question we have not yet answered in a satisfactory way is the choice of the fitting criterion. While both criteria (least squares and least absolute deviation) employed in this paper perform well, their implicit assumption of additive noise might be questionable [45]. It is worth investigating whether a multiplicative noise model could yield even better results. Second, one might ask whether the performance could be further improved by using our method jointly with the isotope wavelet, which is affected by overlaps but has the potential to achieve higher protein sequence coverage.


Endnotes

a Monoisotopic peptide mass centers are modelled by 1.000485·m_n + 0.029, where m_n denotes the nominal mass.

b For the MALDI-TOF lysozyme datasets, an extended search tolerance of 100 ppm was applied due to experimental miscalibration of the MS.


Fitting with non-negativity constraints

In the following, we provide the details concerning optimization problem (7). In view of the special structure of Φ, (7) is computationally tractable even if n and the number of templates are in the tens of thousands. We exploit the sparsity of the problem arising from templates that are highly localized, i.e. the domain on which they are numerically different from zero covers only a small part of the whole m/z range of the spectrum. As a consequence, both Φ and the Gram matrix $\Phi^\top\Phi$, which is crucial in the computation, can conveniently be handled using software for sparse matrices. For R, such software is available in the Matrix package [46].

Non-negative least squares

Consider the quadratic program

$$\min_{\beta} \; \frac{1}{2}\|y - \Phi\beta\|_2^2 \quad \text{subject to} \quad \beta \ge 0. \tag{12}$$

In order to solve (12), we use the so-called log-barrier method, which amounts to solving a sequence of unconstrained nonlinear convex problems in which the constraints β_j ≥ 0, j = 1,…,p, are taken into account by incorporating log-barrier terms −log(β_j)/γ into the objective. As γ → ∞, the log-barrier acts like a function which equals +∞ if β_j < 0 and zero otherwise. Beginning with a moderately sized starting value for γ, we solve the convex problem

$$\min_{\beta} \; \frac{1}{2}\|y - \Phi\beta\|_2^2 - \frac{1}{\gamma}\sum_{j=1}^{p} \log(\beta_j) \tag{14}$$

using Newton’s method. The gradient and Hessian with respect to β, respectively, are given by

$$\nabla_\beta = -\Phi^\top(y - \Phi\beta) - \frac{1}{\gamma}\begin{bmatrix} 1/\beta_1 \\ \vdots \\ 1/\beta_p \end{bmatrix}, \qquad \nabla_\beta^2 = \Phi^\top\Phi + \frac{1}{\gamma}\,\mathrm{diag}\!\left(1/\beta_1^2, \dots, 1/\beta_p^2\right).$$

The Newton descent direction $d_\beta$ is obtained from the linear system

$$\nabla_\beta^2\, d_\beta = -\nabla_\beta.$$

The solution of linear systems of this structure constitutes the main computational effort. Fast solutions are obtained using CHOLMOD [47], which offers an efficient implementation for computing the Cholesky factorization of sparse symmetric positive definite matrices. Since the diagonal of $\nabla_\beta^2$ changes from one Newton iteration to the next, one Cholesky factorization has to be performed per Newton step. Once we have solved (14) for a specific γ, we solve a new problem of the type (14) with γ·M, M > 1. This is repeated until γ exceeds a predefined maximum value. For a thorough account of the log-barrier method, we refer to [20].
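A dense, self-contained sketch of the scheme just described (NumPy's generic solver stands in for the sparse Cholesky factorization via CHOLMOD, and the Armijo backtracking line search is an added safeguard not spelled out above):

```python
import numpy as np

def nnls_logbarrier(Phi, y, gamma0=1.0, M=10.0, gamma_max=1e8,
                    newton_steps=40):
    """Log-barrier method for min_{beta >= 0} 0.5 * ||y - Phi beta||_2^2."""
    G = Phi.T @ Phi

    def f(b, gamma):
        # barrier objective; +inf outside the strictly feasible region
        if np.any(b <= 0):
            return np.inf
        r = y - Phi @ b
        return 0.5 * r @ r - np.log(b).sum() / gamma

    beta = np.ones(Phi.shape[1])          # strictly feasible start
    gamma = gamma0
    while gamma <= gamma_max:
        for _ in range(newton_steps):
            grad = Phi.T @ (Phi @ beta - y) - 1.0 / (gamma * beta)
            hess = G + np.diag(1.0 / (gamma * beta ** 2))
            d = np.linalg.solve(hess, -grad)
            slope = grad @ d              # negative for a descent direction
            if -slope < 1e-12:            # Newton decrement small: centered
                break
            t = 1.0
            while t > 1e-10 and f(beta + t * d, gamma) > f(beta, gamma) + 1e-4 * t * slope:
                t *= 0.5                  # backtrack: stay feasible, decrease f
            beta = beta + t * d
        gamma *= M
    return beta

Phi = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([2.0, -1.0, 1.0])
print(nnls_logbarrier(Phi, y))   # second coordinate pinned near zero
```

For this toy problem the unconstrained minimizer is (2, −1); the non-negativity constraint becomes active and the iterates converge to approximately (1.5, 0), with the second coordinate strictly positive because of the barrier.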

Complexity analysis of non-negative least squares

We here provide the order of magnitude of floating point operations (flops) required per update (i.e. per Newton step) for the specific non-negative least squares problems considered in this paper. In our implementation, we exploit that the templates contained in the matrix Φ are highly localized. As a result, after a suitable column permutation, the matrix $\Phi^\top\Phi$ is roughly a band matrix with bandwidth k no larger than a few hundred. The dominant operation is solving the linear system $\nabla_\beta^2\, d_\beta = -\nabla_\beta$ with the help of the Cholesky factorization, which can be done in O(pk²) flops (e.g. [20], p. 670). Our algorithm usually terminates after no more than one hundred Newton steps.

Non-negative least absolute deviation

Consider the optimization problem

$$\min_{\beta} \; \|y - \Phi\beta\|_1 \quad \text{subject to} \quad \beta \ge 0. \tag{15}$$

Problem (15) can be recast as the following linear program.

$$\min_{\beta,\, r} \; \|r\|_1 \quad \text{subject to} \quad \Phi\beta - y + r \ge 0, \quad y - \Phi\beta + r \ge 0, \quad r \ge 0, \quad \beta \ge 0.$$
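As a point of reference, this linear program can also be handed to an off-the-shelf LP solver; a minimal sketch using scipy.optimize.linprog (not the log-barrier implementation described next):

```python
import numpy as np
from scipy.optimize import linprog

def nnlad_lp(Phi, y):
    """Solve min_{beta >= 0} ||y - Phi beta||_1 via the LP recast:
    variables x = (beta, r), objective 1'r, and the two residual
    constraints rewritten as A_ub x <= b_ub."""
    n, p = Phi.shape
    c = np.concatenate([np.zeros(p), np.ones(n)])
    # Phi beta - r <= y   and   -(Phi beta) - r <= -y
    A_ub = np.block([[Phi, -np.eye(n)], [-Phi, -np.eye(n)]])
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (p + n))
    if res.status != 0:
        raise RuntimeError(res.message)
    return res.x[:p]

# With a single constant template, the l1 fit returns the median of y,
# ignoring the outlier at 10 that would drag a least squares fit upward.
b = nnlad_lp(np.array([[1.0], [1.0], [1.0]]), np.array([1.0, 2.0, 10.0]))
print(round(b[0], 6))  # → 2.0
```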

For its solution, we use the log-barrier method sketched in the previous paragraph. After incorporating log-barrier terms for all constraints, the objectives of the unconstrained convex problems are of the form

$$\|r\|_1 - \frac{1}{\gamma}\left(\sum_{i=1}^{n} \left[\log \xi_i^{+} + \log \xi_i^{-} + \log r_i\right] + \sum_{j=1}^{p} \log \beta_j\right),$$

where we have used the notational shortcuts

$$\xi_i^{+} = (\Phi\beta)_i - y_i + r_i, \qquad \xi_i^{-} = y_i - (\Phi\beta)_i + r_i, \qquad i = 1,\dots,n.$$

The gradients with respect to r and β, respectively, are given by

$$\nabla_r = \mathbf{1} - \frac{1}{\gamma}\begin{bmatrix} 1/\xi_1^{+} + 1/\xi_1^{-} + 1/r_1 \\ \vdots \\ 1/\xi_n^{+} + 1/\xi_n^{-} + 1/r_n \end{bmatrix}, \qquad \nabla_\beta = -\frac{1}{\gamma}\left( \Phi^\top\!\left([\Xi^{+}]^{-1} - [\Xi^{-}]^{-1}\right)\mathbf{1} + \begin{bmatrix} 1/\beta_1 \\ \vdots \\ 1/\beta_p \end{bmatrix} \right), \qquad \Xi^{\pm} = \mathrm{diag}\!\left(\xi_1^{\pm}, \dots, \xi_n^{\pm}\right).$$

Introducing R = diag(r_1,…,r_n) and B = diag(β_1,…,β_p), the Hessian is given by the block matrix

$$\begin{bmatrix} \nabla_r^2 & \nabla_r \nabla_\beta \\ (\nabla_r \nabla_\beta)^\top & \nabla_\beta^2 \end{bmatrix} = \frac{1}{\gamma}\begin{bmatrix} [\Xi^{+}]^{-2} + [\Xi^{-}]^{-2} + R^{-2} & \left([\Xi^{+}]^{-2} - [\Xi^{-}]^{-2}\right)\Phi \\ \Phi^\top\!\left([\Xi^{+}]^{-2} - [\Xi^{-}]^{-2}\right) & \Phi^\top\!\left([\Xi^{+}]^{-2} + [\Xi^{-}]^{-2}\right)\Phi + B^{-2} \end{bmatrix}.$$

The linear system for the Newton descent directions reads

$$\begin{bmatrix} \nabla_r^2 & \nabla_r \nabla_\beta \\ (\nabla_r \nabla_\beta)^\top & \nabla_\beta^2 \end{bmatrix} \begin{bmatrix} d_r \\ d_\beta \end{bmatrix} = -\begin{bmatrix} \nabla_r \\ \nabla_\beta \end{bmatrix}.$$

Note that $\nabla_r^2$ is diagonal, so it is a cheap operation to resolve for $d_r$ once $d_\beta$ is known:

$$d_r = -\left(\nabla_r^2\right)^{-1}\left(\nabla_r \nabla_\beta\, d_\beta + \nabla_r\right).$$

Plugging this into the second block of the linear system, one obtains

$$-(\nabla_r \nabla_\beta)^\top \left(\nabla_r^2\right)^{-1}\left(\nabla_r \nabla_\beta\, d_\beta + \nabla_r\right) + \nabla_\beta^2\, d_\beta = -\nabla_\beta,$$

which is equivalent to

$$\left(\nabla_\beta^2 - (\nabla_r \nabla_\beta)^\top \left(\nabla_r^2\right)^{-1} \nabla_r \nabla_\beta\right) d_\beta = -\nabla_\beta + (\nabla_r \nabla_\beta)^\top \left(\nabla_r^2\right)^{-1} \nabla_r.$$

In order to solve this linear system, we proceed as for non-negative least squares. The computational cost is roughly the same, since the sparse structure of $\Phi^\top\Phi$ can still be exploited. For non-negative least squares, re-computation of the Hessian $\nabla_\beta^2$ only involves a diagonal update, an operation of negligible computational cost. For non-negative least absolute deviation, however, computation of $\nabla_\beta^2$ involves the matrix product $\Phi^\top([\Xi^{+}]^{-2} + [\Xi^{-}]^{-2})\Phi$, i.e. essentially a repeated computation of a scaled Gram matrix. In spite of the special structure of $\Phi^\top\Phi$, this computational cost is of the same order as the solution of the linear system, even when using a self-written routine for matrix multiplication tailored to the specific structure.
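The block elimination above can be checked numerically on a small random system sharing the key structural property (a diagonal upper-left block); this is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 6, 3

# A symmetric positive definite system with the same block structure:
# [[H_rr, H_rb], [H_rb^T, H_bb]] (d_r, d_b) = -(g_r, g_b), H_rr diagonal.
H_rr = np.diag(rng.uniform(1.0, 2.0, n))
H_rb = rng.normal(size=(n, p))
H_bb = H_rb.T @ H_rb + 5.0 * np.eye(p)
g_r = rng.normal(size=n)
g_b = rng.normal(size=p)

# Direct solution of the full (n + p) x (n + p) system
K = np.block([[H_rr, H_rb], [H_rb.T, H_bb]])
d = np.linalg.solve(K, -np.concatenate([g_r, g_b]))

# Block elimination: Schur complement system for d_b, then cheap
# back-substitution for d_r through the diagonal H_rr.
H_rr_inv = np.diag(1.0 / np.diag(H_rr))
S = H_bb - H_rb.T @ H_rr_inv @ H_rb
d_b = np.linalg.solve(S, -g_b + H_rb.T @ H_rr_inv @ g_r)
d_r = -H_rr_inv @ (H_rb @ d_b + g_r)

assert np.allclose(d, np.concatenate([d_r, d_b]))
print("block elimination matches direct solve")
```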


  1. 1.

    Mo F, Mo Q, Chen Y, Goodlett DR, Hood L, Omenn GS, Li S, Lin B: WaveletQuant, an improved quantification software based on wavelet signal threshold de-noising for labeled quantitative proteomic analysis. BMC Bioinformatics 2010, 11: 219. 10.1186/1471-2105-11-219

  2. 2.

    Cox J, Mann M: MaxQuant enables high peptide identification rates, individualized p.p.b.-range mass accuracies and proteome-wide protein quantification. Nat Biotechnol 2008, 26: 1367–1372. 10.1038/nbt.1511

  3. 3.

    Renard B, Kirchner M, Steen H, Steen J, Hamprecht F: NITPICK: peak identification for mass spectrometry data. BMC Bioinformatics 2008, 9: 355. 10.1186/1471-2105-9-355

  4. 4.

    Hoopmann MR, Finney GL, MacCoss MJ: High-speed data reduction, feature detection, and MS/MS spectrum quality assessment of shotgun proteomics data sets using high-resolution mass spectrometry. Anal Chem 2007, 79: 5620–5632. 10.1021/ac0700833

  5. 5.

    Gambin A, Dutkowski J, Karczmarski J, Kluge B, Kowalczyk K, Ostrowski J, Poznanski J, Tiuryn J, Bakun M, Dadlez M: Automated reduction and interpretation of multidimensional mass spectra for analysis of complex peptide mixtures. Int J Mass Spectrom 2007, 260: 20–30. 10.1016/j.ijms.2006.06.011

  6. 6.

    Mantini D, Petrucci F, Pieragostino D, Del Boccio P, Di Nicola M, Di Ilio C, Federici G, Sacchetta P, Comani S, Urbani A: LIMPIC: a computational method for the separation of protein MALDI-TOF-MS signals from noise. BMC Bioinformatics 2007, 8: 101. 10.1186/1471-2105-8-101

  7. 7.

    Noy K, Fasulo D: Improved model-based, platform-independent feature extraction for mass spectrometry. Bioinformatics 2007, 23: 2528–2535. 10.1093/bioinformatics/btm385

  8. 8.

    Kaur P, O’Connor PB: Algorithms for automatic interpretation of high resolution mass spectra. J Am Soc Mass Spectrom 2006, 17: 459–468. 10.1016/j.jasms.2005.11.024

  9. 9.

    Tibshirani R: Regression shrinkage and variable selection via the lasso. J R Stat Soc Ser B 1996, 58: 671–686.

  10. 10.

    Du P, Angeletti R: Automatic deconvolution of isotope resolved mass spectra using variable Selection and quantized peptide mass distribution. Anal Chem 2006, 78: 3385–3392. 10.1021/ac052212q

  11. 11.

    Tibshirani R: Regression shrinkage and selection via the lasso: a retrospective (with discussion). J R Stat Soc Ser B 2011, 73: 273–282. 10.1111/j.1467-9868.2011.00771.x

  12. 12.

    Slawski M, Hein M: Sparse recovery by thresholded non-negative least squares. In Advances in Neural Information Processing Systems 24. MIT press, Cambridge, Massachusetts; 2011:1926–1934.

  13. 13.

    Lange E, Gropl C, Reinert K, Kohlbacher O, Hildebrandt A: High-accuracy peak picking of proteomics data using wavelet techniques. Pac Symp Biocomput 2006, 11: 243–254.

  14. 14.

    Schulz-Trieglaff O, Hussong R, Gröpl C, Hildebrandt A, Reinert K: A fast and accurate algorithm for the quantification of peptides from mass spectrometry data. In Proceedings of the Eleventh Annual International Conference on Research in Computational Molecular Biology (RECOMB 2007), Volume 11,. Springer, Berlin; 2007:437–487.

  15. 15.

    Zubarev R: Accurate monoisotopic mass measurements of peptides: possibilities and limitations of high resolution time-of-flight particle desorption mass spectrometry. Rapid Commun Mass Spectrom 1996, 10(11):1386–1392. 10.1002/(SICI)1097-0231(199608)10:11<1386::AID-RCM652>3.0.CO;2-T

  16. 16.

    Senko M, Beu S, McLafferty F: Determination of monoisotopic masses and ion populations for large biomolecules from resolved isotopic distributions. J Am Soc Mass Spectrom 1995, 6: 229–233. 10.1016/1044-0305(95)00017-8

  17. 17.

    Horn DM, Zubarev RA, McLafferty FW: Automated reduction and interpretation of high resolution electrospray mass spectra of large molecules. J Am Soc Mass Spectrom 2000, 11: 320–332. 10.1016/S1044-0305(99)00157-9

  18. 18.

    Lou X, Renard B, Kirchner M, Koethe U, Graf C, Lee C, Steen J, Steen H, Mayer M, Hamprecht F: Deuteration distribution estimation with improved sequence coverage for HDX/MS experiments. Bioinformatics 2010, 26: 1535–1541. 10.1093/bioinformatics/btq165

  19. 19.

    Suits F, Hoekman B, Rosenling T, Bischoff R, Horvatovich P: Threshold-avoiding proteomics pipeline. Anal Chem 2011, 83: 7786–7794. 10.1021/ac201332j

  20. 20.

    Boyd S, Vandenberghe L: Convex Optimization. Cambridge University Press, New York; 2004.

  21. 21.

    Samuelsson J, Dalevi D, Levander F, Rognvaldsson T: Modular, scriptable and automated analysis tools for high-throughput peptide mass fingerprinting. Bioinformatics 2004, 20: 3628–3635. 10.1093/bioinformatics/bth460

  22. 22.

    Efron B, Hastie T, Johnstone I, Tibshirani R: Least Angle Regression (with discussion). Ann Stat 2004, 32: 407–499. 10.1214/009053604000000067

  23. 23.

    van de Geer S, Bühlmann P: On the conditions used to prove oracle results for the Lasso. Electron J Stat 2009, 3: 1360–1392. 10.1214/09-EJS506

  24. 24.

    Hastie T, Tibshirani R, Friedman J: The Elements of Statistical Learning, 2nd Edition. Springer, New York; 2008.

  25. 25.

    Bruckstein A, Elad M, Zibulevsky M: On the uniqueness of nonnegative sparse solutions to underdetermined systems of equations. IEEE Trans Inf Theory 2008, 54: 4813–4820.

  26. 26.

    Wang M, Tang A: Conditions for a unique non-negative solution to an underdetermined system. In Proceedings of Allerton Conference on Communication, Control, and Computing, Volume 49,. IEEE Press, Piscataway, New Jersey; 2009:301–307.

  27. 27.

    Donoho D, Tanner J: Counting the faces of randomly-projected hypercubes and orthants, with applications. Discrete Comput Geometry 2010, 43: 522–541. 10.1007/s00454-009-9221-z

  28. 28.

    Meinshausen N: Sign-constrained least squares estimation for high-dimensional regression. Tech. rep.. Department of Statistics, Oxford University; 2012.

  29. 29.

    Meinshausen N, Yu B: Lasso-type recovery of sparse representations for high-dimensional data. Ann Stat 2009, 37: 246–270. 10.1214/07-AOS582

  30.

    Zhou S: Thresholding procedures for high-dimensional variable selection and statistical estimation. In Advances in Neural Information Processing Systems 22. MIT Press, Cambridge, Massachusetts; 2009:2304–2312.

  31.

    Zhang T: Some sharp performance bounds for least squares regression with L1 regularization. Ann Stat 2009, 37: 2109–2144. 10.1214/08-AOS659

  32.

    Donoho D, Johnstone I: Ideal spatial adaptation by wavelet shrinkage. Biometrika 1994, 81: 425–455. 10.1093/biomet/81.3.425

  33.

    Fan J, Li R: Variable selection via nonconcave penalized likelihood and its oracle properties. J Am Stat Assoc 2001, 96: 1348–1360.

  34.

    Wasserman L, Roeder K: High-dimensional variable selection. Ann Stat 2009, 37: 2178–2201. 10.1214/08-AOS646

  35.

    Fan J, Guo S, Hao N: Variance estimation using refitted cross-validation in ultrahigh dimensional regression. J R Stat Soc Ser B 2012, 74: 37–65. 10.1111/j.1467-9868.2011.01005.x

  36.

    Wolski WE, Farrow M, Emde AK, Lehrach H, Lalowski M, Reinert K: Analytical model of peptide mass cluster centres with applications. Proteome Sci 2006, 4: 18. 10.1186/1477-5956-4-18

  37.

    Perkins DN, Pappin DJ, Creasy DM, Cottrell JS: Probability-based protein identification by searching sequence databases using mass spectrometry data. Electrophoresis 1999, 20: 3551–3567. 10.1002/(SICI)1522-2683(19991201)20:18<3551::AID-ELPS3551>3.0.CO;2-2

  38.

    Friedman J, Hastie T, Tibshirani R: Regularization paths for generalized linear models via coordinate descent. J Stat Softw 2010, 33: 1–22.

  39.

    Hussong R, Tholey A, Hildebrandt A: Efficient analysis of mass spectrometry data using the isotope wavelet. In COMPLIFE 2007: The Third International Symposium on Computational Life Science, Volume 940(1). Edited by: Siebes APJM, Berthold MR, Glen RC, Feelders AJ. AIP, Melville; 2007:139–149.

  40.

    Hussong R, Gregorius B, Tholey A, Hildebrandt A: Highly accelerated feature detection in proteomics data sets using modern graphics processing units. Bioinformatics 2009, 25: 1937–1943. 10.1093/bioinformatics/btp294

  41.

    Liu X, Inbar Y, Dorrestein P, Wynne C, Edwards N, Souda P, Whitelegge J, Bafna V, Pevzner P: Deconvolution and database search of complex tandem mass spectra of intact proteins. Mol Cell Proteomics 2010, 9: 2772–2782. 10.1074/mcp.M110.002766

  42.

    Tholey A, Pipkorn R, Bossemeyer D, Kinzel V, Reed J: Influence of myristoylation, phosphorylation, and deamidation on the structural behavior of the N-terminus of the catalytic subunit of cAMP-dependent protein kinase. Biochemistry 2001, 40: 225–231. 10.1021/bi0021277

  43.

    Reissner K, Aswad D: Deamidation and isoaspartate formation in proteins: unwanted alterations or surreptitious signals? Cell Mol Life Sci 2003, 60: 1281–1295. 10.1007/s00018-003-2287-5

  44.

    Schulz-Trieglaff O, Hussong R, Gröpl C, Leinenbach A, Hildebrandt A, Huber C, Reinert K: Computational quantification of peptides from LC-MS data. J Comput Biol 2008, 15: 685–704. 10.1089/cmb.2007.0117

  45.

    Du P, Stolovitzky G, Horvatovich P, Bischoff R, Lim J, Suits F: A noise model for mass spectrometry based proteomics. Bioinformatics 2008, 24: 1070–1077. 10.1093/bioinformatics/btn078

  46.

    Bates D, Maechler M: Matrix: Sparse and Dense Matrix Classes and Methods. 2009. [R package version 0.999375-21]

  47.

    Davis T: CHOLMOD: sparse supernodal Cholesky factorization and update/downdate. 2005.



Acknowledgements
The authors would like to thank Markus Martin for setting up the Bruker Daltonics HCT Ultra Ion Trap MS and Bart van den Berg for measuring the LC-MS datasets used in the vignette of the R package IPPD. We thank Fredrik Levander and Thorsteinn Rognvaldsson for providing us the pepex implementation. We thank the reviewers and an associate editor whose comments and suggestions led to a substantial improvement over previous drafts.


Funding
Clusters of Excellence ‘Multimodal Computing and Interaction’ (to M.S., R.H. and B.G.), ‘Inflammation@Interfaces’ (to A.T. and T.J.) within the Excellence Initiative of the German Federal Government; DFG (grants BIZ4:1-4 to R.H. and A.H.).

Author information

Correspondence to Martin Slawski.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

MS and MH devised the methodology as presented in Section “Methods”. MS implemented the Bioconductor package, with contributions by RH and MH. The comparative data analysis was performed by RH, MS, MH and AH; RH and AH performed the MASCOT queries. AT developed the experimental design and provided an interpretation of the MS data. TJ and BG conducted the MS experiments and produced the results of the vendor software. All authors read and approved the final manuscript.

Electronic supplementary material

Authors’ original submitted files for images

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Slawski, M., Hussong, R., Tholey, A. et al. Isotope pattern deconvolution for peptide mass spectrometry by non-negative least squares/least absolute deviation template matching. BMC Bioinformatics 13, 291 (2012).



Keywords
  • Regularization Parameter
  • Lasso
  • Manual Annotation
  • Isotope Pattern
  • Cholesky Factorization