Since the development of quantitative PCR (Q-PCR) in the early nineties [1], it has become an increasingly important method for gene expression quantification. Its aim is to amplify a specific DNA sequence under conditions that allow product accumulation to be monitored and measured at each cycle. This stepwise quantification has fostered the development of analysis techniques and tools. These data-mining strategies focus on the cycle at which fluorescence reaches a defined threshold (a value called the *quantification cycle*, or *Cq*) [2, 3]; with the *Cq* parameter, quantification can be addressed following two approaches: (i) the standard curve method [4] and (ii) the ΔΔ*Cq* method [5].

It is worth noting that these classical quantification methods assume that amplification efficiency is constant or even equal to 100%. An efficiency value of 100% implies that, during the exponential phase of the Q-PCR reaction, two copies are generated from every available template. However, it has been shown that these assumptions are not supported by experimental evidence [6]. With the aim of estimating PCR efficiency, and thus including it in subsequent analysis procedures, two strategies have been developed: (i) kinetics-based calculation and (ii) standard curve assessment.
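The doubling assumption above can be sketched as follows; this is a minimal illustration of the classical ΔΔ*Cq* calculation with made-up *Cq* values, not data from this study:

```python
# Sketch: fold change under the classical 100%-efficiency assumption,
# where every cycle doubles the amount of template (base = 2).
# All Cq values below are hypothetical.

def fold_change_ddcq(cq_target_treated, cq_ref_treated,
                     cq_target_control, cq_ref_control, efficiency=2.0):
    """Classical ddCq: the same amplification factor is assumed for all genes."""
    d_cq_treated = cq_target_treated - cq_ref_treated
    d_cq_control = cq_target_control - cq_ref_control
    ddcq = d_cq_treated - d_cq_control
    return efficiency ** (-ddcq)

# A target whose Cq drops by one cycle relative to the reference gene
# corresponds to a 2-fold increase when efficiency is assumed to be 100%:
print(fold_change_ddcq(24.0, 20.0, 25.0, 20.0))  # → 2.0
```

If the true efficiency is below 100%, the fixed base of 2 overestimates the fold change, which is the motivation for the efficiency-estimation strategies discussed next.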

Taking into account the reaction kinetics, which are essentially equivalent to the bacterial growth formulae [7], amplification efficiency can be visualized in a half-logarithmic plot in which log-transformed fluorescence values are plotted against cycle number. In this type of plot, the exponential amplification phase is linear and the slope of this line is the reaction efficiency [8]. Empirical determinations of amplification efficiencies show values ranging between 1.65 and 1.90 (65% and 90%) [9]. The standard curve-based method relies on repeating the PCR reaction with known amounts of template. *Cq* values are plotted *versus* template (i.e. reverse-transcribed total RNA) input concentration to calculate the slope. Laboratories where a few genes are analyzed for diagnostic purposes may develop standard curves, but these are in most cases out of scope for research projects in which tens to hundreds of genes will be tested for changes in gene expression.
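Both slope-based estimates can be sketched in a few lines; the fluorescence readings and dilution series below are illustrative values, not measurements from this work:

```python
import numpy as np

# (i) Kinetic estimate: in the exponential phase, the slope of
# log10(fluorescence) vs cycle equals log10(E), so E = 10**slope.
cycles = np.array([18, 19, 20, 21, 22])
fluor = np.array([0.010, 0.019, 0.036, 0.068, 0.130])  # hypothetical readings
slope, _ = np.polyfit(cycles, np.log10(fluor), 1)
E_kinetic = 10 ** slope  # ~1.9, i.e. ~90% efficiency

# (ii) Standard-curve estimate: the slope of Cq vs log10(template input)
# gives E = 10**(-1/slope); a slope of about -3.32 corresponds to E = 2 (100%).
log_input = np.array([0, 1, 2, 3, 4])  # log10 of a 10-fold dilution series
cq = np.array([33.2, 29.9, 26.5, 23.2, 19.9])  # hypothetical Cq values
m, _ = np.polyfit(log_input, cq, 1)
E_curve = 10 ** (-1.0 / m)

print(round(E_kinetic, 2), round(E_curve, 2))
```

The kinetic route needs only one run but depends on correctly delimiting the exponential phase; the standard-curve route averages over a dilution series, which is why it is feasible for a few diagnostic genes but costly for large gene panels.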

Several aspects influence PCR yield and specificity: reagent concentrations, primer and amplicon length, template and primer secondary structure, or G+C content [10]. The goals of a PCR assay design are: (i) obtaining the desired product without mispriming and (ii) raising yield towards the optimum. In most cases, the sequence to amplify is a fixed entity, so setting up an efficient reaction involves changes in reagent concentrations (salts, primers, enzyme) and, above all, an optimal primer design. Thus a plethora of primer design tools have been published, considering as little as G+C content for *T*_m calculation [11, 12], evaluating salt composition [13], or even employing Nearest Neighbor modules, which consider primer and salt concentrations [14].

Efficiency values are essential elements in the ΔΔ*Cq* method and its variants: relative quantities are calculated using the efficiency value as the base of an exponential equation whose exponent depends on the *Cq*. Efficiency therefore strongly influences the calculation of the relative quantities required to estimate gene expression ratios [5].
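How strongly the base influences the result can be illustrated with an efficiency-corrected ratio of the kind used in ΔΔ*Cq* variants; the efficiencies and *Cq* shifts below are illustrative:

```python
# Sketch of an efficiency-corrected expression ratio: each gene's estimated
# efficiency E is the base, and the Cq shift is the exponent.
# Numbers are illustrative, not from the study.

def expression_ratio(E_target, dcq_target, E_ref, dcq_ref):
    """dcq_* = Cq(control) - Cq(treated) for the target and reference gene."""
    return (E_target ** dcq_target) / (E_ref ** dcq_ref)

# With E = 2.0 for both genes this collapses to the classical 2**(-ddCq):
print(expression_ratio(2.0, 3.0, 2.0, 1.0))  # 2**3 / 2**1 → 4.0

# With E = 1.8 (90% efficiency) the same Cq shifts yield a smaller ratio,
# showing why an error in E propagates exponentially into the estimate:
print(expression_ratio(1.8, 3.0, 1.8, 1.0))
```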

In this work, we analysed Q-PCR efficiency values from roughly 4,000 individual PCR runs with the aim of elucidating the major variables involved in PCR efficiency. With these data we developed a generalized additive model (GAM), which relies on nonlinear regression analysis, and implemented it in an open, free online web tool allowing efficiency prediction.