- Methodology article
- Open Access
Empirical array quality weights in the analysis of microarray data
© Ritchie et al; licensee BioMed Central Ltd. 2006
- Received: 28 September 2005
- Accepted: 19 May 2006
- Published: 19 May 2006
Assessment of array quality is an essential step in the analysis of data from microarray experiments. Once detected, less reliable arrays are typically excluded or "filtered" from further analysis to avoid misleading results.
In this article, a graduated approach to array quality is considered based on empirical reproducibility of the gene expression measures from replicate arrays. Weights are assigned to each microarray by fitting a heteroscedastic linear model with shared array variance terms. A novel gene-by-gene update algorithm is used to efficiently estimate the array variances. The inverse variances are used as weights in the linear model analysis to identify differentially expressed genes. The method successfully assigns lower weights to less reproducible arrays from different experiments. Down-weighting the observations from suspect arrays increases the power to detect differential expression. In smaller experiments, this approach outperforms the usual method of filtering the data. The method is available in the limma software package which is implemented in the R software environment.
This method complements existing normalisation and spot quality procedures, and allows poorer quality arrays, which would otherwise be discarded, to be included in an analysis. It is applicable to microarray data from experiments with some level of replication.
- Array Variance
- Heteroscedastic Model
- Array Quality
- Array Weight
- REML Estimator
Assessment of data quality is an important component of the analysis pipeline for gene expression microarray experiments [1, 2]. Although careful pre-processing and normalisation can ameliorate some problems with microarray data, including background fluorescence, dye effects or spatial artifacts, many sources of variation can affect the experimental procedure [4–7] and it is inevitable that variations in data quality will remain. In this article we demonstrate an approach in which variations in data quality are detected and adjusted for as part of the differential expression analysis. The method is widely applicable, easy to use and can have a high payoff.
Quality assessment procedures can be applied at the probe level or at the array level. Probe quality is influenced by local factors on the array such as printing irregularities or spatial artifacts. For spotted microarrays, spot-specific morphology and signal measurements obtained from image analysis software can be used to assign a quality score to each probe on the array [8–11]. Spots with low quality scores are commonly removed from further analysis. An alternative approach is to measure agreement between gene expression values from repeat probes directly and eliminate those spots with inconsistent replicate values [12, 13]. For high-density oligonucleotide microarrays with multiple probes per gene, quality measures can be obtained from probe level models (PLMs). Image plots of robust weights or residuals obtained from robust PLMs can highlight artifacts on the array surface.
Probe quality assessment is not sufficient because some artifacts only become evident at the array level. Indeed the detection of problems is even more critical at the array level than at the probe level because a single bad array may constitute a sizeable proportion of the data from a microarray experiment. The quality of data from an entire array can be influenced by factors such as sample preparation and day-to-day variability. Sub-standard arrays are typically identified using diagnostic plots of the array data [1, 15–17]. The correlation between expression values of repeatedly spotted clones on an array is also used as an array quality measure. Where large data sets are available, a statistical process control approach can identify outlier arrays. In Affymetrix GeneChip experiments, array quality can be assessed using PLM standard errors or from RNA degradation plots.
Almost all the methods cited above classify the data as either "good" or "bad", and exclude "bad" probes or arrays from further analysis. In our experience, however, the "bad" arrays are usually not entirely bad. Very often the lesser quality arrays do contain good information about gene expression, but it is embedded in a greater degree of noise than for "good" arrays. In this article, a graduated, quantitative approach is taken to quality at the array level, in which poorer quality arrays are included in the analysis but down-weighted.
Quality assessment methods can be divided into those which are "predictive" and those which are "empirical". The operational meaning of quality is that high quality features produce highly reproducible expression values, while low quality features produce values which are more variable and hence less reproducible. Predictive quality assessment methods attempt to predict variability by comparing features such as spot morphology to normative measures. On the other hand, methods which compare duplicate spots within arrays are empirical in that they observe variability.
In this article we extend the empirical approach to multi-array experiments for which we measure the discrepancies between replicate arrays. In order to be as general as possible, we do not limit ourselves to simple replicate experiments, but work with a linear model formulation which allows us to handle experiments of arbitrary complexity including those with factorial or loop designs. The degree of replication in such experiments is reflected in the residual degrees of freedom for computing the residual standard errors. Our method is implemented by way of a heteroscedastic variance model. It is common for statistical models of microarray data to allow each probe to have its own individual variance. Our heteroscedastic model allows the variance to depend on the array as well as on the probe. The array variance factors then enter into the subsequent analysis as inverse array quality weights. Importantly, our method not only detects variations in data quality but adjusts for this as part of the analysis.
Our approach can be combined with predictive quality assessment methods and is an effective complement to them. Predictive methods can be used to filter spots or to provide quantitative prior spot weights which are incorporated into the linear model analysis. However the causes of poor quality data cannot always be clearly identified. The empirical array weight method described here estimates and accommodates any variation in quality which remains after the spot quality weights have been taken into account, i.e., after prediction has achieved as much as it can. Our approach is particularly effective when arrays vary in quality but the problems cannot be isolated to particular regions or particular probes on the offending arrays.
The presence of array-level parameters in our heteroscedastic model means that the statistical analysis can no longer be undertaken in a purely gene-wise manner. A naive approach to fitting the model would be computationally expensive. We propose two computationally efficient algorithms for estimating the model by the well-recognised statistical criterion of residual maximum likelihood (REML). These algorithms view the microarray data as many small data sets, one for each probe, with a small number of shared parameters corresponding to the array variance factors. An innovative gene-by-gene update procedure is proposed for particularly fast approximate REML estimation.
The array weight method developed here can be applied to any microarray experiment with array-level replication, including experiments using high-density oligonucleotide arrays, but our experience is mainly with experiments using spotted microarrays. High density arrays allow the additional possibility of measuring reproducibility for multiple probes for each gene rather than relying on gene or probe-set summaries . A full treatment of empirical array quality for these platforms is therefore likely to involve an analysis of reproducibility at both the probe level and probe-set level, a further development which is not investigated in this article.
In this paper, the linear model approach to microarray data analysis is reviewed and the heteroscedastic model which includes array weights is introduced. Next, the experimental and simulated data sets used in this study are explained and results for these data are presented. The computational algorithms for fitting the heteroscedastic model are then described, followed by discussion and conclusions. Supplementary materials including data, R scripts and additional plots are available.
Linear models provide a convenient means to measure and test for differential expression in microarray experiments involving many different RNA sources [21, 22]. The linear model approach allows a unified treatment of a wide variety of microarray experiments, including dye-swaps, common reference experiments, factorial experiments and loop or saturated designs, with little more complication than simple replicated experiments. Although the statement of the linear model, given below, requires some mathematical notation, the application of the methods we describe is in practice very simple using available software. Consider a microarray experiment with expression values y gj for genes g = 1, ..., G and arrays j = 1, ..., J. The expression values could be log-ratios from two-colour microarrays or summarised log-intensity values from a single-channel technology such as Affymetrix GeneChips. We assume that the expression values have been appropriately pre-processed, background corrected and normalised. The term gene is used here in a general way to include any ESTs or control probes that might be on the arrays. Assume that the systematic expression effects for each gene can be described by a linear model
E(y_g) = Xβ_g (1)
where y_g = (y_{g1}, ..., y_{gJ})^T is the vector of expression values for gene g, X is a known design matrix with full column rank K, and β_g = (β_{g1}, ..., β_{gK})^T is a gene-specific vector of regression coefficients. The design matrix will depend upon the experimental design and choice of parameterisation and the regression coefficients represent log-fold changes between RNA sources in the experiment [22, 23]. For example, consider a two-colour microarray experiment with three replicate arrays comparing RNA sources A and B. The individual log-ratios y_{gj} = log2(R_{gj}/G_{gj}), where R_{gj} and G_{gj} are the Cy5 and Cy3 intensities, measure differences in gene expression between the two samples. For a simple replicated experiment with sample B always labelled Cy5, the design matrix would be a column of ones, and the coefficient β_g would represent the log-fold-change for gene g in sample B over A. Replicated experiments with dye-swaps would be the same except that minus ones would indicate the dye-swap arrays. Consider another example where samples A and B are compared through a common reference sample. If there are two arrays for each sample and the common reference is always Cy3, then the design matrix would be

X = ( 1 0 ; 1 0 ; 1 1 ; 1 1 )

with one row per array. Here the first coefficient β_{g1} estimates the log-fold-change between A and the common reference while the second coefficient β_{g2} estimates the comparison of interest between B and A. The design matrix can be expanded indefinitely to represent experiments of arbitrary complexity.
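The two design matrices described above are simple enough to write down directly. The following sketch (in Python with NumPy, purely for illustration; the paper's own software is limma in R) constructs both and checks the full-column-rank condition the linear model requires.

```python
import numpy as np

# Design matrix for the simple replicated experiment: three arrays,
# sample B always labelled Cy5, so each log-ratio estimates B - A.
X_replicate = np.ones((3, 1))

# Design matrix for the common-reference experiment described above:
# two arrays of sample A and two of sample B, reference always Cy3.
# Column 1: A vs reference; column 2: the comparison of interest, B vs A.
X_common_ref = np.array([
    [1, 0],   # array 1: sample A vs reference
    [1, 0],   # array 2: sample A vs reference
    [1, 1],   # array 3: sample B vs reference = (A - ref) + (B - A)
    [1, 1],   # array 4: sample B vs reference
])

# Both matrices have full column rank, as the linear model requires.
print(np.linalg.matrix_rank(X_common_ref))  # 2
```

A dye-swap version of the replicated design would simply replace some of the ones in X_replicate with minus ones.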
The linear model also assumes
var(y_{gj}) = σ_g^2/w_{gj} (2)

where w_{gj} is a prior spot quality weight and σ_g^2 is the unknown gene-specific variance factor. The spot quality weights will usually have arisen from a predictive spot quality assessment step, with large weights representing good quality spots and low weights representing poor quality spots. To avoid unnecessary complications we will assume throughout that all the y_{gj} are observed and that all the spot weights are strictly positive, w_{gj} > 0. In practice, the methods developed in this article can be modified to accommodate missing y-values or zero weights, but this complicates the presentation somewhat and will be omitted.
For simplicity we will assume that the y_{gj} are normally distributed and that expression values from different arrays are independent. The weighted least squares estimator of β_g is

β̂_g = (X^T Σ_g X)^{-1} X^T Σ_g y_g (3)

where Σ_g = diag(w_{g1}, ..., w_{gJ}) is the diagonal matrix of prior weights. The t-statistic for testing any particular β_{gk} equal to zero is

t_{gk} = β̂_{gk} / (s_g √c_{gk})

where s_g^2 is the residual mean square from the weighted regression and c_{gk} is the k th diagonal element of (X^T Σ_g X)^{-1}.
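The weighted least squares computations for a single gene can be sketched in a few lines. The following Python function is an illustrative implementation of the estimator in Equation 3 and the associated ordinary t-statistic; it is not the limma code, and the toy data are invented.

```python
import numpy as np

def wls_fit(X, y, w):
    """Weighted least squares fit for one gene (Equation 3).

    X : (J, K) design matrix, y : (J,) expression values,
    w : (J,) positive spot quality weights.
    Returns the coefficient estimates, the residual mean square and
    the diagonal of (X^T Sigma X)^{-1} used in the t-statistics.
    """
    Sigma = np.diag(w)
    XtSX_inv = np.linalg.inv(X.T @ Sigma @ X)
    beta = XtSX_inv @ X.T @ Sigma @ y
    resid = y - X @ beta
    J, K = X.shape
    s2 = (w * resid**2).sum() / (J - K)     # residual mean square
    return beta, s2, np.diag(XtSX_inv)

# Toy example: three replicate arrays, design matrix a column of ones,
# so beta is simply the weighted mean log-ratio.
X = np.ones((3, 1))
y = np.array([1.0, 1.2, 0.8])
w = np.ones(3)
beta, s2, c = wls_fit(X, y, w)
t = beta / np.sqrt(s2 * c)                  # ordinary t-statistic
```

With equal weights this reduces to the usual one-sample t-statistic on the three log-ratios.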
It is important to appreciate that the spot weights w_{gj} act in a relative fashion for each gene. The t-statistic t_{gk} and its associated p-value would be unchanged if all the w_{gj} for a given g were scaled up or down by a constant factor. Hence it is only the relative sizes of the w_{gj} across arrays j for any given g which are important.
The t-statistic has J - K degrees of freedom. In microarray analyses with a small to moderate number of arrays, for which J - K is small, it is usually beneficial to replace s_g^2 with a variance estimator which is shrunk or moderated across genes to obtain moderated t-statistics. Genes can then be selected for differential expression based on large moderated t-statistics or small p-values.
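As a sketch of the moderation idea (not limma's full empirical Bayes machinery), the shrunken variance estimator can be written as a weighted average of the gene's own residual mean square and a common prior value. Here the prior degrees of freedom d0 and prior variance s0_2 are simply assumed for illustration, whereas limma estimates them from all genes.

```python
import numpy as np

def moderated_t(beta, s2, c, df_resid, d0, s0_2):
    """Moderated t-statistic sketch: shrink a gene's residual variance
    s2 towards a common prior value s0_2 with prior degrees of freedom
    d0 (here taken as given rather than estimated empirically)."""
    s2_tilde = (d0 * s0_2 + df_resid * s2) / (d0 + df_resid)
    t = beta / np.sqrt(s2_tilde * c)
    return t, d0 + df_resid   # moderated t and its degrees of freedom

# A gene with an unusually small sample variance is pulled back towards
# the prior, avoiding an inflated t-statistic on few degrees of freedom.
t_mod, df = moderated_t(beta=0.5, s2=0.001, c=1/3, df_resid=2,
                        d0=4.0, s0_2=0.05)
```

Note that the moderated statistic also gains degrees of freedom (d0 + J - K), which is what improves power in small experiments.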
In this article we allow the unknown variance factors σ_{gj}^2 to depend on the array as well as on the gene,

var(y_{gj}) = σ_{gj}^2/w_{gj}. (4)
We need a model for the variance factors which reflects the fact that the genes differ in variability and also that the arrays in the experiment may differ in quality in a way which increases or decreases the variability of all or most of the probes on a particular array. The simplest model which does this is the additive log-linear model
log σ_{gj}^2 = δ_g + γ_j (5)

[24, 25]. We impose the constraint Σ_j γ_j = 0 so that the σ_g^2 = exp δ_g represent the gene-wise variance factors while the γ_j represent the relative variability of each array. Array j will have γ_j < 0 or γ_j > 0 depending on whether it is relatively better or poorer quality than the average. For instance, an array with exp γ_j = 2 is twice as variable as a typical array and will be given half weight in an analysis. Note that the variances are assumed to depend multiplicatively on array quality. This is more appropriate than, say, an additive model of gene and array variances because it preserves relativities between the gene-wise precisions as array quality varies. The log-linear variance model also has substantial numerical and inferential advantages over other variance models in that positivity of the variances is ensured for any values of the δ_g and γ_j parameters.
The fact that all the genes contribute to the estimation of the γ_j means that, once estimated, the array weights can be taken to be fixed quantities when analysing each individual gene. The array weights v_j = 1/exp γ̂_j can be incorporated into a differential expression analysis simply by combining them with the prior weights into modified weights w*_{gj} = w_{gj} v_j. The weighted least squares calculations described in the previous section (Equation 3) can then be conducted with w*_{gj} replacing w_{gj} throughout. The use of appropriate array weights will produce more precise estimates of the gene expression coefficients and improve power to detect differentially expressed genes.
Note that, although the scaling of the array weights is in principle arbitrary, our convention that Σ_j γ_j = 0 means we always choose the array weights v_j to have geometric mean equal to one.
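A small numerical illustration of this weighting convention, with hypothetical γ_j values and hypothetical spot weights (Python, for illustration only):

```python
import numpy as np

# Hypothetical array variance factors gamma_j; they sum to zero,
# so the array weights v_j have geometric mean exactly one.
gamma = np.array([-0.2, -0.2, 0.4])
v = 1.0 / np.exp(gamma)                     # array weights v_j
print(np.exp(np.mean(np.log(v))))           # geometric mean, close to 1.0

# Combine with hypothetical prior spot weights for one gene:
w = np.array([1.0, 0.8, 1.0])               # prior spot weights w_gj
w_star = w * v                              # modified weights w*_gj
```

The third array, being more variable (γ_3 > 0), receives the smallest modified weight and so contributes least to the weighted least squares fit.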
Summary of QC LMS controls. Theoretical fold-changes for the spike-in control probes in the QC LMS data set are shown. M values have been rounded to 2 decimal places.
- Up-regulated 3-fold (U03): log2 3 = 1.58
- Up-regulated 10-fold (U10): log2 10 = 3.32
- Down-regulated 3-fold (D03): -log2 3 = -1.58
- Down-regulated 10-fold (D10): -log2 10 = -3.32
- Dynamic Range (DR): log2 1 = 0.00
For the simulation studies, normal and non-normal expression values y_{gj} from replicate arrays were generated with G = 10000 genes and J = 3 and 5 arrays in six different scenarios. For each simulation, different array variances exp γ_j were assumed, and the gene-specific variances exp δ_g were sampled from the estimates obtained from the QC data set. Non-normal deviates were sampled from the standardised residuals of the QC data set. These deviates are considerably more heavy-tailed than normal. In each data set, 5% (500) of genes were simulated to be differentially expressed at either 2-fold (250) or 3-fold (250), while the remaining 95% were simulated to have mean zero.
For the simulations with 3 arrays, the expression values for the third array were generated to be twice as variable as those from the first two arrays in simulation 1 (i.e., v1 = v2 = 2v3), ten times as variable as the first two arrays in simulation 2 (i.e., v1 = v2 = 10v3) or five times more variable on the second array and ten times more variable on the third array relative to the first in simulation 3 (i.e., v1 = 5v2 = 10v3).
Simulations with 5 arrays were generated to have at least two more variable arrays. In simulation 4, expression values on the fourth and fifth arrays were simulated to be two times and four times more variable than those on the first three arrays (i.e., v1 = v2 = v3 = 2v4 = 4v5). In simulation 5, expression values from the fourth and fifth arrays were five and ten times more variable than those on the first three arrays (i.e., v1 = v2 = v3 = 5v4 = 10v5). For simulation 6, the expression values were two times, four times, six times and ten times more variable on arrays two to five respectively relative to the first array (i.e., v1 = 2v2 = 4v3 = 6v4 = 10v5).
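A sketch of how such heteroscedastic data can be generated (Python; the gene-wise variance distribution here is invented for illustration rather than taken from the QC estimates, and the scenario mimics simulation 1):

```python
import numpy as np

rng = np.random.default_rng(0)
G, J = 10000, 3

# Assumed log-normal gene-wise variance factors (not the QC estimates).
sigma2_g = rng.lognormal(mean=-2.0, sigma=0.8, size=G)

# Array variance factors: the third array twice as variable as the
# first two (v1 = v2 = 2*v3 in weight terms), rescaled so the
# variance factors have geometric mean one.
exp_gamma = np.array([1.0, 1.0, 2.0])
exp_gamma = exp_gamma / np.exp(np.log(exp_gamma).mean())

# Null genes plus 5% differentially expressed at 2- or 3-fold.
mu = np.zeros(G)
mu[:250], mu[250:500] = np.log2(2), np.log2(3)

# var(y_gj) = sigma2_g * exp(gamma_j), as in the variance model (5).
y = mu[:, None] + rng.standard_normal((G, J)) * np.sqrt(
    sigma2_g[:, None] * exp_gamma[None, :])
```

Non-normal versions would simply replace the standard normal deviates with draws from a heavier-tailed distribution of standardised residuals.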
Estimates of array weights obtained from 1000 simulated microarray data sets. Means and standard deviations of the array weights estimated from 1000 simulated data sets assuming six different array variance scenarios are shown for normal and non-normal data using the full REML algorithm and the gene-by-gene update algorithm. Accurate estimates with small standard deviations are obtained using the full algorithm. The gene-by-gene update algorithm recovers weights which are generally only slightly flattened towards equal weights.
First we demonstrate the ability of the algorithms to return the correct array weights for simulated data sets where the true array variances are known. For each of the six simulation scenarios described in the previous section, 1000 independent data sets were generated and the variance model (Equation 5) was fitted to each. This was carried out for both normal and non-normal data. For each data set, estimates were obtained using the full REML algorithm and the approximate gene-by-gene update algorithm (see Methods section). Table 2 shows the means and standard deviations of the estimated array weights v_j. The full algorithm is shown to assign weights almost exactly consistent with the predicted values. The gene-by-gene update method returns array weights which are slightly less extreme, i.e., slightly flattened towards equal weights, although still broadly accurate. The gene-by-gene estimates are also somewhat more variable than those for full REML, a consequence of the fact that the REML estimators are theoretically optimal. All the standard deviations are small enough however that the variability is negligible, even for the approximate algorithm. The results are virtually unchanged whether the data is normal or non-normal. Although the accuracy of the full REML algorithm is impressive here, it is important to appreciate that very precise estimates of the array variances are not required for a weighted analysis to be effective, so that the gene-by-gene algorithm may be adequate in practice.
Note also that the REML algorithms are invariant with respect to the gene-wise means or standard deviations, so the results given in Table 2 remain the same regardless of how the gene specific means or standard deviations are generated.
The black lines show the results obtained after removing the most variable array from simulations 1, 2 and 3 (Figure 2), or after removing the two most variable arrays in simulations 4, 5 and 6 (Figure 3). The light gray lines show the number of false positives obtained using equal weights and the dark gray lines indicate the false discovery rates when array weights from the full REML algorithm are used.
The first striking feature of Figures 2 and 3 is that the moderated t-statistics easily outperform the ordinary t-statistics regardless of the simulation assumptions, consistent with findings in other studies [22, 31]. The second feature is that the use of array weights always gives the lowest false discovery rate of the three weighting schemes, regardless of which t-statistic is used. Array weighting outperforms both equal weighting and array filtering in all cases, although in simulation 1 equal weighting is nearly as good (the dark gray and light gray lines overlap in Figure 2, panels a and b). It is interesting that the strategy most commonly proposed in the literature, that of array-filtering, is generally the worst performer across the scenarios, except in simulation 5 with moderated t-statistics, when equal weighting is worst. The use of array-filtering with ordinary t-statistics is very poor indeed. This is despite the fact that the simulation results make array-filtering appear somewhat better than it could be in practice. This is because we always removed the one or two arrays which were known to be the most variable, whereas in real data situations the true status of each array is uncertain and must be inferred using diagnostic plots or other methods. The results in Figures 2 and 3 are for the full REML algorithm; however, the results are virtually identical when the approximate gene-by-gene update algorithm is used instead. This shows that the differences in estimated weights between the full and approximate REML algorithms observed in Table 2 are relatively unimportant from the point of view of evaluating differential expression.
QC LMS Data
In order to demonstrate our method on a smaller and more complex experiment, we now turn to the METH data. For this experiment, replication takes the form not only of duplicate arrays but also of redundancy between the direct and indirect comparisons available for each pair of treatments. The linear model requires three coefficients to represent differences between the three RNA treatments and the common reference leaving seven residual degrees of freedom. Of primary interest are the coefficients β1-0 and β3-0 which measure the gene expression differences 1mM-0mM and 3mM-0mM respectively. The design matrix was generated automatically using the limma software package. The linear model was fitted to all genes in the 10.5 K library and control probes were excluded.
The experimenters who conducted the METH experiment were suspicious of the reliability of the first 4 arrays hybridised, which they believed were not giving consistent results with the last 6 arrays.
Figure 4(b) shows the array weights estimated from this data. Arrays 1 and 4 were assigned the lowest weights of 0.29 and 0.36 respectively. Diagnostic plots of the data reveal that arrays 1 and 4 have high levels of background fluorescence in both channels, which does indeed indicate that these arrays are of poorer quality. The diagnostics do not identify a particular subset of problem spots which could be filtered out, so spot quality methods do not offer a solution. The usual method of dealing with this problem would involve removing these two suspect arrays from further analysis. We now consider the alternative of retaining these microarrays but down-weighting their expression values using empirical array weights. Differential expression was assessed for both methods using moderated t-statistics adjusted for multiple testing using the false discovery rate method. Table 3 shows the number of genes for the 1mM-0mM and 3mM-0mM treatment comparisons with adjusted p-values (q-values) less than 0.05. For the 1mM-0mM comparison, which has two poor quality arrays directly comparing these RNA sources, removal of the worst arrays throws away most of the information on this comparison and results in no differentially expressed genes. Using array weights gives 654 candidate differentially expressed genes for this comparison. Of these genes, 413 are also differentially expressed in the 3mM-0mM comparison and 237 show a monotonic response to dose with the 3mM-0mM fold-change being larger and in the same direction as the 1mM-0mM change. This suggests that many of these genes are worthy candidates for further validation.
Need for new algorithms
We now turn to the problem of computing REML estimates for the array variance parameters in the probe-array variance model (Equation 5). Algorithms for fitting heteroscedastic linear models are already available; however, the high dimensionality of microarray data limits the usability of conventional algorithms. There are G + J - 1 parameters in the variance model and a further GK parameters in the linear model itself. The fact that the array parameters in the variance model are shared by all the genes means that the usual strategy of fitting models separately for each gene is not available. Even computers with many gigabytes of memory will run into memory limits using conventional algorithms with G much larger than around 50. Using a conventional algorithm for a typical microarray experiment with tens of thousands of genes is out of the question.
The basic difficulty from an algorithmic point of view is not the large number of expression values but rather the large number of parameters to be estimated. In the next section we develop a strategy for eliminating the gene-wise parameters β g and δ g from the estimation problem.
Conditional on the array variance factors γ_j, the gene-wise coefficients β_g and variances δ_g can be computed in closed form using weighted least squares as described in the linear models section (Equation 3). The method of nested iterations is a strategy to reduce the dimension of an estimation problem by eliminating conditionally estimable parameters. The idea is applied here to eliminate the gene-specific parameters from the REML likelihood function. This reduces the estimation problem to one involving just the J - 1 array weights.
Explicit expressions for the REML log-likelihood for heteroscedastic models such as ours are available in the literature. Write f(y_g; δ_g, γ) for the contribution to the REML log-likelihood from gene g, with γ = (γ_1, ..., γ_{J-1})^T. The REML likelihood already has the property that the linear model parameters β_g are eliminated. The REML log-likelihood to be maximised is

ℓ(y_1, ..., y_G; δ_1, ..., δ_G, γ) = Σ_{g=1}^G f(y_g; δ_g, γ) (6)

Rather than deal with this large dimensional problem, we eliminate the δ_g by considering the profile REML likelihood for γ. Write δ̂_g(γ) for the value of δ_g which maximises f(y_g; δ_g, γ) for given γ. The profile REML log-likelihood for γ is

ℓ_p(y_1, ..., y_G; γ) = Σ_{g=1}^G f(y_g; δ̂_g(γ), γ) (7)
We consider now the nested iteration for maximising the profile likelihood. Write U_{g,δ} and U_{g,γ} for the derivatives of f(y_g; δ_g, γ) with respect to δ_g and γ, and let

A_g = ( A_{g,δδ} A_{g,δγ} ; A_{g,γδ} A_{g,γγ} )

be the REML information matrix for gene g. The derivative of f(y_g; δ̂_g, γ) with respect to γ is simply U_{g,γ} evaluated at δ_g = δ̂_g. The information matrix for γ from gene g, conditional on δ_g = δ̂_g, is

A_{g,γ·δ} = A_{g,γγ} - A_{g,γδ} A_{g,δδ}^{-1} A_{g,δγ}

evaluated at δ_g = δ̂_g.
The derivative of the profile REML log-likelihood ℓ_p is therefore

U_γ = Σ_{g=1}^G U_{g,γ}

and the information matrix associated with ℓ_p is

A_{γ·δ} = Σ_{g=1}^G A_{g,γ·δ}

evaluated at δ_g = δ̂_g. The REML estimate of γ can be computed by the nested scoring iteration

γ^{(i+1)} = γ^{(i)} + A_{γ·δ}^{-1} U_γ (13)

where γ^{(i)} is the i th iterate and A_{γ·δ} and U_γ are evaluated at γ = γ^{(i)}. The iteration begins from a suitable starting value γ^{(0)}.
Full scoring iterations
In this section, convenient expressions will be derived for the quantities A_{γ·δ} and U_γ. For any value of γ, the least squares estimator for β_g can be computed using weighted least squares computations (Equation 3) with working weights w*_{gj} = w_{gj} exp(-γ_j) replacing the prior weights w_{gj}. The standardised residuals from this regression are

e_{gj} = (w*_{gj})^{1/2}(y_{gj} - x_j^T β̂_g)

where x_j is the j th row of X. Let Σ*_g = diag(w*_{g1}, ..., w*_{gJ}) and let

H_g = Σ*_g^{1/2} X (X^T Σ*_g X)^{-1} X^T Σ*_g^{1/2}

be the projection matrix from the regression, and write h_{gj} = h_{g,jj} for the diagonal elements or leverages of H_g. Finally, let Z be the J × J design matrix of the variance model (Equation 5): the first column of Z is all ones, corresponding to δ_g, and the remaining J - 1 columns represent γ_1, ..., γ_{J-1}, with row J carrying -1 in these columns to enforce the constraint Σ_j γ_j = 0.
Using these expressions we can write down computable expressions for quantities from the previous section. The conditional REML estimator of δ_g is δ̂_g = log σ̂_g^2 with

σ̂_g^2 = (1/(J - K)) Σ_{j=1}^J e_{gj}^2
The score vector for γ is

U_γ = Σ_{g=1}^G U_{g,γ} with U_{g,γ} = (1/2) Z_2^T z_g (18)

where Z_2 is the last J - 1 columns of Z and z_g is the vector with components

z_{gj} = e_{gj}^2/σ̂_g^2 - (1 - h_{gj})

for j = 1, ..., J. The information matrix is

A_g = (1/2) Z^T V_g Z

where V_g is the J × J matrix with diagonal elements (1 - h_{gj})^2 and off-diagonal elements h_{g,jk}^2. Efficient algorithms exist to compute A_g. Alternatively, it is often satisfactory to approximate the dense matrix V_g with the diagonal approximation V_{g1} = diag(1 - h_{g1}, ..., 1 - h_{gJ}). With this approximation, a straightforward calculation gives
2A_{g,γ·δ} = diag(1 - h_{g(J)}) + (1 - h_{gJ})L - (h_{gJ}1 - h_{g(J)})(h_{gJ}1 - h_{g(J)})^T/(J - K) (21)

where h_{g(J)} = (h_{g1}, ..., h_{g,J-1})^T, 1 is the vector of ones and L is the (J - 1) × (J - 1) matrix of 1's. The nested information matrix A_{γ·δ} therefore has diagonal elements given by

2(A_{γ·δ})_{jj} = Σ_g { (1 - h_{gj}) + (1 - h_{gJ}) - (h_{gJ} - h_{gj})^2/(J - K) }

and off-diagonal elements given by

2(A_{γ·δ})_{jk} = Σ_g { (1 - h_{gJ}) - (h_{gJ} - h_{gj})(h_{gJ} - h_{gk})/(J - K) }

In matrix terms we can write

2A_{γ·δ} = diag(u_1, ..., u_{J-1}) + u_J L - N^T N/(J - K) (24)

where N is the G × (J - 1) matrix with g th row (h_{gJ}1 - h_{g(J)})^T and u_j = Σ_g (1 - h_{gj}). With these quantities, the nested scoring iteration (Equation 13) is very memory efficient and can be carried out easily on a standard personal computer.
Gene-by-gene scoring iterations
Although memory efficient, the nested scoring iteration may still require a lot of computation for large G since G gene-wise regressions must be evaluated for every iteration. If the prior spot weights are equal, w gj = 1, the gene-wise regressions can be computed very quickly but, if not, a full set of least squares computations must be repeated for each gene and each iteration. In this section we explore a much lighter computation scheme in which only one pass is done through the genes and the array variance parameters are updated for each gene. This results in a very efficient gene-by-gene update algorithm which produces approximate REML estimators for the array weights.
The gene-by-gene update algorithm is given by

γ^{(g+1)} = γ^{(g)} + (A*_g)^{-1} U_{g,γ} (25)

where U_{g,γ} is as above (Equation 18) while A*_g is an accumulating information matrix defined by

A*_g = A*_{g-1} + A_{g,γ·δ}

where A_{g,γ·δ} is evaluated at γ^{(g)} and δ_g = δ̂_g. The iteration is started from γ^{(0)} = 0 with A*_0 set to ten times the per-gene information matrix at this starting value.
These starting values begin the iteration from equal array variances with the information weight of ten genes. The effect of accumulating the information matrix in this way is to gradually decrease the step size of the iteration as the iteration passes through all the genes, resulting in a convergent iteration. The final value γ^{(G)} is taken as the estimate of γ and is used to assign array weights. In our implementation in R, this algorithm calculates the array variance parameters in less than a second for the QC LMS data and in around 12 seconds for the METH data set on a 2.0 GHz Pentium M computer. The gene-by-gene nature of the algorithm means that minimal RAM is required for these computations.
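The one-pass, accumulating-information idea can be caricatured as follows. This Python sketch handles only the intercept-only replicate design and keeps just the diagonal of the accumulating information, so it is a simplification of the algorithm described above, not the limma implementation; the data and the true variance ratios are invented, echoing simulation 4.

```python
import numpy as np

def gene_by_gene_sketch(Y, prior_genes=10.0):
    """One-pass caricature of the gene-by-gene update for replicate
    arrays (intercept-only design), with a diagonal simplification of
    the accumulating information matrix."""
    G, J = Y.shape
    gamma = np.zeros(J)
    # start with the information weight of about `prior_genes` genes
    A = np.full(J, prior_genes * 0.5 * (1.0 - 1.0 / J))
    for g in range(G):
        w = np.exp(-gamma)
        h = w / w.sum()                      # leverages of the weighted mean
        beta = (w * Y[g]).sum() / w.sum()    # weighted mean for this gene
        e2 = w * (Y[g] - beta) ** 2
        s2 = e2.sum() / (J - 1)
        u = 0.5 * (e2 / s2 - (1 - h))        # approximate score for gamma
        gamma = gamma + u / A                # step shrinks as A accumulates
        gamma -= gamma.mean()                # keep the sum-to-zero constraint
        A = A + 0.5 * (1 - h)                # accumulate information
    return 1.0 / np.exp(gamma)               # array weights v_j

# The fourth and fifth of five arrays are made 2x and 4x as variable
# (in variance terms), echoing simulation 4.
rng = np.random.default_rng(2)
G, J = 10000, 5
sd = np.sqrt(np.array([1.0, 1.0, 1.0, 2.0, 4.0]))
v = gene_by_gene_sketch(rng.standard_normal((G, J)) * sd)
```

Because the accumulated information grows roughly linearly in the number of genes seen, the per-gene steps decay, which is what makes a single pass sufficient in practice.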
While the gene-by-gene update algorithm is fast, it provides only an approximation to the REML estimator of γ, and we need to check the accuracy of this approximation. To do this, expression values y_{gj} were simulated from normal distributions for J = 10 arrays and G = 10000 genes. The array variance parameters γ_j were equally spaced over the interval [-1, 1]. As already noted, the REML algorithm is invariant with respect to the gene-wise means and variances, so the gene-specific mean and variance parameters were set to zero in our simulations.
This article has presented an empirical method for estimating quantitative array quality weights which is integrated into the linear model analysis of microarray data. Computationally efficient algorithms are developed to compute the array quality weights using the well-recognized REML criterion. As well as full REML estimation, a fast gene-by-gene update method which requires only one pass through the genes is described.
Examples of array quality weights which give less influence to the gene expression measurements from unreliable microarrays and relatively more influence to the measurements from reproducible arrays have been presented. In both simulated and real data examples, it has been demonstrated that array weights improve our ability to detect differential expression using standard statistical methods. The graduated approach to array quality has also been shown to be superior to filtering poor quality arrays both in simulations and for an experimental data set. In the simulations, filtering is shown to perform quite poorly, especially in combination with ordinary t-statistics. In the data example, filtering resulted in no significant genes to follow up, whereas the weighted analysis provided a few hundred sensible candidates.
The method is restricted to data from experiments which include replication with at least two residual degrees of freedom. For simple replicated experiments, a minimum of three arrays is needed, and results from simulation studies show that this method is reliable in these situations, even in the presence of non-normally distributed data. Simulations were also used to show that array variance parameters are estimated with greater accuracy when more genes are available for the gene-by-gene update algorithm, and that these computational savings do not seriously compromise the accuracy of the final estimates. As a rule of thumb, we recommend that the full REML array weights be used when there are fewer than 1000 probes and that the gene-by-gene update method be used otherwise. The analysis of the control probes from the QC LMS data set showed that useful array weights can be obtained from the gene-by-gene algorithm with as few as 120 genes. The situation is different when there are no spot weights or missing expression values in the data. In this case the full REML algorithm can be implemented very efficiently and so is recommended for any number of probes.
The empirical array weights form part of the quality assessment and analysis pipeline and are not intended to replace the usual background correction, normalisation and quality assessment steps. In particular, array weights are not designed to account for spot-specific problems. The array weights method is instead designed to incorporate spot quality weights, which might arise from gene filtering or from a predictive quality assessment step. The use of zero prior weights (w_gj = 0) presents no problems for the method, although some special numerical treatment not discussed here is needed to ensure the sum-to-zero constraints are satisfied.
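The interplay between prior spot weights and array weights can be sketched with a toy gene-wise fit. This is a minimal illustration with hypothetical numbers, not the limma code: in limma the combined weights enter a full weighted linear model fit, which is reduced here to a weighted mean for a single gene.

```python
def weighted_fit(y, spot_w, array_w):
    """Gene-wise weighted mean using combined spot x array weights.

    A prior spot weight of zero excludes that observation entirely,
    while a small array weight merely down-weights every spot on
    that array.
    """
    w = [sw * aw for sw, aw in zip(spot_w, array_w)]
    total = sum(w)
    if total == 0.0:
        raise ValueError("no usable observations for this gene")
    return sum(wj * yj for wj, yj in zip(w, y)) / total

# Hypothetical gene on four arrays: the fourth spot is flagged
# (zero prior weight) and the third array is half-weighted.
y = [1.0, 2.0, 3.0, 100.0]
estimate = weighted_fit(y, spot_w=[1.0, 1.0, 1.0, 0.0],
                        array_w=[1.0, 1.0, 0.5, 1.0])
# (1*1 + 1*2 + 0.5*3) / 2.5 = 1.8; the flagged outlier is excluded
```

The multiplicative combination means the two weighting schemes compose naturally: zero spot weights behave exactly like missing values, and array weights rescale whatever spot-level information remains.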
The array weight approach is also not intended to replace diagnostic array quality plots such as MA-plots, and arrays of catastrophically poor quality should still be discarded. Taking a graduated approach to array quality does, however, allow arrays of less than ideal quality, which would otherwise have to be discarded, to be kept in the analysis, but down-weighted.
The authors have applied the array weight method to very high quality data sets which featured arrays with low background, well-behaved controls and a good dynamic range of spot intensities. For such data sets, the method assigns approximately equal array weights to each array (data not shown). This indicates that the method does no harm when it is not required.
One further topic that deserves some attention is the use of robust linear models to estimate the gene expression coefficients. The array weights method has the same motivation as robust regression methods, but accumulates information on variability across genes on each array, which gene-wise robust regression methods cannot do. Another consideration is sample size: while robust methods perform well in large-sample problems, many microarray data sets, such as the METH experiment, consist of a small number of arrays, and in these situations robust methods may not be suitable.
Thanks to Terry Speed and Ken Simpson for their advice and for reading drafts of this manuscript. The anonymous reviewers are also thanked for their constructive comments on an earlier version of this manuscript.
- Smyth GK, Yang YH, Speed TP: Statistical issues in cDNA microarray data analysis. Methods Mol Biol 2003, 224: 111–136.
- Bolstad BM, Collin F, Brettschneider J, Simpson K, Cope L, Irizarry R, Speed TP: Quality Control of Affymetrix GeneChip data. In Bioinformatics and Computational Biology Solutions Using R and Bioconductor. Edited by: Gentleman R, Carey V, Huber W, Irizarry R, Dudoit S. Springer; 2005:33–47.
- Smyth GK, Speed TP: Normalization of cDNA microarray data. Methods 2003, 31: 265–273.
- Schuchhardt J, Beule A, Malik E, Wolski H, Eickhoff H, Lehrach HH: Normalization strategies for cDNA microarrays. Nucleic Acids Res 2000, 28: e47.
- Wildsmith SE, Archer GE, Winkley AJ, Lane PW, Bugelski PJ: Maximization of signal derived from cDNA microarrays. Biotechniques 2001, 30: 202–208.
- Spruill SE, Lu J, Hardy S, Weir B: Assessing sources of variability in microarray gene expression data. Biotechniques 2002, 33: 916–923.
- Novak JP, Sladek R, Hudson TJ: Characterization of variability in large-scale gene expression data: implications for study design. Genomics 2002, 79: 104–113.
- Wang X, Ghosh S, Guo SW: Quantitative quality control in microarray image processing and data acquisition. Nucleic Acids Res 2001, 29: e75.
- Tran PH, Peiffer DA, Shin Y, Meek LM, Brody JP, Cho KW: Microarray optimizations: increasing spot accuracy and automated identification of true microarray signals. Nucleic Acids Res 2002, 30: e54.
- Fan J, Tam P, Woude GV, Ren Y: Normalization and analysis of cDNA microarrays using within-array replications applied to neuroblastoma cell response to a cytokine. Proc Natl Acad Sci USA 2004, 101: 1135–1140.
- Raffelsberger W, Dembele D, Neubauer MG, Gottardis MM, Gronemeyer H: Quality indicators increase the reliability of microarray data. Genomics 2002, 80: 385–394.
- Jenssen TK, Langaas M, Kuo WP, Smith-Sørensen B, Myklebost O, Hovig E: Analysis of repeatability in spotted cDNA microarrays. Nucleic Acids Res 2002, 30: 3235–3244.
- Yang IV, Chen E, Hasseman JP, Liang W, Frank BC, Wang S, Sharov V, Saeed AI, White J, Li J, Lee NH, Yeatman TJ, Quackenbush J: Within the fold: assessing differential expression measures and reproducibility in microarray assays. Genome Biol 2002, 3(11):research0062.
- Dumur CI, Nasim S, Best AM, Archer KJ, Ladd AC, Mas VR, Wilkinson DS, Garrett CT, Ferreira-Gonzalez A: Evaluation of quality-control criteria for microarray gene expression analysis. Clin Chem 2004, 50: 1994–2002.
- Gollub J, Ball CA, Binkley G, Demeter J, Finkelstein DB, Hebert JM, Hernandez-Boussard T, Jin H, Kaloper M, Matese JC, Schroeder M, Brown PO, Botstein D, Sherlock G: The Stanford Microarray Database: data access and quality assessment tools. Nucleic Acids Res 2003, 31: 94–96.
- Petri A, Fleckner J, Matthiessen MW: Array-A-Lizer: A serial DNA microarray quality analyzer. BMC Bioinformatics 2004, 5: 12.
- Chen DT: A graphical approach for quality control of oligonucleotide array data. J Biopharm Stat 2004, 14: 591–606.
- Steinfath M, Wruck W, Seidel H, Lehrach H, Radelof U, O'Brien J: Automated image analysis for array hybridisation experiments. Bioinformatics 2001, 17: 634–641.
- Model F, Konig T, Piepenbrock C, Adorjan P: Statistical process control for large scale microarray experiments. Bioinformatics 2002, 18(Suppl 1):S155–163.
- Supplementary materials [http://bioinf.wehi.edu.au/resources/webReferences.html]
- Kerr MK: Linear models for microarray data analysis: hidden similarities and differences. J Comput Biol 2003, 10: 891–901.
- Smyth GK: Linear models and empirical Bayes methods for assessing differential expression in microarray experiments. Stat Appl Genet Mol Biol 2004, 3(1):Article 3.
- Yang YH, Speed TP: Design and analysis of comparative microarray experiments. In Statistical Analysis of Gene Expression Microarray Data. Edited by: Speed TP. CRC Press; 2003.
- Verbyla A: Modelling variance heterogeneity: residual maximum likelihood and diagnostics. J R Stat Soc [Ser B] 1993, 55: 493–508.
- Smyth GK, Huele AF, Verbyla A: Exact and approximate REML for heteroscedastic regression. Statist Modelling 2001, 1: 161–175.
- Samartzidou H, Turner L, Houts T, Frome M, Worley J, Albertsen H: Lucidea Microarray ScoreCard: an integrated analysis tool for microarray experiments. Life Science News 2001. [http://www4.amershambiosciences.com/]
- Buckley MJ: The Spot user's guide. CSIRO Mathematical and Information Sciences 2000. [http://www.cmis.csiro.au/IAP/Spot/spotmanual.htm]
- Yang YH, Buckley MJ, Dudoit S, Speed TP: Comparison of methods for image analysis on cDNA microarray data. J Comput Graph Statist 2002, 11: 108–136.
- Yang YH, Dudoit S, Luu P, Speed TP: Normalization for cDNA microarray data. In Microarrays: Optical Technologies and Informatics, Proceedings of SPIE, Volume 4266. Edited by: Bittner ML, Chen Y, Dorsel AN, Dougherty ER. 2001.
- Smyth GK: Limma: linear models for microarray data. In Bioinformatics and Computational Biology Solutions Using R and Bioconductor. Edited by: Gentleman R, Carey V, Huber W, Irizarry R, Dudoit S. Springer, New York; 2005:397–420.
- Kooperberg C, Aragaki A, Strand AD, Olson JM: Significance testing for small sample microarray experiments. Stat Med 2005, 24: 2281–2298.
- Benjamini Y, Hochberg Y: Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc [Ser B] 1995, 57: 289–300.
- Smyth GK: An efficient algorithm for REML in heteroscedastic regression. J Comput Graph Statist 2002, 11: 836–847.
- Smyth GK: Partitioned algorithms for maximum likelihood and other non-linear estimation. Stat Comput 1996, 6: 201–216.
- R Development Core Team: R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria; 2006. [http://www.R-project.org]
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.