With the growing abundance of microarray data, statistical methods are increasingly needed to integrate results across studies. Two common approaches to meta-analysis of microarrays are combining gene expression measures across studies and combining summaries such as p-values, probabilities or ranks. Here, we compare two Bayesian meta-analysis models that are analogous to these methods.
Two Bayesian meta-analysis models for microarray data have recently been introduced. The first combines standardized gene expression measures across studies into an overall mean, accounting for inter-study variability, while the second combines probabilities of differential expression without combining expression values. Both models produce the gene-specific posterior probability of differential expression, which is the basis for inference. Because the standardized expression integration model includes inter-study variability, it may improve the accuracy of results relative to the probability integration model. However, with the small number of studies typical of microarray meta-analyses, the variability between studies is challenging to estimate. The probability integration model eliminates the need to model between-study variability, making its implementation more straightforward. In simulations of two and five studies, we found that combining probabilities outperformed combining standardized gene expression measures on three comparison measures: the percent of true genes discovered in the meta-analysis but not in individual studies; the percent of true genes omitted in the meta-analysis but identified in separate studies; and the number of true genes discovered at fixed levels of Bayesian false discovery. We observed similar results when pooling two independent studies of Bacillus subtilis. We assumed that each study was generated on the same microarray platform with only two conditions, a treatment and a control, and that the data sets were pre-scaled.
The Bayesian meta-analysis model that combines probabilities across studies does not aggregate gene expression measures, so no inter-study variability parameter is included in the model. This yields a simpler modeling approach than aggregating expression measures, which must account for variability across studies. For our data sets, the probability integration model identified more true discovered genes and fewer true omitted genes than the model that combines expression measures.
Due to the growing accumulation of publicly available microarray data, it is increasingly important to develop methods to integrate findings across studies. Combining results will increase sample sizes and thus the power to detect differentially expressed genes. While meta-analysis has been used extensively in medical and public health applications, it has only recently been developed for microarray studies [1–15]. The two primary methods for data integration consist of either combining gene expression measures across studies or combining summary measures of expression such as p-values, probabilities or ranks. In the first approach, Wang et al. used a weighted average procedure to combine standardized mean expression differences across three independent studies. Through this method they identified genes that were consistently differentially expressed between leukemia and normal B cells. Choi et al. and Stevens and Doerge combined standardized gene effects into an overall mean effect using statistical modeling. The models accounted for different sources of variation in microarray studies, including differences between studies. Since inter-study variability was assumed to be constant for each gene, the uncertainty of this parameter was not included in the subsequent analyses (this refers only to the non-Bayesian model of Choi et al.). Hu et al. extended the method of Choi et al. by incorporating an individual study quality index for each gene into the effect size estimate. The authors combined two lung cancer data sets and demonstrated that their method identified more differentially expressed genes than previous analyses. Other investigators combined "raw" gene expression data rather than gene effects when the data were comparable across studies. Examples include Morris et al., who combined Affymetrix studies by creating new probesets across arrays, and Park et al., who integrated log-expression ratios in an ANOVA meta-analysis model for cDNA microarrays.
Due to the difficulty of comparing cross-laboratory and cross-platform expression measures, several microarray meta-analysis methods have combined summary measures of expression rather than the expression measures themselves. Rhodes et al. calculated p-values in individual lung cancer studies and aggregated the p-values to provide an overall estimate of gene significance. Parmigiani et al. introduced an integrative correlation approach that identified genes with consistent expression patterns across multiple platforms. Other approaches convert gene expression values within each study to rank orderings or probabilities of expression. The transformed data are then aggregated across studies to identify disease marker genes or prognostic signatures (Shen et al., Xu et al., Warnat et al.).
Recently, Bayesian meta-analysis models have been introduced that are analogous to the classical methods described above. Bayesian approaches have several advantages over traditional methods and have been used widely in individual microarray studies [16–18]; in particular, the discrete mixture model has been developed extensively [19–30]. Bayesian models are well-suited to the small sample sizes of microarray studies since they borrow information from all genes to estimate model parameters. They also provide a framework for incorporating all available information in a systematic manner, and they explicitly include model and parameter variability. A third important benefit of the Bayesian approach is that it produces a predictive distribution for future data (Stangl and Berry, Tweedie et al.). Bayesian models can also handle the large amounts of missing data inherent in microarray studies relatively easily.
The two primary Bayesian approaches to meta-analysis for microarray studies correspond to the traditional approaches: one combines standardized gene effects and the other combines probabilities, as follows. Choi et al. introduced the first Bayesian meta-analysis model for microarray data, which integrated standardized gene effects in individual studies into an overall mean effect. Inter-study variability was included in the model with an associated uninformative prior distribution. This type of model, termed hierarchical Bayesian random effects, has been used broadly in non-microarray contexts (see, for example, DuMouchel and Harris, Smith et al., Tweedie et al., Normand, DuMouchel and Normand, Pauler and Wakefield, Sargent et al., Gelman et al.). The hierarchical Bayesian random effects meta-analysis model has several favorable features: it provides an overall effect size among all studies, and it accounts for inter-study variability, which may improve accuracy of results. However, many microarray meta-analyses include a small number of studies, e.g. between two and four studies [1, 2, 4–7, 9–11, 13–15]. Due to these typical small study numbers, the inter-study variability is challenging to model. An uninformative prior distribution for this parameter may not provide enough information, and thus more informative priors need to be considered. Alternatively, Conlon et al. introduced a Bayesian meta-analysis model that combined study probabilities rather than gene effect levels, eliminating the need to estimate inter-study variability. The model produced the overall gene-specific posterior probability of differential expression while incorporating several sources of data replication. Here, we compare two Bayesian meta-analysis models: the standardized expression integration model and the probability integration model of Conlon et al.
The standardized expression integration model is similar in approach to hierarchical Bayesian random effects models (such as that of Choi et al.) in that mean values are combined across studies to provide an overall mean value, with inter-study variability included as a parameter in the model with an associated prior distribution. However, instead of combining effect sizes as in Choi et al., we combine standardized gene expression values, since our data are assumed to be from the same platform and comparable across studies (for further detail, see Background: Background on effect sizes and standardized gene expression levels). In simulations, we illustrate that combining probabilities improves performance versus combining standardized gene expression levels based on three comparison measures: the increase in true discovered genes in meta-analysis versus separate studies, the decrease in true omitted genes in meta-analysis versus individual studies, and the number of true genes identified for fixed levels of Bayesian false discovery. Our findings are similar when analyzing biological data in two studies of Bacillus (B.) subtilis.
While many meta-analytic methods incorporate data across multiple microarray platforms, several recent reports have shown the difficulty in such approaches. Kuo et al. and Jarvinen et al. both worked with cell lines and concluded that combining data across cDNA and oligonucleotide platforms was not reliable. Mah et al. found that there is only moderate overlap in gene expression levels between cDNA and oligonucleotide platforms. Due to these findings, we focus on meta-analysis for one platform, cDNA microarrays.
Background on effect sizes and standardized gene expression levels
Traditional random effects models and hierarchical Bayesian random effects models first summarize the data from two conditions into study-specific mean effects and associated variances; the models use these summary effect measures rather than the individual data values (i.e. "raw" data). For instance, the hierarchical Bayesian random effects model of Choi et al. used the well-studied estimator of Hedges and Olkin, which estimates the effect size as the difference of the means of two groups divided by the pooled standard deviation. The estimator is sample-size adjusted to obtain an unbiased effect size; the associated variance is also based on Hedges and Olkin's work. However, Hedges and Olkin's method requires two separate groups of data with only one level of replication, which is not always available. For one-sample data with replicate slides within repeated experiments, the Hedges and Olkin effect size estimates do not apply. For these reasons, for our one-sample cDNA microarray studies with multiple sources of replication, we use the standardized logarithms of the red/green (Cy5/Cy3) fluorescent intensity ratios in our meta-analysis models. We use log-ratios since they create more symmetric distributions and stabilize the variances (Dudoit et al.). The log-expression ratios are standardized so that each slide has zero mean and unit standard deviation (see also Shen et al. and Dominici and Parmigiani). For further background on cDNA microarrays, see [46–49].
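As an illustration, the Hedges and Olkin effect size and the slide standardization described above can be sketched in Python as follows. The function names are ours; the bias-correction factor is the standard small-sample adjustment.

```python
import numpy as np

def hedges_g(treatment, control):
    """Unbiased standardized mean difference (Hedges & Olkin): difference of
    group means over the pooled standard deviation, times the standard
    small-sample bias correction."""
    t, c = np.asarray(treatment, float), np.asarray(control, float)
    n1, n2 = len(t), len(c)
    pooled_sd = np.sqrt(((n1 - 1) * t.var(ddof=1) + (n2 - 1) * c.var(ddof=1))
                        / (n1 + n2 - 2))
    d = (t.mean() - c.mean()) / pooled_sd
    correction = 1 - 3.0 / (4 * (n1 + n2) - 9)   # small-sample adjustment
    return correction * d

def standardize_slide(log_ratios):
    """Scale one slide's log-expression ratios to zero mean and unit SD."""
    x = np.asarray(log_ratios, float)
    return (x - x.mean()) / x.std(ddof=0)
```

As the text notes, the effect-size route requires two separate groups; for one-sample log-ratio data only the standardization step applies.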
Results and discussion
Bayesian standardized expression integration meta-analysis model
Researchers often conduct multiple independent microarray studies of the same biological system. For example, Eichenberger et al. designed two studies to identify genes under the control of a primary transcription factor, sigma E (σE), in B. subtilis. The first study was a mutation of σE and the second was an overexpression of σE (referred to as the mutant and induction studies, respectively; for details of the data sets, see Methods: Biological data). Thus, genes that were overexpressed in one study should be underexpressed in the other. Combining the results of the two studies in a meta-analysis will increase sample size and help identify target genes more precisely. In the Bayesian standardized expression integration model, expression levels are smoothed across studies to produce an overall expression measure for each gene. This model assumes that the standardized expression means are not the same in each study, but that each study-specific mean is a random sample from a common population distribution. Inter-study variability is included as a parameter in the model. More specifically,

yjgse ~ N(ξjge, σ²jg);  ξjge ~ N(θjg, τ²jg);  θjg ~ N(μg, ω²g);
μg | Ig = 0 ~ N(0, σ²μ);  μg | Ig = 1 ~ N(0, c × σ²μ);  Ig ~ Bernoulli(p)     (1)
for J independent studies. Here, the yjgse are the observed microarray data: the normalized log-expression ratios for study j, gene g, slide s, and experiment e. For cDNA microarrays, the expression ratio is the ratio of fluorescent intensity levels for the red- and green-labeled mRNA (Cy5 and Cy3) samples, or treatment and control. We standardized the yjgse values so that each slide has zero mean and unit standard deviation (see also [10, 45]). In this model, we take into account that the yjgse are affected by variability due to slides, experiments (cultures) and studies. In individual studies, yjgse is a sample from a normal distribution of the slide values within the same experiment for gene g. We represent this in the model as yjgse ~ N(ξjge, σ²jg), where ξjge is the mean among all slide values of an experiment for gene g. The parameter σ²jg represents the variability of the slide value distribution for each gene. The within-experiment mean ξjge is a sampling from a normal distribution of experiment values; we denote this as ξjge ~ N(θjg, τ²jg). Here, θjg is the study-specific mean log-expression ratio of gene g, and τ²jg represents the variability across experiments. In turn, the study-specific mean θjg is a sampling from a normal distribution of study values. Thus, θjg ~ N(μg, ω²g), where μg represents the overall mean log-expression ratio across studies, and ω²g is the variability across studies. The μg values are assumed to be normally distributed with mean zero, with a small variance for non-differentially expressed genes and a large variance for differentially expressed genes. Note that only the yjgse values are observed data; the remaining parameters are unobserved. Based on the overall mean value μg, the model produces the posterior distribution for Ig, which is used to calculate the probability of differential expression Dg = Prob(Ig = 1 | data).
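The generative hierarchy just described can be sketched by simulating from it. The sizes and variance values below are illustrative only, not those used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and variances only -- not the paper's values.
G, J, E, S = 100, 2, 3, 2                # genes, studies, experiments, slides
p, v_small, c = 0.10, 0.01, 50           # mixture fraction, small variance, scale
sigma2, tau2, omega2 = 0.05, 0.05, 0.10  # slide / experiment / study variances

I = rng.binomial(1, p, size=G)                       # Ig: expressed indicator
mu = rng.normal(0.0, np.sqrt(np.where(I == 1, c * v_small, v_small)))  # overall means
theta = rng.normal(mu, np.sqrt(omega2), size=(J, G))       # study-specific means
xi = rng.normal(theta, np.sqrt(tau2), size=(E, J, G))      # experiment means
y = rng.normal(xi, np.sqrt(sigma2), size=(S, E, J, G))     # observed slide values
```

Only y plays the role of observed data; I, mu, theta and xi correspond to the unobserved parameters.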
More specifically, Ig ~ Bernoulli(p) is the indicator variable for differential expression of gene g, corresponding to μg ≠ 0, and p is the fraction of expressed genes. Thus, Prob(Ig = 1) = p, where p is itself assigned a prior distribution so that its posterior can be computed (see Methods).
With this model, genes are divided into two groups, differentially expressed (Ig = 1) and non-expressed (Ig = 0), with probabilities p and (1 − p), respectively. For each gene, the posterior probability of differential expression over all studies, Dg = Prob(Ig = 1 | data), is produced, and we compare results based on Dg. In assigning prior distributions, when Ig = 0 the μg are assumed to be normally distributed with mean zero and small variance σ²μ; when Ig = 1 the μg are assumed to be normally distributed with mean zero and large variance (c × σ²μ).
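To see how the two-component prior separates the groups, the following sketch computes Prob(Ig = 1 | μg) for a known value of μg by Bayes' rule. In the actual model μg is unobserved and this probability is integrated over its posterior, so the function is a simplification; the default parameter values are illustrative assumptions.

```python
import math

def prob_expressed(mu_g, p=0.1, v_small=0.01, c=50):
    """Prob(Ig = 1 | mu_g) under the two-component normal mixture prior:
    mu_g ~ N(0, v_small) with probability 1 - p (non-expressed) and
    mu_g ~ N(0, c * v_small) with probability p (expressed)."""
    def normal_pdf(x, var):
        return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)
    expressed = p * normal_pdf(mu_g, c * v_small)
    non_expressed = (1 - p) * normal_pdf(mu_g, v_small)
    return expressed / (expressed + non_expressed)
```

Means near zero favor the small-variance (non-expressed) component, while large means favor the wide component.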
The inter-study variability parameter ω²g influences the results to a large degree and requires careful consideration. We detail several specifications for the prior distribution of ω²g in the sections Two-study simulation results and Methods: Standardized expression integration model: prior distributions for inter-study variation.
We assign conjugate scaled inverse chi-squared prior distributions to the slide and experiment variation parameters, σ²jg and τ²jg. The scale parameters are derived from the data by pooling information from all genes (similar to Tseng et al., Gottardo et al., Lönnstedt and Speed; see Methods). The prior framework for a single study is similar to Gottardo et al. except that we calculate a posterior distribution for p rather than using an iterative algorithm to estimate p. Our data also have one more level of replication than that of Gottardo et al., i.e. repeated slides within repeated experiments. Our hierarchical Gaussian structure for one study also resembles the Bayesian ANOVA models (BAM) of Ishwaran and Rao [29, 30]. BAM restructures the problem of identifying overexpressed genes as a variable selection procedure and uses a Bayesian model designed toward selective shrinkage. The difference between our model in a single-study context and BAM is that we have one-sample data, while BAM models are tailored to a two-sample format; our data also have more levels of replication.
Each study is assumed to contain only two conditions: a treatment and a control. We simulate posterior distributions for each parameter using a Markov chain Monte Carlo (MCMC) implementation of the model. See the Methods section for more details of the prior distributions and the MCMC procedure.
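In an MCMC implementation, Dg is typically estimated as the post-burn-in average of the sampled 0/1 indicator variables; a minimal sketch (the function name and burn-in length are ours):

```python
import numpy as np

def posterior_prob_expressed(indicator_draws, burn_in=100):
    """Estimate Dg = Prob(Ig = 1 | data) as the post-burn-in average of the
    sampled 0/1 indicator draws (rows: MCMC iterations, columns: genes)."""
    draws = np.asarray(indicator_draws, float)[burn_in:]
    return draws.mean(axis=0)
```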
Bayesian probability integration meta-analysis model
The Bayesian model to combine probabilities was introduced in Conlon et al. and is similar to the Bayesian standardized expression integration model, except that the mean expression levels of each study are not combined and thus inter-study variability is not modeled. We call this the probability integration model because, for each gene, it calculates the overall probability of differential expression given the data of each separate study; this effectively smooths the probability of differential expression across studies. This differs from the standardized expression integration model, which first calculates an overall mean gene expression measure and then determines the probability of differential expression given the estimated mean. We specify the probability integration model as follows:

yjgse ~ N(ξjge, σ²jg);  ξjge ~ N(θjg, τ²jg);
θjg | Ig = 0 ~ N(0, σ²μ);  θjg | Ig = 1 ~ N(0, c × σ²μ);  Ig ~ Bernoulli(p)     (2)
The parameters common to both Models (1) and (2) are as defined for Model (1) (see also Conlon et al.). The yjgse are again the observed microarray data: the normalized log-expression ratios for study j, gene g, slide s, and experiment e. In this model, we again take into account that the yjgse are affected by variability due to slides and experiments, and the yjgse values are standardized to have zero mean and unit standard deviation. The θjg is again the study-specific mean log-expression ratio of gene g. However, rather than modeling the study-specific mean θjg as a sampling from a normal distribution of study values, this model treats each study separately and does not combine the mean values from each study into one mean value. Thus, an overall mean value μg is not produced for this model. The gene-specific posterior probability of differential expression is again produced; however, this probability is based upon the separate mean levels of each study rather than an overall mean level, as in Model (1). Note that the differences between Model (1) and Model (2) occur at the inter-study level; the models are the same for a single study. The MCMC implementation is similar to that for Model (1), with details in the Methods section. We compare Models (1) and (2) using integration-driven discovery rates, integration-driven revision rates and Bayesian false discovery rates, defined in the following.
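One simple way to see how separate study results can be pooled into an overall probability is to combine per-study posterior odds under the assumption that studies are conditionally independent given Ig. This is an illustrative simplification, not the paper's exact MCMC computation:

```python
def combine_probabilities(study_probs, p=0.1):
    """Pool per-study posterior probabilities of differential expression into
    one overall probability, assuming the studies are conditionally
    independent given Ig (an illustrative simplification of the PI model).

    Each study's posterior odds are divided by the prior odds to recover a
    per-study Bayes factor; the product of Bayes factors times the prior
    odds gives the combined posterior odds."""
    prior_odds = p / (1 - p)
    odds = prior_odds
    for d in study_probs:
        odds *= (d / (1 - d)) / prior_odds
    return odds / (1 + odds)
```

Under this scheme, a gene with strong evidence in even one study retains a high combined probability, which matches the qualitative behavior of the PI model described later.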
Integration-driven discovery and revision
Choi et al. define the integration-driven discovery rate (IDR) as the proportion of genes identified as differentially expressed in the meta-analysis that were not identified in any of the individual studies alone. The IDR represents the increase in information from combining studies versus individual studies. For our models, for a given threshold of posterior probability of differential expression, γ, genes are identified as differentially expressed if Dg ≥ γ. The IDR is defined as the percent of differentially expressed genes in the meta-analysis that are not differentially expressed in any of the individual analyses:

IDR = 100% × (number of genes with Dg ≥ γ in the meta-analysis but with Dg < γ in every individual study) / (number of genes with Dg ≥ γ in the meta-analysis)
In simulations, we define true genes as genes that were simulated to be differentially expressed. The corresponding true integration-driven discovery rate, t IDR, is the percent of true genes discovered in the meta-analysis that were not discovered in any of the individual analyses:

t IDR = 100% × (number of true genes with Dg ≥ γ in the meta-analysis but with Dg < γ in every individual study) / (number of true genes with Dg ≥ γ in the meta-analysis)
Stevens and Doerge define the integration-driven revision rate (IRR) as the percent of genes that are declared differentially expressed in individual studies but not in the meta-analysis. The IRR represents the genes that are missed or "dropped" in the meta-analysis versus separate study analyses:

IRR = 100% × (number of genes with Dg ≥ γ in at least one individual study but with Dg < γ in the meta-analysis) / (number of genes with Dg ≥ γ in at least one individual study)
The corresponding true integration-driven revision rate, t IRR, is the percent of true genes that are identified as differentially expressed in at least one individual study but not in the meta-analysis:

t IRR = 100% × (number of true genes with Dg ≥ γ in at least one individual study but with Dg < γ in the meta-analysis) / (number of true genes with Dg ≥ γ in at least one individual study)
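These rates reduce to set operations on the genes called at a given threshold γ; the t IDR and t IRR are obtained by first restricting every set to the true genes. A sketch (function names ours; denominators follow the verbal definitions above):

```python
def idr(meta_genes, study_gene_sets):
    """IDR: percent of genes called in the meta-analysis that were not
    called in any individual study."""
    meta = set(meta_genes)
    called_anywhere = set().union(*study_gene_sets)
    if not meta:
        return 0.0
    return 100.0 * len(meta - called_anywhere) / len(meta)

def irr(meta_genes, study_gene_sets):
    """IRR: percent of genes called in at least one individual study
    but not in the meta-analysis."""
    called_anywhere = set().union(*study_gene_sets)
    if not called_anywhere:
        return 0.0
    return 100.0 * len(called_anywhere - set(meta_genes)) / len(called_anywhere)
```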
Bayesian false discovery rate
The false discovery rate (FDR) was introduced by Benjamini and Hochberg, and is defined as the expected number of discovered genes that are not truly differentially expressed divided by the total number of discovered genes. Further discussion and application of FDR in a microarray context include Tusher et al., Storey, Storey and Tibshirani and Genovese and Wasserman. In a Bayesian approach, Genovese and Wasserman defined the posterior expected FDR (pe FDR) as:

pe FDR(γ) = E[ Σg (1 − δg) 1{Dg ≥ γ} / Σg 1{Dg ≥ γ} | Y ],
where δg is an indicator for differential expression and Y represents the data (see also Do et al.).
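Because E[δg | Y] = Dg, for the genes called at threshold γ the pe FDR reduces to the average of (1 − Dg) over the discovered genes; a minimal sketch (function name ours):

```python
import numpy as np

def pe_fdr(d, gamma):
    """Posterior expected FDR at threshold gamma: the average of (1 - Dg)
    over the genes with Dg >= gamma (0 if no gene is called)."""
    d = np.asarray(d, float)
    discovered = d >= gamma
    if not discovered.any():
        return 0.0
    return float((1.0 - d[discovered]).mean())
```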
Two-study simulation results
We simulated data for two studies, Study 1 and Study 2, with 3,000 genes and a format similar to the B. subtilis mutant and induction studies. We used three levels for the percent of differentially expressed genes (denoted ps to indicate simulated), ps = 5%, 10%, 25%, and three mean levels of inter-study variation ω²g. We refer to the mean levels of inter-study variability as low, medium and high, based on comparison to the biological data, as follows. Each level of ω²g was simulated from a Normal distribution with the following means: low variability: mean = 0.1 for differentially expressed and mean = 0.01 for non-expressed genes; medium: mean = 0.3 and 0.03; and high: mean = 0.7 and 0.07, for expressed and non-expressed genes, respectively. The variances of the Normal distributions were equivalent to those of the biological data (for more detail, see Methods). Again, we refer to genes that were simulated to be differentially expressed as true genes. Each array was standardized to have zero mean and unit standard deviation.
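The simulation of gene-specific inter-study variances described above can be sketched as follows. The spread of the Normal distributions and the truncation to positive values are illustrative assumptions (the paper matched the spread to the biological data):

```python
import numpy as np

rng = np.random.default_rng(1)
G, ps = 3000, 0.10                       # genes; fraction differentially expressed
levels = {"low": (0.1, 0.01), "medium": (0.3, 0.03), "high": (0.7, 0.07)}

true_gene = rng.random(G) < ps           # "true" (simulated expressed) genes
mean_de, mean_null = levels["high"]      # mean inter-study variance per group
spread = 0.05                            # illustrative; not the paper's value
omega2_g = np.abs(rng.normal(np.where(true_gene, mean_de, mean_null), spread))
```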
For the Bayesian standardized expression integration meta-analysis model, the prior distribution assigned to ω²g has a large influence on the results. We considered three prior distributions with extensive use in hierarchical Bayesian meta-analytic models: inverse-gamma prior distributions (see, for example, Choi et al., Smith et al., Normand, Sargent et al.), log-logistic (DuMouchel and Normand), and locally uniform (Gelman et al.). See Methods for a detailed description of these prior assignments. Briefly, the informative inverse-gamma and locally uniform distributions pooled information from sets of genes to provide better estimates of inter-study variability. The log-logistic prior distribution treated each gene separately without pooling information from sets of genes, and was a function of the weighted average of the sampling variabilities across studies for each gene. The log-logistic distribution prevents the meta-analysis results from being overly influenced by studies with large sampling variability. Of the prior specifications, only the locally uniform prior improved upon individual study analyses. In particular, this prior distribution was centered at the median of the inter-study variability based on the data, with separate distributions for the differentially expressed and non-expressed genes (see Methods). We show results hereafter based on this prior distribution.
Standardized expression integration versus probability integration model
We implemented the Bayesian standardized expression integration model (SEI hereafter, Model (1)) and the Bayesian probability integration model (PI hereafter, Model (2)) to combine the two simulated studies for the three levels of percent differentially expressed genes and three levels of inter-study variation. We also analyzed each study individually. Note again that in individual studies the SEI and PI models are equivalent, i.e. the only differences between the SEI and PI models are at the inter-study level. To compare the SEI and PI models, we calculated for each model the true integration-driven discovery rate (t IDR) and the true integration-driven revision rate (t IRR) for thresholds of γ ≥ 0.50, i.e. a posterior probability of differential expression greater than or equal to 50%. The PI model produced higher t IDR and lower t IRR than the SEI model for all values of γ ≥ 0.50 for the simulated data. Figures 1 and 2 display the t IDR and t IRR results, respectively, for the three simulated levels of ps and high mean ω²g. Table 1 presents results for all simulated data sets, for the representative threshold value γ = 0.95.
We also fixed levels of pe FDR and compared the number of true discoveries for the two models. We found that both models improved the number of true discoveries versus individual analyses, and the PI model identified more true genes than the SEI model for the same levels of pe FDR < 20%. Figure 3 illustrates the results for the three ps levels and high mean ω²g. Table 1 reports results for all simulations, for representative pe FDR = 5%.
The primary difference between the two models is that the SEI model first combines the mean standardized gene expression levels from each study into an overall mean and then calculates the probability of differential expression based on this overall mean, while the PI model calculates the probability of differential expression based on the separate study means. For genes that have high mean standardized expression in one study but lower mean standardized expression in a second study, the SEI model tends to identify these genes as non-differentially expressed based on an approximately medium overall mean; however, the PI model identifies more of these types of genes as differentially expressed. This is because the study means are kept separate in the PI model, so a gene's high probability of differential expression in one study is not overly offset by a lower probability of differential expression in the second study. Due to these model differences, the SEI model declares more true genes as non-differentially expressed, thus producing higher t IRR than the PI model. For genes that are declared non-differentially expressed in both individual studies, the PI model identifies more of these genes as differentially expressed in meta-analysis versus the SEI model, resulting in higher t IDR and more true genes identified for the same level of false discovery than the SEI model.
Five-study simulation results
We also compared the SEI and PI models for five independent studies. For this, we simulated three additional studies with a format similar to Study 1, but with different parameter values for slide and experiment variation (see Methods). All other parameter specifications were similar to the two-study simulations. In all data sets, the PI model again identified higher t IDR and lower t IRR than the SEI model for all thresholds of γ ≥ 0.50. Figures 4a and 4b show results for ps = 10% and high mean ω²g; Table 2 details the results for all data sets, with representative threshold value γ = 0.95. Note that the SEI model produced t IDR = 0% for many levels of γ. In these instances, for all true genes identified by the SEI model, at least one of the five individual studies had Dg at least as high as γ. When compared to the two-study simulations, combining more studies resulted in lower t IDR in most cases for both the SEI and PI models. This occurred because, with a larger number of studies, it was more likely that some genes had Dg ≥ γ in at least one individual study, which reduced t IDR. The t IRR was also lower in most cases in comparison to the two-study simulations for both the SEI and PI models. This was due to the increase in Dg in the meta-analysis models when combining more studies, which reduced t IRR.
For all simulated data sets, both the SEI and PI models again identified more true discoveries than the individual analyses for the same levels of pe FDR < 20%; the PI model also found more true discoveries than the SEI model, similar to the two-study simulations. Figure 4c displays the results for ps = 10% and high mean ω²g; Table 2 reports the results for all data sets, with representative pe FDR = 5%. When compared to the two-study simulations, combining more studies resulted in more true discovered genes for the same levels of pe FDR, for both the SEI and PI models. This indicates that pooling more data improves the accuracy of pe FDR.
The PI model showed improved performance over the SEI model in simulations of five studies for similar reasons as discussed in the simulations of two studies. By calculating the probability of differential expression based on the separate study means in the PI model, genes with high probability of differential expression in at least one study produce a higher overall probability of differential expression in the PI meta-analysis. However, the SEI model first produces an overall mean and then calculates the probability of differential expression based on this overall mean, which results in fewer true genes being declared differentially expressed. Due to these meta-analytic model differences, the PI model results in higher t IDR, lower t IRR and more true differentially expressed genes identified for the same level of false discovery than the SEI model.
Biological data results
We implemented the SEI and PI models to combine the B. subtilis mutant and induction studies, using 2,509 genes that had at least one expression value in each study. We also analyzed each study individually. Each slide was standardized to have zero mean and unit standard deviation. Since the truly differentially expressed genes are unknown for the biological data, we report results somewhat differently than for the simulated data. For both models, we show IDR and IRR for fixed levels of γ ≥ 0.50 with corresponding pe FDR (Figures 5a, 5b and Table 3). The PI model produced higher IDR than the SEI model for most levels of γ ≥ 0.50, with corresponding lower pe FDR. In a few instances, the IDR was higher for the SEI than the PI model, but the difference was less than 1%, and the corresponding pe FDR was lower for the PI model. On average, the PI model produced an IDR 3.5% higher than the SEI model for γ ≥ 0.50, and 6.5% higher for γ ≥ 0.95; the corresponding average pe FDR was 1% and 0.3% lower for the PI model, respectively. The IRR was lower for the PI model for all values of γ ≥ 0.50. In addition, for fixed levels of pe FDR < 20%, both the PI and SEI models discovered more genes than the individual studies alone, and the PI model discovered more genes than the SEI model in all cases (Figure 5c).
The prior distributions of the slide effect and experiment effect variance parameters, σ²jg and τ²jg, respectively, require some information from the data in order for the models to converge. When assigning uninformative distributions to these parameters, i.e. σ²jg ~ Inverse-Gamma(0.001, 0.001) and τ²jg ~ Inverse-Gamma(0.001, 0.001), the models do not converge. We thus assigned scaled inverse chi-squared distributions to σ²jg and τ²jg, with scale parameters based on pooling variance information from all genes, similar to other authors [17, 23, 24]. We used 3 degrees of freedom so that the prior distributions were as uninformative as possible (similar to ). To examine the sensitivity of the results to the degrees of freedom of the scaled inverse chi-squared distributions, we performed sensitivity analyses for the two-study and five-study simulation data sets as well as the biological data. For the simulation data, we used the data sets with percent of differentially expressed genes ps = 10% and medium mean ω²g. We repeated the analyses with the following degrees of freedom for both σ²jg and τ²jg: 6, 10, 20, 40. Larger degrees of freedom correspond to more informative priors, i.e. smaller means and variances imposed upon the variance parameters. In all analyses, we found that with more informative priors, the mean posterior probabilities of differential expression for all genes increased. For the two-study simulation data, this resulted in larger numbers of true genes identified for the same levels of pe FDR for both the SEI and PI models. The t IDR decreased for larger degrees of freedom for thresholds of γ ≥ 0.50 for both the SEI and PI models, since the individual studies as well as the meta-analyses produced higher posterior probabilities of differential expression, which lowered t IDR. The t IRR also decreased for larger degrees of freedom for thresholds of γ ≥ 0.50 for both models, again due to the higher posterior probabilities of differential expression.
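A scaled inverse chi-squared prior with ν degrees of freedom and scale s² can be sampled as ν s² / X with X ~ χ²ν. The sketch below sets the scale by pooling stand-in per-gene sample variances; the pooling rule (the median) and all numerical values are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def scaled_inv_chi2(rng, df, scale, size=None):
    """Draw from a scaled inverse chi-squared distribution: df * scale / X,
    where X ~ chi-squared(df)."""
    return df * scale / rng.chisquare(df, size=size)

rng = np.random.default_rng(2)
gene_vars = rng.chisquare(3, size=1000) * 0.02  # stand-in per-gene sample variances
pooled_scale = float(np.median(gene_vars))      # pool information across all genes
prior_draws = scaled_inv_chi2(rng, df=3, scale=pooled_scale, size=5000)
```

Small degrees of freedom (here 3) give a heavy-tailed, weakly informative prior; raising df concentrates the prior around the pooled scale, mirroring the sensitivity analysis above.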
The five-study simulation results were similar to the two-study results, except that the t IRR increased slightly for the SEI model for larger degrees of freedom for some thresholds of γ ≥ 0.50. This was due to the larger number of individual studies; with more individual studies, there were more genes with higher posterior probabilities of differential expression in at least one individual study versus the SEI meta-analysis, which increased t IRR for the SEI model. For the biological data, we found similar results to the two-study simulation data: with larger degrees of freedom, IDR, IRR and pe FDR decreased on average for thresholds of γ ≥ 0.50 for both the SEI and PI models.
Overall, the PI model outperformed the SEI model for all prior degrees of freedom imposed, and using 3 degrees of freedom resulted in the most conservative findings for the posterior probabilities of differential expression. We show results for the two-study and five-study simulation data in Table 4 and Supplemental Figures S1 and S2 (see Additional file 1); the biological data results are displayed in Table 5 and Supplemental Figure S3 (see Additional file 1).
We compared two Bayesian approaches to meta-analysis of microarray data: the standardized expression integration and probability integration models. The standardized expression integration model includes inter-study variability and may thus improve accuracy of findings; it also produces an overall estimate of standardized gene expression among all studies. However, due to the typical small number of studies in meta-analyses for microarrays, the inter-study variability is difficult to model. Alternatively, the probability integration approach eliminates the need to specify inter-study variability since each study is modeled separately, with overall smoothing of probabilities across studies. In our simulations of two and five studies, the probability integration model produced higher t IDR and lower t IRR than the standardized expression integration model for fixed posterior probabilities of differential expression, and also identified more true discoveries for the same levels of pe FDR. We found similar results for the biological data, with the probability integration model producing higher IDR on average and lower IRR with corresponding lower values of pe FDR, for fixed probabilities of differential expression. We conclude that, for our data sets, aggregating probabilities across studies rather than combining gene expression levels improves IDR, IRR and the number of discovered genes versus pe FDR.
In the standardized expression integration meta-analysis model, the prior assignment for the inter-study variability had a large impact on the results. We assigned some of the most common prior distributions used in practice: inverse gamma, log-logistic and locally uniform. The uninformative inverse gamma and log-logistic distributions were gene-specific and did not pool information from similar genes; these distributions did not improve results versus individual analyses. The informative inverse gamma distributions pooled information either from all genes or from sets of differentially and non-differentially expressed genes, but also did not improve upon separate study analyses. We found the most improvement in true integration-driven discovery rates and increases in true discoveries versus pe FDR using the locally uniform prior distribution centered at the medians of the differentially expressed and non-expressed genes; this emphasizes the need for priors that pool information across genes rather than using individual gene information or uninformative priors, for small data sets.
The probability integration model does not produce an overall measure of expression for each gene, similar to the classical meta-analysis methods of combining p-values, probabilities and ranks. However, study-specific gene expression values are produced, and these can be examined for individual genes of interest.
The standardized expression integration and probability integration models presented here were developed for studies from the same platform. A common control sample is not required across studies, and the studies are assumed to be independent. In addition, the models do not require the same array layout across studies. For example, some studies could have replicate slides within repeated experiments, while other studies could have replicate slides within only a single experiment. The models are thus applicable to a wide range of study designs.
Two-study simulation data
We simulated data for two studies, in a format similar to the B. subtilis mutant and induction biological data (see Methods: Biological data), with simulated proportions of differentially expressed genes ps = 5%, 10% and 25%. Each study contained 3,000 genes. Study 1 had 5 replicate slides within 3 repeated experiments, and Study 2 had 4 replicate slides within 3 repeated experiments. We simulated data from Model (1), with model parameters chosen to resemble the biological data; we set the remaining model parameters to 0.025 and c = 48. For Study 1, we assigned variance across slides of 0.074 and across experiments of 0.026. For Study 2, we assigned slide variation of 0.023 and experiment variation of 0.022. The biological data had inter-study variability that was approximately Normally distributed with mean 0.33 and variation 0.12 for the top 10% of genes, and mean 0.031 and variation 0.004 for the remaining genes. We simulated the inter-study variability from a Normal distribution, with mean values that were lower than, similar to, and higher than these values for the overexpressed and non-expressed genes, respectively: low mean: 0.1 and 0.01; medium mean: 0.3 and 0.03; high mean: 0.7 and 0.07; with variation values equivalent to the biological data. We refer to these as the low, medium and high mean levels of inter-study variability. Each slide was standardized to have zero mean and unit standard deviation. Although correlation of expression is expected among genes, this has been shown to be difficult to simulate. We thus assumed independence among genes in simulations, similar to other studies (see, for example, Gottardo et al., Lönnstedt and Speed).
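The two-study layout above can be sketched as a simple generator. This is not the authors' exact Model (1), whose notation did not survive transcription, but a simplified simulator using the stated replicate structure, the slide and experiment variance values, and the medium mean level of inter-study variability (the Normal(0.3, 0.12) and Normal(0, 0.004) gene-effect distributions are our reading of the text):

```python
import numpy as np

rng = np.random.default_rng(0)
G, PS = 3000, 0.10                    # genes; proportion differentially expressed
studies = [(3, 5, 0.026, 0.074),      # (experiments, slides/exp, exp var, slide var)
           (3, 4, 0.022, 0.023)]

de = rng.random(G) < PS               # indicator of differential expression
# gene effects: medium-mean setting for DE genes, near zero otherwise
theta = np.where(de, rng.normal(0.3, np.sqrt(0.12), G),
                     rng.normal(0.0, np.sqrt(0.004), G))

data = []
for n_exp, n_slide, v_exp, v_slide in studies:
    exp_eff = rng.normal(0, np.sqrt(v_exp), (G, n_exp))
    y = (theta[:, None, None] + exp_eff[:, :, None]
         + rng.normal(0, np.sqrt(v_slide), (G, n_exp, n_slide)))
    # standardize each slide to zero mean and unit standard deviation
    y = (y - y.mean(axis=0)) / y.std(axis=0)
    data.append(y)

print([d.shape for d in data])
```

Genes are simulated independently, matching the independence assumption stated above.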
Simulation data for five studies
For the simulation of five studies, Study 1 and Study 2 were the same as in the previous section, and we simulated data for 3 additional studies, with proportions of differentially expressed genes ps = 5%, 10% and 25%. In total there were 3,000 genes. For Studies 3, 4 and 5, we used a study format similar to Study 1, with 5 replicate slides within 3 replicate experiments. For the slide and experiment variance parameters, we assigned values that were either within the range of values for Study 1 and Study 2, or somewhat outside the range. For Study 3, the slide variance was assigned 0.05, and the experiment variance 0.02. For Study 4, the slide variance was assigned 0.04, and the experiment variance 0.022. For Study 5, the slide variance was set to 0.06, and the experiment variance 0.03. We again set the remaining model parameters to 0.025 and c = 48, and simulated the inter-study variability from Normal distributions with three mean levels (low, medium and high), with variation values equivalent to the biological data, as for the two-study simulation data. Each slide was standardized to have zero mean and unit standard deviation.
B. subtilis is a bacterium that responds to starvation by forming spores, which allow it to survive in extreme environmental conditions. Two independent B. subtilis microarray studies were designed to identify genes in the sporulation pathway controlled by the sigma factor σE. The studies had reciprocal designs: the first was a mutation of σE, and the second was an induction of σE, as detailed below (see also [50, 58]).
For the mutant study, cells with a deletion in the gene for σE (the mutant sample) were compared to wild-type cells. The wild-type/mutant ratios identified genes under the control of σE. In total, five microarrays were created from three independent repeated experiments; the first experiment produced three replicate slides and the second and third experiments each produced one slide. Each array contained somewhat more spots than the B. subtilis genome size of 4,106 genes due to multiple spotting of specific genes on the arrays. The percent of missing data due to low quality spots ranged from 18.6% to 64.5% across the five slides; a total of 3,713 genes had measurable expression values for at least one slide. We used log-ratios of intensities, normalized slides using a rank-invariant method [17, 59] and standardized each slide to have zero mean and unit standard deviation.
The induction study was an overexpression of σE in which cells treated with an inducer were compared to unaltered cells. The induction/wild-type ratios identified genes in the σE regulon. In total, four microarrays were created from three independent repeated experiments. The first two experiments each produced one slide and the third experiment produced two replicate slides. The percent of data removed due to low quality spots ranged from 52.6% to 67.0% across the four slides; a total of 2,552 genes had measurable expression for at least one slide. We again analyzed the post-normalized log-ratios of intensities, and standardized each array to have zero mean and unit standard deviation.
Standardized expression integration model: prior distributions for inter-study variation
We assigned the following inverse gamma, log-logistic and locally uniform prior distributions to the inter-study variability parameter in the standardized expression integration model (Model (1)) for both the simulation and biological data. We applied both uninformative and informative priors; the informative priors either used individual gene information or pooled information from sets of genes. We found that uninformative priors and priors that used individual gene information did not improve upon individual study analyses. Only one prior distribution produced true integration-driven discovery rates > 0% for γ ≥ 0.50 and an increase in true discoveries versus pe FDR < 20% compared to individual study analyses in the two- and five-study simulations: prior 3), the locally uniform distribution centered at the median of the inter-study variability (see below).
1) Inverse gamma distribution (Choi et al., Smith et al., Normand, Sargent et al.): this is a standard conjugate prior distribution for variance parameters. We first applied an uninformative prior distribution, in order to allow the data to inform the posterior distribution. However, since this distribution did not result in improved performance, we also applied two informative distributions, as follows.
a) Uninformative inverse gamma distribution: IG(0.001, 0.001), equivalent to a Gamma(0.001, 0.001) prior on the precision, which has mean 1 and variance 1,000.
b) Informative inverse gamma distribution: IG(α, β), with α and β chosen so that the mean and variance were equal to the mean and variance of the inter-study variability based on data from all genes. This prior distribution pools data from all genes in order to provide a more accurate measure of inter-study variability. Pooling variance information from all genes is similar to the methods of Tseng et al., Gottardo et al. and Lönnstedt and Speed.
c) Informative inverse gamma distribution: separate priors for differentially and non-differentially expressed genes. Since the inter-study variability is higher for differentially expressed genes than for non-expressed genes, we assigned two different priors conditioned on Ig = 1 and Ig = 0. For Ig = 1, we assigned an inverse gamma distribution with mean and variance equal to those of the inter-study variability based on the top p% of the data. The proportion p was estimated from the average of the individual study analyses. Similarly, for Ig = 0, we assigned the prior based on the remaining (1-p)% of the data. Estimating inverse gamma parameters separately for the top and remaining proportions of genes is similar to the method of Gottardo et al.
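Choosing α and β to match a target mean and variance is a two-equation moment-matching problem with a closed-form solution; a minimal sketch (the example values echo the mean 0.031 and variation 0.004 reported for the non-expressed genes in the biological data; the function name is ours):

```python
def ig_params(mean, var):
    """Moment-match an Inverse-Gamma(alpha, beta) prior: from
    mean = beta/(alpha - 1) and var = beta^2/((alpha - 1)^2 (alpha - 2)),
    solve alpha = mean^2/var + 2 and beta = mean * (alpha - 1)."""
    alpha = mean**2 / var + 2.0
    beta = mean * (alpha - 1.0)
    return alpha, beta

alpha, beta = ig_params(0.031, 0.004)
print(alpha, beta)
```

The same calculation, applied separately to the top p% and remaining (1-p)% of genes, yields the two conditional priors in c).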
2) Log-logistic distribution (DuMouchel and Normand): p(τg) = sg0/(sg0 + τg)².
Here, K is the number of studies and sgj² is the sampling variance for gene g in study j. Note that this prior assignment is calculated for each gene individually. Our data included slide and experiment variance; we thus calculated sgj² for gene g and study j as

sgj² = (1/E²) Σe sjge²/ne,

where E is the total number of experiments, ne is the number of slides within experiment e, and sjge² is the sample variance of the yjgse within experiment e.
As discussed in DuMouchel and Normand, sg0² is the harmonic mean of the K sampling variances sgj², and the density p(τg) has median equal to sg0. The quartiles of the distribution of τg are sg0/3, sg0 and 3sg0, so that the distribution covers a sensible range of values. If the sample standard deviations are not equal across microarray studies, then sg0 will be weighted toward the studies with smaller sgj; this prevents the meta-analysis results from being overly influenced by a few studies that have large sgj values.
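The prior median sg0 and its quartiles can be computed directly; a sketch assuming sg0² is the harmonic mean of the K sampling variances, as stated above (the two example variances are illustrative only):

```python
import numpy as np

def s_g0(sampling_vars):
    """sg0 from the harmonic mean of the K study-specific sampling
    variances sgj^2: sg0^2 = K / sum(1/sgj^2). sg0 is the median of the
    log-logistic prior p(tau_g) = sg0 / (sg0 + tau_g)^2."""
    s2 = np.asarray(sampling_vars, dtype=float)
    return float(np.sqrt(len(s2) / np.sum(1.0 / s2)))

s0 = s_g0([0.04, 0.09])           # K = 2 studies with unequal sampling variances
print(s0, (s0 / 3, s0, 3 * s0))   # prior median and quartiles
```

Because the harmonic mean is dominated by the smaller variances, s0 here sits closer to sqrt(0.04) than to sqrt(0.09), illustrating the down-weighting of high-variance studies described above.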
3) Locally uniform distribution (Gelman et al.): separate priors for differentially and non-differentially expressed genes. The uniform prior is used when a variable is known to lie within a specific interval and is equally likely to be found anywhere within the interval. For inter-study variability of gene expression, this prior assignment pools information from the differentially and non-differentially expressed genes in order to provide a more accurate estimate of the variability. Again, since the inter-study variability is higher for differentially expressed than non-expressed genes, we assigned different priors conditioned on Ig = 1 and Ig = 0. For Ig = 1, we assigned a locally uniform prior centered at the median inter-study variability based on the top p% of the data. The percent p was determined from the average of the individual study analyses. Similarly, for Ig = 0, we assigned a locally uniform prior based on the remaining (1-p)% of the data. The ranges of the uniform distributions were ± one standard error of the median, where the standard error of the median was estimated as 1.253 × the standard error of the mean (Kendall et al.). We selected the range of ± one standard error of the median based on exploratory calculations. In simulations, this range produced higher t IDR for γ ≥ 0.50 and more true discoveries for levels of pe FDR < 20% versus individual study analyses, compared to the following alternative ranges: fixed medians (range of zero) and ± two standard errors of the medians. We also repeated prior assignment 3) using means in place of medians, but this prior specification did not result in improvements in discoveries versus individual study analyses.
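The bounds of the locally uniform prior follow mechanically from the median and the 1.253 × standard-error-of-the-mean approximation above; a minimal sketch (the five example values are illustrative, standing in for the inter-study variability estimates of one gene group):

```python
import numpy as np

def local_uniform_bounds(values):
    """Bounds of a locally uniform prior centered at the median, with
    half-width one standard error of the median, approximated as
    1.253 x the standard error of the mean (Kendall et al.)."""
    v = np.asarray(values, dtype=float)
    med = float(np.median(v))
    se_med = 1.253 * v.std(ddof=1) / np.sqrt(len(v))
    return med - se_med, med + se_med

low, high = local_uniform_bounds([0.28, 0.31, 0.35, 0.33, 0.30])
print(low, high)
```

Applying this once to the top p% of genes (Ig = 1) and once to the remainder (Ig = 0) yields the two conditional priors.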
Normal versus t-distribution modeling of the study-specific mean values
As discussed and implemented by several authors (Smith et al., Sargent et al., Choi et al.), an alternative to the Normal distribution assumption for the study-specific mean values θjg is to model the values as t-distributions, due to the small number of studies. We repeated each of the prior assignments 1) to 3) above assuming a t-distribution with degrees of freedom less than 30 (with 30 degrees of freedom approximately equivalent to a Normal distribution). For simulated data, we found that the t IDR versus γ ≥ 0.50 and the number of true discoveries versus pe FDR decreased with fewer degrees of freedom. Based on these findings, we model the mean values θjg as Normal distributions in the standardized expression integration model. The Normal assumption is commonly used in hierarchical Bayesian meta-analysis models, even for small numbers of studies (see, for example, Normand, Pauler and Wakefield, Sargent et al.).
Markov chain Monte Carlo implementation
We implement a Markov chain Monte Carlo (MCMC) procedure to simulate the posterior distributions of each of the parameters. MCMC methods generate samples from a density p(ψ) for a parameter ψ (with p(ψ) possibly known only up to a constant of proportionality) by creating a Markov chain on the state space of ψ that has p as its true (stationary) distribution (see Liu et al. for further details).
Joint posterior distributions
For Model (1), the joint distribution of the data and parameters is:
where Ωj denotes the study-specific parameters (including c), with j = study, g = gene, e = experiment, s = slide.
For Model (2), the joint distribution of the data and parameters is:
where Ωj denotes the study-specific parameters (including cj), with j = study, g = gene, e = experiment, s = slide.
We assigned prior distributions to the parameters of Models (1) and (2) that were as uninformative as possible while still resulting in convergence of the models. For the parameter p, we assigned a non-informative Uniform(0, 1) distribution. The prior distributions of the slide-effect and experiment-effect variance parameters required some information from the data in order for the models to converge. We assigned the following prior distributions for these parameters (see also [17, 23, 24]):
Here, the scale parameters of the inverse chi-squared distributions are obtained from the data. The scale parameter for the slide-effect variance is computed by pooling, over all genes and experiments, the sample variances of the yjgse about the within-experiment means yjg.e, where yjg.e is the log-expression ratio averaged over the slides within experiment e. The scale parameter for the experiment-effect variance is similarly produced by pooling, over all genes, the sample variances of the yjg.e about yjg.., where yjg.. is the log-expression ratio averaged over both slides and experiments. We assigned 3 degrees of freedom in each study for both variance parameters, i.e. h = k = 3, so that the distributions are as uninformative as possible (see also Results and Discussion: Sensitivity analysis). The remaining parameters were assigned the following prior distributions, which were as uninformative as possible while still allowing the models to converge.
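The pooling of variance information across genes can be sketched as follows. Since the paper's exact formulas did not survive extraction, the normalization used here (simple averages of within-experiment and between-experiment sample variances over all genes) is an assumption, labeled as such in the code:

```python
import numpy as np

def pooled_scales(y):
    """Pooled scale estimates for the inverse chi-squared priors on the
    slide- and experiment-effect variances, for one study with data y of
    shape (genes, experiments, slides). Sketch only: the paper's exact
    normalization is assumed, not reproduced from the original text."""
    # within-experiment sample variance of the y_jgse, averaged over genes/experiments
    slide_scale = float(np.mean(y.var(axis=2, ddof=1)))
    # between-experiment sample variance of the slide-averaged y_jg.e, averaged over genes
    y_ge = y.mean(axis=2)
    exp_scale = float(np.mean(y_ge.var(axis=1, ddof=1)))
    return slide_scale, exp_scale

rng = np.random.default_rng(1)
y = rng.normal(0, 1, (100, 3, 5))   # 100 genes, 3 experiments, 5 slides each
slide_s, exp_s = pooled_scales(y)
print(slide_s, exp_s)
```

Because every gene contributes to both scale estimates, the resulting priors borrow strength across the whole array rather than relying on any single gene's noisy variance estimate.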
The degrees of freedom and scale parameter a were assigned so that the corresponding prior mean was 1 with variance 0.1, and the degrees of freedom and scale parameter b were assigned so that the prior mean of c was 100 with variance 10,000. For a description of the prior distribution for the inter-study variability, see the Methods section: Standardized expression integration model: prior distributions for inter-study variation.
Full conditional posterior distributions
Each parameter was sampled from its full conditional posterior distribution by an MCMC algorithm using the WinBUGS software. We used 5,000 iterations for all analyses, which was more than sufficient for convergence. Further MCMC implementation details can be found in Conlon et al.
WinBUGS running time
The WinBUGS running time ranged from approximately 7 minutes for 3,000 genes and two studies for the PI model to 2.41 hours for 20,000 genes and five studies for the SEI model, for 5,000 iterations, using a personal computer with an Intel Core Duo T2500 2.0 GHz Processor. Table 6 details running times in seconds for the number of genes: 3,000, 10,000, 20,000, for two and five studies and 5,000 iterations for both the PI and SEI models.
Availability and requirements
The WinBUGS code for implementing the models is included in the Supplemental Files (see Additional files 2 and 3).
Operating system: Windows 98 or later.
Other requirements: WinBUGS software version 1.4 or later.
Wang J, Coombes KR, Highsmith WE, Keating MJ, Abruzzo LV: Differences in gene expression between B-cell chronic lymphocytic leukemia and normal B cells: a meta-analysis of three microarray studies. Bioinformatics 2004, 20: 3166–3178.
Morris JS, Yin G, Baggerly KA, Wu C, Zhang L: Pooling information across different studies and oligonucleotide microarray chip types to identify prognostic genes for lung cancer. In Methods of Microarray Data Analysis IV. Edited by: Shoemaker JS, Lin SM. New York: Springer-Verlag; 2005:51–66.
Rhodes DR, Yu J, Shanker K, Deshpande N, Varambally R, Ghosh D, Barrette T, Pandey A, Chinnaiyan AM: Large-scale meta-analysis of cancer microarray data identifies common transcriptional profiles of neoplastic transformation and progression. Proc Natl Acad Sci USA 2004, 101: 9309–9314.
Parmigiani G, Garrett-Mayer ES, Anbazhagan R, Gabrielson E: A cross-study comparison of gene expression studies for the molecular classification of lung cancer. Clinical Cancer Research 2004, 10: 2922–2927.
Newton MA, Kendziorski CM, Richmond CS, Blattner FR, Tsui KW: On Differential Variability of Expression Ratios: Improving Statistical Inference About Gene Expression Changes From Microarray Data. Journal of Computational Biology 2001, 8: 37–52.
DuMouchel W, Normand SL: Computer-modeling and graphical strategies for meta-analysis. In Meta-Analysis in Medicine and Health Policy. Edited by: Stangl DK, Berry DA. New York: Marcel Dekker; 2000:127–178.
Pauler DK, Wakefield J: Modeling and implementation issues in Bayesian meta-analysis. In Meta-Analysis in Medicine and Health Policy. Edited by: Stangl DK, Berry DA. New York: Marcel Dekker; 2000:205–230.
Sargent DJ, Zee BC, Milan C, Torri V, Francini G: Meta-analysis of individual-patient survival data using random-effect models. In Meta-Analysis in Medicine and Health Policy. Edited by: Stangl DK, Berry DA. New York: Marcel Dekker; 2000:255–275.
Mah N, Thelin A, Lu T, Nikolaus S, Kuhbacher T, Gurbuz Y, Eickhoff H, Kloppel G, Lehrach H, Mellgard B, Costello CM, Schreiber S: A comparison of oligonucleotide and cDNA-based microarray systems. Physiol Genomics 2004, 16: 361–370.
Dominici F, Parmigiani G: Combining studies with continuous and dichotomous responses: a latent-variables approach. In Meta-Analysis in Medicine and Health Policy. Edited by: Stangl DK, Berry DA. New York: Marcel Dekker; 2000:105–125.
Eichenberger P, Jensen ST, Conlon EM, van Ooij C, Silvaggi J, Gonzalez-Pastor JE, Fujita M, Ben-Yehuda S, Stragier P, Liu JS, Losick R: The sigmaE regulon and the identification of additional sporulation genes in Bacillus subtilis. Journal of Molecular Biology 2003, 327: 945–972.
Storey JD, Tibshirani R: SAM Thresholding and False Discovery Rates for Detecting Differential Gene Expression in DNA Microarrays. In The Analysis of Gene Expression Data: Methods and Software. Edited by: Parmigiani G, Garrett ES, Irizarry RA, Zeger SL. Springer, NY; 2003:272–290.
Genovese C, Wasserman L: Bayesian and frequentist multiple testing. In Bayesian Statistics 7. Edited by: Bernardo JM, Bayarri JM, Berger JO, Dawid AP, Heckerman D, Smith AFM, West M. Oxford: Oxford University Press; 2003:145–162.
We thank Joseph Horowitz for helpful discussion, and Patrick Eichenberger and the laboratory of Richard Losick for the B. subtilis microarray data and helpful advice. We also thank three anonymous reviewers for insightful comments that enhanced the manuscript. EMC was supported by a University of Massachusetts Healey Endowment Grant.
Authors and Affiliations
Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts, USA
Erin M Conlon & Anna Liu
Department of Mathematics, University of Arkansas, Fayetteville, Arkansas, USA
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.