 Methodology article
 Open Access
Improved ChIP-chip analysis by a mixture model approach
BMC Bioinformatics volume 10, Article number: 173 (2009)
Abstract
Background
Microarray analysis of immunoprecipitated chromatin (ChIP-chip) has evolved from a novel technique to a standard approach for the systematic study of protein-DNA interactions. In ChIP-chip, sites of protein-DNA interaction are identified by signals from the hybridization of selected DNA to tiled oligomers and are graphically represented as peaks. Most existing methods were designed for the identification of relatively sparse peaks in the presence of replicates.
Results
We propose a data normalization method and a statistical method for peak identification from ChIP-chip data based on a mixture model approach. In contrast to many existing methods, including ones that also employ mixture model approaches, our method is more flexible, imposing less restrictive assumptions and allowing a relatively large proportion of peak regions. In addition, our method does not require experimental replicates and is computationally efficient. We compared the performance of our method with several representative existing methods on three datasets, including a spike-in dataset. These comparisons demonstrate that our approach is more robust and has comparable or higher power than the other methods, especially in the context of abundant peak regions.
Conclusion
Our data normalization and peak detection methods have improved performance for detecting peak regions in ChIP-chip data.
Background
Microarray-based analysis of immunoprecipitated chromatin (ChIP-chip) constitutes a powerful technique to detect the interaction of DNA with regulatory proteins over large segments of chromatin [1, 2]. With advances in microarray fabrication, high-density tiling arrays are now being employed for genome-wide ChIP-chip studies [3, 4]. In ChIP-chip, immunoprecipitated chromatin is amplified, fluorescently labeled and hybridized to a tiled DNA microarray. Fluorescent signal detected from hybridization to several oligomers representing a contiguous region is graphically depicted as a "peak" and is suggestive of a protein binding site. Although putative binding sites can be individually validated using complementary strategies, comprehensive, genome-wide identification of high-confidence peaks constitutes a major challenge for ChIP-chip studies.
Several methods have been developed to detect peak regions [3, 5–13]. Cawley et al. [3] and Keles et al. [9] applied the Wilcoxon rank sum test and the t-test, respectively, to generate test statistics for sliding windows. Cawley et al. used a fixed p-value cutoff to select peak regions, whereas Keles et al. employed the Benjamini and Hochberg step-up procedure [14] to control the false discovery rate (FDR). In addition to the requirement for experimental replicates, Gottardo et al. [13] identified the absence of powerful multiple testing adjustment methods as a limitation of these methods. Li et al. [7] proposed a hidden Markov model (HMM) approach to identify peak regions, assuming model parameters could be estimated from previous experiments. Ji et al. [6] used a modified t-statistic with a more robust estimate of variance to measure probe-level binding signal, then used either moving window averaging or an HMM to estimate window-level binding signal, and finally estimated the local false discovery rate (lfdr) of each peak region [15]. Estimation of lfdr requires dissection of the mixture distribution of ChIP-chip signals, which includes the distribution of ChIP-enriched signals (or peak signals) and the background (null) distribution. Ji et al. [6] estimated the mixture distribution by unbalanced mixture subtraction, which requires additional information to construct the unbalanced mixtures. Instead of concentrating exclusively on the strengths of binding signals, Zheng et al. [12] identified peaks using both signal strength and signal pattern. Specifically, they modeled the DNA fragmentation process with a Poisson point process and concluded that if the binding signal is transformed to log scale, isolated "peaks" should exhibit a triangular shape, allowing development of a double regression method, Mpeak, to identify triangular patterns from ChIP-chip data.
Two recent studies [10, 13] have employed Bayesian hierarchical models to identify protein binding sites from ChIP-chip data. A major advantage of Bayesian hierarchical models is that information across probes can be shared; this is especially important when analyzing a limited number of replicates. However, the difficulty of fitting complicated Bayesian hierarchical models poses a heavy computational burden. Despite their common characteristics, several attributes distinguish these two approaches. Keles's method [10], HGMM (hierarchical gamma mixture model), adopted a hierarchical gamma-gamma model [16]. HGMM is able to detect peak regions of different sizes. However, its constant coefficient of variation assumption can have an undesired effect in the presence of probe outliers [13], and it assumes at most one peak per genomic region, so that the genome has to be partitioned (often arbitrarily) into smaller regions before applying HGMM. Gottardo et al.'s method [13], BAC (Bayesian Analysis of ChIP-chip), is based on approaches used for gene expression studies [17] with some additional modifications to exploit the spatial dependence between neighboring probes and to improve robustness for ChIP-chip studies. However, BAC, as it is currently implemented, cannot be applied to a single sample.
In this paper, we propose a mixture model approach to identify peaks from ChIP-chip data. Our method builds on the important observation made by Buck et al. [5] that the signals from ChIP-chip data are not symmetric. When transformed into log scale and represented as a histogram, the signal density often has a heavier right tail, reflective of the presence of true positive signals. It is reasonable to assume that the majority of the left tail of the signal density arises from background noise, which defines the null distribution. Based on the additional assumption that the null distribution is normal with a mean of 0, Buck et al. [5] used negative signals to construct the null distribution and then evaluated the p-values of tested regions. Following Buck et al. [5], we assume that the null distribution is symmetric, but we allow the null distribution to be non-normal and allow its center to deviate from 0. We estimate the local false discovery rate (lfdr) [15] for each peak based on a nonparametric approach to dissect the null distribution (background signals) and the alternative distribution (ChIP-enriched signals). As pointed out by Zheng et al. [12], omitting the autocorrelation structure of nearby probes leads to bias in estimating the significance level of each peak. In this study, we adopted the Poisson point process used by Zheng et al. [12] to estimate autocorrelation and incorporate it into the lfdr evaluation procedure.
Compared with the existing methods, our method does not rely on potentially restrictive assumptions, such as a normal null distribution [5], or on prior knowledge, such as the availability of model parameters [7]. Our major assumption is that the null distribution is symmetric, which can typically be achieved after appropriate normalization (see below). Importantly, our method permits analysis in the absence of replicates, a situation that often arises in exploratory ChIP-chip studies [18]. In addition, our method functions well with abundant peak regions, which are common in increasingly popular epigenetic studies [19, 20].
Our method also alleviates the burden of cross-array normalization. In large-scale studies, a number of arrays are often needed to cover the entire region of interest. Signal differences between arrays may be due to technical effects (experimental bias) or to relevant biological differences. If prior knowledge implies that there is no systematic biological difference across arrays, it may be more appropriate to combine those arrays prior to the application of peak-finding methods. For example, in NimbleScan, the software provided by NimbleGen, the raw data (log ratio) are normalized by subtracting a robust estimate of the sample median; in other words, the data from different arrays are aligned by their medians. However, in practice, it may be difficult to know whether biological differences contribute to systematic differences across arrays. Our method uses the signals derived from one array to identify peaks, thereby avoiding the potential problem of cross-array normalization. Peaks from different arrays can then be compared by their lfdrs.
In raw data, the null distribution reflecting background noise may not be symmetric and may be heterogeneous depending on the GC-content of the probes [11]. Therefore, within-array data normalization is crucial to the success of our mixture distribution method. Song et al. [11] proposed a normalization method, MA2C (model-based analysis of 2-color arrays), that normalizes data by assuming the log-intensities of the two channels follow a bivariate distribution with GC-specific means and variances. Song et al. have shown that MA2C standardizes data from different samples more efficiently than other existing methods. Although MA2C works well in many situations, MA2C-normalized data sometimes still have non-homogeneous null distributions across GC-contents. To overcome this issue, our method uses a Lowess smooth curve to capture the GC-content-specific information.
Our mixture model approach is general enough to be applied to one-color arrays (e.g., some Affymetrix tiling arrays), two-color arrays (e.g., some NimbleGen tiling arrays), and high-throughput sequencing data. However, since the normalization method pertains to two-color arrays, we focus on its application to two-color arrays. We have implemented our method in an R package, Mixer, which can be downloaded from http://www.bios.unc.edu/~wsun/software/mixer.htm.
Methods
Data normalization
Let x_{2i} and x_{1i} be log_{2}(Cy5) and log_{2}(Cy3) of the i-th probe with GC content k, and let μ_{2k} and μ_{1k} be the expected values of x_{2i} and x_{1i}, respectively. MA2C normalizes data by calculating

y_{i} = [x_{2i} - x_{1i} - (μ̂_{2k} - μ̂_{1k})]/σ̂_{k},     (1)

where μ̂_{2k} and μ̂_{1k} are robust estimates of μ_{2k} and μ_{1k}, respectively, and σ̂_{k} is a robust estimate of the standard deviation of x_{2i} - x_{1i} - (μ̂_{2k} - μ̂_{1k}). Consider x̂_{2i} = x_{1i} + (μ̂_{2k} - μ̂_{1k}) as a predictive value of x_{2i} based on the linear model log_{2}(Cy5) = log_{2}(Cy3) + b_{0}, where b_{0} is estimated by μ̂_{2k} - μ̂_{1k}. Then x_{2i} - x_{1i} - (μ̂_{2k} - μ̂_{1k}) is the residual from the baseline model log_{2}(Cy5) = log_{2}(Cy3) + (μ̂_{2k} - μ̂_{1k}), and the MA2C normalized value is simply a variance-standardized residual of this linear model with a slope of 1 (see Fig. 6 of Song et al. [11] for an illustration). The underlying assumption of this baseline model is that log_{2}(Cy5) - log_{2}(Cy3) is constant given GC content. Although this assumption may be sufficient for some samples, the channel differences of log-intensities may depend on the intensities themselves. For example, analyzing previously published array data [21], we found that the channel difference in one array is negative when log_{2}(Cy3) and log_{2}(Cy5) are small, but approaches 0 as log_{2}(Cy3) and log_{2}(Cy5) become larger (Figure 1(a–c)). This variation justifies the use of a fully parameterized linear model, log_{2}(Cy5) = b_{0} + b_{1} × log_{2}(Cy3), as the baseline model. Therefore, an improvement over the MA2C normalization would be to assume a linear relation between log_{2}(Cy5) and log_{2}(Cy3) and estimate both intercept and slope from the data in a robust way, for example, using median regression. However, we found that the relation between log_{2}(Cy5) and log_{2}(Cy3) may be nonlinear and not fully captured by median regression (see Figure 1, and Sup. Figure 1(a–b), Sup. Figure 2(a–b) in Additional file 1). To accommodate nonlinear intensity-dependent patterns, we normalized the data by Lowess curve fitting conditioning on GC-content. Lowess normalization is able to account for either linear or nonlinear relations and it is robust to outliers.
Specifically, given GC-content, let z_{i} = g(x_{1i}) be the Lowess fit (we fit the Lowess curve with the R function lowess); the normalized log ratio is calculated as

y_{i} = (x_{2i} - z_{i}) - M_{i},     (2)

where M_{i} is the median of x_{2i} - z_{i}. We found that this Lowess normalization better captured the relationship between signal intensities (see Figure 1(a–c), and Sup. Figure 1(a–b), Sup. Figure 2(a–b) in Additional file 1). Although Lowess normalization has been applied to gene expression microarray data [22–24], to the best of our knowledge, this is its first application to ChIP-chip data.
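To illustrate this normalization scheme, the following Python sketch normalizes each GC stratum separately. It is an illustration only, not the Mixer implementation: a k-nearest-neighbour local mean (in x1 order) stands in for R's lowess(), and the function name and signature are ours.

```python
import numpy as np

def normalize_within_gc(x1, x2, gc, frac=0.3):
    """GC-stratified normalization of two-color log2 signals.

    Within each GC-content stratum, fit a smooth curve z_i = g(x_{1i})
    and return (x_{2i} - z_i) - M_i, where M_i is the stratum median of
    x_{2i} - z_i. A local mean over a window of neighbours (in x1 order)
    stands in for the paper's Lowess fit.
    """
    x1 = np.asarray(x1, float)
    x2 = np.asarray(x2, float)
    gc = np.asarray(gc)
    out = np.empty_like(x2)
    for g in np.unique(gc):
        idx = np.where(gc == g)[0]
        order = idx[np.argsort(x1[idx])]          # sort stratum by x1
        k = max(3, int(frac * len(order)))        # smoothing span
        z = np.empty(len(order))
        for r in range(len(order)):
            lo, hi = max(0, r - k // 2), min(len(order), r + k // 2 + 1)
            z[r] = x2[order[lo:hi]].mean()        # local mean of x2
        resid = x2[order] - z
        out[order] = resid - np.median(resid)     # center at stratum median
    return out
```

In practice one would substitute a genuine lowess fit (e.g., R's lowess or statsmodels' implementation) for the local mean; the stratified structure and median centering are the part that mirrors equation (2).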
Mixture models of ChIPchip data
ChIP-chip data analysis represents a combined mixture model problem. Observed probe-level data are sampled from the mixture distribution of background signals (null distribution) and ChIP-enriched signals (alternative distribution). In addition, peaks can be detected by moving windows of various lengths. Therefore, there are two mixture model problems: one at the probe level and one at the window level. Let f_{0}(x) and f_{1}(x) be the probe-level density functions of the null and alternative distributions, respectively, and let π_{0} and π_{1} be the corresponding mixture proportions; then the observed probe-level data follow the mixture distribution

f_{obs}(x) = π_{0}f_{0}(x) + π_{1}f_{1}(x).
We define a window as a fixed-length region around a probe. Let the window-level density functions for the null and alternative distributions be g_{0}(X) and g_{1}(X), respectively. We use X to denote the window-level signal strength to distinguish it from the probe-level signal strength x. Let the corresponding mixture proportions be κ_{0} and κ_{1}; then the observed window-level data follow the mixture distribution

g_{obs}(X) = κ_{0}g_{0}(X) + κ_{1}g_{1}(X).
Probelevel analysis
We first consider the probe-level distribution f_{obs}(x) = π_{0}f_{0}(x) + π_{1}f_{1}(x). Similar to the approach of Buck et al. [5], we utilize lower (but not necessarily negative) signals to infer the null distribution f_{0}(x) or g_{0}(X) (described below). We assume that the null distribution is symmetric but place no constraint on the functional form or the location of the null distribution.
Let μ_{0} be the center of the null distribution, which is approximately the (π_{0}/2)-quantile of the whole distribution, assuming that the vast majority of the signals smaller than μ_{0} arise from the null distribution. This is a reasonable assumption because most ChIP-enriched signals are higher than the majority of the background signals. Then, in order to estimate π_{0}, we just need to estimate μ_{0}. Based on the assumption that the null distribution is symmetric with center μ_{0}, it is reasonable to assume that μ_{0} is the mode of the entire distribution, or one of the two modes if the ChIP-enriched signals also form a mode [25]. Therefore, in order to estimate μ_{0}, we identify the mode(s) of the observed density f_{obs}(x) = π_{0}f_{0}(x) + π_{1}f_{1}(x).
We first round all the probe-level signals to a given precision, for example, 0.01 or 0.001, to facilitate subsequent computation. The precision is chosen so that little or no information is lost. We estimate the signal density function by the kernel method (R function density with a normal kernel) [26, 27]. If the estimated density function has two or more modes, we refer to the highest one as the major mode and the others as minor modes. For simplicity, if there is only one mode, we also refer to it as the major mode. A mode cannot be μ_{0} if it is bigger than the overall median; otherwise it is a candidate for μ_{0}.
Specifically, we estimate μ_{0} by the following procedure.

1. If the major mode is smaller than the overall median, we take it as μ_{0}.

2. If the major mode is bigger than the overall median and there is one and only one minor mode in the 20th–50th percentile of the observed signal (we chose this range for robustness, as explained below), we take that minor mode as μ_{0}.

3. In all other situations, we make a conservative estimate of the mode location of the null distribution. Specifically, we iterate over all the signal strengths within the 20th–50th percentile (again, chosen for robustness, as explained below) and choose the greatest one such that the estimated null distribution lies below the overall distribution, i.e., π_{0}f_{0}(x) ≤ f_{obs}(x). In practice, if such a conservative estimate has to be made, the resulting lfdr is an upper bound rather than an unbiased estimate of the actual lfdr.
The major mode can be simply identified as the point with the highest density estimate. A minor mode can be identified as a point where the 1st derivative of the density function is 0 and the 2nd derivative is negative. We estimate the 1st and 2nd derivatives of the density function by Savitzky-Golay smoothing filters [28–30]. Because there are fewer observations at the tails of a density curve, the kernel estimates there may have larger variance. This variation could result in "small" modes at the tails that happen by chance. In order to avoid these potentially artifactual modes, we assume μ_{0} is within the 20th–50th percentile of the observed signal, which is equivalent to assuming the proportion of null signals is between 40% and 100%. This range is wide enough to accommodate the vast majority of ChIP experiments. For experiments with even smaller proportions of null signals, pattern recognition methods that capture ChIP-enriched signals in segments may be more appropriate [31].
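The mode-finding rule can be sketched as follows in Python. This is a simplified illustration, not the Mixer implementation: a fixed-bandwidth Gaussian KDE on a grid stands in for R's density(), steps 2 and 3 are collapsed into a single fallback to the mode within the 20th–50th percentile range, and the Savitzky-Golay derivative test is omitted.

```python
import numpy as np

def estimate_mu0(x, lo_pct=20, hi_pct=50, grid_size=512):
    """Estimate the center mu_0 of the symmetric null distribution.

    Take the major mode of a kernel density estimate if it lies below
    the overall median (step 1); otherwise fall back to the mode of the
    density restricted to the 20th-50th percentile range (a crude
    stand-in for steps 2-3 of the procedure in the text).
    """
    x = np.asarray(x, float)
    grid = np.linspace(x.min(), x.max(), grid_size)
    bw = 1.06 * x.std() * len(x) ** (-0.2)              # Silverman's rule
    dens = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / bw) ** 2).sum(axis=1)
    major = grid[np.argmax(dens)]                       # major mode
    if major <= np.median(x):
        return major                                    # step 1
    lo, hi = np.percentile(x, [lo_pct, hi_pct])
    mask = (grid >= lo) & (grid <= hi)                  # fallback range
    return grid[mask][np.argmax(dens[mask])]
```

On a mixture of a dominant null component and a smaller right-shifted enriched component, the returned value sits near the null center, since the enriched signals mostly affect the right tail.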
After identifying the mode of the null distribution (μ_{0}), and hence π_{0}, we take all the data points smaller than μ_{0}, denoted as D1, all the data points equal to μ_{0}, denoted as D2, and all the data points generated by flipping D1 around μ_{0}, denoted as D3, and merge them together (i.e., D = {D1, D2, D3}) to estimate the null distribution f_{0}(x) by the kernel method (R function density with a normal kernel) [26, 27]. Finally, the probe-level lfdr, i.e., the posterior probability that a probe-level signal arises from f_{0}(x), is

lfdr(x) = p_{0}(x) = π_{0}f_{0}(x)/f_{obs}(x),
where p_{0}(x) indicates the probability that x is from the null distribution. In practice, kernel estimation of density functions may be unreliable in the tail area due to the limited number of observations. As a result, the lfdr estimates fluctuate. To circumvent this problem, we order the values of x at which the lfdr is evaluated in ascending order, x_{(1)} ≤ x_{(2)} ≤ ... ≤ x_{(m)}, and update p_{0}(x_{(i)}) by

p_{0}(x_{(i)}) = max_{j ≥ i} p_{0}(x_{(j)}).
Therefore, the estimate of p_{0}(x) is smoothed and decreases or remains the same as x increases. A similar strategy has been used to define q-values from FDR estimates [32].
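The reflected-null construction and the monotone smoothing of p_0 can be sketched in Python as follows. The Gaussian KDE, the bandwidth choice, and the function name are our illustrative stand-ins for R's density() and the Mixer code.

```python
import numpy as np

def probe_lfdr(x, mu0):
    """Probe-level lfdr via the reflected-null construction.

    D1 = {x < mu0}, D2 = {x == mu0}, D3 = reflection of D1 about mu0;
    f_0 is estimated on D = D1 u D2 u D3, and the returned
    p_0(x) = pi_0 f_0(x) / f_obs(x) is forced to be non-increasing in x
    by a cumulative maximum taken from the right.
    """
    x = np.asarray(x, float)
    below = x[x < mu0]
    null_sample = np.concatenate([below, x[x == mu0], 2 * mu0 - below])
    pi0 = min(1.0, len(null_sample) / len(x))       # mu0 is the pi0/2 quantile
    bw = 1.06 * x.std() * len(x) ** (-0.2)

    def kde(train, query):
        z = (query[:, None] - train[None, :]) / bw
        return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(train) * bw * np.sqrt(2 * np.pi))

    p0 = np.clip(pi0 * kde(null_sample, x) / kde(x, x), 0.0, 1.0)
    order = np.argsort(x)
    p0[order] = np.maximum.accumulate(p0[order][::-1])[::-1]  # monotone smoothing
    return p0
```

Note that the monotone step is exactly the q-value-style smoothing described above: each p_0(x_(i)) is replaced by the maximum over all larger signals.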
Windowlevel analysis
The window-level signal strength X, which can be defined via the mean or median (or other robust estimates, for example, those used in [11]), is a function of the window size and the probe-level signals within the window. In this study, we assume the window size is predetermined. Let the probe-level signals within one window be x_{1}, x_{2}, ..., x_{n}; we calculate X as

X = (x̄ - μ_{0})/se(x̄),     (3)

where x̄ is the average of the probe-level signals and se(x̄) is the standard error of x̄ under the null distribution. In other words, X measures the distance between x̄ and μ_{0} in terms of the standard error se(x̄), which is generally bigger than √(var(x)/n) because there are autocorrelations between nearby probes even for background signals. We estimate se(x̄) by

se²(x̄) = [var(x)/n²][n + 2 Σ_{i<j} corr(x_{i}, x_{j})].     (4)
Because we estimate se(x̄) under the null distribution, it depends only on the number of probes in the window and the distances between them, not on the particular probe-level signals. The estimate in equation (4) has the same form as the one used by Zheng et al. [12]. However, based on the underlying assumption that the vast majority of the signals are from the null distribution, Zheng et al. used all the data below a threshold to estimate both var(x) and corr(x_{i}, x_{j}). In order to accommodate a relatively large proportion of ChIP-enriched signals, we use different approaches to estimate var(x) and corr(x_{i}, x_{j}). Specifically, we estimate var(x) using the data D = {D1, D2, D3} and estimate corr(x_{i}, x_{j}) as follows. We model the signal strength at probe j by

x_{j} = ω_{ij}x_{i} + e_{ij},     (5)
where ω_{ij} is the probability that there is no break-up of the DNA sequence between probes i and j, and e_{ij} indicates the signal strength at probe j due to the DNA segments not harboring probe i. x_{i} and x_{j} are measured based on a large number of sequence segments bound to probes i and j, respectively. Equation (5) can be understood as a summation of the contributions from all the sequence segments captured by probe j, from an expectation perspective. Since e_{ij} is independent of x_{i},

cov(x_{i}, x_{j}) = ω_{ij}var(x_{i}).
Because we are modeling the correlation structure of the background signals, var(x_{i}) = var(x_{j}) = var(x), and hence corr(x_{i}, x_{j}) = ω_{ij}. In order to estimate ω_{ij}, we model the sonication process by a Poisson point process [12]. Suppose that, on average, there is one break-up of the DNA sequence per k bp; then the incidence rate of the Poisson point process is λ = 1/k, and ω_{ij} = exp(-λd_{ij}), where d_{ij} indicates the distance between probes i and j. Therefore, given the parameter λ (or equivalently k), we can estimate ω_{ij}, hence corr(x_{i}, x_{j}), and then calculate the window-level statistic X. Usually, the parameter λ (or k) can be obtained from the experimental setting of the DNA sonication process. For sequencing studies, ω_{ij} can simply be estimated from the distribution of sequence fragment lengths [33].
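A minimal sketch of the window-level statistic, assuming the null variance var(x) and the fragment-length parameter k are given; names and signature are illustrative, not the Mixer API.

```python
import numpy as np

def window_statistic(x, pos, mu0, var_x, k=1000.0):
    """Window-level statistic X = (xbar - mu0) / se(xbar) under the null.

    var_x is the null variance of a single probe (estimated from the
    reflected null data D in the text) and k is the average fragment
    length in bp, so lambda = 1/k and corr(x_i, x_j) = exp(-lambda*d_ij)
    under the Poisson point process model. se(xbar) follows the
    correlation-adjusted variance of equation (4).
    """
    x = np.asarray(x, float)
    pos = np.asarray(pos, float)
    n = len(x)
    d = np.abs(pos[:, None] - pos[None, :])        # pairwise distances (bp)
    corr = np.exp(-d / k)                          # omega_ij
    var_xbar = var_x * corr.sum() / n ** 2         # var(x)/n^2 * sum_ij corr_ij
    return (x.mean() - mu0) / np.sqrt(var_xbar)
```

Note that corr.sum() equals n + 2Σ_{i<j} corr(x_i, x_j), since the diagonal terms are 1; widely separated probes contribute corr ≈ 0 and recover the usual var(x)/n, while tightly spaced probes inflate the standard error.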
Next, the window-level mixture distribution g_{obs}(X) = κ_{0}g_{0}(X) + κ_{1}g_{1}(X) can be dissected similarly to the probe-level data. Finally, the window-level lfdr, i.e., the posterior probability that a window-level statistic X is from the null distribution, is

lfdr(X) = q_{0}(X) = κ_{0}g_{0}(X)/g_{obs}(X),
where q_{0}(X) indicates the probability that X is from the null distribution. Similarly to the probe-level analysis, we smooth the lfdr by updating q_{0}(X_{(i)}) as

q_{0}(X_{(i)}) = max_{j ≥ i} q_{0}(X_{(j)}).
Here X_{(1)} ≤ X_{(2)} ≤ ... ≤ X_{(w)} are the window-level signals at which the lfdr is evaluated.
Peak Identification
After the probe-level and window-level analyses, we identify peaks by the following steps. First, "peak windows" with elevated signal strengths are identified using a window-level lfdr cutoff, e.g., lfdr ≤ 0.20. Second, overlapping "peak windows" are merged into discrete peak regions. Third, each resulting peak region is evaluated by further restrictions on the number of probes within it and the signal strengths of those probes. A typical rule could be "a peak region should harbor at least 5 probes", or "a peak region should harbor at least 3 probes with probe-level lfdr ≤ 0.2". The third step is optional but recommended, since "isolated peaks" composed of only one or two probes are unlikely to represent true sites of protein-DNA interaction. Similar rules have been used in other ChIP-chip data analysis methods [6, 12].
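The three steps can be sketched as follows (illustrative Python, not the Mixer API):

```python
def call_peaks(window_lfdr, window_centers, half_width=300,
               lfdr_cut=0.20, probe_pos=None, min_probes=5):
    """Peak identification in three steps.

    1) keep windows with lfdr <= lfdr_cut ("peak windows");
    2) merge overlapping peak windows into discrete peak regions;
    3) optionally require at least min_probes probes per region.
    """
    # Step 1: peak windows around probe centers
    windows = sorted((c - half_width, c + half_width)
                     for c, l in zip(window_centers, window_lfdr)
                     if l <= lfdr_cut)
    # Step 2: merge overlapping windows
    regions = []
    for start, end in windows:
        if regions and start <= regions[-1][1]:
            regions[-1][1] = max(regions[-1][1], end)
        else:
            regions.append([start, end])
    # Step 3: optional probe-count filter
    if probe_pos is not None:
        regions = [r for r in regions
                   if sum(r[0] <= p <= r[1] for p in probe_pos) >= min_probes]
    return [tuple(r) for r in regions]
```

For example, two significant windows centered at 1000 and 1200 bp with a 300 bp half-width overlap and merge into a single region (700, 1500).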
Results
We compared the results of our peak detection strategy with other published algorithms using three datasets. We focused on two common conditions that were typically not evaluated during the development of the existing peak detection algorithms: the absence of experimental replicates and the presence of abundant peak regions.
Spike-in Data
We initially evaluated our method using the data set from a recent spike-in study [21]. In this benchmark study comparing ChIP-chip conditions, human genomic DNA was combined with defined cloned regions ("spike-ins") over a wide range of concentrations to reflect the enrichment ratios often observed in ChIP experiments. The use of an experimental spike-in data set allows definitive knowledge of the regions that are enriched. Although multiple tiling array designs were tested, we analyzed the data generated from seven NimbleGen arrays, since the current implementation of our normalization method is for two-color arrays. The original data in "pair" format, which includes signals from both Cy3 and Cy5 channels, were downloaded from the NCBI GEO database. Four arrays (GEO sample accession numbers GSM254930, GSM254971, GSM254972, GSM254973) were hybridized to DNA spiked with specific unamplified fragments. The other three arrays (GSM254805, GSM254806, GSM254807) were hybridized to DNA spiked with fragments that had been amplified. Each array harbors 385,149 probes spanning 44 ENCODE-selected regions [34]. 100 and 98 regions were spiked in with unamplified and amplified DNA, respectively, at various concentrations from 1.25-fold to more than 100-fold. A complete description of these data can be found in Johnson et al. [21].
In the original data, the peak regions were sparse (covering ~0.2% of the total number of probes). We simulated data with increasingly abundant peak regions by replacing the signals from non-spike-in regions with the signals from spike-in regions. To better mimic the original data and more faithfully replicate the flanking contexts, we replicated each spike-in region (450–550 bp) including 500 bp on either side (or to the boundaries of the corresponding ENCODE regions) as a unit, which we refer to as a peak-containing region. Lengths of such peak-containing regions vary from 1,172 bp to 1,550 bp, with a median of 1,496 bp. We split the remaining non-peak-containing regions into 18,531 segments of 1,600 bp. We then used the peak-containing regions to replace (fractions of the same lengths of) randomly selected non-peak-containing segments. In the first augmented data set, we replicated each peak-containing region 20 times, resulting in 2,100/2,058 peak-containing regions (covering ~4.3% of the total number of probes) in the unamplified/amplified DNA samples, respectively. In the second augmented data set, we replicated each peak-containing region 50 times, resulting in 5,100/4,998 peak-containing regions (covering ~10.2% of the total number of probes) in the unamplified/amplified DNA samples, respectively.
Analysis of Spike-in Data
Using the native and augmented spike-in data, we compared the efficacy of our peak detection method, which we named Mixer, with three other methods: MA2C, TileMap, and HGMM. These methods were selected because they are frequently used and/or they also aim to dissect the mixture distributions of ChIP-chip data. BAC by Gottardo et al. [13] was not compared, as it requires experimental replicates. Mpeak by Zheng et al. [12] was also not compared, because Mpeak assumes that peaks have triangular shapes, whereas the signals from spike-in regions exhibit rectangular patterns.
We used the Java version of MA2C software with the default normalization option ("robust with C = 2"). Other options led to similar or inferior results (data not shown). After normalization, the median was used by MA2C to identify peak regions with a bandwidth (halfwidth of the sliding window) of 300 bp and at least 5 probes per peak region. A bandwidth of 300 bp was chosen based on the lengths of the spikein regions. Other bandwidths (500 bp or 200 bp) produced inferior results (data not shown).
For the implementation of Mixer, as with MA2C, we used "half-width of the sliding window of 300 bp with at least 5 probes" as the criterion to select peak regions. We set the average sonicated sequence length to 1000 bp (i.e., λ = 1/1000) to estimate the correlation between nearby probes. Substituting values from 500 bp to 1500 bp did not significantly change the results. To demonstrate the difference between Lowess and MA2C normalization, we tested Mixer with data normalized by both methods.
We employed CisGenome [35] for the TileMap calculation. Log_{2}-transformed data were pre-normalized using the quantile normalization option in CisGenome. TileMap summarizes window-level signals by either moving average or an HMM. The significance of each peak is measured by an lfdr estimated from unbalanced mixture subtraction (UMS). We used the HMM because it yields superior results in terms of higher power given an lfdr cutoff. Two parameters (p and q) must be provided to UMS to select probes from the overall distribution to construct the alternative and null distributions (probes above the 100(1 - p)th percentile and below the 100(1 - q)th percentile, respectively). We used either the default values (p = 0.01 and q = 0.05) or adjusted values based on knowledge of the true proportion of spike-in signals. Specifically, we set p = 0.002 and q = 0.02 for the original data with ~0.2% of spike-in probes; p = 0.03 and q = 0.08 when ~4.3% of the probes are from spike-ins; and p = 0.08 and q = 0.13 when ~10.2% of the probes are from spike-ins.
The R package R/HGMM was used for HGMM calculation. HGMM can take into account a distribution of peak sizes. We generated this distribution based on the actual lengths of the spikein regions. In most experiments, however, this information can only be estimated. Raw data (PM measure from pair file) were log_{2} transformed and normalized using the preprocess function of R/HGMM before applying the HGMM function.
We examined the influence of the proportion of null signals on Mixer's performance. Figure 2 shows the estimated densities of probe- and window-level signals from the original dataset and the two simulated datasets for one array. As the number of spike-in regions increases, the right tail of the window-level signal density becomes heavier. The increased signal density enhances the accuracy and robustness of dissecting the mixture distribution. Similar patterns were also observed for the other arrays.
We then evaluated Mixer, MA2C, TileMap and HGMM using the spike-in data. First, given a fixed cutoff of either FDR ≤ 0.20 (for MA2C) or lfdr ≤ 0.20 (for the other methods), we compared the power and actual FDR of these methods (Tables 1, 2 and 3). The discovery of a peak region was counted as a true discovery (or a true positive) if its center was within a spike-in region; otherwise it was counted as a false discovery. Although an alternative comparison would examine the top K peaks identified by different methods, we based our comparison on fixed lfdr/FDR. This approach is more relevant, since the number of binding sites is typically unknown.
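This scoring rule can be written as a small helper (hypothetical, for illustration; not part of the Mixer package):

```python
def score_peaks(peaks, spike_ins):
    """Score peak calls against known spike-in regions.

    A peak (start, end) counts as a true discovery if its center lies
    inside any spike-in region, otherwise as a false discovery; the
    empirical FDR is fp / (tp + fp).
    """
    tp = fp = 0
    for start, end in peaks:
        center = (start + end) / 2.0
        if any(s <= center <= e for s, e in spike_ins):
            tp += 1
        else:
            fp += 1
    fdr = fp / (tp + fp) if (tp + fp) else 0.0
    return tp, fp, fdr
```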
We compared the results of Mixer after data normalization by Lowess or by MA2C. For the original data, where the spike-in regions are sparse, Mixer generally performs much better with Lowess normalization than with MA2C normalization; with MA2C normalization, Mixer often includes many false discoveries, resulting in a high FDR (see Table 1). As spike-in regions become more abundant, the normalization method makes less difference (Tables 2, 3). Dissection of the mixture distribution becomes easier with additional data to estimate the alternative distribution, which may overcome the differences attributable to the normalization methods.
We then compared the performance of the peak detection algorithms on the original and augmented data sets. HGMM was computationally intensive, requiring more than 10 hours to analyze one array. In contrast, the other methods we tested completed the analysis of a single array in less than 10 minutes. With the original data (i.e., no replicates and a small proportion of spike-in regions), HGMM failed for four arrays due to errors in numerical optimization. Although the use of initial values other than the defaults may avoid such errors, we did not explore this due to the high computational cost. In the augmented data sets (with a larger proportion of spike-in regions), HGMM did not fail for any array. However, HGMM was often over-conservative, missing 30–50% of the spike-in regions (Tables 2, 3).
At the default parameters of p = 0.01 and q = 0.05 (i.e., using the top 1% of the data to estimate the alternative distribution and 95% of the data to estimate the null distribution), TileMap was over-conservative and had limited power, especially when the proportion of spike-in regions is high. TileMap performed much better when provided appropriate values for the parameters p and q based on the true proportion of the alternative distribution (Tables 1, 2 and 3). However, in actual applications, the alternative distribution is typically unknown. For example, for amplified DNA samples with 4,998 (~10.2%) spike-in regions, at lfdr smaller than 0.2, TileMap identifies ~70–80% of the spike-in regions if p = 0.08 and q = 0.13, but only ~60% of the spike-in regions with the default parameters, p = 0.01 and q = 0.05.
Both Mixer and MA2C have better power than TileMap and HGMM. As shown in Tables 1, 2 and 3, Mixer has a lower FDR than MA2C for the original data with sparse spike-in regions and slightly better power than MA2C with abundant spike-in regions. However, a straightforward comparison between Mixer and MA2C is confounded by the fact that, unlike the other methods, MA2C provides FDR estimates rather than lfdr estimates. Since lfdr and FDR cutoffs are not directly comparable, we employed ROC (receiver operating characteristic)-like curves to compare Mixer and MA2C (Figure 3). Unlike a typical ROC curve, these ROC-like curves plot (number of true positives)/(number of spike-in clones) on the Y-axis against (number of false positives)/(number of spike-in clones) on the X-axis in order to accommodate the large number of true negatives in ChIP-chip data [21]. To simplify the plots, we averaged across samples for amplified and unamplified DNA, respectively. FDR and lfdr cutoffs ranged from 0.01 to 0.50. Mixer outperformed MA2C when the spike-in regions were abundant (Figure 3). However, when the spike-in regions were sparse, MA2C outperformed Mixer if an appropriate FDR cutoff was chosen.
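The ROC-like curves described above can be traced out directly from a ranked peak list. In the minimal sketch below (names are hypothetical), both axes are scaled by the number of spike-in regions rather than by the number of true negatives, exactly as in the plotting convention above:

```python
def roc_like_points(scores, is_spikein):
    """Return (FP/n_spikein, TP/n_spikein) points for an ROC-like curve.

    scores     -- per-region confidence scores (higher = more confident)
    is_spikein -- matching booleans: True if the region is a spike-in
    """
    n_spike = sum(is_spikein)
    # Walk down the list ranked from most to least confident.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    points = []
    for i in order:
        if is_spikein[i]:
            tp += 1
        else:
            fp += 1
        # Both axes normalized by the spike-in count, not by negatives.
        points.append((fp / n_spike, tp / n_spike))
    return points
```

Sweeping a cutoff down the ranked list in this way is what makes methods reporting FDR (MA2C) and methods reporting lfdr (Mixer, TileMap) comparable on one plot.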
Analysis of CTCF-binding Data
We also evaluated our method using the ChIP-chip data from a study of the zinc finger insulator protein CTCF (CCCTC-binding factor) in IMR90 human fibroblast cells [36]. This dataset includes 38 arrays, each with about 38,500 50-mer probes tiling the non-repetitive sequences of the human genome at 100-bp resolution. The original pair data (pair data include the intensities for two channels, Cy5 (CTCF ChIP sample) and Cy3 (input genomic DNA)) were obtained from the Ren laboratory website http://bioinformaticsrenlab.ucsd.edu/rentrac/wiki/CTCF_Project. Each of the 38 arrays was analyzed separately. The results of different peak-finding algorithms were compared to the results of an independent ChIP-seq based analysis that identified 20,262 CTCF binding sites in human CD4^{+} T cells [37].
HGMM was not evaluated due to its high computational cost. Model parameters were similar to those described above. For TileMap, window-level signals were summarized by HMM, and the lfdr of each peak region was estimated by unbalanced mixture subtraction (UMS) with default parameters (p = 0.01 and q = 0.05). For MA2C, default options were used to normalize the data (robust with C = 2) and to summarize window-level signals (by median). In Mixer, the average DNA fragment length was set to 1500 bp (T. Kim, personal communication).
Although the true CTCF binding sites are unknown, to permit a systematic evaluation of the various peak detection strategies, we compared the peak regions identified by each method with the 20,262 CTCF binding sites reported from a ChIP-seq study by Barski et al. [37]. Since experimental variation will likely result in differences between ChIP-chip and ChIP-seq data, the ChIP-seq data serve as a common and independent source for comparison rather than a perfect standard. A common site was called when the center of the ChIP-chip peak was located within the ChIP-seq peak. Without knowledge of all true CTCF binding sites, we are unable to compare FDRs as we had done for the spike-in data. Therefore, we examined a fixed number of high-confidence peak regions and compared the proportion of overlap. Specifically, we examined the overlap between the ChIP-seq reported sites and the 5,000, 10,000, or 20,000 peak regions with the highest confidence (lowest FDR or lfdr) identified by each peak detection algorithm. Peaks identified by Mixer consistently demonstrate a greater overlap with ChIP-seq peaks than those identified by MA2C and TileMap (Table 4).
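The common-site rule used above (the center of a ChIP-chip peak falling inside a ChIP-seq peak) can be sketched as an interval lookup. In this illustrative version (hypothetical names; not the code used in the paper), `seq_peaks` is assumed sorted by start position and non-overlapping:

```python
import bisect

def count_common_sites(chip_peaks, seq_peaks):
    """Count ChIP-chip peaks whose center lies within a ChIP-seq peak.

    chip_peaks -- list of (start, end) intervals
    seq_peaks  -- list of (start, end) intervals, sorted by start,
                  assumed non-overlapping
    """
    starts = [s for s, _ in seq_peaks]
    n = 0
    for s, e in chip_peaks:
        center = (s + e) / 2.0
        # Rightmost ChIP-seq peak starting at or before the center.
        i = bisect.bisect_right(starts, center) - 1
        if i >= 0 and seq_peaks[i][0] <= center <= seq_peaks[i][1]:
            n += 1
    return n
```

Binary search keeps the comparison fast even against the full list of 20,262 reported sites.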
Analysis of FAIRE Data
We also compared Mixer, MA2C, and TileMap on array data produced by hybridization of DNA enriched by Formaldehyde-Assisted Isolation of Regulatory Elements (FAIRE) [19, 38]. Briefly, FAIRE identifies open chromatin regions using organic extraction of formaldehyde cross-linked chromatin. DNA recovered in the aqueous phase is fluorescently labeled and hybridized to arrays. FAIRE typifies the data from epigenetic studies where relevant features are expected to be abundant genome-wide. FAIRE-chip thus provides an appropriate application for Mixer. For this analysis, FAIRE was performed on chromatin isolated from human foreskin fibroblasts and hybridized to a 1% ENCODE tiling array at 38-bp resolution [19].
Four arrays hybridized with FAIRE-selected chromatin were normalized individually. After averaging identical probes across the arrays, Mixer was applied. MA2C and TileMap were run using their default options for replicate analysis. Since hypersensitivity to endonucleases is a standard method to identify open chromatin regions, we compared the results with 3,150 open chromatin regions identified by DNase I hypersensitivity-chip in lymphoblastoid cell lines [39, 40]. The FAIRE regions identified by each of the three methods share ~40% overlap with DNase sites, indicating similar specificities for the various methods. Since different techniques and different cell lines are compared, this overlap likely represents an underestimate of specificity. However, Mixer offers increased sensitivity, as it identifies more peaks (especially peaks with relatively weaker signals) at the same specificity. At a local FDR (for Mixer or TileMap) or FDR (for MA2C) cutoff of 0.2, Mixer identifies 1137 peaks (42.1% overlap with DNase hypersensitivity sites), whereas MA2C identifies 750 sites (43.3% overlap) and TileMap identifies 1114 sites (40.3% overlap). At a local FDR/FDR cutoff of 0.5, Mixer identifies 1559 peaks (40.3% overlap); MA2C identifies 1175 (39.7% overlap); and TileMap identifies 1202 (39.7% overlap).
A local FDR cutoff of 0.5 is a much more stringent criterion than an FDR cutoff of 0.5. The former means that the highest local FDR of any single reported peak region is 0.5, whereas the latter indicates that the average FDR across all reported regions is 0.5. Averaging the local FDRs below 0.5 yields an estimated FDR for Mixer or TileMap of less than 0.15. Because it uses a less stringent FDR cutoff, MA2C would be expected to identify more peaks. The fact that MA2C actually identifies fewer peaks suggests the introduction of bias by MA2C normalization. To test this hypothesis, we supplied MA2C with Mixer-normalized data and observed a significant improvement in its sensitivity; 1,483 peaks (~40% overlapping DNase sites) were identified at FDR less than 0.20, still fewer than the 1,559 peaks identified by Mixer with an estimated FDR of less than 0.15.
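The relation between the two cutoffs can be made concrete: the estimated FDR of a reported peak list is the average of the local FDRs of the peaks it contains, so a local FDR cutoff of 0.5 implies an average FDR well below 0.5. A minimal sketch, with hypothetical names:

```python
def fdr_from_lfdr(lfdrs, cutoff):
    """Estimated FDR of the list retained at a local-FDR cutoff.

    Keeping every region with lfdr <= cutoff, the expected proportion
    of false discoveries in the kept list is the mean of their lfdrs,
    which is necessarily <= cutoff.
    """
    kept = [x for x in lfdrs if x <= cutoff]
    return sum(kept) / len(kept) if kept else 0.0
```

For example, a kept list with local FDRs 0.1, 0.2, 0.3, and 0.5 passes a local FDR cutoff of 0.5 but has an estimated FDR of only 0.275, which is why Mixer's 0.5 lfdr cutoff corresponds to an estimated FDR below 0.15 in the comparison above.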
Discussion
We have developed a mixture model approach to dissect the mixture distributions of ChIP-chip data into a null distribution (corresponding to the background signals) and an alternative distribution (corresponding to the ChIP-enriched signals), at both the probe and window levels. This approach builds on the method of Buck et al. [5] to estimate the null (background) distribution of ChIP-chip signal data and utilizes the Poisson point process assumption proposed by Zheng et al. [12] to model DNA fragmentation. As an advance over most existing peak detection strategies, our approach is less dependent on key assumptions and prior knowledge. Our method takes into account the autocorrelation structure of nearby probes, permits a relatively large proportion of ChIP-enriched signals in the mixture distribution, and does not require cross-array normalization. After dissecting the mixture distribution, both probe-level and window-level lfdrs are provided to evaluate the statistical significance of the identified peaks. Using three data sets representing widely divergent experimental conditions, we demonstrated that our method performs comparably to or better than several representative existing methods, especially when the true peak regions are abundant. Our method also applies a Lowess fit for data normalization to capture the nonlinear relationship between log(Cy3) and log(Cy5) signals from two-color arrays. Mixer emphasizes the identification of abundant short peak regions rather than extended binding regions; we have recently developed a different method to identify broad signal patterns [31].
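At its core, the probe- or window-level lfdr is the posterior probability that an observed signal arose from the null component of the mixture: lfdr(x) = pi0 f0(x) / (pi0 f0(x) + (1 - pi0) f1(x)). The sketch below evaluates this for an illustrative two-Gaussian mixture; Mixer itself estimates the null component from the data rather than assuming parametric forms, so the densities, parameter values, and names here are purely assumptions for illustration:

```python
import math

def dnorm(x, mu, sd):
    """Gaussian density (stand-in for the fitted component densities)."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def lfdr(x, pi0=0.9, mu0=0.0, sd0=1.0, mu1=3.0, sd1=1.0):
    """Local FDR: posterior probability that signal x is background.

    pi0 is the null (background) mixing proportion; (mu0, sd0) and
    (mu1, sd1) parameterize the illustrative null and alternative
    components, respectively.
    """
    null = pi0 * dnorm(x, mu0, sd0)          # weighted null density
    alt = (1.0 - pi0) * dnorm(x, mu1, sd1)   # weighted enriched density
    return null / (null + alt)
```

Signals near the background mode receive lfdr near 1, while strong ChIP-enrichment signals receive lfdr near 0, which is the quantity thresholded when calling peaks.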
Despite Mixer's advances, areas for improved performance remain. We smooth the lfdr estimate so that it decreases as probe-level/window-level signals increase. This smoothing strategy avoids major fluctuations of lfdr estimates when observations are limited (e.g., in tail areas). A similar strategy has been used to define the q-value from FDR estimates [32]. However, smoothing may lead to underestimates of the lfdr, especially for small lfdr values. To improve the lfdr estimates, both signal strength and signal pattern (for example, the "triangle" pattern used by Zheng et al. [12]) could be incorporated, a strategy we are currently evaluating.
The use of high-throughput sequencing-based chromatin identification (ChIP-seq) has become increasingly common. However, determination of sufficient sequencing depth remains a significant challenge, especially for abundant epigenetic events. ChIP-chip remains a valuable method for pilot experiments and for cross-validating results, a particularly appropriate application of Mixer. Mixer could also be adapted to dissect mixture distributions from sequencing data. Tag counts derived from an unfractionated input control could model a null distribution [41]. We are currently testing this approach.
Conclusion
In summary, we have developed a method that combines improved data normalization and peak detection for ChIP-chip studies. Mixer offers several advantages, including lfdr determination and enhanced performance when peak regions are abundant, a common scenario for genome-wide studies of chromatin organization and epigenetics [4, 19, 20].
Availability and requirements
We have implemented our method in an R package mixer, which can be freely downloaded from http://www.bios.unc.edu/~wsun/software/mixer.htm. The source code can be redistributed and/or modified under the terms of the GNU General Public License as published by the Free Software Foundation.
References
Ren B, Robert F, Wyrick JJ, Aparicio O, Jennings EG, Simon I, Zeitlinger J, Schreiber J, Hannett N, Kanin E, et al.: Genome-wide location and function of DNA binding proteins. Science 2000, 290(5500):2306–2309. 10.1126/science.290.5500.2306
Lieb JD, Liu X, Botstein D, Brown PO: Promoter-specific binding of Rap1 revealed by genome-wide maps of protein-DNA association. Nat Genet 2001, 28(4):327–334. 10.1038/ng569
Cawley S, Bekiranov S, Ng HH, Kapranov P, Sekinger EA, Kampa D, Piccolboni A, Sementchenko V, Cheng J, Williams AJ, et al.: Unbiased mapping of transcription factor binding sites along human chromosomes 21 and 22 points to widespread regulation of noncoding RNAs. Cell 2004, 116(4):499–509. 10.1016/S0092-8674(04)00127-8
Kim TH, Barrera LO, Zheng M, Qu C, Singer MA, Richmond TA, Wu Y, Green RD, Ren B: A high-resolution map of active promoters in the human genome. Nature 2005, 436(7052):876–880. 10.1038/nature03877
Buck MJ, Nobel AB, Lieb JD: ChIPOTle: a user-friendly tool for the analysis of ChIP-chip data. Genome Biol 2005, 6(11):R97. 10.1186/gb-2005-6-11-r97
Ji H, Wong WH: TileMap: create chromosomal map of tiling array hybridizations. Bioinformatics 2005, 21(18):3629–3636. 10.1093/bioinformatics/bti593
Li W, Meyer CA, Liu XS: A hidden Markov model for analyzing ChIP-chip experiments on genome tiling arrays and its application to p53 binding sequences. Bioinformatics 2005, 21(Suppl 1):i274–282. 10.1093/bioinformatics/bti1046
Johnson WE, Li W, Meyer CA, Gottardo R, Carroll JS, Brown M, Liu XS: Model-based analysis of tiling-arrays for ChIP-chip. Proc Natl Acad Sci USA 2006, 103(33):12457–12462. 10.1073/pnas.0601180103
Keles S, Laan MJ, Dudoit S, Cawley SE: Multiple testing methods for ChIP-Chip high density oligonucleotide array data. J Comput Biol 2006, 13(3):579–613. 10.1089/cmb.2006.13.579
Keles S: Mixture modeling for genome-wide localization of transcription factors. Biometrics 2007, 63(1):10–21. 10.1111/j.1541-0420.2005.00659.x
Song JS, Johnson WE, Zhu X, Zhang X, Li W, Manrai AK, Liu JS, Chen R, Liu XS: Model-based Analysis of 2-Color Arrays (MA2C). Genome Biol 2007, 8(8):R178. 10.1186/gb-2007-8-8-r178
Zheng M, Barrera LO, Ren B, Wu YN: ChIP-chip: data, model, and analysis. Biometrics 2007, 63(3):787–796. 10.1111/j.1541-0420.2007.00768.x
Gottardo R, Li W, Johnson WE, Liu XS: A flexible and powerful Bayesian hierarchical model for ChIP-Chip experiments. Biometrics 2008, 64(2):468–478. 10.1111/j.1541-0420.2007.00899.x
Benjamini Y, Hochberg Y: Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Ser B 1995, 57: 289–300.
Efron B, Tibshirani R, Storey J, Tusher V: Empirical Bayes analysis of a microarray experiment. Journal of the American Statistical Association 2001, 96: 1151–1160. 10.1198/016214501753382129
Newton MA, Noueiry A, Sarkar D, Ahlquist P: Detecting differential gene expression with a semiparametric hierarchical mixture method. Biostatistics 2004, 5(2):155–176. 10.1093/biostatistics/5.2.155
Newton MA, Kendziorski CM, Richmond CS, Blattner FR, Tsui KW: On differential variability of expression ratios: improving statistical inference about gene expression changes from microarray data. J Comput Biol 2001, 8(1):37–52. 10.1089/106652701300099074
Mardis ER: ChIP-seq: welcome to the new frontier. Nat Methods 2007, 4(8):613–614. 10.1038/nmeth0807-613
Giresi PG, Kim J, McDaniell RM, Iyer VR, Lieb JD: FAIRE (Formaldehyde-Assisted Isolation of Regulatory Elements) isolates active regulatory elements from human chromatin. Genome Res 2007, 17(6):877–885. 10.1101/gr.5533506
Wang Z, Zang C, Rosenfeld JA, Schones DE, Barski A, Cuddapah S, Cui K, Roh TY, Peng W, Zhang MQ, et al.: Combinatorial patterns of histone acetylations and methylations in the human genome. Nat Genet 2008, 40(7):897–903. 10.1038/ng.154
Johnson DS, Li W, Gordon DB, Bhattacharjee A, Curry B, Ghosh J, Brizuela L, Carroll JS, Brown M, Flicek P, et al.: Systematic evaluation of variability in ChIP-chip experiments using predefined DNA targets. Genome Res 2008, 18(3):393–403. 10.1101/gr.7080508
Berger JA, Hautaniemi S, Jarvinen AK, Edgren H, Mitra SK, Astola J: Optimized LOWESS normalization parameter selection for DNA microarray data. BMC Bioinformatics 2004, 5: 194. 10.1186/1471-2105-5-194
Workman C, Jensen LJ, Jarmer H, Berka R, Gautier L, Nielser HB, Saxild HH, Nielsen C, Brunak S, Knudsen S: A new non-linear normalization method for reducing variability in DNA microarray experiments. Genome Biol 2002, 3(9):research0048. 10.1186/gb-2002-3-9-research0048
Yang YH, Dudoit S, Luu P, Lin DM, Peng V, Ngai J, Speed TP: Normalization for cDNA microarray data: a robust composite method addressing single and multiple slide systematic variation. Nucleic Acids Res 2002, 30(4):e15. 10.1093/nar/30.4.e15
Buck MJ, Lieb JD: ChIP-chip: considerations for the design, analysis, and application of genome-wide chromatin immunoprecipitation experiments. Genomics 2004, 83(3):349–360. 10.1016/j.ygeno.2003.11.004
R Development Core Team: R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2007. [http://www.R-project.org]
Silverman BW: Density Estimation. London: Chapman and Hall; 1986.
Savitzky A, Golay MJE: Smoothing and Differentiation of Data by Simplified Least Squares Procedures. Anal Chem 1964, 36(8):1627–1639. 10.1021/ac60214a047
Steinier J, Termonia Y, Deltour J: Smoothing and differentiation of data by simplified least square procedure. Anal Chem 1972, 44(11):1906–1909. 10.1021/ac60319a045
Press WH, Flannery BP, Teukolsky SA, Vetterling WT: Numerical Recipes in C: The Art of Scientific Computing. 2nd edition. New York, NY: Cambridge University Press; 1992.
Sun W, Xie W, Xu F, Grunstein M, Li KC: Dissect nucleosome free regions by a segmental semi-Markov model. PLoS ONE 2009, 4(3):e4721. 10.1371/journal.pone.0004721
Storey JD, Tibshirani R: Statistical significance for genome-wide studies. Proc Natl Acad Sci USA 2003, 100(16):9440–9445. 10.1073/pnas.1530509100
Fejes AP, Robertson G, Bilenky M, Varhol R, Bainbridge M, Jones SJ: FindPeaks 3.1: a tool for identifying areas of enrichment from massively parallel short-read sequencing technology. Bioinformatics 2008, 24(15):1729–1730. 10.1093/bioinformatics/btn305
The ENCODE (ENCyclopedia Of DNA Elements) Project. Science 2004, 306(5696):636–640. 10.1126/science.1105136
Ji H, Jiang H, Ma W, Johnson DS, Myers RM, Wong WH: An integrated software system for analyzing ChIP-chip and ChIP-seq data. Nat Biotechnol 2008, 26(11):1293–1300. 10.1038/nbt.1505
Kim TH, Abdullaev ZK, Smith AD, Ching KA, Loukinov DI, Green RD, Zhang MQ, Lobanenkov VV, Ren B: Analysis of the vertebrate insulator protein CTCF-binding sites in the human genome. Cell 2007, 128(6):1231–1245. 10.1016/j.cell.2006.12.048
Barski A, Cuddapah S, Cui K, Roh TY, Schones DE, Wang Z, Wei G, Chepelev I, Zhao K: High-Resolution Profiling of Histone Methylations in the Human Genome. Cell 2007, 129(4):823–837. 10.1016/j.cell.2007.05.009
Giresi PG, Lieb JD: Isolation of active regulatory elements from eukaryotic chromatin using FAIRE (Formaldehyde Assisted Isolation of Regulatory Elements). Methods 2009, in press.
Crawford GE, Davis S, Scacheri PC, Renaud G, Halawi MJ, Erdos MR, Green R, Meltzer PS, Wolfsberg TG, Collins FS: DNase-chip: a high-resolution method to identify DNase I hypersensitive sites using tiled microarrays. Nat Methods 2006, 3(7):503–509. 10.1038/nmeth888
Sabo PJ, Kuehn MS, Thurman R, Johnson BE, Johnson EM, Cao H, Yu M, Rosenzweig E, Goldy J, Haydock A, et al.: Genome-scale mapping of DNase I sensitivity in vivo using tiling DNA microarrays. Nat Methods 2006, 3(7):511–518. 10.1038/nmeth890
Rozowsky J, Euskirchen G, Auerbach RK, Zhang ZD, Gibson T, Bjornson R, Carriero N, Snyder M, Gerstein MB: PeakSeq enables systematic scoring of ChIP-seq experiments relative to controls. Nat Biotechnol 2009, 27(1):66–75. 10.1038/nbt.1518
Acknowledgements
We thank Paul G. Giresi and Jason D. Lieb for providing the FAIRE data. WS is supported, in part, by a United States Environmental Protection Agency grant (RD833825). However, the research described in this article has not been subjected to the Agency's peer review and policy review and therefore does not necessarily reflect the views of the Agency, and no official endorsement should be inferred. IJD is supported in part by the National Cancer Institute (K08CA100400), the V Foundation for Cancer Research, the Rita Allen Foundation, and the Corn-Hammond Fund for Pediatric Oncology.
Author information
Additional information
Authors' contributions
All authors have read and approved the final manuscript. WS, IJD and MJB conceived this study. WS implemented the methods and analyzed the data. WS, IJD, MJB and MP wrote the paper.
Electronic supplementary material
Additional File 1: Supplementary Materials for "Improved ChIP-chip analysis by mixture model approach". Supplementary results demonstrating different data normalization methods. (PDF 1 MB)
Rights and permissions
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Sun, W., Buck, M.J., Patel, M. et al. Improved ChIP-chip analysis by a mixture model approach. BMC Bioinformatics 10, 173 (2009). https://doi.org/10.1186/1471-2105-10-173
DOI: https://doi.org/10.1186/1471-2105-10-173
Keywords
 False Discovery Rate
 Null Distribution
 Mixture Distribution
 Peak Region
 Poisson Point Process