Differential expression analysis for paired RNA-seq data

Background: RNA-Seq technology measures transcript abundance by generating sequence reads and counting their frequencies across different biological conditions. To identify differentially expressed genes between two conditions, it is important to consider both the experimental design and the distributional properties of the data. In many RNA-Seq studies, the expression data are obtained as multiple pairs, e.g., pre- vs. post-treatment samples from the same individual. We seek to incorporate this paired structure into the analysis.

Results: We present a Bayesian hierarchical mixture model for RNA-Seq data that separately accounts for the variability within and between individuals in a paired data structure. The method assumes a Poisson distribution for the data, mixed with a gamma distribution to account for variability between pairs. The effect of differential expression is modeled by a two-component mixture model. The performance of this approach is examined on simulated and real data.

Conclusions: In this setting, our proposed model provides higher sensitivity than existing methods to detect differential expression. Application to real RNA-Seq data demonstrates the usefulness of this method for detecting expression alterations in genes with low average expression levels or short transcript length.


Background
Gene expression profiles are routinely collected to identify differentially expressed genes and pathways across various individuals and cellular states. Sequencing-based technologies offer more accurate quantification of expression levels compared to other technologies. Early sequence-based expression studies measured transcript abundance by counting short segments, known as tags, generated from the 3' end of a transcript. Tag-based methods include the Serial Analysis of Gene Expression (SAGE, [1]), Cap Analysis of Gene Expression (CAGE), LongSAGE, and massively parallel signature sequencing (MPSS). The development of deep sequencing technology enables simultaneous sequencing of millions of molecules and has led to advanced approaches for expression measurement [2,3]. Digital gene expression tag profiling [4] adapted the tag-based approach for use with the 'next-generation' sequencing platform. RNA-Seq is an alternative approach that is an application of 'whole genome shotgun sequencing'. Briefly, it entails generating a cDNA library by random priming off of fragmented RNA. The cDNA library is then subject to next-generation sequencing to generate short nucleotide sequences (reads) that correspond to the ends of the cDNA fragments. RNA-Seq aims to measure the entire transcriptome and is preferable to microarrays and tag-based approaches since it provides more information, such as alternative splicing and isoform-specific gene expression, with very low background signal and a wider dynamic range of quantification [5]. Moreover, recent experiments revealed that RNA-Seq measures expression levels with high accuracy and reproducibility [6][7][8][9].
Sequence-based approaches quantify gene expression as a 'digital' count and require modeling suitable for a count random variable. The Poisson distribution has been central in modelling expression data [10][11][12] and commonly applied to RNA-Seq data [6,13]. In particular, Li et al. (2012) proposed a permutation-based approach to generate the null distribution [14]. However, Poisson-based approaches may not account for all the variation between biological samples. The Beta-Binomial hierarchical model [15,16], overdispersed logistic [17], and overdispersed log-linear models [18] were proposed to capture extra variance for each gene separately. Negative Binomial models have been proposed to estimate the overdispersion parameter by shrinkage estimation [19][20][21], mean-dependent local regression [22], or an empirically derived prior distribution [23]. Alternatively, beta-binomial [24] and Poisson mixture [25] models were proposed under the Bayesian modeling framework. A nonparametric method with resampling was also considered [26]. These approaches generally assume that samples under the two groups are obtained independently. Recently, some of these approaches have been extended to deal with multi-factor design structures [14,16,21,22].
Many practical RNA sequencing studies collect data with a paired structure, where the global expression profiles are measured before and after a treatment is applied to the same individual. Appropriate modeling of such data requires taking this design structure, as well as the distributional properties of the data, into account. The Poisson model has been used to test the effect of drugs when the observations occur as paired data, such as predrug and postdrug counts [27]. Lee [28] considered a mixture model to account for extra variance among individuals over the level that would be expected under the Poisson model. These approaches assume independence of the paired observations conditional on the individual mean. Bivariate Poisson or negative binomial distributions are alternative choices to model correlations between observations [29,30].
In this paper, we propose a Bayesian hierarchical approach to modeling paired count data that separately accounts for the within- and between-individual variability arising from a paired data structure. Our work adopts the Poisson-Gamma mixture model [28] and utilizes a Bayesian approach to evaluate the expression difference. We note that Bayesian models are widely utilized in microarray studies and have improved sensitivity to detect differential gene expression by sharing information among genes [31]. Mixture models are also commonly used to model differential expression, where non-differentially expressed and differentially expressed genes correspond to different mixture components. Various mixture model specifications have been considered in the literature. The gamma and log-normal distributions were used to model the expression levels [32,33]. Smyth [34] assumed a point mass at zero for the log-scaled fold change of null genes and a normal distribution centered at zero for non-null genes. Lonnstedt et al. [35] and Gottardo et al. [36] proposed a mixture of two (null and non-null) or three (null, over-, and under-expression) normal distributions. Non-parametric approaches have also been utilized [31,37]. Lewin et al. [38] discussed various choices of mixture component priors and model checking.
The rest of this manuscript is organized as follows. The Data Section introduces the biological problem and data that motivated this study. The Methods Section presents our parametric model and the Bayesian method to identify genes with differential expression levels. The performance of the proposed model is examined by simulations. Two sets of simulation studies are conducted: (1) those based on the model assumptions, to investigate the accuracy of the proposed method in parameter estimation, and (2) those mimicking the motivating data set, to examine the robustness of the proposed method. Finally, the proposed method is applied to real data with a detailed discussion of the results and comparisons with other methods.

Data
Qian et al. (Qian F. et al.: Identification of genes critical for resistance to infection by West Nile virus using RNA-Seq analysis, submitted) designed an RNA-Seq experiment to study human West Nile virus (WNV) infection. One objective of this study was to identify altered genes/transcripts from viral infection of primary human macrophages in comparison to uninfected samples. This study naturally has a paired design structure. A total of 10 healthy donors were recruited according to the guidelines of the human research protection program of Yale University, and cells were isolated from fresh heparinized blood samples for infection with WNV (strain CT 2741, MOI=1, for 24 hours) as described previously [39]. PolyA+ RNA was prepared from uninfected and WNV-infected primary macrophages, fragmented, and subjected to sequencing using the Illumina Genome Analyzer 2. Approximately 50 million quality-filtered reads were obtained from each sample, and about 85% were mapped to the human transcriptome (hg19) with ENSEMBL transcript annotations (Release 57) using TOPHAT v.1.1.4 [40]. Genes and transcript isoforms were scored for expression by a maximum likelihood based method implemented in Cufflinks v.0.9.3 [41]. To analyze differential expression, the data were first converted from the FPKM unit (fragments per kilobase of exon per million fragments mapped) to the number of reads originating from each transcript isoform. The trimmed-mean method [42] was applied to further normalize the count expression values. The processed data contain transcript-level expression counts from a total of 20 samples consisting of 10 pairs of uninfected and virus-infected samples. For differential expression analysis, we removed transcripts with fewer than 10 total counts across the 10 uninfected samples, or with no observed count in 6 or more individuals under the uninfected condition. After these steps, 37,111 transcripts were considered for data analysis.
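The FPKM-to-count conversion described above can be sketched by inverting the FPKM definition. This is a minimal illustration, not the authors' pipeline; the function name and rounding choice are ours:

```python
import numpy as np

def fpkm_to_counts(fpkm, transcript_len_bp, mapped_fragments):
    """Invert FPKM = counts / ((L / 1e3) * (M / 1e6)) to recover the number of
    fragments per transcript. Rounded, since count models expect integers."""
    counts = fpkm * (transcript_len_bp / 1e3) * (mapped_fragments / 1e6)
    return np.rint(counts).astype(int)

# toy example: a 2 kb transcript in a library of 40 million mapped fragments
print(fpkm_to_counts(np.array([5.0]), 2000, 40_000_000))  # -> [400]
```

In practice the estimated effective transcript length would come from the Cufflinks output rather than the annotated length.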

Bayesian mixture model for paired counts
We now describe our Bayesian hierarchical mixture model to identify differentially expressed genes/transcripts from paired RNA-seq data. As noted above, such data arise naturally from experiments measuring the biological change induced by a treatment. We start with an overdispersed count model [28]. The observations are denoted by a pair (Y_gi1, Y_gi2), for gene g = 1, . . . , G and individual i = 1, . . . , n, where Y_gi1 is the observed baseline expression level and Y_gi2 is the observed level after treatment. The sizes of the libraries are denoted as N_i1 and N_i2, respectively. Let λ_gi denote the true baseline expression relative to the library size. Then, Y_gi1 can be modeled as a Poisson random variable with mean λ_gi N_i1. Let χ_g denote the expression level fold change after treatment, so that the true expression level is χ_g λ_gi N_i2; then Y_gi2 can be modeled as a Poisson random variable with mean χ_g λ_gi N_i2. Our goal is to test whether there is any treatment effect, i.e., whether χ_g ≠ 1. It has been shown that the variability among technical replicates for RNA-Seq data can be captured by the Poisson distribution [6]. However, greater variance can be expected if observations are collected from individuals with differences in the underlying biological system. One way to model the overdispersion among the Poisson counts is to mix them with a Gamma distribution [28]. In this model, we use a Gamma distribution with shape parameter α_g and rate β_g to model the baseline expected expression λ_gi across individuals, i.e., λ_gi ~ Gamma(α_g, β_g). This model allows us to obtain a simpler form of the predictive density, i.e., the λ_gi's can be integrated out (see Appendix).
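Integrating the Gamma-distributed λ over the Poisson mean yields a negative binomial marginal, which is where the overdispersion comes from. The following sketch checks this numerically; the parameter values are illustrative, not estimates from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, N = 2.0, 500.0, 1_000_000  # illustrative shape, rate, library size

# lambda ~ Gamma(shape alpha, rate beta); numpy parameterizes by scale = 1/rate
lam = rng.gamma(alpha, 1.0 / beta, size=200_000)
y = rng.poisson(lam * N)

# Marginally Y ~ NegBin(alpha, beta / (beta + N)): mean N*alpha/beta, with the
# variance inflated by the factor (1 + N/beta) relative to a Poisson model
print(y.mean())  # near alpha * N / beta = 4000
print(y.var())   # near 4000 * (1 + N / beta), far above the Poisson variance
```

Because N/β is large for realistic library sizes, the biological variance term dominates the Poisson sampling noise.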
Assuming independence between the baseline expression and the treatment effect, we use a two-component mixture model to characterize the fold change distribution, where the expression change state of each gene is defined by a latent variable z_g, with z_g = 0 corresponding to no change and z_g = 1 otherwise. We assume that z_g has probability π_0 of equal expression, i.e., z_g = 0, and probability π_1 = 1 − π_0 of differential expression. Given a state, 0 or 1, the log-scaled fold change is assumed to follow a normal distribution. Under equal expression, the log-fold change is assumed to arise from a normal distribution centered at zero with variance σ_0². For genes with differential expression, if we assumed their log-fold changes follow a normal distribution centered at zero, we would implicitly assume an equal chance for a gene to be either over- or under-expressed. However, more genes were under-expressed after viral infection in the data set described earlier, with 3.2% of transcripts showing more than 4-fold increased expression after infection, whereas 4.3% showed more than 4-fold reduction. To accommodate this asymmetry, we assume the log-fold change for non-null genes arises from a normal distribution with mean μ_1, which may differ from 0, and variance σ_1².
Collecting all the components discussed so far, the model can be summarized in Figure 1. Under this set-up, the goal is to estimate the posterior probability that a specific gene is differentially expressed after treatment, i.e., Pr(z_g = 1 | data). Genes can then be inferred to be DE (Differential Expression) or EE (Equal Expression) according to these probabilities.

Parameter estimation via Markov chain Monte-Carlo (MCMC)
In this section, we describe the Gibbs sampling algorithm [43] that we use to iteratively sample model parameters from their conditional distributions given the other parameters and the observed data. First, we evaluate the conditional distribution of the parameters (α_g, β_g) characterizing the baseline expression distribution (λ_gi). These parameters are separately updated using the Metropolis-Hastings algorithm. For the latent state z_g and expression level change χ_g, the state z_g is first proposed and then χ_g is sampled given the state. Lewin et al. [38] discussed this type of move with various choices of the mixture distribution. Details of our updates for the pair (χ_g, z_g) are described in the Appendix. The mixing proportions (π_0, π_1) and the hyper-parameters for the mixture distribution (σ_0², σ_1², μ_1) are sampled from their conditional posterior distributions, which can be derived in closed form.

DE classification and false discovery rate estimation
The MCMC algorithm generates random samples from the joint posterior distribution of all model parameters. These samples are then used to infer the status of differential expression. One way to select a set of interesting genes is to rank genes using the estimated posterior-mean fold change, χ̂_g = (1/T) Σ_{t=1}^{T} χ_g^(t), where T is the number of iterations used for inference after the burn-in period and χ_g^(t) is the sampled value of the fold change at iteration t of the Gibbs sampling algorithm. Another way to select DE genes is to consider the latent variable z_g. During the MCMC iterations, the expression state is sampled along with the fold change estimates. These MCMC samples can be used to approximate the posterior probability of differential expression by the proportion of sampled states that are differentially expressed: p_g = Pr(z_g = 1 | data) ≈ (1/T) Σ_{t=1}^{T} I(z_g^(t) = 1). Bayes' rule assigns a gene's expression status according to the largest posterior probability. An alternative is to classify a gene as DE if the posterior probability of being non-null is greater than a threshold (p_thres): p_g > p_thres. For example, one choice would be p_thres = 0.5. The false discovery rate can be estimated from these posterior probabilities [31] as the average posterior null probability among the called genes: FDR(p_thres) = Σ_{g: p_g > p_thres} (1 − p_g) / #{g: p_g > p_thres}. The method was implemented in R and is available at http://bioinformatics.med.yale.edu.
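The posterior probability and FDR estimates above can be computed directly from the sampled indicators. A minimal sketch (the fabricated `z_samples` matrix stands in for real MCMC output):

```python
import numpy as np

def posterior_de_probability(z_samples):
    """z_samples: (T, G) array of sampled DE indicators z_g from the MCMC.
    p_g is the fraction of post-burn-in iterations with z_g = 1."""
    return z_samples.mean(axis=0)

def estimated_fdr(p, p_thres):
    """Bayesian FDR estimate: average posterior null probability (1 - p_g)
    among the genes called DE at threshold p_thres."""
    called = p > p_thres
    if not called.any():
        return 0.0
    return float((1.0 - p[called]).mean())

# toy example with fabricated posterior samples: 90 null-ish, 10 DE-ish genes
rng = np.random.default_rng(1)
probs = np.r_[np.full(90, 0.05), np.full(10, 0.95)]
z_samples = rng.binomial(1, probs, size=(2000, 100))
p = posterior_de_probability(z_samples)
print(estimated_fdr(p, 0.5))  # small, since the called genes have p_g near 1
```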

Simulations based on the model assumptions
The first part of the simulation was conducted to examine the performance of the proposed approach when the data are generated under the model assumptions. For 10,000 genes and 10 individuals, we simulate expression counts both before and after treatment according to Equation 1. Library sizes are sampled uniformly from 7 to 18 million, and the relative expected baseline expression λ_gi is drawn from a Gamma distribution with shape 0.1 and rate 1,000. For simplicity, we consider a two-component log-normal mixture model for the effect size. For the null genes (90%), the log-scaled effect is sampled from a normal distribution with mean 0 and standard deviation (σ_0) 0.1, whereas for the non-null genes the log-effects are sampled from a normal distribution with mean (μ_1) 1.5 and standard deviation (σ_1) 0.5. For the simulation studies, the true library sizes are used for the parameter estimation.
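The generative process just described can be sketched as follows. This is an illustration of the stated simulation design, not the authors' code; variable names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
G, n = 10_000, 10

# library sizes: uniform between 7 and 18 million reads per sample
N1 = rng.integers(7_000_000, 18_000_000, size=n)
N2 = rng.integers(7_000_000, 18_000_000, size=n)

# relative baseline expression lambda_gi ~ Gamma(shape 0.1, rate 1000)
lam = rng.gamma(0.1, 1.0 / 1000.0, size=(G, n))

# 90% null genes: log-effect ~ N(0, 0.1^2); 10% non-null: N(1.5, 0.5^2)
z = (rng.uniform(size=G) < 0.1).astype(int)
log_chi = np.where(z == 0, rng.normal(0.0, 0.1, G), rng.normal(1.5, 0.5, G))
chi = np.exp(log_chi)

# paired Poisson counts before and after treatment (Equation 1)
Y1 = rng.poisson(lam * N1)
Y2 = rng.poisson(chi[:, None] * lam * N2)
```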
Results in Table 1 show that the proposed approach estimates the model parameters well. With a posterior probability cutoff of 0.5, the algorithm identified more than 97% of the true DE genes with an FDR of approximately 1%. Figure 2 illustrates the estimated fold changes, showing the good performance of our algorithm.

Simulations based on the empirical data
In the second part of the simulation, we assume that the expression abundance is measured for 5,000 genes simultaneously before and after a given treatment. The number of individuals is set to 10 for the relatively larger sample cases (cases 1 and 4), 5 for the medium cases (cases 2 and 5), and 3 for the relatively smaller sample cases (cases 3 and 6). The size of each library is randomly sampled from 1.8 to 3 million so that the simulated count distribution is compatible with the real data distribution. The infected set of the RNA-seq data (see the Data Section and Qian F. et al. for details) was used as the expected baseline count data to mimic the observed mean-specific dispersion. First, we sample 5,000 gene indices with replacement to obtain the expected baseline expression. Expression counts from the selected indices are summarized in a matrix whose rows correspond to the selected genes in the original data matrix and whose columns correspond to individuals. Then, the relative expression (λ_gi, Equation 1) is computed proportional to the total counts in each sample. Among the 5,000 genes, the first 4,000 are assumed to have no change (z_g = 0), and their log-fold changes, log(χ_g), are sampled from a normal distribution with a mean of 0 and a variance of 4 × 10⁻⁴. For the remaining non-null genes, we considered the following two scenarios. An empirical set-up (cases 1, 2, 3) utilizes the nominal fold changes from the uninfected data set. Cases 4, 5, and 6 consider a theoretical set-up, where the log-scaled fold change is drawn from a normal distribution with a mean of zero and a variance of 1. We further filter out non-null genes whose true fold changes are less than 1.4.
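The resampling step for the empirical baseline can be sketched as follows; the `observed` matrix is a fabricated stand-in for the infected count data described in the Data Section:

```python
import numpy as np

rng = np.random.default_rng(1)

# stand-in for the infected count matrix (genes x individuals); in practice
# this would be the processed count data from the Data Section
observed = rng.poisson(20, size=(8000, 10))

G = 5000
idx = rng.integers(0, observed.shape[0], size=G)  # sample indices with replacement
baseline = observed[idx, :]                       # expected baseline counts

# relative expression proportional to each sample's total count
lam = baseline / observed.sum(axis=0, keepdims=True)
```

Resampling whole rows preserves the empirical mean-dispersion relationship of the real data, which is the point of this simulation design.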
Each case was repeated 100 times. We compare the performance of our approach with DESeq (version 1.8.3) [22] and edgeR (version 2.6.10) [21], two widely used methods for identifying differentially expressed genes from RNA-seq data. These two methods assume a negative binomial distribution to model the variance across replicates. DESeq utilizes a smoothing curve to compute the overdispersion as a function of the average expression level; the option 'pooled-CR' is used to estimate the overdispersion parameter [44]. In edgeR, a common dispersion setting is used, which assumes consistent overdispersion across all features and estimates the parameter using a common likelihood function. A paired design can be incorporated through a generalized linear model. For each application, the true library sizes are used as the library size inputs. Table 2 summarizes the results of our approach. Overall, we see excellent performance of our method in inferring the expression change status (reflected in a high correlation with the true status) as well as the parameters characterizing the distributions of the null and non-null genes. Since the true expression states are known in the simulation, we call a feature differentially expressed if p_g > p_thres and compare the estimated false discovery rate with the true value (Figure 3). The FDR is estimated well for cases with large sample sizes as p_thres increases, while it is slightly under-estimated for small sample sizes. Figure 4 illustrates the receiver operating characteristics averaged across 100 simulations under four different simulation settings. For each setting, the true positive rate is plotted against the false positive rate. The corresponding rates are computed by ranking genes from the largest posterior probability for the Bayesian approach (breaking ties by the largest fold change) or from the smallest p-value for each of the other methods.
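The TPR/FPR computation from a gene ranking can be sketched as follows. This is a generic illustration with fabricated scores, not the paper's evaluation code:

```python
import numpy as np

def roc_points(score, is_de):
    """TPR and FPR at every cutoff, ranking genes from the largest score down
    (e.g. posterior probability for the Bayesian method, -p-value otherwise)."""
    order = np.argsort(-score)
    hits = is_de[order].astype(float)
    tpr = np.cumsum(hits) / hits.sum()
    fpr = np.cumsum(1.0 - hits) / (len(hits) - hits.sum())
    return fpr, tpr

# toy check: DE genes score higher on average, so the curve sits above the diagonal
rng = np.random.default_rng(2)
is_de = np.r_[np.ones(100), np.zeros(900)].astype(bool)
score = rng.normal(size=1000) + 2.0 * is_de
fpr, tpr = roc_points(score, is_de)
auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0)  # trapezoidal area
```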
The Bayesian approach shows higher sensitivity than edgeR and DESeq at the same false positive rates. In particular, the Bayesian model achieves better performance for smaller sample sizes and the empirical fold change setting (cases 2 and 3).
We further considered a simulation scenario similar to the real data. As shown in the data application, the log-scaled fold change estimated from the data has a larger variance under the null component. We set the null component variance to 0.35 and repeated the simulation 50 times. For features in the non-null group, the log-fold change was sampled from a normal distribution with a mean of -0.45. As in cases 1 through 6, the estimated false discovery rate is examined (Figure 3) and the performance of the proposed approach is compared with the two existing methods (Figure 4).

Differential expression analysis with the Bayesian modeling
In this section, we apply our method to the motivating data set described in the Data Section. Initial values of the model parameters are calculated directly from the data. The MCMC sampler is run for 4,000 iterations after discarding the first 8,000 iterations. On average, the computational time was around 5 minutes per 100 iterations. The total number of iterations and the burn-in period were chosen after examining the convergence of the sampled chains.

Comparisons with existing methods
In this section, we compare DE analysis results between our approach and existing methods. DESeq and edgeR were applied to the same data set, and the top 2,352 DE transcripts were selected by their p-values. edgeR shows higher consistency with our Bayesian model (63.5% overlap) than DESeq (34.3% overlapping transcripts). Specifically, 832, 632, and 1,364 transcripts are detected uniquely by the Bayesian model, edgeR, and DESeq, respectively (Figure 6). Our approach detects transcripts with low average expression and high fold change. In contrast, the other approaches tend to identify more transcripts with high expression levels and low fold change (Figure 7). Transcripts that show evidence of differential expression only under the proposed model often have large inter-individual variation: their fold changes after treatment are high except in a few individuals with low expression. Figure 8 illustrates an example of a transcript uniquely identified by our proposed approach. This transcript is a product of SLAMF7, which is known to play a role in natural killer cell activation [45]. Another interesting feature of the proposed method is that the proportion of DE genes is consistent across transcript lengths. Among the bottom 10% of short transcripts, 4.6% are detected by the proposed approach while 2.4% are found by the other methods. Among the top 10% of long transcripts, 6.5% are detected by the proposed method, whereas 7.4% and 8.9% are detected by DESeq and edgeR, respectively. To investigate this in more detail, Figure 9 illustrates the DE proportion when the transcripts are partitioned into 10 equal-sized bins based on their length.

Bioinformatics annotations of the results
Pathway-level analysis is one effective way to summarize the biological relevance of differentially expressed genes. We perform gene enrichment analysis using DAVID (http://david.abcc.ncifcrf.gov/). The 2,352 DE transcripts inferred by our approach are mapped to 1,518 DAVID IDs for functional annotation clustering. Cluster 1 (DAVID enrichment score: 11.39) represents the cellular response to WNV infection. Specifically, pathways in cluster 10 (score: 2.72) are related to the activation of apoptosis. The induction of apoptosis by WNV is essential in the regulation of pro-inflammatory responses, and has been previously reported in cell lines and neuronal cell types [46,47]. These clusters and their related top pathways are reported in Table 3 with enrichment scores and p-values.

Conclusions
In this paper, we have presented a hierarchical mixture model for the identification of differential gene expression from RNA-Seq data, motivated by a West Nile virus study which collected samples as multiple pairs, i.e., pre- vs. post-treatment for each individual. While such a design is common in biological investigations, few existing methods analyze such data appropriately. With a hierarchical Bayesian mixture model coupled with inference through MCMC, our approach incorporates variability across genes, individuals, and treatment effects in the context of a paired experiment. Application to both simulated and real data demonstrates that our model and implementation are suitable for paired designs, with distinct advantages compared to existing methods.
The simulation study suggests that our Bayesian setting can have better power to detect differential gene expression. In the real data application, our proposed method is able to identify transcripts with large treatment effects but low expression levels, whereas these transcripts were not inferred to be differentially expressed by the other approaches. This is likely due to the more flexible and adaptable modeling of variance across individuals in our approach. Further examination of the characteristics of these top-ranked transcripts shows that the proportion of top-ranked transcripts in the short transcript group is consistent with the proportion in the long transcript group. On the other hand, the gene sets detected by the existing approaches show a bias towards longer transcripts, as has been noted in the literature before [48,49]. Our model reduces this bias and, as a result, facilitates the detection of some short differentially expressed transcripts that the other approaches miss.
We have assumed that the log-fold change arises from a mixture of two normal distributions. Under DE, the model allows the mean of the log-fold change distribution to be unrestricted rather than fixed at zero. By doing so, our proposed model can be applied to data showing asymmetry between over- and under-expression. The normal distributional assumption is shown to be robust in simulation studies under the empirical fold change scenarios. Other possible choices for the null genes include a point mass at 0 [50], a uniform distribution around 0, and a log-Gamma distribution with mean 0. Similar distributional assumptions can be made for the non-null genes under the two-component mixture set-up. Alternatively, one can consider a mixture of three components consisting of equal, over-, and under-expression states. A further extension could allow variation in the magnitude of expression change across individuals.

Variability across individuals
The Poisson-Gamma setting (Equations 1 and 2) allows extra variance among count expression values [28]. The variance of the count is given by Var(Y_gi1) = E[Var(Y_gi1 | λ_gi)] + Var(E[Y_gi1 | λ_gi]) = N_i1 α_g / β_g + N_i1² α_g / β_g², which exceeds the Poisson variance N_i1 α_g / β_g by the factor (1 + N_i1 / β_g).
After integrating out the expected gene- and individual-specific relative baseline expression (the λ_gi's), the posterior density of the unknown parameters is proportional to the product of the likelihood and the prior density.
We use the non-informative prior distributions for the unknown model parameters specified in the Methods Section.

Parameter estimates by the Metropolis-Hastings algorithm (MCMC)
We infer the posterior distributions using Gibbs sampling [43], which iteratively samples model parameters from the conditional distribution of each parameter given the other parameters. In this section, we describe the procedure for the posterior inference.

Step1
Update α_g. The conditional distribution for α_g does not have a closed-form expression. We use the Metropolis-Hastings algorithm to sample this parameter. More specifically, we update the parameter by proposing α_g^new ∼ N(α_g^old, σ_α²) at each iteration, where σ_α is set to 0.1. The proposal is accepted with probability min{1, r}, where r is the acceptance ratio,
where θ_{−α_g}^old denotes the current values of all parameters except α_g, and r compares the conditional posterior evaluated at α_g^new and at α_g^old. If the proposal is accepted, we replace the old α_g with the new value; otherwise, α_g stays at its current value.

Step2
Update β_g. Similarly to α_g, we propose β_g^new ∼ N(β_g^old, σ_β²), where σ_β is set to 1. The acceptance ratio is calculated analogously, with θ_{−β_g} denoting the vector of parameters except β_g. For the evaluation of the acceptance probability, the value of α_g updated in Step 1 is used.
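Steps 1 and 2 can be sketched as follows for a single gene. This is a minimal illustration, not the authors' implementation: it assumes a flat prior on α_g (so the acceptance ratio reduces to a likelihood ratio) and uses the Poisson-Gamma marginal likelihood with the λ_gi's integrated out; the β_g update would be analogous:

```python
import numpy as np
from math import lgamma, log

def loglik_alpha(alpha, beta, chi, y1, y2, N1, N2):
    """Marginal log-likelihood terms involving alpha_g, with lambda_gi
    integrated out of the Poisson-Gamma model (alpha-free constants dropped)."""
    ll = 0.0
    for a, b, n1, n2 in zip(y1, y2, N1, N2):
        ll += (alpha * log(beta) - lgamma(alpha)
               + lgamma(a + b + alpha)
               - (a + b + alpha) * log(beta + n1 + chi * n2))
    return ll

def mh_update_alpha(alpha, beta, chi, y1, y2, N1, N2, rng, sd=0.1):
    """One random-walk Metropolis-Hastings step for alpha_g with a
    N(alpha_old, sd^2) proposal, flat prior assumed for illustration."""
    prop = rng.normal(alpha, sd)
    if prop <= 0:                      # shape parameter must stay positive
        return alpha
    logr = (loglik_alpha(prop, beta, chi, y1, y2, N1, N2)
            - loglik_alpha(alpha, beta, chi, y1, y2, N1, N2))
    return prop if np.log(rng.uniform()) < min(0.0, logr) else alpha
```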

Step3
Update (χ_g, z_g) by a generalized Metropolis-Hastings move. Lewin et al. [38] pointed out that χ_g and z_g have to be updated jointly, since the support of χ_g depends on the choice of z_g; for example, χ_g is a point mass at one if z_g = 0. To update the pair (χ_g, z_g), they proposed the state z_g first and then updated χ_g given z_g. Following this approach, we adopt the following steps to sample (χ_g, z_g). (Step 3-1) Generate z_g^new from the Bernoulli distribution with P(z_g^new = 0) = π_0^old. (Step 3-2) Then, χ_g^new is proposed from LogNormal(0, V_g) if z_g^new = 0; otherwise, it is sampled from LogNormal(M_g, V_g). The mean and variance of the log-normal proposal distribution are computed from the observed counts. First, for each gene we collect the individuals whose pre- and post-treatment counts are both non-zero. Then, M_g is computed as the median of log((y_gi2/N_i2)/(y_gi1/N_i1)) over such individuals. The variance of these values could be used as V_g; however, this estimate often takes extreme values. In the data analysis, we trim the estimates at the 25th and 75th percentiles when the sample size is 10. For small sample sizes, the median of the V_g's is used as the proposal variance.

Alternative description
Define Q(χ_g^new, z_g^new | χ_g^old, z_g^old) to be the proposal density from the current values (χ_g^old, z_g^old) to the proposed values. In our approach, the proposal density does not depend on the current values, i.e., we use an independence-chain Metropolis-Hastings sampler. The proposal distribution factors as Q(χ_g, z_g) = π_{z_g} q_{z_g}(χ_g), where q_0 is the LogNormal(0, V_g) density and q_1 is the LogNormal(M_g, V_g) density. The acceptance probability is min{1, r}, where (the π terms canceling between target and proposal) r = [LN_{z^new}(χ_g^new) t(χ_g^new) / q_{z^new}(χ_g^new)] × [q_{z^old}(χ_g^old) / (LN_{z^old}(χ_g^old) t(χ_g^old))], with t(χ_g) = Π_i χ_g^{y_gi2} (β_g + N_i1 + χ_g N_i2)^{−(y_gi1 + y_gi2 + α_g)}. Here LN_0 is the probability density function of a log-normal distribution with mean zero and variance σ_0^{2,old}, and LN_1 is the log-normal density centered at μ_1^old with variance σ_1^{2,old}.
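Step 3 can be sketched as follows for a single gene. This is an illustrative implementation under our reading of the move, not the authors' code; the parameter values in the usage are fabricated:

```python
import numpy as np
from math import log, sqrt, pi

def log_t(chi, alpha, beta, y1, y2, N1, N2):
    """log t(chi_g): the chi-dependent part of the marginal likelihood,
    sum_i [ y_gi2 log chi - (y_gi1 + y_gi2 + alpha) log(beta + N_i1 + chi N_i2) ]."""
    return sum(b * log(chi) - (a + b + alpha) * log(beta + n1 + chi * n2)
               for a, b, n1, n2 in zip(y1, y2, N1, N2))

def log_lognormal(x, mu, var):
    """Log density of LogNormal(mu, var) at x > 0."""
    return -log(x) - 0.5 * log(2.0 * pi * var) - (log(x) - mu) ** 2 / (2.0 * var)

def mh_update_chi_z(chi, z, pi1, mu1, s0, s1, Mg, Vg,
                    alpha, beta, y1, y2, N1, N2, rng):
    """Independence-chain MH for the pair (chi_g, z_g): propose the state,
    then the fold change given the state, and accept or reject jointly."""
    z_new = int(rng.uniform() < pi1)
    chi_new = float(np.exp(rng.normal(Mg if z_new else 0.0, sqrt(Vg))))

    def log_w(c, state):
        # log(target / proposal); the pi_z factors cancel because the state
        # is proposed from the current mixing proportions
        prior = log_lognormal(c, mu1 if state else 0.0, s1 if state else s0)
        prop = log_lognormal(c, Mg if state else 0.0, Vg)
        return prior + log_t(c, alpha, beta, y1, y2, N1, N2) - prop

    logr = log_w(chi_new, z_new) - log_w(chi, z)
    return (chi_new, z_new) if np.log(rng.uniform()) < min(0.0, logr) else (chi, z)
```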

Step4
Update σ_0², μ_1, and σ_1², the hyper-parameters of the distribution of χ_g. Since each has a closed-form posterior density conditional on all other parameters, we can sample these parameters directly.