### RNA-Seq multiread expression estimation

As we seek to extend the prevalent generative model of RNA-Seq [7–11], we begin by reviewing the basic elements of that model. Let *G* = (*G*_{1}, ..., *G*_{M}) be the set of M transcribed regions considered and *P* = (*P*_{1}, ..., *P*_{M}) be the proportions of RNA bases attributed to each region out of the total number of transcribed bases in a sequenced sample. Regions may be either genes or transcripts, depending on the level of transcription being investigated. We require P to satisfy ∑_{g∈G} *P*_{g} = 1 and ∀g∈G, 0 ≤ *P*_{g} ≤ 1.

The model describes an RNA sequencing experiment in which regions in G are randomly chosen according to the distribution P, start positions in these regions are chosen uniformly, and reads of length *ℓ* are generated by copying *ℓ* consecutive bases from each chosen region to produce a set of reads *R* = (*r*_{1}, ..., *r*_{ρ}). Sequencing is assumed to be error prone, leading to a certain probability of error for each read base. Based on the repetitions present in the set of regions and errors in alignment, reads may fail to map to the region from which they originate or may map to additional locations. Thus, we assign a probability of obtaining read *r*_{j} given that it originated from region *G*_{k}: P\left({r}_{j}|{G}_{k}\right)\equiv \frac{{\left(1-\epsilon \right)}^{\ell -{error}_{jk}}{\epsilon}^{{error}_{jk}}}{{\ell}_{k}}. In this case we rely on the alignment of *r*_{j} to *G*_{k} to afford us the best match position instead of summing over all possible starting positions. *ℓ*_{k} is the effective length of *G*_{k} (i.e., the number of start positions from which a full-length read can be derived) as defined in [11], *ϵ* is taken to be a constant per-base error rate, errors are assumed to be independent, and *error*_{jk} is the number of mismatches in the best alignment of *r*_{j} to *G*_{k}.
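A minimal numeric sketch of this read-region probability, written in Python for illustration (the paper's implementation is in MATLAB; the per-base error rate used here is a placeholder value):

```python
def read_given_region_prob(errors_jk, read_len, eff_len_k, eps=0.01):
    """P(r_j | G_k) = (1 - eps)^(l - error_jk) * eps^error_jk / l_k,
    where l_k is the effective length of region k and eps is an assumed
    constant per-base error rate (0.01 here is purely illustrative)."""
    return ((1.0 - eps) ** (read_len - errors_jk)) * (eps ** errors_jk) / eff_len_k
```

A perfect alignment receives the highest probability; each additional mismatch multiplies in a factor of ϵ/(1 − ϵ).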

This formulation leads to the likelihood of observing the data:

L\left(P;R\right)=\prod _{j=1}^{\rho}P\left({r}_{j}|G,P\right)=\prod _{j=1}^{\rho}\sum _{k=1}^{M}P\left({G}_{k}\right)P\left({r}_{j}|{G}_{k}\right)

(1)

This likelihood function is used to estimate P given the read alignments. Typically, one uses expectation maximization (EM) to find the P for which the likelihood is maximized. It is assumed that *P*(*r*_{j}|*G*_{k}) is zero for all regions to which *r*_{j} is not aligned.
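As a concrete illustration, plain EM on the single-sample likelihood (1) can be sketched as follows (a hypothetical helper, not the paper's code; `q[j][k]` holds *P*(*r*_{j}|*G*_{k}), with zeros for unaligned regions):

```python
def em_single_sample(q, n_iters=200):
    """Maximize likelihood (1) over P by EM: the E-step computes each read's
    posterior assignment to regions under the current P; the M-step
    renormalizes the resulting expected counts."""
    M = len(q[0])
    P = [1.0 / M] * M                       # uniform initialization
    for _ in range(n_iters):
        counts = [0.0] * M
        for row in q:
            denom = sum(P[k] * row[k] for k in range(M))
            if denom == 0.0:
                continue                    # read maps nowhere under current P
            for k in range(M):
                counts[k] += P[k] * row[k] / denom
        total = sum(counts)
        P = [c / total for c in counts]
    return P
```

For instance, two reads unique to region 1 plus one read equally ambiguous between regions 1 and 2 drive P toward (1, 0): the ambiguous read is ultimately assigned entirely to the region the unique reads support.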

### Common population extension

To estimate expression levels in N individuals from a defined population, we modify the above model by assuming that samples are drawn from a common population. This is imposed by having *P* = [(*P*_{11}, ..., *P*_{1M}), ..., (*P*_{N1}, ..., *P*_{NM})] be probability densities drawn from a common Dirichlet distribution, defined by a set of hyper-parameters specific to the population: ∀i∈[1, N], **p**_{i} = (*P*_{i1}, ..., *P*_{iM}) ~ *Dirichlet*(*α*_{1}, ..., *α*_{M}).
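This generative assumption can be mimicked directly with standard Gamma draws (a sketch; the α values in the test below are placeholders):

```python
import random

def sample_dirichlet(alpha, rng=random):
    """Draw p_i ~ Dirichlet(alpha) by normalizing independent Gamma(alpha_k, 1)
    variates -- the standard construction of the Dirichlet distribution."""
    g = [rng.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    return [x / s for x in g]
```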

For sample i, we denote the set of reads as {R}_{i}=\left({r}_{i1},\dots ,{r}_{i{\rho}_{i}}\right), where each *r*_{ij} is mapped to one or more regions in G. The output of a read alignment program defines the set of accepted regions for each read (in practice only alignments with up to 2 errors are accepted) and provides the number of errors in alignment for each read-region pair. This allows us to calculate *P*(*r*_{ij}|*G*_{k}) as done above for one sample. For convenience we denote *P*(*r*_{ij}|*G*_{k}) = *q*_{ijk} (taken to be zero for all regions to which the read is not mapped), which is independent of *α* and *P*.
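Assembling the *q*_{ijk} table from aligner output might look like the following sketch (the input format is hypothetical; only read-region pairs reported by the aligner receive a non-zero entry):

```python
def build_q(alignments, read_len, eff_len, eps=0.01, max_errors=2):
    """alignments: {read_id: [(region_index, n_errors), ...]} (hypothetical format).
    Pairs with more than max_errors mismatches are discarded, mirroring the
    2-error cutoff in the text; absent regions implicitly have q_ijk = 0."""
    q = {}
    for rid, hits in alignments.items():
        q[rid] = {}
        for k, err in hits:
            if err <= max_errors:
                q[rid][k] = ((1.0 - eps) ** (read_len - err)) * (eps ** err) / eff_len[k]
    return q
```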

As before, our objective is to estimate P, but in this case we must optimize by estimating P and *α* together. We begin by writing the likelihood function:

L\left({\mathbf{p}}_{1},\dots ,{\mathbf{p}}_{N},\alpha ;R\right)=Pr\left({\mathbf{p}}_{1},\dots ,{\mathbf{p}}_{N}|\alpha \right)Pr\left(R|{\mathbf{p}}_{1},\dots ,{\mathbf{p}}_{N}\right)

(2)

Since expression values are sampled from the Dirichlet distribution,

Pr\left({\mathbf{p}}_{\mathbf{1}},\dots ,{\mathbf{p}}_{\mathbf{N}}|\alpha \right)=\prod _{i=1}^{N}P\left({\mathbf{p}}_{i}|\alpha \right)=\prod _{i=1}^{N}C\left(\alpha \right)\prod _{k=1}^{M}{P}_{ik}^{{\alpha}_{k}-1}

(3)

Where

C\left(\alpha \right)\equiv \frac{\Gamma \left(\sum _{k}{\alpha}_{k}\right)}{\prod _{k}\Gamma \left({\alpha}_{k}\right)}

(4)

and similar to (1) above,

Pr\left(R|{\mathbf{p}}_{\mathbf{1}},\dots ,{\mathbf{p}}_{\mathbf{N}}\right)=\prod _{i=1}^{N}\prod _{j=1}^{{\rho}_{i}}\sum _{k=1}^{M}{P}_{ik}{q}_{ijk}

(5)

This leads to

L\left({\mathbf{p}}_{1},\dots ,{\mathbf{p}}_{\mathbf{N}},\alpha ;R\right)=\left[\prod _{i=1}^{N}C\left(\alpha \right)\prod _{k=1}^{M}{P}_{ik}^{{\alpha}_{k}-1}\right]\left[\prod _{i=1}^{N}\prod _{j=1}^{{\rho}_{i}}\sum _{k=1}^{M}{P}_{ik}{q}_{ijk}\right]

(6)

Taking the log, we get

\begin{array}{ll}\hfill log\left[L\left({\mathbf{p}}_{1},\dots ,{\mathbf{p}}_{N},\alpha ;R\right)\right]& =NlogC\left(\alpha \right)+\sum _{k=1}^{M}\left({\alpha}_{k}-1\right)\sum _{i=1}^{N}log{P}_{ik}\phantom{\rule{2em}{0ex}}\\ +\sum _{i=1}^{N}\sum _{j=1}^{{\rho}_{i}}log\left(\sum _{k=1}^{M}{P}_{ik}{q}_{ijk}\right)\phantom{\rule{2em}{0ex}}\end{array}

(7)
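Equation (7) transcribes directly into code and is useful as a convergence monitor for the alternating procedure described below (a sketch, not the paper's MATLAB implementation):

```python
import math

def log_likelihood(P, alpha, q):
    """Evaluate equation (7). P[i][k]: proportions for sample i; alpha[k]:
    Dirichlet hyper-parameters; q[i][j][k]: fixed read-region probabilities."""
    N, M = len(P), len(alpha)
    # log C(alpha) = log Gamma(sum alpha_k) - sum log Gamma(alpha_k), equation (4)
    log_C = math.lgamma(sum(alpha)) - sum(math.lgamma(a) for a in alpha)
    ll = N * log_C
    for k in range(M):
        ll += (alpha[k] - 1.0) * sum(math.log(P[i][k]) for i in range(N))
    for i in range(N):
        for row in q[i]:
            ll += math.log(sum(P[i][k] * row[k] for k in range(M)))
    return ll
```

With α = (1, 1) the Dirichlet term vanishes (C(α) = 1), leaving only the read term of equation (5).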

### Multi-Genome Multi-Read (MGMR) algorithm

We wish to estimate *α* and **p**_{1}, ..., **p**_{N} to maximize equation (7) above. For this purpose, we adopt an alternating iterative procedure, estimating *α* given the current estimate of **p**_{1}, ..., **p**_{N} and vice versa until the total change in *α* becomes sufficiently small (or until a pre-set number of iterations has been executed).

Although for EM-based estimation methods convexity guarantees an optimal solution will be obtained, here (as shall be seen below) we have no such guarantee. Thus, we confine updates to be local by performing only one update for P and one for *α*. By one MGMR iteration, we refer to one EM-based P update followed by one *α* update.

#### Estimating P given α

If we assume *α* is given, we can write the EM steps to find **p**_{1}, ..., **p**_{N}:

**E step** Letting *Match* signify a matching between reads and regions, and *Match(j)* be the region from which read j originates, we get:

\begin{array}{ll}\hfill log\left[L\left(\mathbf{P},\alpha ;\mathbf{R},Match\right)\right]& =NlogC\left(\alpha \right)+\sum _{k=1}^{M}\left({\alpha}_{k}-1\right)\sum _{i=1}^{N}log{P}_{ik}\phantom{\rule{2em}{0ex}}\\ +\sum _{i=1}^{N}\sum _{j=1}^{{\rho}_{i}}log\left({P}_{i,Match\left(j\right)}\,{q}_{ij,Match\left(j\right)}\right)\phantom{\rule{2em}{0ex}}\end{array}

(8)

which leads to

Q\left(\mathbf{P},\alpha |{\mathbf{P}}^{\left(t\right)},{\alpha}^{\left(t\right)}\right)={E}_{Match|R,{\mathbf{P}}^{\left(t\right)},{\alpha}^{\left(t\right)}}\left[log\left(L\right)\right]

(9)

\begin{array}{ll}\hfill =NlogC\left({\alpha}^{\left(t\right)}\right)& +\sum _{k=1}^{M}\left({\alpha}_{k}^{\left(t\right)}-1\right)\sum _{i=1}^{N}log{P}_{ik}\phantom{\rule{2em}{0ex}}\\ +\sum _{i=1}^{N}\sum _{j=1}^{{\rho}_{i}}\sum _{k=1}^{M}\left(log{P}_{ik}+log{q}_{ijk}\right)\cdot \frac{{P}_{ik}^{\left(t\right)}{q}_{ijk}}{{\sum}_{{k}^{\prime}=1}^{M}{P}_{i{k}^{\prime}}^{\left(t\right)}{q}_{ij{k}^{\prime}}}\phantom{\rule{2em}{0ex}}\end{array}

(10)

\begin{array}{ll}\hfill =NlogC\left({\alpha}^{\left(t\right)}\right)+\sum _{k=1}^{M}\left({\alpha}_{k}^{\left(t\right)}-1\right)\sum _{i=1}^{N}log{P}_{ik}& +\sum _{i=1}^{N}\sum _{j=1}^{{\rho}_{i}}\sum _{k=1}^{M}{a}_{ijk}log{P}_{ik}\phantom{\rule{2em}{0ex}}\\ +\sum _{i=1}^{N}\sum _{j=1}^{{\rho}_{i}}\sum _{k=1}^{M}{a}_{ijk}log{q}_{ijk}\phantom{\rule{2em}{0ex}}\end{array}

(11)

where

{a}_{ijk}\equiv \frac{{P}_{ik}^{\left(t\right)}{q}_{ijk}}{{\sum}_{{k}^{\prime}=1}^{M}{P}_{i{k}^{\prime}}^{\left(t\right)}{q}_{ij{k}^{\prime}}}

(12)

**M step** Given that each *q*_{ijk} is fixed, the above reduces to maximizing

\begin{array}{ll}\hfill NlogC\left({\alpha}^{\left(t\right)}\right)& +\sum _{k=1}^{M}\left({\alpha}_{k}^{\left(t\right)}-1\right)\sum _{i=1}^{N}log{P}_{ik}\phantom{\rule{2em}{0ex}}\\ +\sum _{i=1}^{N}\sum _{j=1}^{{\rho}_{i}}\sum _{k=1}^{M}{a}_{ijk}log{P}_{ik}\phantom{\rule{2em}{0ex}}\end{array}

(13)

Using Lagrange multipliers and differentiating, we see that this is maximized with

{P}_{ik}^{\left(t+1\right)}=\frac{{\alpha}_{k}^{\left(t\right)}-1+{\sum}_{j}{a}_{ijk}}{{\sum}_{k}\left({\alpha}_{k}^{\left(t\right)}-1+{\sum}_{j}{a}_{ijk}\right)}

(14)
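One combined E/M update for a single sample (equations 12 and 14) can be sketched as follows. This assumes each numerator *α*_{k} − 1 + ∑_{j} *a*_{ijk} stays non-negative; in practice the Heuristics section adds one to each *α*_{k} to guarantee this.

```python
def p_update(P, alpha, q_i):
    """One EM update of a single sample's proportions: responsibilities a_ijk
    (equation 12) followed by the closed-form maximizer (equation 14)."""
    M = len(alpha)
    a_sums = [0.0] * M                            # sum_j a_ijk for each region k
    for row in q_i:                               # row[k] = q_ijk for read j
        denom = sum(P[k] * row[k] for k in range(M))
        for k in range(M):
            a_sums[k] += P[k] * row[k] / denom    # a_ijk, equation (12)
    num = [alpha[k] - 1.0 + a_sums[k] for k in range(M)]
    total = sum(num)
    return [v / total for v in num]               # equation (14)
```

With a uniform prior (α = 1) the update reduces to the standard multiread EM step; larger α_k values pull P toward the population-level expectation.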

#### Estimating α given P

Given a new estimate for *P*^{(t)}, we can use a fixed point iteration [12] to get a new estimate of *α*

\begin{array}{ll}\hfill F\left(\alpha \right)& =N\left[log\Gamma \left(\sum _{k}{\alpha}_{k}\right)-\sum _{k}log\Gamma \left({\alpha}_{k}\right)\right]\phantom{\rule{2em}{0ex}}\\ +\sum _{k=1}^{M}\left({\alpha}_{k}-1\right)\sum _{i=1}^{N}log{P}_{ik}^{\left(t\right)}+Const\left({P}^{\left(t\right)}\right)\phantom{\rule{2em}{0ex}}\end{array}

(15)

By using the known bound \Gamma \left(x\right)\ge \Gamma \left(\widehat{x}\right)exp\left(\left(x-\widehat{x}\right)\Psi \left(\widehat{x}\right)\right) (where \Psi \left(x\right)=\frac{dlog\Gamma \left(x\right)}{dx} is the digamma function), we can get a lower bound on F(*α*):

F\left(\alpha \right)\ge N\left[\left(\sum _{k}{\alpha}_{k}\right)\Psi \left(\sum _{k}{\alpha}_{k}^{\left(t\right)}\right)-\sum _{k=1}^{M}log\Gamma \left({\alpha}_{k}\right)+\sum _{k=1}^{M}{\alpha}_{k}log{\overline{P}}_{k}^{\left(t\right)}\right]+Const\left({P}^{\left(t\right)}\right)

(16)

where log{\overline{P}}_{k}^{\left(t\right)}=\frac{1}{N}{\sum}_{i}log{P}_{ik}^{\left(t\right)}.

We maximize this bound with a fixed point iteration similar to EM, noting that for fixed values of *P* convergence is guaranteed, and that for the Dirichlet distribution the maximum is the only stationary point [12]. This leads to the update

{\alpha}_{k}^{\left(t+1\right)}={\Psi}^{-1}\left[\Psi \left(\sum _{{k}^{\prime}}{\alpha}_{{k}^{\prime}}^{\left(t\right)}\right)+log{\overline{P}}_{k}^{\left(t\right)}\right]

(17)
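The update requires Ψ and its inverse. A self-contained sketch in pure Python, with Ψ⁻¹ computed by Newton's method from the initialization suggested in [12] (series truncation points and iteration counts are our own choices):

```python
import math

EULER_GAMMA = 0.5772156649015329

def digamma(x):
    """Psi(x) for x > 0, via the recurrence Psi(x+1) = Psi(x) + 1/x and an
    asymptotic series once x is large enough."""
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - f * (1 / 12.0 - f * (1 / 120.0 - f / 252.0))

def trigamma(x):
    """Psi'(x) for x > 0, needed for the Newton step below."""
    r = 0.0
    while x < 6.0:
        r += 1.0 / (x * x)
        x += 1.0
    f = 1.0 / (x * x)
    return r + 1.0 / x + 0.5 * f + (f / x) * (1 / 6.0 - f * (1 / 30.0 - f / 42.0))

def inv_digamma(y, n_iters=10):
    """Psi^{-1}(y) by Newton's method, initialized as in [12]."""
    x = math.exp(y) + 0.5 if y >= -2.22 else -1.0 / (y + EULER_GAMMA)
    for _ in range(n_iters):
        x -= (digamma(x) - y) / trigamma(x)
    return x

def alpha_update(alpha, log_p_bar):
    """Equation (17): alpha_k <- Psi^{-1}[Psi(sum_k' alpha_k') + log P-bar_k],
    where log_p_bar[k] = (1/N) * sum_i log P_ik."""
    s = digamma(sum(alpha))
    return [inv_digamma(s + lp) for lp in log_p_bar]
```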

#### Heuristics/Implementation

As we have found that *F*(*α*), presented in equation (15), is non-convex even in 2 dimensions (Figure 1), we confine updates to be local by allowing only one update for each of the *α* and *P* estimation steps at each MGMR iteration. For genes with EM expression estimates equal to zero in all samples, we substitute 10^{-20} for their values in MGMR to avoid taking the log of zero. For P updates (e.g., equation 14), we avoid potentially negative P values by adding one to each *α*_{k} (thus ignoring the -1 in the numerator and denominator). We implemented the algorithm in MATLAB, where the required inputs are read-gene map files for each sample, as in SEQEM [7], and an initial P estimate matrix. *α* is initialized as an M-length vector of ones.