The impact of measurement errors in the identification of regulatory networks
- André Fujita^{1},
- Alexandre G Patriota^{2},
- João R Sato^{3} and
- Satoru Miyano^{1, 4}
https://doi.org/10.1186/1471-2105-10-412
© Fujita et al; licensee BioMed Central Ltd. 2009
Received: 16 July 2009
Accepted: 13 December 2009
Published: 13 December 2009
Abstract
Background
There are several studies in the literature on measurement error in gene expression data, and several others on regulatory network models. However, only a small fraction combine measurement error with mathematical regulatory network models and show how to identify these networks under different rates of noise.
Results
This article investigates the effects of measurement error on the estimation of the parameters in regulatory networks. Simulation studies indicate that, in both time series (dependent) and non-time series (independent) data, measurement error strongly affects the estimated parameters of the regulatory network models, biasing them as predicted by the theory. Moreover, when testing the parameters of the regulatory network models, p-values computed by ignoring the measurement error are not reliable, since the rate of false positives is not controlled under the null hypothesis. In order to overcome these problems, we present an improved version of the Ordinary Least Squares estimator for independent (regression models) and dependent (autoregressive models) data when the variables are subject to noise. Moreover, measurement error estimation procedures for microarrays are also described. Simulation results also show that both corrected methods perform better than the standard ones (i.e., ignoring measurement error). The proposed methodologies are illustrated using microarray data from lung cancer patients and mouse liver time series data.
Conclusions
Measurement error seriously affects the identification of regulatory network models; thus, it must be reduced or taken into account in order to avoid erroneous conclusions. This could be one of the reasons for the high biological false positive rates found in actual regulatory network models.
Background
There has been an increasing interest among bioinformaticians in the problem of correctly quantifying gene expression in a given sample. It is well accepted that the observed gene expression value is a combination of the "true" gene expression signal with intrinsic biological variation (natural fluctuation) and a variation caused by the measuring process, also known as measurement error. Studies have documented the presence of sizable measurement error in data collected mainly from microarrays, but also by other approaches such as Real Time RT-PCR, Northern blot, CAGE, SAGE, etc [1, 2]. This measurement error can be easily observed when two technical replicates are compared in an MA plot (M is the logarithm of the intensity ratio and A is the mean of the logged intensities for a dot in the plot) or a scatter plot: frequently, a considerable dispersion can be observed. This dispersion is due to the measurement error, since, in theory, technical replicates (same samples) must yield the same quantifications. In general, these fluctuations derive from probe sequence, hybridization problems, high background fluorescence, signal quantification procedures (image analysis), etc [3, 4]. In the last few years, a considerable number of reports on the problem of quantifying and separating the "true" gene expression signal from noise [5–7] has been published, with the main aim of finding differentially expressed genes [8, 9]. Despite these results in gene expression analysis and the large amount of research on modeling regulatory networks (Bayesian networks [10, 11], Boolean networks [12, 13], Relevance networks [14], Graphical Gaussian models [15], Differential equations [16], etc), only a fraction of the statistical studies use procedures designed for modeling networks while taking measurement error into account.
Ordinary Least Squares (OLS) and related methods, such as Pearson and Spearman correlations [17] and ridge, lasso and elastic net regressions [18], are widely used as estimators to quantify the strength of association between gene expression signals and to model regulatory network structures. In the time series context, the estimation of Autoregressive (AR) models [19–22] also uses OLS to identify which gene is or is not Granger causing another gene. Generally, a regression is carried out between the target gene and its potential predictors in order to test which predictor gene is associated, at the gene expression level, with the target gene.
It is well known in the statistical literature that, when measurement errors are ignored in the estimation process, OLS and its variants become inconsistent (i.e., the estimates do not converge to the true values even as the sample size increases). More precisely, the estimates of the slope parameters are attenuated [23] and, consequently, regulatory network models become biased. Moreover, there is no control of the type I error, since standard OLS was not designed to treat measurement error. In this context, an adequate inferential treatment must be considered for the model parameters in order to avoid inconsistent estimators. Usually, measurement equations are added to the model to capture the measurement error effect, therefore producing estimators that are consistent, efficient and asymptotically normally distributed. A careful and deep exposition of the inferential process is presented in [23] and the references therein. Although there are studies on the problems caused by measurement errors in the statistical literature, there is a gap in network modeling theory which must be filled in order to avoid misinterpretation and distorted conclusions from the inferential process. Here, we develop and present some important statistical tools to be applied in OLS-based and VAR network models taking the measurement error effect into account. We also conduct simulation studies in order to evaluate the impact of measurement error on the identification of gene regulatory networks using standard OLS in both conditions, time series and non-time series data. Both the simulations and the theory show that, in the presence of measurement error, the estimated coefficients are biased even as the number of observations increases, and the statistical tests do not control the rate of false positives properly. These results were also observed in the time series context, where the autoregressive coefficients were strongly affected.
Thus, a corrected version of the OLS estimator for independent (in the regression context) and dependent (in the autoregressive context) data containing measurement error was developed. Results on both simulated and actual biological data are presented. Moreover, two procedures to estimate measurement error in microarrays are described.
Results and discussions
In order to evaluate the performance of the conventional OLS and VAR methods in practice, simulations were carried out on artificial data in the absence and presence of measurement error. Noise was added at different rates, and the sample size was increased in order to evaluate the consistency of the conventional and proposed approaches.
In the following we give a brief explanation of the usual and proposed methods. Let x and y be variables (gene expression values) with the following relationship: y = α + βx + ε, where ε is the random error (intrinsic biological variation) of the model, with zero mean and finite variance. In general, we are interested in estimating the parameters α and β to make inferences about them. In practice, we take a sample x_{ i }, y_{ i }for i = 1,..., n and use these quantities to obtain estimates for the parameters of interest. However, it is not always possible to observe the values of x and y directly, because sometimes they are latent values, i.e., they are masked by measurement errors introduced by the measuring process in microarrays, for example. Then, instead of observing the true variables, we observe surrogate variables X and Y which carry an error, that is, X = x + ϵ_{1} and Y = y + ϵ_{2}, where ϵ_{1} and ϵ_{2} are measurement errors. What is generally done in practice is a naive solution, which simply replaces x with X and y with Y in the regression equation and uses the OLS approach to estimate the parameters. That is, estimators are built based on the equation Y = α + βX + ε. The proposed approach is slightly different: it considers three equations, namely y = α + βx + ε, X = x + ϵ_{1} and Y = y + ϵ_{2}, and uses them to estimate the model parameters. This small difference can have a great impact on the properties of the estimators of each approach. Notice that the former produces inconsistent estimators, whereas the latter produces consistent estimators when the data contain measurement error. The same idea can be applied in the time series context.
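To make the attenuation effect concrete, the snippet below simulates the univariate case and compares the naive OLS slope with a simple moment-based correction that subtracts the (assumed known) measurement error variance from the variance of X. This is a didactic sketch, not the full estimator developed in the Methods section; all numerical values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                      # large n: the naive bias does not vanish
alpha, beta = 1.0, 0.8           # true parameters of y = alpha + beta*x + eps
sd_e1 = 0.6                      # measurement error sd of X (assumed known)

x = rng.normal(0.0, 1.0, n)                       # latent predictor expression
y = alpha + beta * x + rng.normal(0.0, 0.3, n)    # intrinsic biological noise
X = x + rng.normal(0.0, sd_e1, n)                 # observed surrogate variables
Y = y + rng.normal(0.0, 0.6, n)

# Naive OLS of Y on X: attenuated by the factor var(x) / (var(x) + sd_e1**2)
beta_naive = np.cov(X, Y)[0, 1] / np.var(X, ddof=1)

# Moment-based correction: remove the known error variance from var(X)
beta_corrected = np.cov(X, Y)[0, 1] / (np.var(X, ddof=1) - sd_e1**2)

print(round(beta_naive, 2), round(beta_corrected, 2))
```

Even with 100,000 observations the naive slope stays close to β·var(x)/(var(x) + σ²_{ϵ1}) rather than β, while the corrected slope recovers the true value.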
Ordinary least squares: estimated coefficients for different measurement error (EM) levels and sample sizes n (corrected estimates in parentheses).
EM | n | β _{1} | β _{2} | β _{3} | β _{4} | β _{5} | β _{6} | β _{7} | β _{8} | β _{9} |
---|---|---|---|---|---|---|---|---|---|---|
 | | 0 | -0.1 | -0.2 | -0.3 | -0.4 | 0.5 | 0.6 | 0.7 | 0.8 |
0 | 50 | 0.00 | -0.10 | -0.20 | -0.30 | -0.40 | 0.50 | 0.60 | 0.70 | 0.80 |
100 | 0.00 | -0.10 | -0.20 | -0.30 | -0.40 | 0.50 | 0.60 | 0.70 | 0.80 | |
200 | 0.00 | -0.10 | -0.20 | -0.30 | -0.40 | 0.50 | 0.60 | 0.70 | 0.80 | |
400 | 0.00 | -0.10 | -0.20 | -0.30 | -0.40 | 0.50 | 0.60 | 0.70 | 0.80 | |
0.2 | 50 | 0.01 (0.00) | -0.09 (-0.11) | -0.18 (-0.20) | -0.28 (-0.30) | -0.37 (-0.41) | 0.48 (0.50) | 0.58 (0.61) | 0.67 (0.71) | 0.76 (0.81) |
100 | 0.01 (0.00) | -0.09 (-0.10) | -0.18 (-0.20) | -0.28 (-0.30) | -0.38 (-0.40) | 0.48 (0.50) | 0.58 (0.60) | 0.67 (0.70) | 0.77 (0.80) | |
200 | 0.01 (0.00) | -0.09 (-0.10) | -0.19 (-0.20) | -0.28 (-0.30) | -0.38 (-0.40) | 0.48 (0.50) | 0.58 (0.60) | 0.67 (0.70) | 0.77 (0.80) | |
400 | 0.01 (0.00) | -0.09 (-0.10) | -0.18 (-0.20) | -0.28 (-0.30) | -0.37 (-0.40) | 0.48 (0.50) | 0.58 (0.60) | 0.67 (0.70) | 0.77 (0.80) | |
0.4 | 50 | - | - | - | - | - | - | - | - | - |
100 | 0.02 (0.00) | -0.07 (-0.11) | -0.15 (-0.21) | -0.23 (-0.31) | -0.31 (-0.42) | 0.44 (0.51) | 0.52 (0.62) | 0.60 (0.72) | 0.69 (0.82) | |
200 | 0.02 (0.00) | -0.06 (-0.10) | -0.15 (-0.20) | -0.23 (-0.31) | -0.31 (-0.40) | 0.44 (0.51) | 0.52 (0.61) | 0.60 (0.71) | 0.69 (0.81) | |
400 | 0.02 (0.00) | -0.06 (-0.10) | -0.15 (-0.20) | -0.23 (-0.30) | -0.31 (-0.40) | 0.44 (0.50) | 0.52 (0.60) | 0.60 (0.70) | 0.69 (0.80) | |
0.6 | 50 | - | - | - | - | - | - | - | - | - |
100 | - | - | - | - | - | - | - | - | - | |
200 | 0.03 (-0.01) | -0.04 (-0.11) | -0.10 (-0.21) | -0.17 (-0.32) | -0.24 (-0.42) | 0.38 (0.52) | 0.45 (0.62) | 0.52 (0.72) | 0.58 (0.82) | |
400 | 0.03 (0.00) | -0.04 (-0.10) | -0.10 (-0.20) | -0.17 (-0.31) | -0.24 (-0.41) | 0.38 (0.51) | 0.45 (0.61) | 0.52 (0.71) | 0.58 (0.81) | |
0.8 | 50 | - | - | - | - | - | - | - | - | - |
100 | - | - | - | - | - | - | - | - | - | |
200 | - | - | - | - | - | - | - | - | - | |
400 | 0.05 (0.00) | -0.01 (-0.11) | -0.07 (-0.21) | -0.12 (-0.32) | -0.18 (-0.42) | 0.32 (0.51) | 0.38 (0.62) | 0.43 (0.72) | 0.49 (0.83) |
Ordinary least squares: percentage of rejections of the null hypothesis H_{0}: β_{ j } = 0 at the 5% nominal level, for different measurement error (EM) levels and sample sizes n (corrected method in parentheses).
EM | n | β _{1} | β _{2} | β _{3} | β _{4} | β _{5} | β _{6} | β _{7} | β _{8} | β _{9} |
---|---|---|---|---|---|---|---|---|---|---|
 | | 0 | -0.1 | -0.2 | -0.3 | -0.4 | 0.5 | 0.6 | 0.7 | 0.8 |
0 | 50 | 4.94 | 9.06 | 21.57 | 41.73 | 63.19 | 81.09 | 92.15 | 97.24 | 99.03 |
100 | 4.90 | 13.82 | 42.31 | 74.11 | 93.20 | 98.83 | 99.87 | 100.0 | 100.0 | |
200 | 4.99 | 25.62 | 72.24 | 96.64 | 99.95 | 99.99 | 100.0 | 100.0 | 100.0 | |
400 | 5.17 | 44.31 | 95.53 | 99.98 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | |
0.2 | 50 | 4.81 (4.73) | 8.24 (8.45) | 17.52 (18.12) | 34.39 (34.94) | 54.86 (55.38) | 76.09 (73.71) | 88.70 (86.95) | 95.40 (94.61) | 98.12 (97.69) |
100 | 5.30 (5.20) | 11.35 (12.27) | 34.69 (36.27) | 65.53 (67.16) | 88.92 (89.57) | 98.22 (97.81) | 99.76 (99.66) | 99.97 (99.96) | 100.0 (100.0) | |
200 | 5.23 (5.25) | 19.93 (22.02) | 62.73 (65.22) | 92.67 (93.53) | 99.50 (99.58) | 99.99 (99.99) | 100.0 (100.0) | 100.0 (100.0) | 100.0 (100.0) | |
400 | 5.05 (5.09) | 36.11 (40.14) | 90.44 (92.05) | 99.86 (99.92) | 100.0 (100.0) | 100.0 (100.0) | 100.0 (100.0) | 100.0 (100.0) | 100.0 (100.0) | |
0.4 | 50 | - | - | - | - | - | - | - | - | - |
100 | 5.87 (5.17) | 7.91 (9.77) | 21.92 (25.30) | 45.15 (48.55) | 70.62 (72.95) | 93.44 (88.64) | 98.29 (96.46) | 99.63 (99.13) | 99.96 (99.82) | |
200 | 5.59 (5.13) | 11.58 (16.32) | 40.43 (47.45) | 76.71 (81.23) | 95.15 (96.48) | 99.88 (99.58) | 100.0 (99.99) | 100.0 (100.0) | 100.0 (100.0) | |
400 | 5.84 (4.76) | 19.00 (28.10) | 68.88 (77.88) | 98.88 (98.36) | 99.93 (99.98) | 100.0 (100.0) | 100.0 (100.0) | 100.0 (100.0) | 100.0 (100.0) | |
0.6 | 50 | - | - | - | - | - | - | - | - | - |
100 | - | - | - | - | - | - | - | - | - | |
200 | 6.79 (4.71) | 6.75 (10.73) | 20.87 (28.78) | 48.56 (57.07) | 77.02 (81.95) | 98.53 (93.71) | 99.81 (98.52) | 99.99 (99.76) | 100.0 (99.99) | |
400 | 8.42 (4.48) | 8.88 (18.17) | 38.42 (54.88) | 78.94 (88.15) | 97.28 (98.78) | 99.99 (99.91) | 100.0 (100.0) | 100.0 (100.0) | 100.0 (100.0) | |
0.8 | 50 | - | - | - | - | - | - | - | - | - |
100 | - | - | - | - | - | - | - | - | - | |
200 | - | - | - | - | - | - | - | - | - | |
400 | 10.95 (4.40) | 5.22 (10.97) | 17.99 (33.05) | 48.43 (63.97) | 79.65 (87.84) | 99.88 (96.46) | 99.99 (99.35) | 100.0 (99.95) | 100.0 (100.0) |
Vector autoregressive model: estimated coefficients for different measurement error (EM) levels and sample sizes n (corrected estimates in parentheses).
EM | n | β _{0} | β _{1} | β _{2} | β _{3} | β _{4} | β _{5} | β _{6} | β _{7} | β _{8} | β _{9} |
---|---|---|---|---|---|---|---|---|---|---|---|
 | | 0 | 0 | -0.1 | -0.2 | -0.3 | -0.4 | 0.5 | 0.6 | 0.7 | 0.8 |
0 | 50 | -0.04 | 0.00 | -0.10 | -0.20 | -0.30 | -0.41 | 0.51 | 0.61 | 0.71 | 0.81 |
100 | -0.02 | 0.00 | -0.10 | -0.20 | -0.30 | -0.40 | 0.50 | 0.61 | 0.71 | 0.80 | |
200 | -0.01 | 0.00 | -0.10 | -0.20 | -0.30 | -0.40 | 0.50 | 0.60 | 0.70 | 0.80 | |
400 | 0.00 | 0.00 | -0.10 | -0.20 | -0.30 | -0.40 | 0.50 | 0.60 | 0.70 | 0.80 | |
0.2 | 50 | -0.03 (-0.04) | 0.01 (0.00) | -0.09 (-0.10) | -0.19 (-0.21) | -0.28 (-0.31) | -0.38 (-0.42) | 0.49 (0.51) | 0.58 (0.61) | 0.69 (0.72) | 0.78 (0.82) |
100 | -0.01 (-0.02) | 0.00 (0.00) | -0.09 (-0.10) | -0.19 (-0.20) | -0.28 (-0.31) | -0.38 (-0.41) | 0.48 (0.50) | 0.58 (0.61) | 0.68 (0.71) | 0.78 (0.81) | |
200 | 0.00 (-0.01) | 0.00 (0.00) | -0.09 (-0.10) | -0.19 (-0.20) | -0.28 (-0.30) | -0.38 (-0.40) | 0.49 (0.50) | 0.58 (0.60) | 0.68 (0.71) | 0.77 (0.81) | |
400 | 0.01 (0.00) | 0.00 (0.00) | -0.09 (-0.10) | -0.19 (-0.20) | -0.28 (-0.30) | -0.38 (-0.40) | 0.48 (0.50) | 0.58 (0.60) | 0.68 (0.70) | 0.77 (0.80) | |
0.4 | 50 | - (-) | - (-) | - (-) | - (-) | - (-) | - (-) | - (-) | - (-) | - (-) | - (-) |
100 | 0.02 (-0.03) | 0.02 (0.00) | -0.07 (-0.11) | -0.16 (-0.21) | -0.24 (-0.32) | -0.32 (-0.42) | 0.44 (0.52) | 0.53 (0.62) | 0.61 (0.72) | 0.70 (0.83) | |
200 | 0.03 (-0.01) | 0.02 (0.00) | -0.07 (-0.10) | -0.16 (-0.21) | -0.24 (-0.31) | -0.32 (-0.41) | 0.44 (0.51) | 0.53 (0.61) | 0.61 (0.71) | 0.70 (0.81) | |
400 | 0.04 (-0.01) | 0.02 (0.00) | -0.07 (-0.10) | -0.15 (-0.20) | -0.24 (-0.30) | -0.33 (-0.41) | 0.44 (0.50) | 0.53 (0.61) | 0.61 (0.71) | 0.70 (0.81) | |
0.6 | 50 | - | - | - | - | - | - | - | - | - | - |
100 | - | - | - | - | - | - | - | - | - | - | |
200 | 0.06 (-0.02) | 0.03 (-0.01) | -0.04 (-0.11) | -0.12 (-0.21) | -0.19 (-0.32) | -0.26 (-0.42) | 0.39 (0.52) | 0.46 (0.62) | 0.53 (0.73) | 0.60 (0.83) | |
400 | 0.07 (-0.01) | 0.03 (0.00) | -0.04 (-0.10) | -0.12 (-0.20) | -0.19 (-0.31) | -0.26 (-0.41) | 0.39 (0.51) | 0.46 (0.61) | 0.53 (0.71) | 0.60 (0.81) | |
0.8 | 50 | - | - | - | - | - | - | - | - | - | - |
100 | - | - | - | - | - | - | - | - | - | - | |
200 | - | - | - | - | - | - | - | - | - | - | |
400 | 0.10 (-0.02) | 0.04 (0.00) | -0.02 (-0.11) | -0.08 (-0.21) | -0.14 (-0.32) | -0.20 (-0.42) | 0.33 (0.52) | 0.39 (0.62) | 0.45 (0.72) | 0.51 (0.83) |
Vector autoregressive model: percentage of rejections of the null hypothesis H_{0}: β_{ j } = 0 at the 5% nominal level, for different measurement error (EM) levels and sample sizes n (corrected method in parentheses).
EM | n | β _{0} | β _{1} | β _{2} | β _{3} | β _{4} | β _{5} | β _{6} | β _{7} | β _{8} | β _{9} |
---|---|---|---|---|---|---|---|---|---|---|---|
 | | 0 | 0 | -0.1 | -0.2 | -0.3 | -0.4 | 0.5 | 0.6 | 0.7 | 0.8 |
0 | 50 | 6.48 | 5.66 | 10.47 | 23.78 | 44.69 | 66.62 | 83.62 | 93.13 | 97.52 | 99.20 |
 | 100 | 5.86 | 5.44 | 16.09 | 49.39 | 81.50 | 96.45 | 99.37 | 99.97 | 100.00 | 100.00 |
 | 200 | 5.72 | 5.07 | 30.90 | 81.55 | 98.90 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
 | 400 | 5.19 | 5.21 | 54.80 | 98.49 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
0.2 | 50 | 5.39 (6.64) | 5.08 (5.40) | 8.13 (8.80) | 20.17 (20.78) | 37.99 (38.41) | 59.73 (59.74) | 78.68 (75.96) | 89.67 (87.72) | 96.11 (94.87) | 98.44 (98.89) |
 | 100 | 5.26 (6.17) | 5.20 (5.20) | 13.41 (14.65) | 41.15 (42.79) | 74.12 (75.29) | 93.39 (93.78) | 98.84 (98.60) | 99.91 (99.91) | 100.0 (99.99) | 100.0 (100.0) |
 | 200 | 4.84 (5.50) | 5.22 (5.46) | 24.19 (26.85) | 73.89 (76.10) | 97.15 (97.57) | 99.91 (99.92) | 100.0 (100.0) | 100.0 (100.0) | 100.0 (100.0) | 100.0 (100.0) |
 | 400 | 5.72 (5.09) | 5.53 (5.44) | 44.73 (48.95) | 96.59 (97.29) | 99.97 (99.98) | 100.0 (100.0) | 100.0 (100.0) | 100.0 (100.0) | 100.0 (100.0) | 100.0 (100.0) |
0.4 | 50 | - | - | - | - | - | - | - | - | - | - |
 | 100 | 6.02 (6.48) | 5.03 (5.15) | 9.59 (11.39) | 27.88 (31.93) | 54.11 (57.93) | 79.25 (81.32) | 95.92 (93.17) | 99.12 (98.03) | 99.89 (99.65) | 99.99 (99.97) |
 | 200 | 10.20 (5.79) | 5.45 (4.97) | 14.37 (19.86) | 51.19 (58.67) | 86.74 (90.40) | 98.45 (98.90) | 99.96 (99.88) | 100.0 (100.0) | 100.0 (100.0) | 100.0 (100.0) |
 | 400 | 20.49 (5.52) | 5.64 (5.06) | 25.14 (36.21) | 82.24 (88.42) | 99.29 (99.66) | 99.98 (100.0) | 100.0 (100.0) | 100.0 (100.0) | 100.0 (100.0) | 100.0 (100.0) |
0.6 | 50 | - | - | - | - | - | - | - | - | - | - |
 | 100 | - | - | - | - | - | - | - | - | - | - |
 | 200 | 22.79 (5.39) | 5.98 (5.14) | 8.13 (13.65) | 29.25 (39.74) | 61.55 (70.80) | 87.76 (91.46) | 99.65 (98.34) | 99.93 (99.68) | 100.0 (100.0) | 100.0 (100.0) |
 | 400 | 49.01 (5.36) | 7.56 (4.97) | 12.29 (24.33) | 52.77 (69.83) | 90.96 (95.68) | 99.52 (99.83) | 100.0 (100.0) | 100.0 (100.0) | 100.0 (100.0) | 100.0 (100.0) |
0.8 | 50 | - | - | - | - | - | - | - | - | - | - |
 | 100 | - | - | - | - | - | - | - | - | - | - |
 | 200 | - | - | - | - | - | - | - | - | - | - |
 | 400 | 70.58 (5.49) | 9.73 (5.29) | 6.26 (15.25) | 27.24 (46.89) | 65.41 (81.53) | 91.82 (96.48) | 99.98 (99.45) | 100.0 (99.45) | 100.0 (99.99) | 100.0 (100.0) |
where α is the adopted type I error nominal level and P(a(α)) is the power computed using the true probability of the type I error, namely a(α). Notice that the corrected power is just the power penalized by the distance between a(α) and α. This correction of the power is necessary because, under the null hypothesis, the power has to equal the nominal level, and powers from different statistics must be compared at the same nominal level.
For a good statistic, notice that, under an alternative hypothesis and when n → ∞, the corrected power P^{ c }(α) converges to one, because P(a(α)) converges to one and a(α)/α converges to one. On the one hand, for a statistic that does not control the rate of false positives, for example, when α is set to 5% and the true probability of the type I error is a(α) = 0.08, since a(α)/α is greater than one, P^{ c }(α) will not converge to one. On the other hand, for a good statistic, the ratio a(α)/α converges to one when n → ∞, and then P^{ c }(α) will converge to one. Analyzing Figures 3 and 4, it is possible to verify that, for the standard OLS and VAR approaches (dashed lines), the ratio a(α)/α increases faster than the corresponding powers P(a(α)), i.e., the dashed lines decrease as n increases. Notice in Tables 2 and 4 that the rates of false positives (a(α)) increase as n increases and, consequently, in our specific case, the ratio a(α)/α increases and the corrected power P^{ c }(α) converges to zero. On the other hand, the proposed methods (full lines) keep the false positive rates controlled while the corrected power increases as n increases. This can be observed in the full lines converging to one (Figures 2 and 4) and also in Tables 2 and 4. The variations present in the curves are probably due to variations in the Monte Carlo simulations, since these variations decreased (the curves became smoother) when the number of simulations was increased from 5,000 to 10,000 and from 10,000 to 15,000.

In order to illustrate the performance of the standard and corrected OLS and VAR approaches on actual biological data, firstly, the measurement error was estimated using the method described in the Measurement error estimation section (No technical replicates subsection). Then, the TP53 network was constructed using a dataset composed of 400 microarrays.
Gene TP53 (lung cancer data).
Association | t(β_{ standard }) | t(β_{ corrected }) | t(β_{ standard }) -t(β_{ corrected }) |
---|---|---|---|
p53 → mdm2 | -2.2550 | -2.1250 | -0.1299 |
p53 → fas | -3.3547 | -3.0059 | -0.3487 |
p53 → bax | 5.2148 | 4.5290 | 0.6859 |
p53 → map4 | 2.8486 | 3.0243 | -0.1757 |
mdm2 → fas | -1.5495 | -1.5002 | -0.0493 |
mdm2 → bax | 0.1880 | 0.4716 | -0.2836 |
mdm2 → map4 | -0.8153 | -0.2766 | -0.5387 |
fas → bax | 0.0987 | 0.5746 | -0.4759 |
fas → map4 | 2.5776 | 2.6374 | -0.0598 |
bax → map4 | -0.3538 | -0.7187 | 0.3650 |
Gene CLOCK (mouse liver time series data).
Association | t(β_{ standard }) | t(β_{ corrected }) | t(β_{ standard }) -t(β_{ corrected }) |
---|---|---|---|
clock → clock | -2.5462 | -2.3086 | -0.2376 |
clock → cry2 | 1.4255 | 1.4165 | 0.0090 |
clock → per2 | -0.1459 | 0.2372 | -0.3830 |
clock → per3 | 0.5827 | 0.5320 | 0.0507 |
clock → dbp | -1.6838 | -1.6204 | -0.0634 |
cry2 → clock | -0.8201 | -0.9070 | 0.0869 |
cry2 → cry2 | -3.0326 | -2.9813 | -0.0513 |
cry2 → per2 | 0.7007 | -0.0915 | 0.7921 |
cry2 → per3 | 0.8740 | 0.5134 | 0.3606 |
cry2 → dbp | 0.5087 | 0.7566 | -0.2479 |
per2 → clock | 2.3427 | 2.3032 | 0.0394 |
per2 → cry2 | 0.8596 | 0.9123 | -0.0527 |
per2 → per2 | -1.7977 | -1.6259 | -0.1718 |
per2 → per3 | -0.4415 | -0.5319 | 0.0904 |
per2 → dbp | -0.7264 | -0.7320 | 0.0056 |
per3 → clock | -1.3426 | -1.3651 | 0.0225 |
per3 → cry2 | -0.0824 | -0.0569 | -0.0255 |
per3 → per2 | 0.0492 | -0.0515 | 0.1007 |
per3 → per3 | -1.9925 | 1.6944 | -3.6869 |
per3 → dbp | 0.4787 | 0.4176 | 0.0611 |
dbp → clock | 0.1788 | 0.1420 | 0.0368 |
dbp → cry2 | 0.4228 | 0.3759 | 0.0469 |
dbp → per2 | 1.0039 | 0.8547 | 0.1492 |
dbp → per3 | -0.6207 | -0.2896 | -0.3311 |
dbp → dbp | -1.0694 | -1.1063 | 0.0369 |
Comparing the usual and proposed methods on actual biological data is a difficult task, since the "true" values are unknown. However, as observed in the simulation results, it is possible to conclude that the corrected approaches provide more reasonable results than the biased standard methods. The simulation results can be summarized as follows:
- 1.
in both independent and time series data, standard OLS does not work correctly in the presence of measurement error and correlated residuals;
- 2.
in the presence of measurement error and no correlation among the predictors of independent data, the t-test built under the standard OLS approach to test H_{0} : β_{ j }= m for j = 1,..., p works properly only if m = 0 (for other null hypotheses this t-test does not work correctly). This happens because, under this hypothesis, there is no covariate effect and, consequently, there is no measurement error effect associated with the covariate. The same behavior can be seen in Patriota et al. (2009) [25];
- 3.
in the time series case, the t-test (or Wald's test) does not control the type I error rate in the presence of measurement error, regardless of whether there is correlation between the time series;
- 4.
in the presence of measurement error, the estimates obtained by standard OLS are always attenuated.
Therefore, these results demonstrate that improved methods to construct regulatory networks are necessary, since it is known that genes belong to an intricate network, i.e., the covariates may be correlated, and, moreover, gene expression quantification processes such as microarray technology measure with considerable error. If these conditions are ignored, one may obtain distorted results and, consequently, conclude that there is a relationship between gene expression levels where there is no association.
The construction of large networks is a challenge in bioinformatics. The methods proposed here do not allow the identification of networks when the number of variables is larger than the number of observations. As the number of variables increases, the estimates become imprecise and the chance of multicollinearity problems also increases. In the presence of multicollinearity, one may use a feature selection procedure, such as stepwise selection (forward or backward, for example), in order to choose an optimum set of predictors.
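A forward stepwise procedure of the kind mentioned above can be sketched as follows. This is a generic illustration (greedy selection by residual sum of squares on synthetic data), not the specific selection criterion used in this study.

```python
import numpy as np

def forward_select(X, y, max_vars):
    """Greedy forward selection: at each step, add the predictor that most
    reduces the residual sum of squares of an OLS fit (illustrative only)."""
    n, p = X.shape
    chosen, remaining = [], list(range(p))
    for _ in range(max_vars):
        best, best_rss = None, np.inf
        for j in remaining:
            A = np.column_stack([np.ones(n), X[:, chosen + [j]]])
            coef = np.linalg.lstsq(A, y, rcond=None)[0]
            rss = np.sum((y - A @ coef) ** 2)
            if rss < best_rss:
                best, best_rss = j, rss
        chosen.append(best)
        remaining.remove(best)
    return chosen

rng = np.random.default_rng(4)
n = 300
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.1, size=n)   # nearly collinear with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
y = 2.0 * x1 + 1.0 * x3 + rng.normal(scale=0.5, size=n)
print(forward_select(X, y, 2))
```

With two nearly collinear predictors, the procedure keeps only one of them and then picks the independent informative predictor, which is the behavior desired under multicollinearity.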
Analyzing the Pearson correlation coefficient, one can observe that it is simply a linear regression coefficient (OLS) normalized to lie between -1 and 1. Therefore, Pearson correlation-based methods such as Relevance networks [14] or Graphical Gaussian models [26] need further studies in order to evaluate whether they also over-estimate the rate of false positives and attenuate the coefficients like OLS. Moreover, the Pearson correlation is widely used to test linear correlation between a certain gene expression signal and another characteristic, such as prognosis, phenotype, tumor grade, etc. Since these covariates may be measured with error, it is also crucial to develop a corrected Pearson correlation.
where β should be estimated using the corrected OLS (i.e., by simultaneously considering the three equations y = α + βx + ε, X = x + ϵ_{1} and Y = y + ϵ_{2}), σ_{ X }and σ_{ Y }are the standard deviations of the observed variables X and Y, respectively, and σ_{ϵ_{1}} and σ_{ϵ_{2}} are the standard deviations of the measurement errors ϵ_{1} and ϵ_{2}, respectively. In this way, the estimate of the corrected version of the Pearson correlation is consistent (the larger the sample size, the smaller the estimation error tends to be). Notice that the difference between the corrected and uncorrected versions of the Pearson correlation is that we remove the excess of variability from the estimated variances of the latent x and y, since the sample variances of X and Y always over-estimate them due to the measurement errors ϵ_{1} and ϵ_{2} (note that σ_{ X }^{2} = σ_{ x }^{2} + σ_{ϵ_{1}}^{2} and σ_{ Y }^{2} = σ_{ y }^{2} + σ_{ϵ_{2}}^{2}), where σ_{ x }^{2} and σ_{ y }^{2} are the variances of x and y, respectively.
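The variance-correction idea can be sketched numerically as below, using a simple moment-based version under known error variances (the estimator proposed in the text instead plugs in the corrected-OLS β̂; this snippet only illustrates why removing the excess variability disattenuates the correlation).

```python
import numpy as np

def corrected_pearson(X, Y, var_e1, var_e2):
    """Disattenuated Pearson correlation for variables observed with
    known, independent, additive measurement error variances."""
    sxy = np.cov(X, Y)[0, 1]           # cov(X,Y) is unaffected by the errors
    vx = np.var(X, ddof=1) - var_e1    # remove the error excess from var(X)
    vy = np.var(Y, ddof=1) - var_e2    # ... and from var(Y)
    return sxy / np.sqrt(vx * vy)

rng = np.random.default_rng(1)
n = 200_000
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(scale=0.6, size=n)    # corr(x, y) = 0.8
X = x + rng.normal(scale=0.5, size=n)          # add measurement error
Y = y + rng.normal(scale=0.5, size=n)

corr_naive = np.corrcoef(X, Y)[0, 1]
corr_fixed = corrected_pearson(X, Y, 0.25, 0.25)
print(round(corr_naive, 2), round(corr_fixed, 2))
```

The naive correlation between the surrogates is attenuated toward zero, while the corrected version recovers the correlation between the latent variables.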
Although the examples provided here only treat regulatory network models, the proposed approaches can be applied in a straightforward manner also to estimate linear relationships between random variables measured with error.
Conclusions
Unfortunately, avoiding measurement error completely is a very difficult task; however, it can be minimized during the measuring (experimental) process and treated during the data analysis step. Here, we have shown evidence that the presence of measurement errors has a high impact on regulatory network models. In order to overcome this problem, approaches for both major data conditions, independent and time series data, were proposed, in addition to measurement error estimation procedures. Further studies are necessary in order to verify the performance of other regulatory network models (Bayesian networks, Structural Equation models, Graphical Gaussian models, Relevance networks, etc) in the presence of measurement error.
Methods
In this section, standard Ordinary Least Squares and Vector Autoregressive models will be described. Furthermore, corrected methods for measurement error will also be presented. Finally, the model used in the simulations will be detailed.
Ordinary least squares
In a multivariate regression model, let x_{1}, x_{2},..., x_{ p }be p predictor variables (genes) possibly related to a response variable y (gene). The conventional linear regression model states that gene y is composed of an intercept or constant α, which is the basal expression level of y; the predictors or gene expressions x_{ j }(j = 1,..., p), whose relationship with y is represented by β = (β_{1},..., β_{ p })^{⊤} (the sign of β_{ j }represents the direction of the relationship between y and x_{ j }, i.e., positive or negative association); and a random error ε, which accounts for the intrinsic biological variation (this is not the measurement error).
where ⊗ is the Kronecker product and Σ̂_{ε} is the (unbiased) estimator of Σ_{ε}. Notice that the diagonal elements of the asymptotic covariance matrix of β̂ are the variances of its elements, say var(β̂_{ j }) for j = 1,..., p.
Hypothesis testing
The main interest in a simple regression model (y_{ i }= α + βx_{ i }+ ε_{ i }) lies in testing the strength of the relationship between the predictor variable (gene) x and the response variable (gene) y, in other words, testing whether β is equal to a certain value m (in general, m = 0, i.e., whether or not there is a linear relationship between genes x and y).
where C is a matrix of contrasts (usually, C = I). For more details about the matrix of contrasts, see [27]. Under the null hypothesis, (20) has a limit χ^{2}(d) distribution, where d = rank(C) gives the number of linear restrictions.
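A generic Wald statistic of this form can be computed as below. The numerical values of β̂ and its covariance matrix are arbitrary placeholders, used only to show the mechanics of the contrast matrix and the χ² reference distribution.

```python
import numpy as np
from scipy import stats

def wald_test(beta_hat, cov_beta, C, m):
    """Wald statistic for H0: C beta = m; chi^2 with rank(C) df."""
    diff = C @ beta_hat - m
    W = diff @ np.linalg.solve(C @ cov_beta @ C.T, diff)
    df = np.linalg.matrix_rank(C)
    return W, stats.chi2.sf(W, df)      # statistic and p-value

# Hypothetical example: test whether the 2nd of 3 coefficients equals zero
beta_hat = np.array([0.10, 0.45, -0.02])
cov_beta = np.diag([0.01, 0.01, 0.01])
C = np.array([[0.0, 1.0, 0.0]])         # contrast selecting beta_2
W, p = wald_test(beta_hat, cov_beta, C, np.array([0.0]))
print(round(W, 2), p < 0.05)
```

Here rank(C) = 1, so the statistic is compared against a χ²(1) distribution; in this made-up example the null hypothesis is rejected at the 5% level.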
Ordinary least squares with measurement error
where ϵ_{1} ~ N(0, Σ_{ϵ_{1}}), independent of ϵ_{2} ~ N(0, Σ_{ϵ_{2}}), with Σ_{ϵ_{1}} and Σ_{ϵ_{2}} known, are called measurement errors, i.e., the variation introduced by the measurement process (for example, the measurement error introduced when analyzing microarrays); ε ~ N(0, Σ_{ ε }) is the random error (intrinsic biological variation); and x ~ N(μ_{ x }, Σ_{ xx }), y ~ N(μ_{ y }, Σ_{ yy }) with μ_{ y }= α + βμ_{ x }and Σ_{ yy }= βΣ_{ xx }β^{⊤} + Σ_{ε}.
i.e., the measurement errors may be different for each variable. Notice that the components of the measurement error vector may be correlated, but the vectors themselves are mutually independent.
Notice that Σ_{ ε }is estimated using equation (10), and the measurement error covariance matrix must be known a priori (it can be estimated using the procedures described in the section "Measurement error estimation").
Notice that, in the absence of measurement error, i.e., when the measurement error covariance matrix is zero, the corrected OLS is exactly equal to the standard OLS. Furthermore, it is noteworthy that this asymptotic variance is similar to the one presented by [23], but in a multivariate form with no correlation in the errors.
Hypothesis testing
where C is a matrix of contrasts. Under the null hypothesis, (33) follows a χ^{2} distribution with rank(C) degrees of freedom.
Vector autoregressive model
Here we define the usual VAR model as defined in Lütkepohl (2006) [28].
where I_{ p }denotes the p × p identity matrix.
where β = (β_{1}β_{2}... β_{ r }) is a (p × pr) matrix.
Vector autoregressive model with measurement error
Now, the VAR model with measurement error will be presented.
where Z_{ t }= (Z_{1t}, Z_{2t},..., Z_{ pt })^{⊤} is the surrogate vector and ϵ_{ t }= (ϵ_{1t}, ϵ_{2t},..., ϵ_{ pt })^{⊤} is the measurement error vector. In most cases, if the usual conditional ML estimator is adopted for the observations subject to errors, i.e., replacing z_{ t }with Z_{ t }in equation (34), the estimator of β will be biased, as well as its asymptotic variance. Therefore, in order to overcome this limitation, the measurement errors should be included in the estimation procedure. Nevertheless, the model (34) plus equation (48) is not identifiable, since the covariance matrices of ε_{ t }and ϵ_{ t }are confounded. This problem can be avoided by considering the variance of ϵ_{ t }known.
where Σ_{ v }= Σ_{ ε }+ Σ_{ϵ} + β(I_{ r }⊗ Σ_{ϵ})β^{⊤}, J_{ l }is an (r × r) matrix of zeros with ones in the |l|^{ th }diagonal above (below) the main diagonal if l > 0 (l < 0), and J_{0} is an (r × r) matrix of zeros.
where C is a matrix of contrasts (C = I, for instance) and m is usually a (p × 1) vector of zeros.
Under the null hypothesis, (59) has a limiting χ^{2}(d) distribution, where d = rank(C) is the number of linear restrictions. This test is useful to identify, in a statistical sense (controlling the rate of false positives), which gene (predictor variable) Granger-causes another gene (response variable).
Measurement error estimation
Here, two methods to estimate the measurement error are proposed: one for the case when technical replicates are available and another for when they are not.
Technical replicates
When technical replicates are available, measurement error estimation may be performed by applying a strategy that extends the method described by Dahlberg (1940) [30] (see the Appendix for details on Dahlberg's method). For microarray data, it is known that the variance varies along the spots (heteroscedasticity) due to variations in experimental conditions (efficiency of dye incorporation, washing process, etc.) [31]. Moreover, Dahlberg's approach is not suitable in the presence of systematic errors. Therefore, the application of Dahlberg's formula is not straightforward. In order to overcome this problem, we suggest the following algorithm [32].
- 1.
Perform a non-linear regression, such as spline smoothing, between log(W) and log(W'), i.e., log(W') = f(log(W)) + ε_{1}. Notice that the logarithm is applied as a variance stabilizer (due to the high variance observed in microarray data), a common practice in microarray analysis;
- 2.
- 3.
Calculate the resulting quantity; it is a possible estimate of the standard deviation of the measurement error. Notice that with this process we obtain one estimate for each spot i = 1,..., m, where m is the number of spots in the microarray, even in the presence of heteroscedasticity.
No technical replicates
- 1.
Let S be the set of all probes in the microarray and H be the set of housekeeping genes and negative controls. Calculate the mean and variance for each probe of S and H;
- 2. Perform a spline smoothing on both sets of probes separately, i.e., fit var(H) = f(mean(H)) + ε_{1} and var(S\{H}) = g(mean(S\{H})) + ε_{2}, where H is a matrix containing the expression values of each housekeeping gene and negative control in each row and S\{H} is a matrix containing the expression values of the remaining probes in each row. The functions f and g may be represented by a linear combination of spline basis functions ϕ_{ j }(·), i.e., they may be written as
- 3.
Divide the smoothed curve f (obtained in step 2) by the other smoothed curve g. Notice that this ratio (f/g) estimates the measurement error as a fraction of the total variance for each probe. With this fraction, it is possible to estimate the variance of the measurement error for each probe.
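The three steps above can be sketched as follows (a rough illustration of ours; polynomial fits stand in for the spline smoothers, and the function name is hypothetical):

```python
import numpy as np

def error_variance_per_probe(mean_all, var_all, mean_hk, var_hk, degree=3):
    """Estimate the measurement-error variance of each probe from
    housekeeping genes / negative controls, whose biological variation
    is assumed negligible, so their variance reflects measurement error.
    Polynomial fits replace the spline smoothers of the text."""
    f = np.poly1d(np.polyfit(mean_hk, var_hk, degree))    # smoothed error-variance trend
    g = np.poly1d(np.polyfit(mean_all, var_all, degree))  # smoothed total-variance trend
    ratio = np.clip(f(mean_all) / g(mean_all), 0.0, 1.0)  # error share of total variance
    return ratio * var_all                                # error variance per probe

# Synthetic check: error variance 0.5 and total variance 1.0 everywhere,
# so the estimated error variance should be about 0.5 for every probe.
mean_hk = np.linspace(1, 10, 50)
var_hk = np.full(50, 0.5)
mean_all = np.linspace(1, 10, 200)
var_all = np.full(200, 1.0)
est = error_variance_per_probe(mean_all, var_all, mean_hk, var_hk)
```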
Simulations
In order to evaluate the behavior of both the standard and the proposed methods, we conducted two simulations with small, moderate and large sample sizes (50, 100, 200 and 400). Computations were performed in the R software (a free software environment for statistical computing and graphics) [34]. For each simulation setting, 10,000 Monte Carlo samples were generated. Simulation I is for independent data and Simulation II is for time series data.
Simulation I - independent data
In order to make the simulation more realistic (since actual biological gene expression signals are generally quite correlated), notice that Σ_{ ε }is not a diagonal matrix, i.e., there are small correlations between the predictors. The sample size varied from 50 to 400.
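Correlated Gaussian errors with a non-diagonal Σ_{ ε }, as in this simulation design, can be generated from a Cholesky factor (a generic sketch of ours; the covariance values below are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 5, 5000
# Non-diagonal error covariance: unit variances, cross-correlation 0.2
Sigma = 0.2 * np.ones((p, p)) + 0.8 * np.eye(p)
L = np.linalg.cholesky(Sigma)               # Sigma = L @ L.T
eps = rng.standard_normal((n, p)) @ L.T     # each row ~ N(0, Sigma)
```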
Simulation II - time series data
The time series length varied from 50 to 400.
Notice that β_{0} is the autoregressive coefficient and all time series X_{ i }for i = 1,...,9 are autocorrelated and also contemporaneously correlated (Σ_{ ε }is not a diagonal matrix).
Actual biological data
The standard and proposed OLS methods were applied to lung cancer gene expression data collected by [35]. This dataset is composed of 400 microarrays, each constructed using cDNA obtained from a different patient. The standard and corrected VAR approaches were applied to mouse liver time series data collected by [36]. These data are composed of 48 time points sampled at 1-hour intervals.
Appendix
Proof of the asymptotic variance of β (equation (29))
Here, we prove equation (29), i.e., the asymptotic variance of β in the multivariate case with uncorrelated errors.
The proof idea, similar to that presented in [23], has two main steps. The first step consists in showing that vec(β̂^{⊤}) - vec(β^{⊤}) can be written as a linear combination of a vectorial mean. In the second, we must demonstrate that this vectorial mean has an asymptotic normal distribution. Therefore, we need some auxiliary results to prove the asymptotic result, which are stated in the two propositions below.
with W_{ i }= (ε_{ i }+ ϵ_{2i}- β ϵ_{1i}) ⊗ (x_{ i }- μ_{ x }+ ϵ_{1i}) - Ψ, Ψ = (I_{ q }⊗ )vec(β^{⊤}), and b_{ n }= 𝒪_{ prob }(n^{-1}) meaning that nb_{ n }is bounded in probability as n diverges; consequently, b_{ n }goes to zero in probability as n increases.
where ϵ_{2i}= (ϵ_{2,1i},...,ϵ_{2, qi})^{⊤}.
with W_{ i }= (ε_{ i }+ ϵ_{2i}- β ϵ_{1i}) ⊗ (x_{ i }- μ_{ x }+ ϵ_{1i}) - Ψ and Ψ = (I_{ q } ⊗ ) vec(β^{⊤}).
Dahlberg's error
where Z_{ ij }is the measure obtained in one experiment (microarray), i is the sample index, i = 1,..., m, m is the number of spots in the microarray, j is the replicate number (j = 1, 2 in the case of duplicates), μ_{ i }is the unknown true value of the measure and ϵ_{ ij }is the measurement error.
Then, assume that E(ϵ_{ ij }) = 0 and Var(ϵ_{ ij }) = δ_{ϵ}^{2}. Thus, one quantification of the quality of the measure is the standard deviation of ϵ_{ ij }, i.e., δ_{ϵ}. Notice that the lower the standard deviation δ_{ϵ}, the smaller the measurement error.
The quantity δ̂_{ϵ} = √(Σ_{ i =1}^{ m }(Z_{ i 1} - Z_{ i 2})^{2}/(2m)) is exactly Dahlberg's formula proposed in [30].
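Dahlberg's formula is straightforward to implement; the sketch below (ours) checks it on synthetic duplicate spots:

```python
import numpy as np

def dahlberg(z1, z2):
    """Dahlberg's estimate of the measurement-error standard deviation
    from duplicate measures z1, z2 of the same m spots:
    sqrt( sum_i (z1_i - z2_i)^2 / (2 m) )."""
    d = np.asarray(z1, float) - np.asarray(z2, float)
    return float(np.sqrt(np.sum(d ** 2) / (2 * d.size)))

# Check on synthetic duplicates with true error sd 0.3:
rng = np.random.default_rng(7)
mu = rng.normal(size=50000)                    # true spot values
z1 = mu + rng.normal(scale=0.3, size=mu.size)
z2 = mu + rng.normal(scale=0.3, size=mu.size)
# dahlberg(z1, z2) is close to 0.3
```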
Declarations
Acknowledgements
This research was supported by grants from RIKEN and FAPESP.
Authors’ Affiliations
References
- Mar JC, Kimura Y, Schroder K, Irvine KM, Hayashizaki Y, Suzuki H, Hume D, Quackenbush J: Data-driven normalization strategies for high-throughput quantitative RT-PCR. BMC Bioinformatics 2009, 10: 110. doi:10.1186/1471-2105-10-110
- Fontaine L, Even S, Soucaille P, Lindley ND, Cocaign-Bousquet M: Transcript quantification based on chemical labeling of RNA associated with fluorescent detection. Anal Biochem 2001, 298(2):246–252. doi:10.1006/abio.2001.5390
- Leung YF, Cavalieri D: Fundamentals of cDNA microarray data analysis. Trends in Genetics 2003, 19: 649–659. doi:10.1016/j.tig.2003.09.015
- Yang YH, Buckley MJ, Dudoit S, Speed TP: Comparison of methods for image analysis on cDNA microarray data. Journal of Computational and Graphical Statistics 2002, 11: 108–136. doi:10.1198/106186002317375640
- Karakach TK, Wentzell PD: Methods for estimating and mitigating errors in spotted, dual-color DNA microarrays. OMICS 2007, 11(2):186–199. doi:10.1089/omi.2007.0008
- Kim K, Page GP, Beasley TM, Barnes S, Scheirer KE, Allison DB: A proposed metric for assessing the measurement quality of individual microarrays. BMC Bioinformatics 2006, 7(35).
- Strimmer K: Modeling gene expression measurement error: a quasi-likelihood approach. BMC Bioinformatics 2003, 4(10).
- Liu X, Milo M, Lawrence ND, Rattray M: Probe-level measurement error improves accuracy in detecting differential gene expression. Bioinformatics 2006, 22(17):2107–2113. doi:10.1093/bioinformatics/btl361
- Zhang D, Wells MT, Smart CD, Fry WE: Bayesian normalization and identification for differential gene expression data. Journal of Computational Biology 2005, 12: 391–406. doi:10.1089/cmb.2005.12.391
- Dojer N, Gambin A, Mizera A, Wilczyński B, Tiuryn J: Applying dynamic Bayesian networks to perturbed gene expression data. BMC Bioinformatics 2006, 7: 249. doi:10.1186/1471-2105-7-249
- Friedman N, Linial M, Nachman I, Pe'er D: Using Bayesian networks to analyze expression data. Journal of Computational Biology 2000, 7: 601–620. doi:10.1089/106652700750050961
- Akutsu T, Miyano S, Kuhara S: Algorithms for identifying Boolean networks and related biological networks based on matrix multiplication and fingerprint function. Journal of Computational Biology 2000, 7: 331–343. doi:10.1089/106652700750050817
- Pal R, Datta A, Bittner M, Dougherty E: Intervention in context sensitive probabilistic Boolean networks. Bioinformatics 2005, 21: 1211–1218. doi:10.1093/bioinformatics/bti131
- Moriyama M, Hoshida Y, Otsuka M, Nishimura S, Kato N, Goto T, Taniguchi H, Shiratori Y, Seki N, Omata M: Relevance network between chemosensitivity and transcriptome in human hepatoma cells. Molecular Cancer Therapeutics 2003, 2: 199–205.
- Schäfer J, Strimmer K: An empirical Bayes approach to inferring large-scale gene association networks. Bioinformatics 2005, 21: 754–764.
- Chen KC, Wang TY, Tseng HH, Huang CYF, Kao CY: A stochastic differential equation model for quantifying transcriptional regulatory network in Saccharomyces cerevisiae. Bioinformatics 2005, 21(12):2883–2890.
- Fujita A, Sato J, Demasi M, Sogayar M, Ferreira C, Miyano S: Comparing Pearson, Spearman and Hoeffding's D measure for gene expression association analysis. Journal of Bioinformatics and Computational Biology 2009, 7(4):663–684. doi:10.1142/S0219720009004230
- Shimamura T, Imoto S, Yamaguchi R, Fujita A, Nagasaki M, Miyano S: Recursive regularization for inferring gene networks from time-course gene expression profiles. BMC Systems Biology 2009, 3(41).
- Mukhopadhyay ND, Chatterjee S: Causality and pathway search in microarray time series experiment. Bioinformatics 2007, 23: 442–449. doi:10.1093/bioinformatics/btl598
- Fujita A, Sato J, Garay-Malpartida H, Morettin P, Sogayar M, Ferreira C: Time-varying modeling of gene expression regulatory networks using the wavelet dynamic vector autoregressive method. Bioinformatics 2007, 23(13):1623–1630. doi:10.1093/bioinformatics/btm151
- Fujita A, Sato J, Garay-Malpartida H, Yamaguchi R, Miyano S, Sogayar M, Ferreira C: Modeling gene expression regulatory networks with the sparse vector autoregressive model. BMC Systems Biology 2007, 1: 39. doi:10.1186/1752-0509-1-39
- Fujita A, Sato J, Garay-Malpartida H, Sogayar M, Ferreira C, Miyano S: Modeling nonlinear gene regulatory networks from time series gene expression data. Journal of Bioinformatics and Computational Biology 2008, 6(5):961–979. doi:10.1142/S0219720008003746
- Fuller W: Measurement error models. New York: Wiley; 1987.
- Edery I: Circadian rhythms in a nutshell. Physiol Genomics 2000, 3(2):59–74.
- Patriota AG, Bolfarine H, Castro M: A heteroscedastic structural errors-in-variables model with equation error. Statistical Methodology 2009, 6(4):408–423. doi:10.1016/j.stamet.2009.02.003
- Wille A, Zimmermann P, Vranová E, Fürholz A, Laule O, Bleuler S, Hennig L, Prelić A, von Rohr P, Thiele L, Zitzler E, Gruissem W, Bühlmann P: Sparse graphical Gaussian modeling of the isoprenoid gene network in Arabidopsis thaliana. Genome Biology 2004, 5: R92. doi:10.1186/gb-2004-5-11-r92
- Graybill F: Theory and application of the linear model. Massachusetts: Duxbury Press; 1976.
- Lütkepohl H: New introduction to multiple time series analysis. Berlin: Springer; 2006.
- Patriota AG, Sato JR, Blas BG: Vector autoregressive models with measurement errors for testing Granger causality. arXiv:0911.5628v1
- Dahlberg G: Statistical methods for medical and biological students. New York: Interscience Publications; 1940.
- Fan J, Tam P, Woude GV, Ren Y: Normalization and analysis of cDNA microarrays using within-array replications applied to neuroblastoma cell response to a cytokine. PNAS 2004, 101(5):1135–1140. doi:10.1073/pnas.0307557100
- Fujita A, Sato J, da Silva F, Galvão M, Sogayar M, Miyano S: Quality control and reproducibility in DNA microarray experiments. Genome Informatics, in press.
- Eisenberg E, Levanon EY: Human housekeeping genes are compact. Trends in Genetics 2003, 19(7):362–365. doi:10.1016/S0168-9525(03)00140-9
- The R Project for Statistical Computing [http://www.r-project.org/]
- Director's Challenge Consortium for the Molecular Classification of Lung Adenocarcinoma, Shedden K, Taylor JMG, Enkemann SA, Tsao MS, Yeatman TJ, Gerald WL, Eschrich S, Jurisica I, Giordano TJ, Misek DE, Chang AC, Zhu CQ, Strumpf D, Hanash S, Shepherd FA, Ding K, Seymour L, Naoki K, Pennell N, Weir B, Verhaak R, Ladd-Acosta C, Golub T, Gruid M, Sharma A, Szoke J, Zakowski M, Rusch V, Kris M, Viale A, Motoi N, Travis W, Conley B, Seshan VE, Meyerson M, Kuick R, Dobbin KK, Lively T, Jacobson JW, Beer DG: Gene expression based survival prediction in lung adenocarcinoma: a multi-site, blinded validation study. Nature Medicine 2008, 14: 822–827. doi:10.1038/nm.1790
- Hughes ME, DiTacchio L, Hayes KR, Vollmers C, Pulivarthy S, Baggs JE, Panda S, Hogenesch JB: Harmonics of circadian gene transcription in mammals. PLoS Genetics 2009, 5(4):e1000442. doi:10.1371/journal.pgen.1000442
- Athreya K, Lahiri S: Measure theory and probability theory. Berlin: Springer; 2006.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.