Volume 13 Supplement 10
Selected articles from the 7th International Symposium on Bioinformatics Research and Applications (ISBRA'11)
Gene network modular-based classification of microarray samples
 Pingzhao Hu^{1},
 Shelley B Bull^{2} and
 Hui Jiang^{1}
DOI: 10.1186/1471-2105-13-S10-S17
© Hu et al.; licensee BioMed Central Ltd. 2012
Published: 25 June 2012
Abstract
Background
A molecular predictor is a new tool for disease diagnosis, which uses gene expression to classify the diagnostic category of a patient. The statistical challenge in constructing such a predictor is that there are thousands of genes available to predict the disease categories, but only a small number of samples.
Results
We proposed a gene network modular-based linear discriminant analysis approach that integrates the 'essential' correlation structure among genes into the predictor, so that the modules or cluster structures of genes related to the diagnostic classes of interest can have a potential biological interpretation. We evaluated the performance of the new method against other established classification methods using three real data sets.
Conclusions
Our results show that the new approach has the advantage of computational simplicity and efficiency, with relatively lower classification error rates than the compared methods in many cases. The modular-based linear discriminant analysis approach introduced in this study has the potential to increase the power of discriminant analysis for microarray studies in which sample sizes are small and there are large numbers of genes.
Background
With the development of microarray technology, more and more statistical methods have been developed and applied to disease classification using microarray gene expression data. For example, Golub et al. developed a "weighted voting method" to classify two types of human acute leukemia [1]. Radmacher et al. constructed a 'compound covariate predictor' to predict the BRCA1 and BRCA2 mutation status of breast cancer [2]. The family of linear discriminant analysis (LDA) methods has been widely applied to such high-dimensional data [3–6]. LDA computes the optimal transformation that minimizes the within-class distance and maximizes the between-class distance simultaneously, thus achieving maximum discrimination. Many other works have extended the LDA framework to handle the large p (number of genes) and small n (sample size) problem. For example, Shen et al. developed an eigengene-based linear discriminant model that uses a modified rotated spectral decomposition approach to select 'hub' genes [5]. Pang et al. proposed an improved diagonal discriminant method through shrinkage and regularization of variance, which borrows information across genes to improve the estimation of gene-specific variance [6].
Studies have shown that, given the same set of selected genes, different classification methods often perform quite similarly, and simple methods like diagonal linear discriminant analysis (DLDA) and k nearest neighbor (kNN) normally work remarkably well [3]. However, the data points in microarray data sets come from a very high-dimensional space, and in general the sample size does not exceed this dimension, which presents unique challenges to feature selection and predictive modeling. Thus, finding the most informative genes is a crucial task in building predictive models from microarray gene expression data that handle the large p (number of genes) and small n (sample size) problem. To tackle this issue, different clustering-based classification approaches have been proposed to reduce the data dimension.
Li et al. developed cluster-Rasch models, in which a model-based clustering approach is first used to cluster genes and the discretized gene expression values are then input into a Rasch model to estimate, for each gene cluster, a latent factor associated with the disease classes [7]. The estimated latent factors are finally used in a regression analysis for disease classification. They demonstrated that their results were comparable to those previously obtained, but the discretization of continuous gene expression levels usually results in a loss of information. Hastie et al. proposed a tree harvesting procedure for finding additive and interaction structure among gene clusters in their relation to an outcome measure [8]. They found that the advantage of the method could not be demonstrated due to the lack of rich samples. Dettling et al. presented an algorithm that searches for gene clusters in a supervised way; the average expression profile of each cluster is treated as a predictor for traditional supervised classification methods [9]. A similar idea was further explored by Park et al. [10], who took a two-step procedure: 1) hierarchical clustering and 2) the Lasso. In the first step, they defined supergenes by averaging the genes within the clusters; in the second step, they used the supergene expression profiles to fit regression models. However, using simple averages discards information about the relative prediction strength of different genes in the same gene cluster [9]. Yu also compared different approaches to forming gene clusters, using the resulting information to provide sets of genes as predictors in regression [11]. However, clustering approaches are often subjective, and usually neglect the detailed relationships among genes.
Recently, gene coexpression networks have become a more and more active research area [12–15]. A gene coexpression network is essentially a graph in which nodes correspond to genes and edges between genes represent their coexpression relationships. The gene neighbor relations (such as topology) in these networks are usually neglected in traditional cluster analysis [14]. One of the major applications of gene coexpression networks has centered on identifying functional modules in an unsupervised way [12, 13]; such modules may be poorly suited to distinguishing members of different sample classes. Recent studies have shown that a prognostic signature, which can classify the gene expression profiles of individual patients, can be identified from network modules in a supervised way [15].
In this study, we propose a network modular-based LDA method (named MLDA) for improving the prediction performance of DLDA, DQDA and others. The major difference between our method and other LDA-based methods is that MLDA incorporates the gene network modules into LDA in a supervised way. We build the MLDA prediction model using modular-specific features. As a comparison, we also implement a variant of supergene-based regression models [10]. We first define supergenes by extracting the first principal component (PC) within the network modules. We then use the supergene expression profiles to fit a logistic regression (LR) model. We name this method MPCLR.
Materials and methods
Data sets
Seed-based network-module identification
 1: Build a coexpression network using the Pearson correlation coefficient (r) [21].
 2: Compute a test statistic T_{ i } (i = 1,2,...,p) for each gene i in the coexpression network using the standard t-statistic or a modified t-statistic, such as significance analysis of microarrays (SAM) [22].
 3: Rank the absolute test statistic values from largest to smallest and select the top m genes as seed genes.
 4: Find the module membership s for each selected seed gene i* in the coexpression network. The module assignments can be characterized by a many-to-one mapping. That is, one seeks a particular encoder C_{ r }(i*) that maximizes
$$\underset{r}{max}\phantom{\rule{0.3em}{0ex}}ave\left\{abs\left({T}_{s}\right):s\in {C}_{r}\left(i*\right)\right\},$$
where ${C}_{r}\left(i*\right)=\left\{s:abs\left(corr\left({x}_{{i}^{*}},{x}_{s}\right)\right)\ge r\right\}$. The set of genes s for each seed gene i* is an adaptively chosen module that maximizes the average (ave) differential expression signal around gene i*. The identified genes s have absolute (abs) correlation (corr) with i* greater than or equal to r.
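The four steps above can be sketched in a few lines of code. The following is a minimal illustration (not the authors' implementation), assuming numpy, a genes-by-samples expression matrix, and hypothetical helper names `welch_t` and `seed_modules`; the grid of correlation cutoffs is an arbitrary choice.

```python
import numpy as np

def welch_t(x_a, x_b):
    """Two-sample t-statistic for each gene (rows: genes, cols: samples)."""
    na, nb = x_a.shape[1], x_b.shape[1]
    se = np.sqrt(x_a.var(axis=1, ddof=1) / na + x_b.var(axis=1, ddof=1) / nb)
    return (x_a.mean(axis=1) - x_b.mean(axis=1)) / se

def seed_modules(x, t, n_seeds=5, r_grid=np.arange(0.5, 1.0, 0.05)):
    """Correlation-sharing module around each seed gene: pick the cutoff r
    that maximizes the average |T_s| over genes correlated with the seed."""
    corr = np.abs(np.corrcoef(x))               # gene-by-gene |correlation|
    seeds = np.argsort(-np.abs(t))[:n_seeds]    # top-m genes by |t|
    modules = {}
    for i in seeds:
        best, best_score = None, -np.inf
        for r in r_grid:
            members = np.flatnonzero(corr[i] >= r)   # always includes i itself
            score = np.abs(t[members]).mean()
            if score > best_score:
                best, best_score = members, score
        modules[i] = best
    return modules
```

The inner loop over `r_grid` is the adaptive choice of the cutoff r described in step 4.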
MLDA algorithm
We propose a new formulation of traditional linear discriminant analysis. Specifically, we first use the seed-based approach to identify gene network modules. Then we perform LDA in each module. The linear predictors in all the identified modules are then summed up. The new modular-based classification approach returns signature components of tight coexpression with good predictive performance.
Assume there are two sample groups A and B (such as disease and normal groups), which have n_{ A } and n_{ B } samples, respectively. The data for each sample j consist of a gene expression profile x_{ j } = (x_{1j},x_{2j},...,x_{ pj }), where x_{ ij } is the log-ratio expression measurement for gene i = 1,2,...,p and sample j = 1,2,...,n, with n = n_{ A }+n_{ B }. We assume that expression profiles x from group k (k ∈ {A,B}) are distributed as N(μ_{ k },Σ_{ k }), a multivariate normal distribution with mean vector μ_{ k } and covariance matrix Σ_{ k }.
Assuming genes in different modules are uncorrelated, the pooled covariance matrix is block-diagonal, $\widehat{\Sigma }=diag\left({\widehat{\Sigma }}_{1},{\widehat{\Sigma }}_{2},...,{\widehat{\Sigma }}_{C}\right)$, where C is the number of blocks (gene modules) and ${\widehat{\Sigma }}_{c}$ is the estimated covariance matrix for block c (c = 1,2,...,C).
The linear predictor (LP) of MLDA for a new sample is $LP=\sum _{c=1}^{C}{\left[{x}_{c}-\frac{1}{2}\left({\widehat{\mu }}_{A}^{c}+{\widehat{\mu }}_{B}^{c}\right)\right]}^{T}{\widehat{\Sigma }}_{c}^{-1}\left({\widehat{\mu }}_{A}^{c}-{\widehat{\mu }}_{B}^{c}\right)$, where ${x}_{c}$ is the vector of expression measurements of the genes in module c for a new sample to be predicted and ${\widehat{\mu }}_{k}^{c}$ $\left(k\in \left\{A,B\right\}\right)$ is the mean vector of the genes in module c. Obviously, linear discriminant analysis (LDA) and diagonal linear discriminant analysis (DLDA) [3] are special cases of MLDA. When C = 1, $LP={\left[x-\frac{1}{2}\left({\widehat{\mu }}_{A}+{\widehat{\mu }}_{B}\right)\right]}^{T}{\widehat{\Sigma }}^{-1}\left({\widehat{\mu }}_{A}-{\widehat{\mu }}_{B}\right)$, where x is the vector of expression measurements of the p genes for a new sample to be predicted, so MLDA reduces to LDA; when C = p (that is, each module has only one gene), $LP=\sum _{i=1}^{p}\left[{x}_{i}-\frac{1}{2}\left({\widehat{\mu }}_{A}^{i}+{\widehat{\mu }}_{B}^{i}\right)\right]\left({\widehat{\mu }}_{A}^{i}-{\widehat{\mu }}_{B}^{i}\right)/{\sigma }_{i}^{2}$, where x_{ i } is the expression measurement of gene i for a new sample to be predicted, so MLDA reduces to DLDA.
Here ${\widehat{r}}_{c}=median\left\{{\widehat{r}}_{i{i}^{\prime }}\right\}$ for i,i' = 1,2,...,p_{ c } with i ≠ i', and ${\widehat{r}}_{i{i}^{\prime }}$ is the correlation estimate between gene i and gene i' in module c of sample group k.
However, in some modules (say module c), it is possible that n &lt; p_{ c }. In this case, Σ_{ c } is not invertible. We apply the singular value decomposition (SVD) technique [23] to solve this problem. Assume Σ_{ c } is a p_{ c } × p_{ c } covariance matrix; it can be decomposed uniquely as Σ_{ c } = UDV^{ T }, where U and V are orthogonal and $D=diag\left({\sigma }_{1},{\sigma }_{2},...,{\sigma }_{{p}_{c}}\right)$ with ${\sigma }_{1}\ge {\sigma }_{2}\ge ...\ge {\sigma }_{{p}_{c}}\ge 0$. If Σ_{ c } is nonsingular (iff σ_{ i } ≠ 0 for all i = 1,2,...,p_{ c }), then its inverse is given by ${\Sigma }_{c}^{-1}=V{D}^{-1}{U}^{T}$, where ${D}^{-1}=diag\left(1/{\sigma }_{1},1/{\sigma }_{2},...,1/{\sigma }_{{p}_{c}}\right)$.
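As a quick numerical sketch of this SVD-based inversion (a hypothetical numpy helper, not the paper's implementation): zeroing the reciprocals of near-zero singular values turns the same formula into the Moore-Penrose pseudo-inverse, which also covers the singular case n &lt; p_c.

```python
import numpy as np

def svd_inverse(sigma, tol=1e-10):
    """Invert (or pseudo-invert) a covariance matrix via Sigma = U D V^T.
    Reciprocals of singular values below tol are set to zero, so singular
    matrices yield the Moore-Penrose pseudo-inverse instead of failing."""
    u, d, vt = np.linalg.svd(sigma)
    d_inv = np.where(d > tol, 1.0 / np.maximum(d, tol), 0.0)
    return vt.T @ np.diag(d_inv) @ u.T          # V D^{-1} U^T
```

For a nonsingular covariance matrix this reproduces the exact inverse given in the text.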
The rule to assign a new sample j to a group is thus: if $LP\ge log\left(\frac{{n}_{B}}{{n}_{A}}\right)$, sample j is assigned to group A; otherwise, it is assigned to group B.
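Putting the module-wise linear predictor and the threshold together, a compact sketch of the MLDA decision rule might look as follows (numpy; the function name and data layout are illustrative assumptions, not the authors' code):

```python
import numpy as np

def mlda_predict(x_new, modules, mu_a, mu_b, cov_inv, n_a, n_b):
    """Module-based LDA: sum the linear predictor over gene modules and
    compare it against log(n_B/n_A).  `modules` is a list of gene-index
    arrays; `cov_inv` a matching list of inverted module covariances."""
    lp = 0.0
    for genes, s_inv in zip(modules, cov_inv):
        xc = x_new[genes]
        da, db = mu_a[genes], mu_b[genes]
        lp += (xc - 0.5 * (da + db)) @ s_inv @ (da - db)
    return "A" if lp >= np.log(n_b / n_a) else "B"
```

With a single module this is ordinary LDA; with one gene per module it is DLDA, matching the special cases above.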
MPCLR algorithm
In order to compare MLDA with other supergene-based classification approaches, we also implement a variant of supergene-based regression models [10]. The MPCLR classification algorithm includes three stages: 1) construct correlation-sharing based gene network modules; 2) extract metagene expression profiles from the constructed modules using principal component analysis (PCA); 3) classify samples using a PCA-based logistic regression model. Here we briefly describe each of the three stages:
Stage 1: Construct seedbased gene network modules. This can be done using the same approach as used in MLDA algorithm described above.
Stage 2: Principal component analysis of correlation-shared expression profiles. For each of the seed-based gene network modules, we perform principal component analysis. Specifically, for a given gene module with p_{ c } genes, let ${x}_{j}=\left({x}_{1j},{x}_{2j},...,{x}_{{p}_{c}j}\right)$ be the expression indices of the p_{ c } genes in the j th sample. Let Σ be the covariance matrix of x, with dimension p_{ c } × p_{ c }. The positive eigenvalues of Σ are denoted by ${\lambda }_{1}>{\lambda }_{2}>...>{\lambda }_{{p}_{c}}$. The first PC score of the j th sample is given by ${x}_{j}^{*}={e}_{1}^{T}{x}_{j}$, where e_{1} is the eigenvector associated with λ_{1}. Therefore, we can define the supergene expression profile for the n samples in a seed-based gene module as ${x}^{*}=\left({x}_{1}^{*},{x}_{2}^{*},...,{x}_{n}^{*}\right)$. The estimated values for the coefficients ${e}_{1}^{T}$ (eigenvector) of the first PC can be computed using singular value decomposition (SVD) [23]. Briefly, let E be an n × p_{ c } matrix with the normalized gene expression values of the p_{ c } genes in a given module; we can then express the SVD of E as E = UDA^{ T }, where U = {u_{1},u_{2},...,u_{ d }} is an n × d matrix (d = rank(E)), $D=diag\left\{{d}_{1}^{1/2},{d}_{2}^{1/2},...,{d}_{d}^{1/2}\right\}$ is a d × d diagonal matrix in which d_{ k } is the k th eigenvalue of E^{ T }E, and A = {e_{1},e_{2},...,e_{ d }} is a p_{ c } × d matrix in which e_{ k } is the eigenvector associated with λ_{ k } and gives the coefficients defining the PC scores. The magnitude of the loadings for the first principal component score can be viewed as an estimate of the amount of contribution from the module genes.
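The supergene extraction in Stage 2 amounts to projecting a module's centered expression matrix onto its first right singular vector. A minimal numpy sketch (the function name is hypothetical):

```python
import numpy as np

def supergene(e):
    """First principal-component score per sample from an n x p_c module
    expression matrix E, via SVD of the column-centered data."""
    e = e - e.mean(axis=0)                    # center each gene's column
    u, d, vt = np.linalg.svd(e, full_matrices=False)
    e1 = vt[0]                                # first eigenvector of E^T E
    return e @ e1                             # x*_j = e1^T x_j
```

The sign of the score vector is arbitrary, as is usual for principal components; only its direction carries information.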
The LR model is $logit\left({p}_{j}\right)={\beta }_{0}+\sum _{{i}^{*}=1}^{C}{\beta }_{{i}^{*}}PC{1}_{{i}^{*}j}$, where ${p}_{j}=\mathrm{Pr}\left({Y}_{j}=1|PC{1}_{{i}^{*}j},{i}^{*}=1,2,...,C\right)$ and PC1_{ i*j } is the first principal component score estimated from the seed gene module i* for sample j; it represents the latent variable for the underlying biological process associated with this group of genes. The model was fitted using the glm function in the stats R package.
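For illustration, the logistic-regression stage can be mimicked without R. Below is a tiny numpy stand-in for a binomial GLM, fitted by plain gradient ascent; the function names, learning rate and iteration count are arbitrary assumptions, not part of the paper's method.

```python
import numpy as np

def fit_logistic(pc, y, lr=0.1, n_iter=5000):
    """Logistic regression of class label y (0/1) on module PC1 scores
    (n samples x C modules), fitted by gradient ascent on the likelihood."""
    X = np.hstack([np.ones((pc.shape[0], 1)), pc])   # intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))          # current probabilities
        beta += lr * X.T @ (y - p) / len(y)          # score-function step
    return beta

def predict_prob(pc, beta):
    """Predicted Pr(Y = 1) for new PC1 score vectors."""
    X = np.hstack([np.ones((pc.shape[0], 1)), pc])
    return 1.0 / (1.0 + np.exp(-X @ beta))
```

In practice R's glm uses iteratively reweighted least squares, which converges much faster; the fitted decision boundary is the same.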
Comparisons of different supervised classification methods
We compared the prediction performance of MLDA with other established supervised classification methods, which include diagonal quadratic discriminant analysis (DQDA), DLDA, the one nearest neighbor method (1NN), support vector machines (SVM) with a linear kernel, and recursive partitioning and regression trees (Trees). We used the implementations of these methods in different R packages (http://cran.r-project.org/): sma for DQDA and DLDA, class for 1NN, e1071 for SVM, and rpart for Trees. Default parameters in e1071 and rpart were used for SVM and Trees, respectively. For the other methods (DQDA, DLDA, 1NN, MPCLR and MLDA), there are no tuning parameters to be selected. In the comparisons, seed genes were selected using the t-test and SAM, respectively. We evaluated the performance of DQDA, DLDA, 1NN, SVM and Trees for different numbers of selected seed genes, and that of MPCLR and MLDA for different numbers of gene modules built on the selected seed genes.
Crossvalidation
We performed 10-fold cross-validation to evaluate the performance of these classification methods. The basic principle is that we split all samples in a study into 10 subsets of (approximately) equal size, set aside one of the subsets from training, and carried out seed gene selection, gene module construction and classifier fitting on the remaining 9 subsets. We then predicted the class labels of the samples in the omitted subset based on the constructed classification rule. We repeated this process 10 times so that each sample is predicted exactly once. We determined the classification error rate as the proportion of incorrectly predicted samples among the total number of samples in a given study. This 10-fold cross-validation procedure was repeated 10 times and the averaged error rate was reported.
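The cross-validation loop described above is easy to get wrong: seed selection and module construction must happen inside each fold, never on the full data. A generic numpy sketch, with hypothetical `fit`/`predict` callables standing in for the full pipeline:

```python
import numpy as np

def cv_error(x, y, fit, predict, n_folds=10, n_repeats=10, seed=0):
    """Repeated k-fold cross-validation error rate.  `fit(x_tr, y_tr)` must
    perform all steps (seed selection, module building, classifier fitting)
    on the training folds only, returning a model for `predict(model, x_te)`."""
    rng = np.random.default_rng(seed)
    n, errors = len(y), []
    for _ in range(n_repeats):
        idx = rng.permutation(n)
        folds = np.array_split(idx, n_folds)
        wrong = 0
        for k in range(n_folds):
            test = folds[k]
            train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
            model = fit(x[train], y[train])
            wrong += np.sum(predict(model, x[test]) != y[test])
        errors.append(wrong / n)              # each sample predicted once
    return np.mean(errors)                    # average over repeats
```

Averaging over repeated random partitions, as in the paper, reduces the variance of the estimated error rate.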
Results
Table 2. Mean error rates of classification methods applied to the colon cancer data set (seed genes selected by t-test)
No. genes  DQDA  DLDA  1NN  Tree  SVM  MPCLR  MLDA 

5  0.113  0.113  0.210  0.226  0.113  0.097  0.097 
10  0.177  0.177  0.161  0.290  0.129  0.097  0.129 
15  0.113  0.129  0.129  0.242  0.145  0.113  0.113 
20  0.145  0.129  0.161  0.258  0.129  0.113  0.129 
30  0.145  0.129  0.161  0.194  0.145  0.129  0.113 
40  0.145  0.129  0.145  0.210  0.145  0.129  0.129 
50  0.145  0.145  0.194  0.226  0.145  0.145  0.113 
Table 3. Mean error rates of classification methods applied to the prostate cancer data set (seed genes selected by t-test)
No. genes  DQDA  DLDA  1NN  Tree  SVM  MPCLR  MLDA 

5  0.227  0.239  0.261  0.227  0.216  0.216  0.193 
10  0.205  0.193  0.284  0.318  0.170  0.193  0.182 
15  0.250  0.227  0.261  0.295  0.261  0.239  0.227 
20  0.216  0.227  0.250  0.273  0.193  0.205  0.205 
30  0.205  0.216  0.239  0.295  0.216  0.216  0.205 
40  0.261  0.250  0.295  0.318  0.250  0.261  0.227 
50  0.227  0.227  0.341  0.330  0.216  0.250  0.193 
Table 4. Mean error rates of classification methods applied to the lung cancer data set (seed genes selected by t-test)
No. genes  DQDA  DLDA  1NN  Tree  SVM  MPCLR  MLDA 

5  0.170  0.170  0.186  0.201  0.162  0.170  0.162 
10  0.170  0.147  0.186  0.193  0.170  0.147  0.147 
15  0.162  0.162  0.201  0.178  0.132  0.155  0.147 
20  0.147  0.162  0.170  0.193  0.178  0.155  0.132 
30  0.132  0.125  0.132  0.193  0.147  0.125  0.116 
40  0.178  0.147  0.162  0.186  0.132  0.132  0.132 
50  0.125  0.125  0.147  0.178  0.147  0.125  0.125 
As we can see, the proposed MLDA has relatively better or comparable classification performance among all the compared classification methods in the three data sets. The performance of MPCLR is not consistent across the three data sets, likely because the proportion of variation in the data captured by the first PC differs. Other methods with better classification performance are DLDA and SVM. In general, all these methods except Trees work well for both the colon and lung cancer data sets. The performance of these methods on the prostate cancer data is slightly worse than on the colon and lung cancer data sets, which may be due to clinical heterogeneity among samples.
Table 5. Mean error rates of classification methods applied to the lung cancer data set (seed genes selected by SAM)
No. genes  DQDA  DLDA  1NN  Tree  SVM  MPCLR  MLDA 

5  0.178  0.170  0.193  0.225  0.162  0.178  0.170 
10  0.170  0.170  0.209  0.193  0.178  0.155  0.147 
15  0.186  0.147  0.201  0.225  0.146  0.132  0.116 
20  0.147  0.162  0.186  0.178  0.186  0.155  0.132 
30  0.147  0.178  0.132  0.193  0.101  0.101  0.101 
40  0.178  0.132  0.178  0.186  0.132  0.132  0.132 
50  0.162  0.132  0.162  0.186  0.132  0.155  0.147 
In many cases, we found that the simple method DLDA works well. Its performance is comparable with the advanced methods, such as SVM. We also observed that the performances of predictors with more genes are not necessarily better than those of the predictors with fewer genes. For example, when ttest was used to select the seed genes, the best performance was obtained with only 5 genes for MPCLR and MLDA predictors in colon cancer data set (Table 2), 10 genes for SVM predictor in prostate cancer data set (Table 3) and 30 genes for MLDA predictor in lung cancer data set (Table 4). When SAM was used to select the seed genes, the best performance was also obtained with 30 genes for SVM, MPCLR and MLDA predictors in lung cancer data set (Table 5).
Discussion and conclusions
In this study we developed a network modular-based approach for disease classification using microarray gene expression data. The core idea of the method is to incorporate the 'essential' correlation structure among genes into a supervised classification procedure, which has been neglected or inefficiently applied in many benchmark classifiers. Our method takes into account the fact that genes act in networks, and the modules identified from the networks act as the features in constructing a classifier. The rationale is that we usually expect tightly coexpressed genes to have a meaningful biological explanation; for example, a high correlation between gene A and gene B sometimes hints that the two genes belong to the same pathway or functional module. The advantage of the method over other methods has been demonstrated on three real data sets. Our results show that the MLDA algorithm works well for small-sample-size classification. It performs relatively better than DLDA, 1NN, SVM and other classifiers in many situations. The modular LDA approach introduced in this study has the potential to increase the power of discriminant analysis for microarray studies in which sample sizes are small and there are large numbers of genes.
Our results are consistent with previous findings: simple methods have comparable or better classification results than more advanced or complicated methods [3]. This is likely due to the fact that there are more parameters to be estimated in the advanced methods than in the simple methods, while our data sets usually have many fewer samples than features/genes. We also tried using more top genes (up to 100) in the classification models and observed result patterns (results not shown) similar to those in Tables 2, 3, 4, 5. Although some previous studies showed that better results can be obtained when the number of top genes used in the prediction models is much larger than the number of samples, the improved performance may be due to overfitting. Moreover, for clinical purposes, it is better to include fewer genes rather than more genes in the prediction models due to cost issues.
Previous studies have shown that the topological structure of a node (gene product) in a protein network is informative for functional module inference [21, 24, 25]. Moreover, some useful approaches have been developed to measure the topological similarity of pairs of nodes in weighted networks [21]. It will be interesting to explore a network topology-sharing based method, rather than the correlation-sharing approach, to identify seed-based gene network modules and place them into our network-based classification framework. The MLDA framework can be further extended in many ways. For example, it is possible to directly incorporate the modular-specific features into other advanced discriminant learning approaches (such as SVM). In the future we will explore these ideas in detail.
List of abbreviations
 DLDA:

diagonal linear discriminant analysis
 DQDA:

diagonal quadratic discriminant analysis
 KNN:

k nearest neighbor
 LDA:

linear discriminant analysis
 LR:

logistic regression
 MLDA:

modular-based linear discriminant analysis
 MPCLR:

modular principal-component-based logistic regression
 PC:

Principal component
 RMA:

robust multiarray average
 SAM:

significance analysis of microarrays
 SVD:

singular value decomposition
 SVM:

support vector machines.
Declarations
Acknowledgements
This article has been published as part of BMC Bioinformatics Volume 13 Supplement 10, 2012: "Selected articles from the 7th International Symposium on Bioinformatics Research and Applications (ISBRA'11)". The full contents of the supplement are available online at http://www.biomedcentral.com/bmcbioinformatics/supplements/13/S10
The authors thank Dr. W He and S Colby for their helpful discussions and comments.
Authors’ Affiliations
References
 Golub TR, Slonim DK, Tamayo P, Huard C, Gaasenbeek M, Mesirov JP: Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science. 1999, 286: 531-536. 10.1126/science.286.5439.531.
 Radmacher MD, McShane LM, Simon R: A paradigm for class prediction using gene expression profiles. J Comput Biol. 2002, 9: 505-512. 10.1089/106652702760138592.
 Dudoit S, Fridlyand J, Speed TP: Comparison of discrimination methods for the classification of tumors using gene expression data. J Am Stat Assoc. 2002, 97: 77-87. 10.1198/016214502753479248.
 Guo Y, Hastie T, Tibshirani R: Regularized linear discriminant analysis and its application in microarrays. Biostatistics. 2007, 8: 86-100. 10.1093/biostatistics/kxj035.
 Shen R, Ghosh D, Chinnaiyan AM, Meng Z: Eigengene based linear discriminant model for gene expression data analysis. Bioinformatics. 2006, 22: 2635-2642. 10.1093/bioinformatics/btl442.
 Pang H, Tong T, Zhao H: Shrinkage-based diagonal discriminant analysis and its applications in high-dimensional data. Biometrics. 2009, 65: 1021-1029. 10.1111/j.1541-0420.2009.01200.x.
 Li H, Hong F: Cluster-Rasch models for microarray gene expression data. Genome Biol. 2001, 2: RESEARCH0031.
 Hastie T, Tibshirani R, Botstein D, Brown P: Supervised harvesting of expression trees. Genome Biol. 2001, 2: RESEARCH0003.
 Dettling M, Bühlmann P: Supervised clustering of genes. Genome Biol. 2002, 3: RESEARCH0069.
 Park MY, Hastie T, Tibshirani R: Averaged gene expressions for regression. Biostatistics. 2007, 8: 212-227.
 Yu X: Regression methods for microarray data. PhD thesis. 2005, Stanford University.
 Elo L, Jarvenpaa H, Oresic M, Lahesmaa R, Aittokallio T: Systematic construction of gene coexpression networks with applications to human T helper cell differentiation process. Bioinformatics. 2007, 23: 2096-2103. 10.1093/bioinformatics/btm309.
 Presson A, Sobel E, Papp J, Suarez C, Whistler T, Rajeevan M: Integrated weighted gene coexpression network analysis with an application to chronic fatigue syndrome. BMC Syst Biol. 2008, 2: 95. 10.1186/1752-0509-2-95.
 Horvath S, Dong J: Geometric interpretation of gene coexpression network analysis. PLoS Comput Biol. 2008, 4: e1000117. 10.1371/journal.pcbi.1000117.
 Taylor IW, Linding R, Warde-Farley D, Liu Y, Pesquita C, Faria D: Dynamic modularity in protein interaction networks predicts breast cancer outcome. Nat Biotechnol. 2009, 27: 199-204. 10.1038/nbt.1522.
 Irizarry RA, Bolstad BM, Collin F, Cope LM, Hobbs B, Speed TP: Summaries of Affymetrix GeneChip probe level data. Nucleic Acids Res. 2003, 31: e15. 10.1093/nar/gng015.
 Alon U, Barkai N, Notterman DA, Gish K, Ybarra S, Mack D: Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proc Natl Acad Sci USA. 1999, 96: 6745-6750. 10.1073/pnas.96.12.6745.
 Stuart RO, Wachsman W, Berry CC, Wang-Rodriguez J, Wasserman L, Klacansky I: In silico dissection of cell-type-associated patterns of gene expression in prostate cancer. Proc Natl Acad Sci USA. 2004, 101: 615-620. 10.1073/pnas.2536479100.
 Spira A, Beane JE, Shah V, Steiling K, Liu G, Schembri F: Airway epithelial gene expression in the diagnostic evaluation of smokers with suspect lung cancer. Nat Med. 2007, 13: 361-366. 10.1038/nm1556.
 Tibshirani R, Wasserman L: Correlation-sharing for detection of differential gene expression. 2006, arXiv:math.ST/0608061.
 Zhang B, Horvath S: A general framework for weighted gene co-expression network analysis. Stat Appl Genet Mol Biol. 2005, 4: Article 17.
 Tusher V, Tibshirani R, Chu G: Significance analysis of microarrays applied to the ionizing radiation response. Proc Natl Acad Sci USA. 2001, 98: 5116-5121. 10.1073/pnas.091062498.
 Jolliffe IT: Principal Component Analysis. 2002, New York: Springer.
 Lubovac Z, Gamalielsson J, Olsson B: Combining functional and topological properties to identify core modules in protein interaction networks. Proteins. 2006, 64: 948-959. 10.1002/prot.21071.
 Chua HN, Sung WK, Wong L: Exploiting indirect neighbours and topological weight to predict protein function from protein-protein interactions. Bioinformatics. 2006, 22: 1623-1630. 10.1093/bioinformatics/btl145.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.