Hierarchical Naive Bayes for genetic association studies
BMC Bioinformatics, volume 13, Article number: S6 (2012)
Abstract
Background
Genome Wide Association Studies represent powerful approaches that aim at disentangling the genetic and molecular mechanisms underlying complex traits. The usual "one-SNP-at-a-time" testing strategy cannot capture the multifactorial nature of these kinds of disorders. We propose a Hierarchical Naïve Bayes classification model for taking into account associations in SNP data characterized by Linkage Disequilibrium. Validation shows that our model reaches classification performances superior to those obtained by the standard Naïve Bayes classifier for simulated and real datasets.
Methods
In the Hierarchical Naïve Bayes implemented here, the SNPs mapping to the same region of Linkage Disequilibrium are considered as "details" or "replicates" of the locus, each contributing to the overall effect of the region on the phenotype. A latent variable for each block, which models the "population" of correlated SNPs, can then be used to summarize the available information. The classification is thus performed relying on the conditional probability distributions of the latent variables and on the available SNP data.
Results
The developed methodology has been tested on simulated datasets, each composed of 300 cases, 300 controls and a variable number of SNPs. Our approach has also been applied to two real datasets on the genetic bases of Type 1 Diabetes and Type 2 Diabetes generated by the Wellcome Trust Case Control Consortium.
Conclusions
The approach proposed in this paper, called Hierarchical Naïve Bayes, allows dealing with the classification of examples for which genetic information on structurally correlated SNPs is available. It improves the Naïve Bayes performances by properly handling the within-loci variability.
Background
In the last few years, the advent of massive genotyping technologies has allowed researchers to define individual genetic characteristics on a whole-genome scale. These advances boosted the diffusion of Genome Wide Association Studies (GWASs) and transformed them from expensive instruments of investigation into relatively affordable, popular and powerful research tools. For this reason, they have been extensively applied to the study of the most prevalent disorders.
As a matter of fact, most of the common diseases (e.g. diabetes mellitus, obesity, arterial hypertension, etc.) belong to the category of complex traits [1], whose expression results from the additive contribution of a large spectrum of environmental determinants (exposure to external factors), behavioural factors (diet, lifestyle, smoking, ...) and genetic variants (point mutations, single nucleotide polymorphisms - SNPs, large-scale structural variations) [2]. Moreover, complex interactions among genetic variants, environmental factors and external influences are supposed to modulate not only the expression of the disease, but also the effectiveness of pharmacological treatments [3, 4]. In this context, the identification of the molecular mechanisms underlying a certain disease could help researchers in forecasting the individual-level probability of developing specific disorders and thus in defining personalized pharmacological interventions. GWASs thus seem an interesting approach to cope with such issues by deepening the insight into the contribution of the genetic makeup of an individual to the probability of developing a certain disease or trait [2].
To date, from the statistical viewpoint, the main limitations to the full exploitation of GWAS results are mostly represented by the lack of appropriate multivariate tools that can replace the usual univariate testing strategies, commonly used during the discovery phase of a GWAS. In standard univariate analyses, rules for defining statistically significant associations are usually based on the application of over-conservative significance thresholds, imposed to minimize the probability of false positive associations. The main drawback of these approaches is that they tend to discard potentially informative signals coming from genetic loci characterized by small effects on the trait [5].
In this context, multivariate models could overcome the limitations of the usual "one-SNP-at-a-time" testing strategies, offering the possibility of exploring and integrating the huge amount of information deriving both from whole-genome screenings and from clinical/phenotypic measurements.
Besides logistic regression (LR), which represents the most common approach for building multivariate models from SNP data [6], several standard and alternative machine learning approaches such as Naïve Bayes (NB), Support Vector Machines (SVM), Random Forests (RF), the Least Absolute Shrinkage and Selection Operator (LASSO) and model-averaged Naïve Bayes (MANB) have been proposed and applied for dealing with GWAS data. NB represents a machine-learning method that has been used for over 50 years in biomedical informatics [7]. NB is computationally inexpensive and it has often been shown to reach optimal classification performances, even when compared to much more advanced and complex methods [8]. However, NB loses accuracy in the presence of large numbers of attributes to be analyzed, since it tends to make predictions with posterior probabilities close to 0 and 1 [9]. SVMs are among the most popular classifiers in the field of machine learning and achieve state-of-the-art accuracy in many computational biology applications [10]. Thanks to their performances, they have recently been applied in the context of GWASs [11, 12]. Classification and Regression Trees (CART) represent machine learning algorithms that allow for the identification of predictive stratifications and functional interactions within data [13]. Within the CART family of algorithms, RFs [14] allow analysing complex discrete traits using dense genetic information deriving from large sets of markers. In this context RFs are widely employed in the analysis of candidate gene association studies and GWASs for human binary traits [15]. Further, alternative approaches such as logistic and Bayesian LASSO have recently been proposed and successfully applied for performing multivariate feature selection in a genome-wide context [16–18], offering an appealing alternative to the standard univariate SNP ranking and selection strategies.
Recently, Lee et al. [19] and Yang et al. [5] proposed two multivariate approaches based on the simultaneous fitting of a genome-wide set of SNPs. In particular, Yang et al. [5] showed that about 45% of the variance of human height could be explained by considering simultaneously a whole-genome set of SNPs instead of focusing on a small fraction of highly significant hits. In a Bayesian framework, Wei et al. [20] proposed a model-averaged Naïve Bayes (MANB) to predict late-onset Alzheimer's disease using about 310,000 polymorphic markers. These observations suggest that the genetic signature of an individual is represented by the information contained in its whole genome sequence more than in candidate loci.
Multivariate models, however, can hardly be learned from GWAS data due to the so-called "small-n, large-p problem": the number of variables in the model, i.e. the genotyped loci, is much larger than the number of available individuals. This may cause major problems in model selection and model parameter fitting, instability and overfitting.
Bayesian methods, and in particular Bayesian Hierarchical Models (BHMs), represent a promising framework for deriving information from large sets of variables by exploiting available prior knowledge.
In our paper, we will exploit the capability of such models to use knowledge about the correlation structure of such variables. Chromosome regions, represented by sequences of nearby SNPs, are often characterized by strong pairwise correlation, making the available information redundant and thus difficult to analyze. Hierarchical models (multilevel models) provide a way of pooling the information of correlated variables without assuming that they can be modelled as a unique variable [21]. Data coming from the same population are split into homogeneous subgroups, to which individual-level parameters are associated. The link/correlation among different individual parameters is expressed by population-level parameters, or hyperparameters. In this way it is possible to account for both within-group heterogeneity (thanks to the presence of individual-level parameters) and between-groups variability (thanks to the presence of the population parameters).
BHMs have already been applied in a variety of biomedical contexts. They have been proposed as a fundamental tool for analyzing next generation genomics data [22]. Moreover, Demichelis et al. applied such methods to tissue microarray data coming from tumor biopsies [21].
In the context of GWASs, we propose a Hierarchical Naïve Bayes (HNB) classification model that allows capturing the uncertainty of the information deriving from a set of genetic markers that are functionally/structurally correlated and using this information to classify new examples. SNPs that do not fall within such regions, as well as clinically relevant variables (e.g. gender, smoking, therapies, candidate markers), can also be included in the model (Figure 1).
The following sections describe the main methodological aspects of the implemented algorithm as well as the results obtained on both simulated datasets and two real GWASs on the genetic bases of Type 1 Diabetes (T1D) and Type 2 Diabetes (T2D) by the Wellcome Trust Case Control Consortium (WTCCC) [23].
Methods
The Hierarchical Naïve Bayes classifier (HNB) is an extension of the well-known Naïve Bayes classifier (NB). NB assumes that, given a class variable C that we aim at predicting (say disease yes/disease no) on the basis of a set of n_f features X = {x_1, ..., x_{n_f}}, the posterior probability of the class given the data P(C|X) is proportional to the product of the prior probability of the class and the conditional probability $P\left(X \mid C\right) = \prod_{f=1}^{n_f} P\left(X_f \mid C\right)$, i.e. that the features are independent of each other given the class. NB is a simple and robust classifier, which may be conveniently used also in the context of a large number of features, due to its strong bias.
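As a plain illustration of this product rule (not taken from the paper; the genotype coding and probability tables below are hypothetical toy values), a discrete NB posterior can be computed in log-space:

```python
import numpy as np

def nb_posterior(x, class_priors, cond_probs):
    """Posterior P(C | X) under the Naive Bayes independence assumption.

    x            : observed feature states, integer-coded (e.g. aa/aA/AA -> 0/1/2)
    class_priors : shape (n_classes,), prior P(C_k)
    cond_probs   : shape (n_classes, n_features, n_states),
                   cond_probs[k, f, s] = P(X_f = s | C_k)
    """
    log_post = np.log(np.asarray(class_priors, dtype=float))
    for f, s in enumerate(x):
        log_post += np.log(cond_probs[:, f, s])  # features independent given the class
    log_post -= log_post.max()                   # stabilize before normalizing
    post = np.exp(log_post)
    return post / post.sum()

# Toy example: two classes, two ternary (genotype-like) features
priors = np.array([0.5, 0.5])
cp = np.array([
    [[0.6, 0.3, 0.1], [0.5, 0.4, 0.1]],  # P(X_f = s | C_0)
    [[0.2, 0.4, 0.4], [0.1, 0.4, 0.5]],  # P(X_f = s | C_1)
])
print(nb_posterior([2, 2], priors, cp))
```

Working in log-space avoids the numerical underflow that the raw product of many small probabilities would cause.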
HNB assumes that the measurements are stochastic variables with a hierarchical structure in terms of their probability distributions. We suppose that we can collect a number n_rep of observations, or replicates, on each example, and that an example belongs to one of a set of given classes. Let us suppose that X is a stochastic variable representing the replicates, whose probability distribution depends on a vector of parameters θ, which corresponds to the single example and may represent, for example, the mean and variance of the probability distribution of the replicates; if we consider the i-th example, with i in 1, ..., N, the probability distribution of the vector of replicates is given by $p\left(X_i \mid \theta_i\right)$, with $X_i = \left\{x_{i1}, \ldots, x_{ij}, \ldots, x_{i n_{rep}}\right\}$, while the probability distribution of the individual parameters is $p\left(\theta_i \mid \xi_{C_k}\right)$, where $\xi_{C_k}$ is a set of population hyperparameters that depends on the class C_k in the set C = {C_1, ..., C_h} to which the example belongs, and is thus the same for all the examples of the same class. Figure 2 shows the representation of the problem through a graphical model with plates [21].
In a Bayesian framework, the classification step is therefore performed by finding the class with the highest posterior probability. Thanks to the conditional independence assumptions of the hierarchical model described above, we can write $P\left(C_k \mid X\right) \propto P\left(X \mid \xi_{C_k}\right) P\left(\xi_{C_k} \mid C_k\right) P\left(C_k\right)$. Since the population parameters $\xi_{C_k}$ are determined by the knowledge of the class C_k with probability one, the equation can be simplified as $P\left(C_k \mid X\right) \propto P\left(X \mid \xi_{C_k}\right) P\left(C_k\right)$. The posterior thus depends on the so-called marginal likelihood, $P\left(X \mid \xi_{C_k}\right)$, which can be calculated by integrating out the vector of parameters θ.
$$P\left(X \mid \xi_{C_k}\right) = \int_{\Omega_\theta} P\left(X \mid \theta\right)\, p\left(\theta \mid \xi_{C_k}\right)\, d\theta \qquad (1)$$
where Ω_θ is the support of θ.
Figure 2. Many replicates are available for each example. The examples are characterized by an individual vector of parameters θ; moreover, the examples belonging to the same class have a common set of parameters ξ.
The learning problem will therefore consist in estimating the population parameters ${\xi}_{{C}_{k}}$ for each class, while the classification problem is mainly related to the calculation of the marginal likelihood. To deal with multivariate problems, we resort to the Naïve Bayes algorithm (NB), which assumes that each attribute is conditionally independent from the others given the class.
From the computational viewpoint, this will allow us to compute separately the marginal likelihood for each variable to perform classification and to learn a collection of independent univariate models. In the following we will show how HNB deals with the classification and learning problems when the variables are discrete with multinomial distribution.
Hierarchical Naïve Bayes for discrete variables
In a SNP-based case-control GWAS, the individual-level information is represented by genotype configurations (aa/aA/AA). For the sake of readability we have omitted the dependence of the vectors on the class k. We assume that the vector of the occurrences (counts) of the i-th example is $X_i = \left\{x_{i1}, \ldots, x_{ij}, \ldots, x_{iS}\right\}$, where $x_{ij}$ is the number of occurrences of the j-th discrete value, or state, in the i-th example and S is the number of states of the variable x. The number of replicates of each example is given by $n_{rep_i} = \sum_{j=1}^{S} x_{ij}$.
We also assume that the relationship between the data X_i and the example parameters θ_i is expressed by a multinomial distribution:
$$P\left(X_i \mid \theta_i\right) = \frac{n_{rep_i}!}{\prod_{j=1}^{S} x_{ij}!} \prod_{j=1}^{S} \theta_{ij}^{x_{ij}}$$
Therefore θ_i is an S-dimensional vector, where θ_ij represents the probability of the occurrence of the j-th event in the example i. The parameters θ_i, for i = 1, 2, ..., $N_{C_k}$, are characterized by the same prior Dirichlet distribution:
$$\theta_i \sim \mathrm{Dirichlet}\left(\alpha \xi_1, \ldots, \alpha \xi_S\right)$$
with probability density:
$$p\left(\theta_i \mid \xi, \alpha\right) = \frac{\Gamma(\alpha)}{\prod_{j=1}^{S} \Gamma\left(\alpha \xi_j\right)} \prod_{j=1}^{S} \theta_{ij}^{\alpha \xi_j - 1}$$
where 0 < α < ∞, ξ_j < 1 ∀ j = 1, ..., S and $\sum_{j=1}^{S} \xi_j = 1$. Following the hierarchical model reported in the previous section, the individual example parameters θ_i are independent of each other given ξ = {ξ_1, ..., ξ_S} and α. In the following we will assume that the parameter α is fixed, and it will thus be treated as a design parameter of the algorithm. α represents the prior assumption on the degree of similarity of all examples belonging to the same class. A proper setting of the parameter α allows finding a compromise between a pooling strategy, where all replicates are assumed to belong to the same example, and a full hierarchical strategy, where all examples are assumed to be different.
Classification
As described in the previous section, the classification problem requires the computation of the marginal likelihood (1). We assume that an estimate of the population parameters ξ is available and that α, β and γ are known. Given an example with counts distributed on the different states X = {x_1, ..., x_S}, where $n_{rep} = \sum_{j=1}^{S} x_j$, we must compute:
$$P\left(X \mid \xi, \alpha\right) = \int_{\Omega_\theta} P\left(X \mid \theta\right)\, p\left(\theta \mid \xi, \alpha\right)\, d\theta$$
where θ = {θ_1, ..., θ_S} is the vector of the individual example parameters, with $\sum_{j=1}^{S} \theta_j = 1$ and Ω_θ the support of θ. This integral can be solved by noting that it contains the product of a Multinomial and a Dirichlet distribution.
The marginal likelihood can thus be computed as:
$$P\left(X \mid \xi, \alpha\right) = \frac{n_{rep}!}{\prod_{j=1}^{S} x_j!} \, \frac{\Gamma(\alpha)}{\Gamma\left(\alpha + n_{rep}\right)} \prod_{j=1}^{S} \frac{\Gamma\left(\alpha \xi_j + x_j\right)}{\Gamma\left(\alpha \xi_j\right)}$$
The NB approach allows exploiting this equation for each variable in the problem at hand, and then applying equation (2) to perform the classification. The computation of the marginal likelihood, however, requires the estimate of the population parameters ξ from the data.
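As an aside not present in the original paper, the closed-form Dirichlet-multinomial marginal likelihood can be sketched numerically; a minimal implementation (the function name and toy values are ours), using log-Gamma functions for numerical stability:

```python
import numpy as np
from scipy.special import gammaln

def log_marginal_likelihood(x, xi, alpha):
    """Log Dirichlet-multinomial marginal likelihood of one variable:
    log P(X | xi, alpha), with x the counts over the S states,
    xi the population parameters (summing to 1) and alpha the
    design parameter expressing prior within-class similarity."""
    x = np.asarray(x, dtype=float)
    xi = np.asarray(xi, dtype=float)
    n = x.sum()
    log_coef = gammaln(n + 1) - gammaln(x + 1).sum()  # multinomial coefficient
    return (log_coef
            + gammaln(alpha) - gammaln(alpha + n)
            + (gammaln(alpha * xi + x) - gammaln(alpha * xi)).sum())

# Toy check: genotype counts over S = 3 states for one LD block
print(np.exp(log_marginal_likelihood([3, 1, 0], [0.6, 0.3, 0.1], 2.0)))
```

Working with `gammaln` rather than raw factorials and Gamma values keeps the computation stable even for the large counts that arise when an LD block contains many SNPs.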
Learning with collapsing
The task of learning the population parameters can be performed by resorting to approximated techniques. Herein we describe a strategy previously presented in [24] and [25].
We suppose that a dataset X = {X_1, ..., X_N} is available for each class, where $X_i = \left\{x_{i1}, \ldots, x_{iS}\right\}$ and N is the number of examples within each class (the number of examples can differ between the classes). Each vector is transformed into a new vector X*, where the i-th element is $X_i^{*} = \left\{\tau_i x_{i1}, \ldots, \tau_i x_{ij}, \ldots, \tau_i x_{iS}\right\}$ with:
τ_i is a suitable weight that allows taking into account the prior assumptions on the heterogeneity of the examples belonging to the class. The hierarchical model is then collapsed into a new model, where the vector of measurements $X_i^{*}$ is assumed to have a multinomial distribution with parameters ξ and $\tau_i n_{rep_i}$.
This assumption can be justified by the calculation of the first and second moment of $P\left(X^{*} \mid \xi\right)$, which is computed by approximating the distribution of the parameters θ given ξ with its average value [25].
The Maximum Likelihood (ML) estimate of the parameters ξ can thus be obtained for each state of the discrete variable as:
$$\hat{\xi}_j = \frac{\sum_{i=1}^{N} \tau_i x_{ij}}{\sum_{i=1}^{N} \tau_i n_{rep_i}}$$
Within this framework we can also provide a Bayesian estimate of the population parameters ξ. We assume that ξ is a stochastic vector with a Dirichlet prior distribution: ξ ~ Dirichlet(βγ_1, ..., βγ_S), where 0 < β < ∞, γ_j < 1 ∀ j = 1, ..., S and $\sum_{j=1}^{S} \gamma_j = 1$.
After collapsing, we may derive that the posterior distribution of ξ is still a Dirichlet, with expected value of the probability of the j-th state of the discrete variable:
$$E\left[\xi_j \mid X^{*}\right] = \frac{\beta \gamma_j + \sum_{i=1}^{N} \tau_i x_{ij}}{\beta + \sum_{i=1}^{N} \tau_i n_{rep_i}}$$
In this setting, the parameters γ and β assume the same meaning as the parameters usually specified in the Bayesian learning strategies applied in many Machine Learning algorithms. In particular, if we assume γ_j = 1/S and β = 1 we obtain an estimate which is close to the Laplace estimate, while different choices of γ and β lead to estimates which are similar to the m-estimate, where β plays the role of m.
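The two estimators of ξ described above can be sketched as follows (a sketch of ours, not the authors' code; the weights τ_i are assumed to have been computed beforehand from the collapsing equation, and the toy data are invented):

```python
import numpy as np

def ml_xi(X, tau):
    """Weighted ML estimate of the population parameters:
    xi_j = sum_i tau_i * x_ij / sum_i tau_i * n_rep_i."""
    Xw = tau[:, None] * X          # collapse: scale each example's counts by tau_i
    return Xw.sum(axis=0) / Xw.sum()

def bayes_xi(X, tau, beta, gamma):
    """Posterior-mean estimate under a Dirichlet(beta*gamma_1, ..., beta*gamma_S)
    prior on xi, computed on the collapsed counts."""
    Xw = tau[:, None] * X
    return (beta * gamma + Xw.sum(axis=0)) / (beta + Xw.sum())

# Toy data: N = 2 examples, S = 2 states; tau_i assumed precomputed
X = np.array([[2.0, 1.0], [1.0, 2.0]])
tau = np.array([1.0, 1.0])
print(ml_xi(X, tau))
print(bayes_xi(X, tau, 1.0, np.array([0.5, 0.5])))  # gamma_j = 1/S, beta = 1
```

With γ_j = 1/S and β = 1, the Bayesian estimate shrinks the weighted frequencies toward the uniform distribution, mirroring the Laplace-like behaviour noted in the text.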
Building the model
The HNB machinery can be conveniently exploited to build a multivariate model for SNPs coming from a GWAS. In the presence of regions in which non-random association of alleles at two or more loci, or Linkage Disequilibrium (LD), is observed [26], a new variable X is generated, and all the SNPs belonging to the same block are considered as replicates of the same variable (see Figure 1). On the contrary, if the SNPs are not in LD, they are treated as independent variables in equation (2). For this reason, the model needs a convenient preprocessing step, in which blocks of SNPs characterized by LD are identified and the variables extracted.
Figure 3 reports a graphical representation of how SNP data can be mapped using the plates notation. According to this representation, each individual is characterized by a vector (or individual parameter θ) reporting the genotypes corresponding to a set of SNPs mapping to the same LD region. The sets of individual parameters are then employed to estimate a latent variable ξ: each latent variable summarizes the individual-level information deriving from a different LD region. Thus, the complete set of latent variables (along with potentially informative covariates) is used in turn to estimate the probability of being affected or healthy through Bayes' theorem.
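To make the overall classification flow concrete, the per-block marginal likelihoods can be combined with the class priors as in the following sketch (ours, not the authors' implementation; function names, the α value and the toy parameters are hypothetical):

```python
import numpy as np
from scipy.special import gammaln

def log_dm(x, xi, alpha):
    """Log Dirichlet-multinomial marginal likelihood of one LD block."""
    x, xi = np.asarray(x, float), np.asarray(xi, float)
    n = x.sum()
    return (gammaln(n + 1) - gammaln(x + 1).sum()
            + gammaln(alpha) - gammaln(alpha + n)
            + (gammaln(alpha * xi + x) - gammaln(alpha * xi)).sum())

def hnb_posterior(block_counts, block_xi, alpha, priors):
    """Posterior over classes for one individual.

    block_counts : list of genotype-count vectors, one per LD block
    block_xi     : block_xi[k][b], estimated population parameters of
                   block b under class k
    alpha        : design parameter (prior within-class similarity)
    priors       : class prior probabilities P(C_k)
    """
    scores = np.log(np.asarray(priors, dtype=float))
    for k in range(len(priors)):
        for counts, xi in zip(block_counts, block_xi[k]):
            scores[k] += log_dm(counts, xi, alpha)  # blocks independent given C_k
    scores -= scores.max()
    p = np.exp(scores)
    return p / p.sum()

# One individual, one LD block with S = 3 genotype states
counts = [[2, 1, 0]]
xi = [[[0.7, 0.2, 0.1]],   # class 0 (e.g. control)
      [[0.1, 0.2, 0.7]]]   # class 1 (e.g. case)
print(hnb_posterior(counts, xi, alpha=1.0, priors=[0.5, 0.5]))
```

SNPs outside LD blocks and clinical covariates would simply contribute further (non-hierarchical) conditional-probability terms to the same log-score.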
Results
Datasets simulation
A total of 9 independent datasets, each composed of 300 cases, 300 controls and approximately 34,000 SNPs (representing the whole chromosome 22), have been simulated with the Hapgen software [23], according to the patterns of LD that characterize the HapMap CEU b36 reference population (http://hapmap.ncbi.nlm.nih.gov/). Three simulation scenarios have been evaluated, by imposing different genotype relative risks for the causative loci:
- Scenario 1: heterozygote relative risk = 1.5, homozygote relative risk = 3.0
- Scenario 2: heterozygote relative risk = 2.0, homozygote relative risk = 4.0
- Scenario 3: heterozygote relative risk = 3.0, homozygote relative risk = 6.0
Three simulated datasets have been generated according to each scenario, by imposing Minor Allele Frequency (MAF) ≥ 0.05.
Experimental datasets
The experimental case-control datasets were represented by two genome-wide scans on T1D and T2D generated by the WTCCC consortium [23]. Individual-level genotyping has been performed with the Affymetrix GeneChip 500K Mapping Array Set (http://www.affymetrix.com), which comprises 500,568 SNPs, while genotypes have been called from raw intensity signals by the Chiamo software tool [23].
Genotyped samples underwent a preliminary phase of data quality control (QC) which comprised the removal of cases and controls showing: i) missing data fraction > 3%; ii) heterozygote genotypes fraction > 0.3 or heterozygote genotypes fraction < 0.225; iii) discordances or lack of phenotype vs. laboratory information; iv) non-European ancestry; v) 1st/2nd degree relatives; vi) duplicated samples. Analogously, the SNP QC consisted of removing markers characterized by: i) study-wise missing data proportion > 5%, or study-wise minor allele frequency < 5% and study-wise missing data proportion > 1%; ii) statistically significant deviations from the Hardy-Weinberg Equilibrium within controls (p_HWE < 5.7 × 10^{-7}); iii) 1 df Trend Test / 2 df General Test p-value < 5.7 × 10^{-7} comparing allele and genotype frequencies between control groups; iv) bad clustering quality.
For a more detailed description of samples selection, genotyping procedures and quality control filters applied, the reader may refer to [23].
T1D dataset. The final dataset was composed of 1,963 patients affected by T1D, 1,458 control individuals from the UK Blood Service and 458,868 autosomal SNPs (mapping to chromosomes 1-22) passing the quality control procedures.
T2D dataset. The final dataset was composed of 1,924 patients affected by T2D, 1,458 control individuals from the UK Blood Service and 458,868 autosomal SNPs (mapping to chromosomes 1-22) passing the quality control procedures.
Data preprocessing
Both simulated and experimental datasets underwent a preliminary phase of feature selection and variable filtering aimed at i) reducing the space of the hypotheses to be tested and ii) isolating chromosome regions characterized by strong LD.
The main steps of the datasets preparation are reported below:
1. The whole datasets have been split randomly into screening (representing 70% of the whole dataset) and replication sets (the remaining 30% of the whole dataset). The sampling procedure has been performed with stratification, so that each fold was represented by the same proportion of cases and controls.
2. On each screening set:
   a. Select the top 500 most significant markers, based on the results from univariate Pearson χ² tests with 2 degrees of freedom (df), comparing genotype distributions between cases and controls.
   b. Define chromosome regions characterized by the presence of nearby SNPs showing pairwise r² ≥ x, where x represents arbitrary cut-off values corresponding to r² = 0.6 (SNPs in moderate-to-strong LD) and 0.8 (SNPs in strong LD) respectively:
      i. Group markers localized within the same LD block and build latent variables.
      ii. Use the remaining SNPs falling outside the LD blocks as covariates.
   c. Split the whole screening set into 10 folds of equal sample size and characterized by a cases/controls ratio = 1, according to the 10 Folds Cross Validation procedure (10 Folds CV) [27].
3. Apply the LD-based SNPs grouping schema learnt on the screening set to the corresponding replication set.
Both screening and replication sets have been employed for evaluating the generalization performances obtained by the HNB algorithm and for comparing them with those obtained by the standard NB classifier on the same datasets.
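The LD-grouping step of the preprocessing can be sketched as follows; this is our simplification (a greedy rule that extends a block while adjacent SNPs exceed the r² cut-off), whereas the paper defines regions via pairwise r² among nearby SNPs:

```python
import numpy as np

def ld_blocks(genotypes, r2_threshold=0.8):
    """Greedy grouping of adjacent SNPs into LD blocks.

    genotypes    : array (n_individuals, n_snps) of minor-allele counts 0/1/2
    r2_threshold : pairwise squared correlation required to extend a block

    Returns a list of lists of SNP indices; singleton blocks correspond to
    SNPs that would be used as independent covariates.
    """
    n_snps = genotypes.shape[1]
    r2 = np.corrcoef(genotypes, rowvar=False) ** 2  # pairwise r^2 between SNPs
    blocks, current = [], [0]
    for j in range(1, n_snps):
        # extend the block while the new SNP is in LD with its neighbour
        if r2[j, j - 1] >= r2_threshold:
            current.append(j)
        else:
            blocks.append(current)
            current = [j]
    blocks.append(current)
    return blocks

# Toy genotypes: SNPs 0 and 1 perfectly correlated, SNP 2 independent
g = np.array([[0, 0, 2], [1, 1, 0], [2, 2, 1],
              [0, 0, 2], [1, 1, 0], [2, 2, 2]], dtype=float)
print(ld_blocks(g, 0.8))
```

In practice the blocks are then mapped to latent variables for the HNB, while the leftover singleton SNPs enter the model as ordinary covariates.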
Results from simulated datasets
The HNB algorithm has been validated on simulated datasets, which underwent the preprocessing phases described in the previous sections.
Descriptive analyses of the simulated datasets revealed that the number of blocks to be analyzed increased proportionally to the stringency of the r² threshold imposed for defining regions of correlation, while the median number of SNPs characterizing each block decreased. This is due to the fact that SNPs linked by strong correlation (r² ≥ 0.8) are generally confined to small and fragmented regions due to structural recombination events. Table 1 summarizes the characteristics of the nine simulated datasets.
The generalization performances of the two algorithms have been evaluated by comparing the Classification Accuracy (CA) and the Area Under the Curve (AUC) of the two models, estimated by 10 Folds CV procedures and by testing the models learnt on a single screening set on the corresponding independent replication set [27, 28]. Results are reported in Table 2 and show that the HNB reaches higher or equal generalization performances with respect to the standard NB when chromosome regions characterized by SNPs showing moderate-to-strong (r² > 0.6) or strong (r² > 0.8) pairwise LD are analyzed.
No significant variations in terms of CA and AUC have been observed as a function of the different genotype relative risks imposed for the data simulations (p > 0.05); thus the CA and AUC estimated from the different simulations have been pooled and used for evaluating the differences in classification performance between HNB and NB.
Results show that the median CA and AUC obtained by the HNB over the single results are higher than those reached by the standard NB for both LD thresholds that have been evaluated. The one-tailed Wilcoxon signed rank test [29] has been used for testing the hypotheses that the CA and AUC obtained by the HNB were significantly higher than those estimated by the standard NB and by the majority classifier [30].
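For illustration only (the AUC values below are invented, not the paper's results), a one-tailed Wilcoxon signed rank comparison of paired performance estimates can be run with SciPy:

```python
from scipy.stats import wilcoxon

# Illustrative AUC values for HNB and NB on the same 9 simulated datasets
# (hypothetical numbers, NOT taken from the paper)
auc_hnb = [0.71, 0.68, 0.74, 0.70, 0.73, 0.69, 0.72, 0.75, 0.70]
auc_nb  = [0.66, 0.67, 0.70, 0.65, 0.71, 0.64, 0.69, 0.72, 0.66]

# One-tailed test of H1: the paired differences (HNB - NB) tend to be positive
stat, p = wilcoxon(auc_hnb, auc_nb, alternative="greater")
print(f"W = {stat}, one-tailed p = {p:.4f}")
```

The paired, non-parametric design matches the setting in the text: the same folds/datasets are scored by both classifiers, and no normality assumption is made on the AUC differences.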
Results from the Wilcoxon signed rank test showed that:
- The distribution of the AUC values estimated by the HNB over the complete set of simulations was significantly higher than the corresponding distribution of AUC estimated by the standard NB when r² ≥ 0.8 was imposed as the threshold for defining LD regions (AUC from 10 Folds CV: p < 0.05; AUC from independent replication set: p < 0.05).
- The HNB algorithm reached CA and AUC estimates significantly higher than those obtained by the majority classifier:
  ○ by comparing the distribution of CA and AUC obtained by the HNB with those generated by the majority classifier on the corresponding folds (maj. CA = 0.50, maj. AUC = 0.50) for each screening set, according to both LD thresholds (p < 0.01);
  ○ by comparing the distribution of CA and AUC estimated by the HNB over the 9 independent test sets with the corresponding distribution of CA and AUC obtained by the majority classifier (maj. CA = 0.50, maj. AUC = 0.50) on the corresponding test set, according to both LD thresholds (p < 0.01).
Hierarchical Naïve Bayes for Type 1 and Type 2 Diabetes prediction
The HNB algorithm has been evaluated on two real genome-wide datasets aimed at identifying the genetic bases of T1D and T2D respectively. The analyzed datasets have been generated by the WTCCC [23] and are publicly available. The final datasets were each composed of 1,400 cases and 1,400 controls sampled randomly from the complete set of individuals passing the quality control filters, as reported in the previous section. Each final dataset has then been split into a first set of 2,100 individuals (1,050 cases and 1,050 controls) representing the screening cohort, while the replication set was composed of the remaining 350 cases and 350 controls. The preliminary phases of feature selection and LD-region definition (using r² ≥ 0.8 as threshold) have been performed as reported in the Methods section; SNPs that did not fall within conserved regions have been used as covariates.
The generalization performances of the proposed approach and of the NB have been estimated i) by 10 Folds CV performed on each screening set and ii) by learning the models on the whole screening set and then testing the CA and AUC on the two corresponding replication cohorts.
Results are reported in Table 3 and confirm that the HNB algorithm reaches the highest generalization performances on both datasets, according to both 10 Folds CV and testing of the model learnt on the whole screening set on the corresponding independent replication cohort. Further, results from the Wilcoxon Signed Rank test showed that the distribution of CA and AUC obtained by the HNB by 10 Folds CV was significantly higher than the corresponding distributions obtained by the majority classifier on the same folds (p < 0.05).
Discussion
The proposed approach, called Hierarchical Naïve Bayes, represents an innovative strategy aimed at exploiting correlated information from genome-wide datasets. The human genome is typically characterized by local patterns of strong LD that define blocks of SNPs showing low recombination rates. In this scenario, the HNB represents a suitable way of deriving genetic information with respect to standard multivariate models, since it is able to account for the structural correlations existing between markers. These characteristics allow the HNB to overcome the limitations of the standard NB algorithm, whose oversimplistic assumptions of independence between attributes are rarely respected in the context of GWAS data. The results obtained by the HNB on both simulated and real datasets show that the proposed approach is able to achieve classification performances that are generally higher than or equal to those obtained by multivariate models based on the standard NB. In particular, the HNB represents a suitable alternative to the standard NB when analyzing genome regions characterized by strong LD, a typical condition in which the independence assumptions of the standard NB are dramatically violated.
Notably, even if the results obtained by the 10 Folds CV procedures are prone to overfitting for both simulated and real datasets, since the preliminary filtering phase heavily exploits the screening set for feature selection and block determination, the results obtained on the replication sets are free from these limitations. These observations confirm that accounting for structural correlation between markers offers a substantial gain in generalization capability with respect to the standard NB approach, which does not consider the human genome structure.
Many research groups have used the publicly available WTCCC datasets and private case/control cohorts on T1D and T2D for testing the predictive performances of several machine learning algorithms. As an example, Wei et al. explored an approach based on SVM for building risk models using SNP data and tested their approach on different case/control datasets on T1D [11]. The authors reported AUCs ranging from 0.86 to 0.89 by 5 Folds CV, using different SNP inclusion thresholds on the WTCCC cohort, and AUCs of 0.84 and 0.83 by training the algorithm on WTCCC data and testing the performances on the CHOP/Montreal-T1D and GoKinD-T1D datasets respectively, representing independent cohorts of cases and controls. When the algorithm was trained on the CHOP/Montreal-T1D data and tested on the WTCCC and GoKinD-T1D data, it reached comparable AUC estimates of 0.84 and 0.82 respectively. Roshan et al. [31] studied the number of causal variants and associated regions identified by the top SNPs in rankings given by the 1 df chi-squared statistic, SVM and RF on real T1D datasets from the WTCCC and GoKinD studies. SVM achieved the highest AUC of 0.83 with 21 SNPs, followed by random forest and chi-square with AUCs of 0.81 each, with 29 and 17 SNPs respectively. Clayton [32] discussed the impact of including interaction terms for predicting the probability of T1D and reported AUC estimates of 0.74 using pairwise interaction terms in logistic regression and 0.73 when no interactions were considered. These observations suggest that interactions between SNPs do not add substantial information to the correct classification of T1D subjects.
Lower CA and AUC estimates are generally obtained on the T2D datasets. For example, van Hoek et al. investigated 18 polymorphisms from recent GWASs on T2D with logistic and Cox regression models in the Rotterdam Study cohort, reaching an AUC of 0.60 [33]. Ban et al. [12] analyzed a Korean population of T2D patients and controls, reporting a CA of 0.65 with a radial basis function (RBF) kernel SVM using a combination of 14 SNPs in 12 genes mapping to T2D-related pathways.
The performances obtained by the HNB on the independent test sets are generally comparable to those reported by the other research groups for both T1D and T2D cited in this section. However, a direct comparison between the performances obtained by the HNB on the real datasets and those of previously published approaches on the same WTCCC cohorts is hard to interpret, owing to differences in the sample size of the control population (the analyzed dataset does not include the 1958 British Birth Cohort of controls, generated by the WTCCC and commonly used as reference population along with the UK Blood Service cohort). Furthermore, the lack of covariates for the T1D and T2D cases and controls (e.g., BMI, smoking history, etc.) limited the possibility of integrating genetic and clinical information, a key step towards a deeper comprehension of complex trait diseases. The availability of GWAS datasets complete with detailed phenotypic and clinical information will therefore allow testing the HNB in a more realistic scenario. Besides these considerations, the proposed approach could be further improved to also take functional correlations into account, for example by applying the Tree Augmented Naïve Bayes (TAN) approach to the latent variables, thus combining the two strategies [34].
References
 1.
Steinberger J, Daniels SR: Obesity, insulin resistance, diabetes, and cardiovascular risk in children: an American Heart Association scientific statement from the Atherosclerosis, Hypertension, and Obesity in the Young Committee (Council on Cardiovascular Disease in the Young) and the Diabetes Committee (Council on Nutrition, Physical Activity, and Metabolism). Circulation. 2003, 107 (10): 1448-1453. 10.1161/01.CIR.0000060923.07573.F2.
 2.
Mechanic LE, Chen HS, Amos CI, Chatterjee N, Cox NJ, Divi RL, Fan R, Harris EL, Jacobs K, Kraft P: Next generation analytic tools for large scale genetic epidemiology studies of complex diseases. Genetic Epidemiology. 2011
 3.
Heilig M, Goldman D, Berrettini W, O'Brien CP: Pharmacogenetic approaches to the treatment of alcohol addiction. Nature Reviews Neuroscience. 2011, 12 (11): 670-684. 10.1038/nrn3110.
 4.
Kim K, Yang YJ, Kim K, Kim MK: Interactions of single nucleotide polymorphisms with dietary calcium intake on the risk of metabolic syndrome. The American Journal of Clinical Nutrition. 2012, 95 (1): 231-240. 10.3945/ajcn.111.022749.
 5.
Yang J, Benyamin B, McEvoy BP, Gordon S, Henders AK, Nyholt DR, Madden PA, Heath AC, Martin NG, Montgomery GW: Common SNPs explain a large proportion of the heritability for human height. Nature Genetics. 2010, 42 (7): 565-569. 10.1038/ng.608.
 6.
Chapman J, Whittaker J: Analysis of multiple SNPs in a candidate gene or region. Genetic Epidemiology. 2008, 32 (6): 560-566. 10.1002/gepi.20330.
 7.
Warner HR, Toronto AF, Veasey LG, Stephenson R: A mathematical approach to medical diagnosis. Application to congenital heart disease. JAMA: The Journal of the American Medical Association. 1961, 177: 177-183. 10.1001/jama.1961.03040290005002.
 8.
Domingos P, Pazzani M: On the optimality of the simple Bayesian classifier under zero-one loss. Machine Learning. 1997, 29 (29): 103-130.
 9.
Bennett PN: Assessing the Calibration of Naive Bayes' Posterior Estimates. Technical Report CMU-CS-00-155. 2000, Pittsburgh, PA: Carnegie Mellon University, School of Computer Science
 10.
Noble WS: What is a support vector machine?. Nature Biotechnology. 2006, 24 (12): 1565-1567. 10.1038/nbt1206-1565.
 11.
Wei Z, Wang K, Qu HQ, Zhang H, Bradfield J, Kim C, Frackleton E, Hou C, Glessner JT, Chiavacci R: From disease association to risk assessment: an optimistic view from genome-wide association studies on type 1 diabetes. PLoS Genetics. 2009, 5 (10): e1000678. 10.1371/journal.pgen.1000678.
 12.
Ban HJ, Heo JY, Oh KS, Park KJ: Identification of type 2 diabetes-associated combination of SNPs using support vector machine. BMC Genetics. 2010, 11: 26
 13.
Breiman L, Friedman J, Stone CJ, Olshen R: Classification and Regression Trees. 1984, New York; London: Chapman & Hall
 14.
Breiman L: Random Forests. Machine Learning. 2001, 45 (1): 5-32. 10.1023/A:1010933404324.
 15.
Goldstein BA, Hubbard AE, Cutler A, Barcellos LF: An application of Random Forests to a genome-wide association dataset: methodological considerations & new findings. BMC Genetics. 2010, 11: 49
 16.
Li J, Das K, Fu G, Li R, Wu R: The Bayesian lasso for genome-wide association studies. Bioinformatics. 2011, 27 (4): 516-523. 10.1093/bioinformatics/btq688.
 17.
Tibshirani R: Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B. 1996, 58 (1): 267-288.
 18.
Wu TT, Chen YF, Hastie T, Sobel E, Lange K: Genome-wide association analysis by lasso penalized logistic regression. Bioinformatics. 2009, 25 (6): 714-721. 10.1093/bioinformatics/btp041.
 19.
Lee SH, Wray NR, Goddard ME, Visscher PM: Estimating missing heritability for disease from genome-wide association studies. American Journal of Human Genetics. 2011, 88 (3): 294-305. 10.1016/j.ajhg.2011.02.002.
 20.
Wei W, Visweswaran S, Cooper GF: The application of naive Bayes model averaging to predict Alzheimer's disease from genome-wide data. Journal of the American Medical Informatics Association: JAMIA. 2011, 18 (4): 370-375. 10.1136/amiajnl-2011-000101.
 21.
Demichelis F, Magni P, Piergiorgi P, Rubin MA, Bellazzi R: A hierarchical Naive Bayes Model for handling sample heterogeneity in classification problems: an application to tissue microarrays. BMC Bioinformatics. 2006, 7: 514. 10.1186/1471-2105-7-514.
 22.
Gompert Z, Buerkle CA: A hierarchical Bayesian model for next-generation population genomics. Genetics. 2011, 187 (3): 903-917. 10.1534/genetics.110.124693.
 23.
The Wellcome Trust Case Control Consortium: Genome-wide association study of 14,000 cases of seven common diseases and 3,000 shared controls. Nature. 2007, 447 (7145): 661-678. 10.1038/nature05911.
 24.
Leonard T: Bayesian simultaneous estimation for several multinomial experiments. Communications in Statistics - Theory and Methods. 1977, A6 (7): 619-630.
 25.
Bellazzi R, Riva A: Learning Bayesian Networks probabilities from longitudinal data. IEEE Transactions on Systems, Man and Cybernetics. 1998, 28 (5): 629-636.
 26.
Lewontin RC, Kojima K: The evolutionary dynamics of complex polymorphisms. Evolution. 1960, 14 (4): 458-472. 10.2307/2405995.
 27.
Geisser S: Predictive Inference. 1993, New York: Chapman and Hall
 28.
Zhou XH, Obuchowski NA, McClish DK: Statistical Methods in Diagnostic Medicine. 2002, New York, USA: Wiley & Sons
 29.
Wilcoxon F: Individual comparisons by ranking methods. Biometrics Bulletin. 1945, 1 (6): 80-83. 10.2307/3001968.
 30.
Demsar J: Statistical Comparisons of Classifiers over Multiple Data Sets. Journal of Machine Learning Research. 2006, 7: 1-30.
 31.
Roshan U, Chikkagoudar S, Wei Z, Wang K, Hakonarson H: Ranking causal variants and associated regions in genome-wide association studies by the support vector machine and random forest. Nucleic Acids Research. 2011, 39 (9): e62. 10.1093/nar/gkr064.
 32.
Clayton DG: Prediction and interaction in complex disease genetics: experience in type 1 diabetes. PLoS Genetics. 2009, 5 (7): e1000540. 10.1371/journal.pgen.1000540.
 33.
van Hoek M, Dehghan A, Witteman JC, van Duijn CM, Uitterlinden AG, Oostra BA, Hofman A, Sijbrands EJ, Janssens AC: Predicting type 2 diabetes based on polymorphisms from genome-wide association studies: a population-based study. Diabetes. 2008, 57 (11): 3122-3128. 10.2337/db08-0425.
 34.
Friedman N, Geiger D, Goldszmidt M: Bayesian Network Classifiers. Machine Learning. 1998, 29: 131-161.
Acknowledgements
We are grateful to Andrea Demartini for the implementation of the HNB algorithm. The research was supported by the Innovative Medicine Initiative under grant agreement n° IMI/115006 (the SUMMIT consortium).
This study makes use of data generated by the Wellcome Trust Case Control Consortium. A full list of the investigators who contributed to the generation of the data is available from http://www.wtccc.org.uk. Funding for the project was provided by the Wellcome Trust under award 076113.
This article has been published as part of BMC Bioinformatics Volume 13 Supplement 14, 2012: Selected articles from the Eleventh International Workshop on Network Tools and Applications in Biology (NETTAB 2011). The full contents of the supplement are available online at http://www.biomedcentral.com/bmcbioinformatics/supplements/13/S14
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
AM carried out the molecular genetic studies, performed the statistical analysis and drafted the paper. NB carried out software tools development and integrations, participated in study design and drafted the manuscript. RB conceived the study, participated in its design and coordination and helped to draft the manuscript. All authors read and approved the final manuscript.
Keywords
 Classification Accuracy
 Marginal Likelihood
 Wellcome Trust Case Control Consortium
 SNPs Data
 Linkage Disequilibrium Threshold